Our team recently published two blog posts:

Supervise Process, not Outcomes explains the limitations of today's machine learning (ML) paradigm as the field continues to advance. At a high level:
  1. ML systems today are often trained end-to-end, requiring large amounts of data about the goals we want them to accomplish.
  2. In the short term, this makes it hard to apply ML systems in important contexts we don't have a lot of data for.
  3. In the long term, this creates the risk that models optimize for feedback metrics that can be gamed or misspecified.
  4. What we need instead are ML systems that care as much about how things are done as about the final outcome.

The Plan for Elicit describes how Elicit manifests this worldview.
  1. We're building a product for researchers because researchers care very much about the how. They care about high-quality process, methodology, and transparency.
  2. We're building Elicit compositionally by identifying the basic building blocks of research (e.g. search, summarization, classification). We operationalize them as language model tasks and connect them in the Elicit literature review workflow.
  3. Supporting literature review is just the first step. Over the next few years, we plan to support other research workflows, then to support each user creating custom workflows. Eventually, we want Elicit to help with critical reasoning beyond research. 
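
The compositional approach in point 2 can be sketched in code. The example below is purely illustrative: the function names, the stub logic standing in for language model calls, and the sample data are all assumptions for the sake of the sketch, not Elicit's actual API.

```python
# A minimal sketch of the compositional idea: each research building block
# (search, summarization, classification) is a function, and a workflow is
# their composition. The bodies are trivial stand-ins for LM task calls.

def search(corpus, query):
    """Return papers whose abstract mentions the query term."""
    return [p for p in corpus if query.lower() in p["abstract"].lower()]

def summarize(paper):
    """Stand-in for an LM summarizer: take the first sentence."""
    return paper["abstract"].split(". ")[0]

def classify(summary, label_keywords):
    """Stand-in for an LM classifier: keyword match against labels."""
    for label, keywords in label_keywords.items():
        if any(k in summary.lower() for k in keywords):
            return label
    return "other"

def literature_review(corpus, query, label_keywords):
    """Compose the building blocks into a simple lit-review workflow."""
    results = []
    for paper in search(corpus, query):
        summary = summarize(paper)
        results.append({
            "title": paper["title"],
            "summary": summary,
            "label": classify(summary, label_keywords),
        })
    return results

# Hypothetical sample data to exercise the pipeline.
corpus = [
    {"title": "A", "abstract": "We study reinforcement learning. Results follow."},
    {"title": "B", "abstract": "A survey of supervised learning methods."},
]
review = literature_review(corpus, "learning", {"rl": ["reinforcement"]})
```

Because each step is a separate unit, one can be swapped out or improved without touching the others, which is the property that lets the same blocks be recombined into new workflows later.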

This is the roadmap: 

We're excited to have a guiding roadmap that lets us see years into the future. It's clear how an entire team could work on finding relevant papers and other data, another team on automated question-answering for that data, and yet another on integrating the non-lit-review tasks like brainstorming. 

AI systems are rapidly increasing in capability. But they will not help with high-quality reasoning by default. With Elicit, we are fighting to channel these powerful technologies towards good thinking, both for traditional researchers and for the rest of us. As we do that, we want to show how to build AI systems in a way that doesn't lead to them optimizing for the wrong thing.

So thank you for your support, feedback, and ideas. We need them not just for making research better (which would already be a huge accomplishment), but also for figuring out how we can make increasingly powerful artificial intelligence go well.

If this resonates with you and you find yourself wondering why we have to wait until 2025 to do all of this, join us to make it happen sooner.


Jungwon