Your January 2020 EA Newsletter
 
Hello!

Welcome to a new year of the EA Newsletter.

We'll keep the intro brief, save to mention that some of the deadlines in this edition (for a few jobs, EA Global, and the EA Donor Lottery) are coming up rather soon.

If you plan to read this later, we recommend skimming now to make sure you don't miss anything! (We always mark deadlines in bold.)

— The EA Newsletter Team
 
Articles

News and updates from the world of effective altruism


Talks from EA Global: London 2019

The most recent EA Global conference featured dozens of talks from scientists, philosophers, charity leaders, an EA journalist, and a former U.S. diplomat.

You can watch the recorded talks on YouTube and listen to many of them as podcasts from EARadio.
 

Ten years of progress for farm animals

This month, Lewis Bollard’s animal welfare newsletter looks back on the last decade. While factory farming has continued to expand, Bollard sees a reason for hope:

“The farm animal advocacy movement has achieved more progress over the last decade than it did in the entire prior century.”

Notable victories include cage-free pledges, the rise of plant-based and cultivated meat, major legislation in many countries, and the growth and globalization of the farm animal advocacy movement.
 
* * * * *

Meanwhile, if you’d like to review how the decade has gone for humanity, we point you to a link from a previous edition: Our World in Data’s 12 key metrics to understand the state of the world, which show historical progress in health, education, and prosperity.

Below, you can see the steady decline of global poverty:

[Chart: the global decline of extreme poverty over time]
Updates on AI alignment

EA Forum user Larks recently published his annual literature review of the past year's research in artificial intelligence alignment, including descriptive tags for each paper and charity recommendations for donors who want to support this work.

As always, the review is too thorough to summarize, so we’d recommend just diving in if the topic interests you. If you don’t have time for the full post, you might still enjoy Larks’ thoughts on the state of the field.

* * * * *

In other AI news, Rohin Shah’s Alignment Newsletter recently published a bonus edition summarizing the views of four researchers (including Rohin) who think there’s a good chance that the alignment problem will be solved “by default.”

An excerpt:

“I think there were three main points of convergence.

“First, we were relatively unconvinced by the traditional arguments for AI risk, and find discontinuities relatively unlikely.

“Second, we were more optimistic about solving the problem in the future, when we know more about the problem and have more evidence about powerful AI systems.

“And finally, we were more optimistic that as we get more evidence of the problem in the future, the existing ML community will actually try to fix that problem.”

 

Did the Millennium Villages Project achieve its goals?

Jeffrey Sachs’ Millennium Villages Project, which aimed to sharply reduce poverty in rural villages through the simultaneous use of several “solutions” (including fertilizer, mosquito nets, health clinics, and toilet construction), was one of the most ambitious development ideas of the 2000s. 

But since it came to an end in 2015, academics have been debating whether the project actually reached its goals, and whether its results justify further investment.

Most recently, three researchers published a scathing review of the project’s impact in northern Ghana (note: paywalled), having found mostly “small or null” results:

“The project did not appear to ‘break the poverty trap’ nor to generate ‘cost-saving synergies’ [...] We suggest that this was the combined result of poor design and implementation, redundancy of the interventions, and overly optimistic expectations.”

For those without journal access, we recommend this nuanced take on the project from development consultant Chris Barnett, who tries to determine whether it should be considered cost-effective.
 

New data on EA cause preferences

Rethink Charity has begun to publish results from the 2019 EA Survey. 

Respondents’ views on cause prioritization show that global poverty is still the most common “top priority” by far; climate change has risen to second place, with AI risk in third. 

However, another question grouped these causes into a few broad categories. When asked to choose just one category, a plurality of respondents favored “long-term future/catastrophic and existential risk reduction” (which incorporates AI risk, biosecurity, and nuclear risk, among other causes). Global poverty came in second, with the other options — animal welfare, “meta”, and “other” — far behind.

For more on these categories and past discussions on how to prioritize them, see the post’s introduction.
 

In other news:
For more EA-related stories, check out these email newsletters and podcasts.

Timeless Classic

Ideas that have shaped the way we think about doing good

This month, we have an excerpt of Larissa MacFarquhar’s Strangers Drowning, which profiles people who live lives of “extreme ethical commitment.”
 

"There is one circumstance in which the extremity of do-gooders looks normal, and that is war.

"In wartime — or in a crisis so devastating that it resembles war, such as an earthquake or a hurricane — duty expands far beyond its peacetime boundaries [...] In wartime, the line between family and strangers grows faint, as the duty to one’s own enlarges to encompass all the people who are on the same side. It’s usually assumed that the reason do-gooders are so rare is that it’s human nature to care only for your own. There’s some truth to this, of course. But it’s also true that many people care only for their own because they believe it’s human nature to do so. When expectations change, as they do in wartime, behavior changes, too.

"In war, what in ordinary times would be thought weirdly zealous becomes expected [...] People respond to this new moral regime in different ways: some suffer under the tension of moral extremity and long for the forgiving looseness of ordinary life; others feel it was the time when they were most vividly alive, in comparison with which the rest of life seems dull and lacking in purpose.

"In peacetime, selflessness can seem soft — a matter of too much empathy and too little self-respect. In war, selflessness looks like valor. In peacetime, a person who ignores all obligations, who isn’t civilized, who does exactly as he pleases — an artist who abandons duty for his art; even a criminal — can seem glamorous because he’s amoral and free. But in wartime, duty takes on the glamour of freedom, because duty becomes more exciting than ordinary liberty.

"This is the difference between do-gooders and ordinary people: for do-gooders, it is always wartime. They always feel themselves responsible for strangers — they always feel that strangers, like compatriots in war, are their own people. They know that there are always those as urgently in need as the victims of battle, and they consider themselves conscripted by duty."


 

While the passage doesn't describe every altruistic person, we see it as a fascinating portrayal of a particular altruistic mindset.
 

Jobs

Opportunities to work on some of the world's most pressing problems

80,000 Hours’ High-Impact Job Board features nearly 500 positions.

If you’re interested in policy or global development, you may also want to check Tom Wein’s list of social purpose job boards.

If you want to find out about new positions as they arise (or post a position yourself), check out the EA Job Postings group on Facebook.
 
Applications due very soon:

Effective Giving
Faunalytics
GiveWell
Global Priorities Institute, University of Oxford
The Good Food Institute (all jobs remote)
Machine Intelligence Research Institute
OpenAI
Open Philanthropy Project
Ought

Announcements

Books, events, community projects, and more!


Apply to EA Global: San Francisco 2020

Applications for EA Global: San Francisco 2020 close at midnight PT on 31 January.

The conference will be held from 20 to 22 March. Content will be aimed at existing EA community members who already have a solid understanding of effective altruism, but would like to network, gain skills, master more complex problems, or move into new roles. Tickets are $500, but substantial financial aid is available.
 

Apply for a grant from EA Funds

The Long-Term Future Fund and the EA Meta Fund are accepting applications for their next grant round, with a deadline of 31 January. To learn what kinds of projects they hope to fund, see this EA Forum post.
 

Last chance to join the EA Donor Lottery

In a donor lottery, your donation is pooled with others', and you receive a chance, proportional to your share of the pool, of recommending where the full pool of money goes. This gives the winner an incentive to spend ample time on research, while saving the other donors the time they would otherwise have spent researching smaller personal donations.
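
For example (our illustration, not a figure from EA Funds): if you contribute $1,000 to a pool that reaches $100,000, you have a 1% chance of recommending where the entire $100,000 goes. Your expected donation is unchanged, but the research effort is concentrated on a single winner.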

The latest donor lottery from EA Funds will close at 12:00 pm PST on Friday, 17 January. If you think you might want to participate, this is your last chance!
 

Toby Ord’s new book

Toby Ord, who founded Giving What We Can, has a new book available for pre-order: The Precipice: Existential Risk and the Future of Humanity.

From the publisher’s summary:

"If all goes well, human history is just beginning. Our species could survive for billions of years - enough time to end disease, poverty, and injustice, and to flourish in ways unimaginable today.

"But this vast future is at risk. With the advent of nuclear weapons, humanity entered a new age, where we face existential catastrophes - those from which we could never come back. Since then, these dangers have only multiplied, from climate change to engineered pathogens and artificial intelligence. If we do not act fast to reach a place of safety, it will soon be too late."


Ord recently spoke about the book’s themes at EA Global: London 2019. You can watch the talk on YouTube.

 
Organizational Updates

You can see updates from a wide range of EA-aligned organizations on the EA Forum. (Organizations submit updates, which we edit for clarity.)
 
 
We hope you found this edition useful!

If you’ve taken action because of the Newsletter and haven’t taken our impact survey, please do — it helps us improve future editions.

(Actions we'd love to hear about include donating to charity, applying to a job, or joining a community group.)


Finally, if you have feedback for us, positive or negative, let us know!

Aaron, Heidi, Michal, Pascal, and Sören
– The Effective Altruism Newsletter Team

The Effective Altruism Newsletter is a joint project between the Centre for Effective Altruism, the Effective Altruism Hub, and Rethink Charity.
Click here to access the full EA Newsletter archive
A community project of the Centre for Effective Altruism, a registered charity in England and Wales (Charity Number 1149828) – Centre for Effective Altruism, Littlegate House, St Ebbes Street, Oxford
OX1 1PT, United Kingdom
Want to change how you receive these emails?
You can update your preferences or unsubscribe from this list.