This short post is essentially a response to Ozzie Gooen's recent post Ambitious Altruistic Software Engineering Efforts: Opportunities and Benefits. (Not posted as a comment since it is long and a little tangential, and I would like to initiate a discussion about this somewhat different direction.)

I agree with many of the points raised by Ozzie, including the benefits of, and the special situation of EA with regards to, software engineering. However, I strongly disagree with him on what kinds of projects we should focus on. Most of his examples are of tools or infrastructure for other efforts. I think that as a community we should be even more ambitious - I think we should try to execute multiple tech-oriented R&D projects (not necessarily software-oriented) that can potentially have an unusually large direct impact. I list a few examples at the end of the post.

As Ozzie says, the EA community has a very large talent and funding pool, and has had for the past few years. I am honestly surprised that there are still almost no public-interest technology efforts affiliated with our community. We are all aware that tech is a driving force in the world, affecting all aspects of life (e.g. health, agriculture and transportation, among many others). On the other hand, almost all actors in tech are motivated by, or at least highly constrained by, financial considerations. This suggests that there is probably a lot of low-hanging fruit in tech where an altruistically minded organization could have an unusually large impact. Therefore, I think that we should encourage many more individuals to found such efforts, and financially support promising projects.

As someone who has considered founding such an organization, it seems to me that the funding situation in EA is currently very unfavorable to such efforts. Moreover, most of the discussion about tech in EA revolves around AI safety, leaving behind many talented individuals who, for one reason or another, don't want to work on AI safety. Furthermore, centering EA's tech efforts solely on AI safety seems very unhealthy from a worldview-diversification perspective.

I want to emphasize two considerations for such efforts:

  1. Optimize for impact. A typical tech project is unlikely to be cost-effective in terms of impact. Therefore, the leadership has to be motivated by impact, and if a large amount of money or talent from the EA community is dedicated to a project, that project should seem very promising.
  2. Projects don't have to be (solely) EA-funded, non-profit, or staffed by EA talent. EAs can initiate such projects while drawing on money and talent from outside of EA. This "trick" makes many projects that otherwise don't seem so promising become much more promising, since it increases their cost-effectiveness in terms of EA resources spent.

Here are a few examples that I am aware of that are at least somewhat related to the EA community. I want to be clear that I am not very familiar with some of these efforts and therefore don't necessarily endorse them or think that they are very impactful. I find this list to be embarrassingly short (partially due to my ignorance, but mostly because there are very few examples). Some other tech-oriented projects which seem promising, but are not affiliated with EA, are listed in my post on my career decision-making process.

  1. The Karmel group at Google, led by Sella Nevo - multiple teams working on problems including flood forecasting and AI for traffic lights.
  2. Wave and Sendwave - see their AMA post on the EA forum.
  3. Convergent Research.
  4. New Science.
  5. Telis - see Kyle Fish's recent post on his work there.
  6. Multus.
  7. Alvea - see Kyle Fish's recent announcement post. (Added to list on 18 Feb 2022)

In summary, I think that there is a lot of low-hanging fruit in tech that members of the EA community could work on. I am eager to see us expand in these directions, and I humbly hope that this short post will contribute to the discussion of these ideas. If anyone wants to discuss these matters privately, feel free to email me (shaybm9@gmail.com) or send me a message on the EA Forum.

Comments

I'm really happy to see this posted and to see more discussion on the topic.

However, I strongly disagree with him on what kinds of projects we should focus on. Most of his examples are of tools or infrastructure for other efforts. I think that as a community we should be even more ambitious - I think we should try to execute multiple tech-oriented R&D projects (not necessarily software-oriented) that can potentially have an unusually large direct impact.

This is a good point. My post had a specific frame in mind, of "Tech either for EAs or funded mostly by EAs".

"Tech startups created by EAs" is a very different category, and I didn't mean to argue that it is less important. We've already seen several tech startups by EAs (FTX and Wave, as you mention), which is one reason I was trying to draw attention to the other categories. I've also been part of three EA startups earlier on that were more earning-to-give focused (Guesstimate was one).

I didn't mean to argue that "Tech either for EAs or funded mostly by EAs" was more important than "Tech startups by EAs". The latter has been a big deal and probably will continue to be, in large part because of opportunity costs (which I wrote about here).

funding situation in EA is currently very unfavorable to such efforts.

Most tech startups already have an existing funding ecosystem in the VC world. It's unclear to me where and how EA funders could best encourage these sorts of projects. I could picture there being an EA VC soon; there are starting to be more potential things to fund.

Thanks for clarifying, Ozzie!
(Just to be clear, this post is not an attack on you or your position, both of which I highly appreciate :). Instead, I was trying to raise a related point that seems extremely important to me and that I have been thinking about recently, and to make sure the discussion doesn't converge to a single point.)

With regards to the funding situation, I agree that many tech projects could be funded by traditional VCs, but some might not be, especially those that are very risky or not expected to be very financially rewarding (a few examples that come to mind: the research units of the HMOs in Israel, tech benefiting people in the developing world [e.g. Sella's teams at Google], and basic research enabling later applications [e.g. research on mental health]). An EA VC that funds projects based mostly on expected impact might be a good idea to consider!

this post is not an attack on you or on your position

Thanks! I didn't mean to say it was, just was clarifying my position.

An EA VC which funds projects based mostly on expected impact might be a good idea to consider

Now that I think about it, the situation might be further along than you might expect. I think I've heard about small "EA-adjacent" VCs starting in the last few years.[1] There are definitely socially-good-focused VCs out there, like 50 Year VC.

Anthropic recently raised $124 million in its first funding round. Dustin Moskovitz, Jaan Tallinn, and the Center for Emerging Risk Research were all funders (all longtermists). I assume this was done fairly altruistically.

I think Jaan has funded several altruistic EA projects, including ones that wouldn't have made sense on a purely financial level.

https://pitchbook.com/profiles/company/466959-97#team

https://www.radiofreemobile.com/anthropic-open-ai-mission-impossible/

[1]: Sorry for forgetting the 1-2 right names here.

That's great, thanks!
I was aware of Anthropic, but not of the figures behind it.

Unfortunately, my impression is that most funding for such projects is tied to AI safety or longtermism (as I hinted in the post). I might be wrong about this, though, and I will poke around these links and names.

Relatedly, I would love to see OPP/EA Funds fund such projects (at least a seed round or equivalent) unrelated to AI safety and longtermism, or hear their arguments against doing so.

+1

I'd add MindEase to your list.
