
Summary 

People should be aware that as well as starting projects solving a specific problem they could start/support ecosystems that allow many more projects to be created.

 

Ecosystems

What I mean by ecosystem is an organisation or project that supports a network of people to connect with other people/ideas/resources and potentially act as an incubator for other organisations and projects in that field. In the context of this post I mean EA related ecosystems.

Examples include:

And within EA:

I think there are often multiple attempts at starting specific projects in an area before an ecosystem is set up to support them. For example, there have been multiple AI safety related projects, but there isn't a central coordinating organisation for the various people and orgs involved. There is no CEA for people working on AI safety that creates websites, discussion platforms and conferences, connects mentors, surveys members, etc. There is a newsletter, a forum and Facebook groups, but they have mainly been set up as separate projects rather than as part of an organisation focused on building up an ecosystem to support the whole network.

I think that even when there are (research) organisations in a space, the work of coordinating the wider network isn't prioritised and falls between the gaps, or people may assume that one of the larger organisations is already doing that work.


Benefits

I'll outline what I think are some benefits of ecosystems, although I suspect most of the value will come from the projects an ecosystem supports rather than from anything it does directly.

Benefits of building ecosystems:

  • Acts as a contact point for people looking to hire/find info/find collaborators
  • Could provide a high-value entry point for people already in that field
  • Could provide stronger reasons to stay engaged with EA; staying up to date with a group you're interested in can be more motivating than multiple stop-start projects
  • Creates connections between people with similar experiences
  • Provides feedback, resources and ideas for other projects

 

Supporting an ecosystem

To give a sense of what building an ecosystem might entail, I'll highlight some examples here (mainly from this post):

  • Introductory space - website, Slack workspace, Facebook group, Google doc
  • Newsletters, podcasts
  • Member directories, mentorship/coaching schemes
  • Events - conferences, discussions, social
  • Discussion space - Forum, Slack, Facebook group
  • Research, job boards

One benefit of thinking in terms of building an ecosystem is that you can start to look for gaps in the landscape, services and resources that aren't being provided, and potentially build them years before they would otherwise have been created.

 

Counter arguments

Potential reasons not to:

  • Sometimes there are really good projects out there that would do more good than focusing on building an ecosystem for the wider area; they may create more momentum or raise more awareness
  • Some people may be really good at executing certain projects and less good at ecosystem creation/support. This may be especially true in more established areas where you may need to get organisations and the community to buy into your vision of the wider ecosystem. If you don’t get a good sense of the community landscape, it is very likely that trying to create an ecosystem will fail
  • For some areas, separating out from the wider community may reduce awareness. For example, if there were an effective animal advocacy forum, would that mean less relevant content shared on the EA Forum?

 

Missing Gaps

Here are some areas that I think currently lack a coordinating organisation, or potentially even a go-to person. If you know of an organisation, project or person who is doing this work, let me know.

  • Global development
  • Longtermism
  • Existential risks
  • Emerging technology
  • Animal welfare
  • Policy/Civil Service (globally)
  • Academia (and subfields within academia)
  • Researchers
  • Journalists/media
  • Lawyers (Legal Priorities Project seems to be more of a research organisation)
  • Finance (globally)
  • Software/tech (globally)
  • Religious groups (I’m aware of people working on EA for Christians and also early work on Judaism and EA)

 

Thinking bigger

This is less related to the main argument, but another consideration when creating an ecosystem is how encompassing you want to make it. By this I mean that you could create a space for people in London, the UK, Europe or globally. I generally suggest that people think globally if an organisation or person isn't already in that space. For example, if you want to make a group for people who want to discuss longtermism, it may not make sense to create one for each city before having a global space for those conversations. It's usually better to reach all the people who are interested in an idea rather than just those who happen to be in locations that have an organiser. If you start globally, it may then also be easier to set up local groups once there is a critical mass of people in one area who have been connected to each other via the global network.

 

Conclusion

I wrote this post not because I think everyone should work on ecosystem creation, but because I think it should at least be considered as an option when people are thinking about what projects to get involved in, especially as many more projects can potentially come out of a well-supported ecosystem.

If anyone is interested in setting up networks/communities in these spaces I’m always happy to chat in more detail about possible ways of doing that.


Comments

We (AI Safety Support) are literally doing all these things

There is no CEA for people working on AI safety, that creates websites, discussion platforms, conferences, connects mentors, surveys members etc.


I don't blame DavidNash for not knowing about us. I did not know about the EA Consultancy Network. So maybe what we need is a meta ecosystem for ecosystems? There is a Slack group for local group organisers, and a local group directory at EA Hub. Similarly, it would be nice to have a dedicated chat space for ecosystem organisers, and a public directory somewhere.

CEA has said that they are currently not focusing on supporting this type of project (source: private conversation). So if someone wants to set it up, just go for it! And let me know if I can help.

From the 'Things CEA is not doing' forum post: https://forum.effectivealtruism.org/posts/72Ba7FfGju5PdbP8d/things-cea-is-not-doing

We are not actively focusing on:

...

  • Cause-specific work (such as community building specifically for effective animal advocacy, AI safety, biosecurity, etc.)

Thanks for the much improved source!

would be nice to have a dedicated chat space for ecosystem organisers

Do you think the EA group organiser Slack could be that? In a sense, we're all organising groups (whether it's a local group or a cause or career group). If you feel the existing channels there are insufficient, one could add a channel for cause or career groups/ecosystems.

I've been told a few times that I belong in the group organisers Slack, but never actually felt at home there, because I feel like I'm doing something very different from most group organisers.

The main requirement of such a chat is that it attracts other ecosystem organisers, which is a marketing problem more than a logistical problem. There are lots of platforms that would be adequate.

Making a separate ecosystem channel in the group organiser Slack, and marketing it here, may work (30% chance of success), and since it is low effort, it seems worth a try.

A somewhat higher-effort option, but one with a higher expected payoff, would be to find all the ecosystem organisers, contact them personally and invite them to a group call. Or invite them to fill in a when2meet to decide when to have said group call.

Both the call and a channel in the EA groups Slack seem valuable, and if you actually get people together for a call, you could use that as an opportunity to ask whether they'd be interested in such a channel or another platform.

Do you want to do this? Or someone else here? David Nash?
(I currently lack the time to organise things myself, but I'd probably join if someone else organised it.)

I'm not going to lead this, but would be happy to join.

This sounds just like what we are doing at AI Safety Support
 

Good thinking! I think I have intuitively been trying to do this for the German EA community, and this post helped me become more clear about it, so thanks for writing it up! 

> Animal welfare

As I understand, Animal Advocacy Careers is trying to support the ecosystem of people working in that field (together with Animal Charity Evaluators and maybe Good Food Institute for alternative protein sources). Where do you see the gap there?

I think the Charter Cities Institute is an example of an ecosystem coordinator; they really seem to be the go-to people for the area.

Software Development / Technology:

This is somewhat starting here

(Nothing official, feel free to do something else instead. I'm only writing this in case someone else, like me, searches for "software" here)

Thanks David -  this is all very useful!
