
A few months ago I posted an advertisement for various EA infrastructure projects, on the grounds that there are now many such free or discounted services but no ongoing way to bring them to the attention of newcomers or to remind people who've been around longer and may have forgotten about them.

The trouble is, that post had the same issue all attempts to broadcast ideas in this space do: it sat on the front page for a few hours and then fell away - with the organisations I edited in later getting almost no exposure. So this post is essentially a repost of it, albeit with the later-added suggestions included from the start. Ideally I would hope a post like this could be pinned to the top of the forum, but in lieu of that, I'm wondering about posting something like this every N months. My reservations are a) the hassle and b) it being too spammy (or, conversely, that it ends up being a cheap way of karma farming - to counteract this I've left a comment below for counterbalancing downvotes if you upvote this post). Please let me know your thoughts in the comments: assuming nothing better is implemented, should I continue to do this? If so, for what value of N/under what conditions?

Meanwhile, without further ado, here are the projects that you should check out (edit: bolded entries are new since originally posting):

Coworking/socialising

  • EA Gather Town - An always-on virtual meeting place for coworking, connecting, and having both casual and impactful conversations
  • EA Anywhere - An online EA community for everyone
  • EA coworking Discord - A Discord server dedicated to online coworking

Free or subsidised accommodation

  • CEEALAR (formerly the EA Hotel) - Provides free or subsidised serviced accommodation and board, and a moderate stipend for other living expenses.
  • Nonlinear's EA house database - An experiment by Nonlinear to connect EAs who have extra space with EAs who could do good work if they didn't have to pay rent (or could pay less rent).

Professional services

  • WorkStream Business Systems - a service dedicated to EAs, helping you improve your workflow, boost your bottom line and take control of your business
  • cFactual - a new, EA-aligned strategy consultancy with the purpose of maximising its counterfactual impact
  • Good Governance Project - helps EA organizations create strong boards by finding qualified and diverse professionals
  • Altruistic Agency - provides discounted tech support and development to organisations 
  • Tech support from Soof Golan
  • Legal advice from Tyrone Barugh - a practice under consideration, probably based in the UK, with the primary aim of providing legal support to EA orgs and individual EAs.
  • SEADS - Data science services for EA organizations
  • User-Friendly - an EA-aligned marketing agency
  • Anti Entropy - offers operations-related services for EA organizations
  • Arb - Our consulting work spans forecasting, machine learning, and epidemiology. We do original research, evidence reviews, and large-scale data pipelines.
  • Pineapple Operations - Maintains a public database of people who are seeking operations or Personal Assistant/Executive Assistant work (part- or full-time) within the next 6 months in the Effective Altruism ecosystem

Coaching

Financial and other material support

  • Nonlinear productivity fund - A low-barrier fund paying for productivity-enhancing tools for top longtermists. Supported services and products include coaching, therapy, sleep coaching, medication management, personal assistants, research assistants, virtual assistants, tutors (e.g. ML, CS, language), Asana, FocusMate, Zapier, productivity apps, A/C, dishwashers, SAD lamps, etc.
  • Effective Altruism Funds - Whether an individual, organisation or other entity, we’re eager to fund great ideas and great people.
  • Nonlinear fund - We incubate longtermist nonprofits by connecting founders with ideas, funding, and mentorship
  • Survival and Flourishing Fund - A “virtual fund”: we organize application submission and evaluation processes to help donors decide where to make donations. 
  • Open Philanthropy Project - a research and grantmaking foundation that aims to share its findings openly
  • Berkeley Existential Risk Initiative - Supports university research groups working to reduce x-risk, by providing them with free services and support.

As before, please let me know if I missed any in the comments. The guidelines remain as follows (though if there's something you think is essential to let people know about which doesn't strictly fit, feel free to suggest it - for eg the house database doesn't technically qualify, but seemed too valuable to omit):

  • The resource should be free to use, or available at a substantial discount, to relatively poor EAs
  • It should be aimed specifically at EAs
  • It should be for the direct benefit of the people using it, not just to 'enable them to do more good'
  • It should be available to people across the world (i.e. not just a local EA group)
  • It should be a service or product that someone is putting ongoing work into (i.e. not just a list of tips, or Facebook/Discord/Slack groups with no purpose other than discussion of some EA subtopic)

Also, let me know if I should remove or edit any of the descriptions. I've lightly curated it and removed some broken links (FTX Foundation has disappeared for some reason) but haven't done any substantial checking.

Comments

Some low effort thoughts:

  • If this is meant as a living resource, maybe move the first 2-3 paragraphs to the bottom of the post, and leave just a one line explainer at the top, to make it easier to skim ("There are now more free or discounted services available to EAs and EA orgs. Here is an updated list, which is mostly a repost of [this].")
  • Maybe worth linking to your anti karma farming comment in the post so ppl can find it easier?

Other things that might belong here:

My impression is AISS does a bunch of things beyond the health consulting thing fwiw, like maintaining this and this.

If this is meant as a living resource, maybe move the first 2-3 paragraphs to the bottom of the post, and leave just a one line explainer at the top, to make it easier to skim

I think it would be confusing to do that at this stage, but if I do periodically make new versions of this post I'll try harder not to bury the lede :)

Maybe worth linking to your anti karma farming comment in the post so ppl can find it easier?

Good idea! Done.

I'll have a look through the other resources you mention when I have a bit more time.

I have an organization, WorkStream, focused on helping orgs be more effective. See my post about it here. We are currently running both management and operations fellowship programs (see more details here), designed to build community and provide access to relevant education to our org leaders. We start the programs on a quarterly basis, so it's an open application for anyone who wants to apply. It's not a free resource, but we do try to give scholarships when possible.

Could be worth adding these to the Community Infrastructure wiki description as well.

I like the idea! Upvoted!

What is the reasoning for having "The resource should be free to use, or available at a substantial discount, to relatively poor EAs" as a guideline (and limiting factor)?

It might be a bad guideline! But the thought was that there are thousands of useful services that aren't preferentially available to EAs, most of which can advertise themselves in the normal way. So this a) keeps the list much smaller and b) helps out organisations who are too targeted and/or not profit-seeking enough to advertise themselves.

Gotcha. I agree (like 60%) with this in the sense that I do want to see the "subsidized" services more heavily promoted. However, if I were the person seeing the list I'd also like to see (for-profit or "not subsidized") services that already work with EA projects or can in some way accommodate needs of EA projects really well and then I'd use my own judgement to decide what service is better suited for me given my own constraints (budget, time, etc.). Hope that makes sense.

(karma upvoted, agreement downvoted)

Can you think of any specific examples of such services? I really want this list not to get too long, both for the sake of my own time and because I think it becomes less useful the less focused it is, but could be persuaded that some company is just so amazingly useful that it should be on here against the guidelines (hence me wanting them to be guidelines rather than strict criteria).

I definitely wouldn't mind being reminded of this list once a quarter!

I don't necessarily disagree but from an organizational perspective: Is this not the sort of thing we have a wiki for? 

For me the difference is simply visibility. The forum has a lot of corners, especially for a newcomer (and come to that, for me, and I've been posting since it launched!), so having a bit of information tucked away is very different from having it appear in people's frontpage feed.

Let’s promote the wiki and make it more visible!

Yeah I say keep doing this, once a month tops/once a quarter minimum :)

I found this incredibly helpful. Along these lines, I've been searching for existing Earn to Give coaches. I know you can join the GWWC and One For the World communities and attend their events. The Life You Can Save also has a feature where you can reach out to an advisor. But I'm more curious to know whether there is a formal coaching program for making a pledge, similar to what 80,000 Hours has with their career coaching program.

There's a newer version of this post here. I suggest linking it at the top of this post so that people don't spend a bunch of time reading through this only to realize there is a more up-to-date list of resources.

This isn't necessarily an infrastructure project, but it seems relevant to the audience of this post: a compilation of resources for dealing with mental health issues around the alignment problem.

Thanks for collating and sharing this.

I'm not sure if this is the right place to brainstorm possible things missing from this list, but one thing that comes to mind is tax or investment advice[1], particularly for those who are earning to give. At least in the US[2], earning to give presents some unusual risks and opportunities, and it's safe to say most accounting and investment professionals have not worked with clients donating 30% or more of their income. Even if there weren't a discount, just maintaining a list of professionals with experience in specific jurisdictions could be helpful.


  1. Note that tax advice is different from tax preparation; the former is about arranging affairs so as to minimize taxes. ↩︎

  2. The situation is sadly even more complex in certain parts of the US, particularly New York. ↩︎

It sounds like a reasonable idea, though I'm worried that the post will stop functioning as advertising if the criteria are too broad - it feels like it's already a lot to go through. Maybe we could split it into multiple subjects? That does feel like it would be a step in the spammy direction, though (not to mention an increasing headache to curate).
