Welcome to the EA Forum bot site. If you are trying to access the Forum programmatically (either by scraping or via the API), please use this site rather than forum.effectivealtruism.org.

This site has the same content as the main site, but is run in a separate environment to avoid bots overloading the main site and affecting performance for human users.

Quick takes

Joseph · 15h
TLDR: Is it good that EA 'bootcamps' tend to spend resources on thinking about career paths rather than on developing useful skills? I have a vague impression that the various 'bootcamps' around effective altruism tend to focus on something like "motivation, encouragement, and peer support for thinking about (and planning for) impactful career paths" rather than "gaining skills." I keep thinking that we have plenty of people involved in EA who are on board with the general ideas and who want to contribute, but who lack specific skills. Is this a good allocation? This is all pretty low-confidence/exploratory, as I haven't participated in High Impact Professionals or CEA's bootcamps, only read and heard about them. I'm mainly thinking about people management, budgeting, project management, and similar general professional skills; of course there is also a broad need for more specific skills, like AI safety research.
There have been a lot of posts over the last couple of weeks, and when I've been putting together the Digests, I've seen several which seem criminally underrated. I'm quick-taking to remind you of the 'customize feed' feature. The link is at the top of the frontpage - click it to decide how your frontpage weights posts on different topics. If Forum readers used this more, there would be fewer underrated posts (I think!).
The positive media storm for Anthropic is bigger than I thought it would be. Almost every major news network has featured them, and almost all of the coverage puts a halo on Amodei (which feels a bit icky, but hey). And every fourth post on my LinkedIn is along the lines of "Claude hits no. 1 on the App Store," "the idea that no big tech has morals is dead," "my 3-year love affair with GPT is over," "I made the switch to Claude and I'll never look back." As much as refusing the govt. contract might delay their IPO and give their valuation a temporary hit, they could hardly have hoped for a better PR flood. Every new user that switches not only helps them but hurts their biggest competitor. It's also good timing for them, because right now their product is probably better than OpenAI's, which wasn't the case a year ago and might not be the case six months from now. It's still unclear whether this will be a good business decision as well as a "moral" one, but I suspect it will.
Linch · 1d
There are two common models of space colonization people sometimes allude to, neither of which I think is particularly likely.

Model 1 ("normal colonization") is that space colonization will look something like Earth colonization, e.g. the way the first humans expanded to the Polynesian islands. Your boat (rover/ship/probe) hops to one island (planet), you build up a civilization, and then you send your probes onwards to the next couple of nearby planets, maybe saving up a bunch of resources once you've colonized nearby star systems (e.g. within your galaxy) and need to send a bigger ship to more distant stars. So it looks like either orderly civilizational growth or an evolutionary process. I don't think this model is really likely, because von Neumann probes will be really cheap relative to the carrying capacity of star systems, so the intuitive "slow waves of colonization" model doesn't make a lot of sense on a galactic scale. I don't think my view here is particularly controversial: my impression is that while the first model is common in science fiction, nobody in the futurism/x-risk/etc. field really believes it.

Model 2 ("mad dash") is that you race ahead as soon as you reach relativistic speeds. So as soon as your science and industry have advanced enough for your probes to reach appreciable fractions of c, you start blasting out von Neumann probes to the far reaches of the affectable universe. I think this model is more plausible, but still unlikely. A small temporal delay is worth it to develop more advanced spacefaring technology. My guess is that even if all you care about is maximizing space colonization, it still makes sense to delay for some time before launching your first "serious" interstellar space probe, rather than doing it as soon as possible[1]. Whether you can reach the furthest galaxies is determined by something like[2]:

total time to reach a galaxy = delay + distance/speed

So you want to delay and keep researching until the marginal speed gain no longer outweighs the extra delay.
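The tradeoff in the formula above can be sketched numerically. As a toy illustration (the speed-vs-research curve below is my own assumption, not anything from the quick take), suppose achievable probe speed rises with research delay and saturates near some maximum; the optimal delay then minimizes delay + distance/speed:

```python
import math

# Toy model of the "delay + distance/speed" tradeoff.
# Assumption (illustrative only): probe speed improves with research
# time as speed(d) = v_max * (1 - exp(-d / tau)), saturating at v_max.
def arrival_time(delay_yr, distance_ly, v_max=0.9, tau=200.0):
    """Total time to reach a galaxy, in years.

    delay_yr: years spent researching before launch
    distance_ly: distance in light-years
    v_max, tau: assumed speed ceiling (fraction of c) and research timescale
    """
    speed = v_max * (1 - math.exp(-delay_yr / tau))
    return delay_yr + distance_ly / speed

distance = 1_000_000  # a galaxy roughly a million light-years away
best = min(range(1, 3000), key=lambda d: arrival_time(d, distance))
print(best)  # optimal research delay in years, under these toy assumptions
```

Under these (made-up) parameters the optimum is a delay of well over a thousand years: for intergalactic distances, even tiny fractional speed gains dominate millennia of waiting, which is the point of the quick take.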
Thought to share some infographics on animal advocacy org expenses from the Stray Dog Institute's 2024 State of the Movement report, which I learned about via Moritz's excellent post. Most org spending is in North America and Europe. North American and European orgs accounted for most of the spend in sub-Saharan Africa and LATAM & the Caribbean, despite spending (say) only ~1% of their total expenses in SSA. I don't have any good sense of how this Global North-dominated funding potentially skews priorities, but the report's drill-down by animal category may be a start, as well as its drill-down by intended outcome. Naively, SSA's allocation looks like North America's, for instance, except that the latter has a greater proportion of org spending going to increasing availability of animal-free products, which makes sense given relative wealth. For what it's worth, the funding allocations across animal categories as a whole are mostly terrestrial animals, mostly farmed. I'd be keen to get takes from folks in the know on what seems underfunded here. Farmed insects jump out: just $135k out of $260m overall (~0.05%) seems nuts. I also wonder about the skewing of priorities due to outside funding. Moritz made a point about this which I agree with; another angle is Tom & Karthik's point, although it also isn't clear to me from the infographics above whether meaningful change in their sense would be reflected in the drill-downs.