Arepo

I'm confused by the strong negative reaction to this comment. I guess it's about the CoGi funding, where it does sound like I was wrong. But it seems to be true that there's no option to directly apply for funding for a new project (NickLaing mentions the GH funding circle, but they completed one round last year and their website doesn't currently suggest there will be any more).

I think this helps explain the decline of GHD described in the OP - AIM's charity list notwithstanding, no one in the movement is incentivised to come up with practical ideas in the field.


Last I heard it was something like 10% of their GCR budget.

It's also basically impossible to apply for GHD funding. I recently decided to put my money where my mouth is and get involved in an early-stage GHD project, but there's essentially no EA-aligned funder who's willing to let you approach them.

SFF are exclusively longtermist, the EA GHD fund has, as mentioned, basically shut down, and GiveWell and CoGi don't accept unsolicited applications. So as far as I can see, if you think you have an idea in the GHD space and need funding for it, you basically have to look outside the EA world (someone tell me if I've missed something!)

It seems like, considering how intelligent and creative our species is, we should expect that, even in very dire conditions, we would be able to re-build civilization.

That shouldn't necessarily be the primary concern, though it also seems that people who've studied our ability to rebuild civilisation are substantially more pessimistic.

I think the simple answer is that it's become less prioritised by the central orgs (the EA GHD fund is on indefinite hiatus, GHD is a diminishing part of CoGi's budget, 80k moved away from it almost entirely, Rethink seem to have shifted towards animal welfare, CEA seem to have an increasingly longtermist/AI focus, etc). This gives a top-down cultural impetus away from the subject, and just means there's less money in it.

It's also, for better or worse, a subject that, as an evidence-oriented field, is harder to have amateur conversations about. I've been consistently supportive of it in my time here, but have had very little to contribute to conversations about what actually works, and have felt there was little value in contributing to any others.

I would love to see this reverse - I think EA is much richer for spanning multiple cause areas, and especially those which are well-evidenced. I don't have any good solutions though :\

I agree that the average college student encountering EA today should focus on issues related to AI safety

 

I broadly nodded along to your OP, but strong disagree here. There are tonnes of people working in AI safety, to the extent that it's already hypercompetitive, and the marginal value of one more person joining the long queue for such a job seems very low.

Meanwhile I continue to find the case for AI safety, at least as envisioned by EA doomers, highly speculative. That's not to say it shouldn't get any attention, but there's a far better-evidenced path from e.g. 'nuclear bombs or major pandemics cause the fall of civilisation' than from 'LLMs cause the fall of civilisation'.

And if you're sufficiently pessimistic on the doomer narrative, we're all screwed anyway, and there's likely at least as much EV in short-term improvement of the lives of existing beings as in fighting an impossible struggle to prevent AGI from ever being developed. So there's a credence window within which AI safety belongs as the top priority. That window might be reasonably wide, but I don't think it's anywhere near wide enough to justify abandoning all other causes.

I'm still highly sceptical of neglectedness as anything but a first-pass heuristic for how prioritisation organisations might spend their early research. Firstly, there are so many ways a field can be 'not neglected' and still highly leveraged (e.g. GiveWell and Giving What We Can were only able to have the comparative impact they did because the global health field had been vigorously researched, but no one had systematically done the individual-level prioritisation they did with the research results). Secondly, it encourages EA to reject established learning in a way I find dangerously hubristic (FTX weren't irresponsible - they just took a neglected approach to fundraising!).

If we must keep using this heuristic, it helps to introduce supporting heuristics like the one you mention, to which I'd add 'look at the amount of input relative to the amount needed to solve the issue'. Climate has had far more input than AI safety, but it's unclear to me whether the proportion of needed input it has received is higher.

Sure, so we agree?

Ah, sorry, I misunderstood that as criticism.

Do you think that forecasting like this will hurt the information landscape on average? 

I'm a big fan of developments like QRI's process of making tools that make it increasingly easy to translate natural thoughts into more usable forms. In my dream world, if you told me your beliefs it would be in the form of a set of distributions that I could run a Monte Carlo sim on, having potentially substituted my own where my confidence differed from yours (and maybe beyond that there are still neater ways of unpacking my credences that even better tools could reveal).
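To make that concrete, here's a minimal sketch of what that exchange could look like - all the distributions, parameter names, and numbers below are made up purely for illustration. Your credences arrive as distributions rather than point values, I swap in my own where my confidence differs, and a simple Monte Carlo sim propagates the combined uncertainty:

```python
# Toy sketch: exchange beliefs as distributions, substitute where we disagree,
# then run a Monte Carlo over the downstream quantity. Everything here is
# illustrative - the variable names and distributions are not from any real model.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Your stated beliefs, expressed as distributions rather than point estimates.
your_beliefs = {
    "p_step_one": lambda: rng.beta(2, 8, N),        # centred near 0.2, fairly uncertain
    "p_step_two": lambda: rng.beta(5, 5, N),        # centred near 0.5
    "cost_multiplier": lambda: rng.lognormal(0.0, 0.5, N),
}

# I disagree about step two, so I substitute my own, wider distribution.
my_overrides = {
    "p_step_two": lambda: rng.beta(2, 2, N),
}

beliefs = {**your_beliefs, **my_overrides}
samples = {name: draw() for name, draw in beliefs.items()}

# Some downstream quantity built from the inputs (purely illustrative).
outcome = samples["p_step_one"] * samples["p_step_two"] * samples["cost_multiplier"]

print(f"median: {np.median(outcome):.3f}")
print(f"90% interval: {np.percentile(outcome, [5, 95]).round(3)}")
```

The point isn't this particular toy model - just that once the beliefs are distributions, the disagreement becomes something you can actually compute with.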

Absent that, I'm a fan of forecasting, but I worry that overnormalising the naive I-say-a-number-and-you-have-no-idea-how-I-reached-it-or-how-confident-I-am-in-it form of it might get in the way of developing it into something better.

I weakly disagree here. I am very much in the "make up statistics and be clear about that" camp.

 

I'm sympathetic to that camp, but I think it has major epistemic issues that largely go unaddressed:

  • It systematically biases away from extreme probabilities (it's hard to assert a probability below some very small threshold, but many real-world probabilities are that small, and post hoc many stated credences look like they should have been below it)
  • By focusing on very specific pathways towards some outcome, it diverts attention towards easily definable issues, and hence away from the prospects of more complex pathways of causing the same or value-equivalent outcomes.[1] 
  • It strongly emphasises point credence estimates over distributions, the latter of which are IMO well worth the extra effort, at least whenever you're broadcasting your credences to the rest of the world (see the toy sketch after this list).
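On the first and third points above, here is a toy illustration - again with entirely made-up numbers and distributions - of how chaining point estimates along a specific pathway can diverge from propagating the full distributions, and how much probability mass the single number can hide:

```python
# Toy illustration: multiplying "best guess" probabilities along a pathway gives
# one number, while propagating distributions shows where the probability mass
# actually sits. The Beta parameters below are arbitrary examples.
import numpy as np

rng = np.random.default_rng(1)
N = 1_000_000

# Three uncertain steps in a chain, each given as a distribution over a probability.
steps = [rng.beta(2, 20, N), rng.beta(1, 10, N), rng.beta(3, 30, N)]

point_estimate = np.prod([s.mean() for s in steps])   # chain of single numbers
distribution = steps[0] * steps[1] * steps[2]          # full distribution of the product

print(f"product of point estimates: {point_estimate:.2e}")
print(f"median of product:          {np.median(distribution):.2e}")
print(f"P(product < 1e-4):          {np.mean(distribution < 1e-4):.2f}")
```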

 

By the way, I find this a strange remark:

Seems like a lot of specific, quite technical criticisms.

This sounds like exactly the sort of criticism that's most valuable for a project like this! If their methodology were sound it might be more valuable to present a more holistic set of criticisms and some contrary credences, but David and titotal aren't exactly nitpicking syntactic errors - IMO they're finding concrete reasons to be deeply suspicious of virtually every step of the AI 2027 methodology.

  1. ^

    For example, I think it's a huge concern that the EA movement have been pulling people away from non-extinction global catastrophic work because they focused for so long on extinction being the only plausible way we could fail to become interstellar, subject to the latter being possible. I've been arguing for years now that the extinction focus is too blunt a tool, at least for the level of investigation the question has received from longtermists and x-riskers.

I would strongly push back on the idea that a world where it's unlikely, and we can't change that, is uninteresting. In that world, all the other possible global catastrophic risks become far more salient as potential flourishing-defeaters.
