Bio

I currently lead EA Funds.

Before that, I worked on improving epistemics in the EA community at CEA (as a contractor), as a research assistant at the Global Priorities Institute, on community building, and on global health policy.

Unless explicitly stated otherwise, opinions are my own, not my employer's.

You can give me positive and negative feedback here.

Comments

This is cool. Do you have a sense of the contractor rate range?

Why would I listen to you? You don't even have an English degree.


In my opinion, one of the main things that EA / rationality / AI safety communities have going for them is that they’re extremely non-elitist about ideas. If you have a “good idea” and you write about it on one of the many public forums, it’s extremely likely to be read by someone very influential. And insofar as it’s actually a good idea, I think it’s quite likely to be taken up and implemented, without all the usual status games that might get in the way in other fields.

I think from the inside they feel the same. Have you spoken to people who, in your view, have drifted? If so, how did they describe how it felt?

The flip side of "value drift" is that you might get to dramatically "better" values in a few years' time and regret locking yourself into a path where you're not able to fully capitalise on your improved values.

Unfortunately, I feel that these spaces (EEng/CE) are culturally not very receptive to EA ideas, and the boom in ML/AI has caused significant self-selection of people towards hotter topics.

Fwiw, I have some EEE background from undergrad and spend some time doing field-building with this crowd. I think a lack of effort on outreach better explains the lack of relevant people at, say, EAGs than AI risk messaging not landing well with this crowd.

I have updated upwards a bit on whistleblowers being able to make credible claims about an IE. I do think that people in positions with whistleblowing potential should probably try to think concretely about what they should do, what they'd need to see to do it, who specifically they'd get in contact with, and what evidence might be compelling to them (and have a bunch of backup plans).

a. An intelligence explosion like you're describing doesn't seem very likely to me. It seems to imply a discontinuous jump (as opposed to regular acceleration), and also implies that this resulting intelligence would have profound market value, such that the investments would have some steeply increased ROI at this point. 

I'm not exactly sure what you mean by a discontinuous jump. I expect the usefulness of AI systems to be pretty "continuous" inside AI companies and "discontinuous" outside AI companies. If you think that:
1. model release cadence will stay similar, and
2. capabilities will accelerate,
3. then you should also expect external AI progress to be more "discontinuous" than it currently is (see the toy sketch below).
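
To make 1–3 concrete, here's a minimal numeric sketch (the quadratic growth curve and every number in it are invented for illustration, not claims about actual AI progress): if internal capability improves smoothly but the outside world only sees it at fixed release dates, each release looks like a bigger jump than the last.

```python
# Toy model: smooth, accelerating internal progress that is only observed
# at fixed release dates looks increasingly "discontinuous" from outside.

def internal_capability(month: float) -> float:
    # Hypothetical accelerating curve (quadratic in time); purely illustrative.
    return 1.0 + 0.1 * month + 0.02 * month ** 2

release_months = range(0, 37, 6)  # premise 1: cadence fixed at one release every 6 months

prev = internal_capability(0)
for m in release_months:
    level = internal_capability(m)  # premise 2: capabilities keep accelerating
    # Conclusion 3: the externally visible jump grows with each release.
    print(f"month {m:2d}: external level {level:6.2f} (jump since last release: {level - prev:5.2f})")
    prev = level
```

Each successive jump is larger, which is the sense in which external progress gets more "discontinuous" even while internal progress stays smooth.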

I gave some reasons why I don't think AI companies will want to externally deploy their best models (like less benefit from user growth), so maybe you disagree with that, or do you disagree with 1, 2, or 3?

b. This model also implies that it might be feasible for multiple actors, particularly isolated ones, to make an "intelligence explosion." I'd naively expect there to be a ton of competition in this area, and I'd expect that competition would greatly decrease the value of the marginal intelligence gain (i.e. cheap LLMs can do much of the work that expensive LLMs do). I'd naively expect that if there are any discontinuous gains to be made, they'll be made by the largest actors.

I do think that more than one actor (e.g. 3 actors) may be trying to trigger an IE at the same time, but I'm not sure why this is implied by my post. I think my model isn't especially sensitive to single vs multiple competing IEs, but it's possible you're seeing something I'm not. I don't really follow:

competition would greatly decrease the value of the marginal intelligence gain (i.e. cheap LLMs can do much of the work that expensive LLMs do)

Do you expect competition to increase dramatically from where we are right now? If not, then I think the current level of competition empirically does lead to people investing a lot in AI development, so I'm not sure I quite follow your line of reasoning.

Maybe it's obvious, but there are lots of situations where acting on an incorrect belief is less harmful than updating to the correct one too slowly. "Real" Bayesians who are maximising EV won't have this issue, as they'll be happy to make decisions on the basis of things they think are unlikely to be true.
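
As a toy EV calculation (every number here is mine, purely for intuition): acting on a claim you think is only 30% likely to be true can still maximise expected value if waiting for certainty is costly.

```python
# Toy expected-value comparison; all numbers are invented for illustration.
p_true = 0.30          # credence that the uncertain claim is true
payoff_if_true = 100   # value of acting now if it turns out true
payoff_if_false = -10  # cost of acting now if it turns out false
payoff_wait = 15       # value of waiting until you're (nearly) certain

ev_act_now = p_true * payoff_if_true + (1 - p_true) * payoff_if_false
print(f"EV(act now) = {ev_act_now:.1f}, EV(wait) = {payoff_wait}")
# EV(act now) = 23.0 > 15, so the EV-maximiser acts despite probably being wrong.
```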

A more realistic choice is between being the kind of person who doesn't respond quickly to changes in the world and being someone who responds quickly enough but is sometimes (maybe even often) wrong.
