I recently graduated with a master's in Information Science. Before switching degrees, I was a Ph.D. student in Planetary Science, where I used optimization models to physically characterize asteroids (including potentially hazardous ones).
Historically, my most time-intensive EA involvement has been organizing Tucson Effective Altruism — the EA university group at the University of Arizona. If you are a movement builder, let's get in touch!
Career-wise, I am broadly interested in capital generation, x/s-risk reduction, and earning-to-give for animal welfare. Always happy to chat about anything EA!
In-depth critiques are extremely time- and labor-intensive to write, so I sincerely appreciate your effort here! I am pessimistic, but I hope this post gets wider coverage.
While I don't understand some of the modeling-based critiques here from a cursory read, it was illuminating to learn about the basic model setup, the lack of error bars on parameters the model is especially sensitive to, and the assumptions that so tightly constrain the forecast's probability space. I am least sympathetic to the "they made guesstimates here and there" line of critique; forecasting is inherently squishy, so I do not think it is fair to hold it to the standard of physics.
Another critique, and one that I am quite sympathetic to, is that the METR trend specifically shows "there's an exponential trend with doubling time between ~2–12 months on automatically-scoreable, relatively clean + green-field software tasks from a few distributions" (source). METR is especially clear about the drawbacks of their task suite in their RE-bench paper.
I know this is somewhat of a meme in the Safety community at this point (and annoyingly intertwined with the stochastic parrots critique), but I think "are models generalizing?" remains an important and unresolved question. If LLMs are adopting poor learning heuristics and not generalizing, AI2027 is predicting a weaker kind of "superhuman" coder — one that can reliably solve software tasks with clean feedback loops but will struggle on open-ended tasks!
Anyway, thanks again for checking the models so thoroughly and for the write-up!
we may take action up to and including building new features into the forum’s UI, to help remind users of the guidelines.
Random idea: for new users, users below some karma threshold, and/or users who use the forum infrequently, Bulby pops up with a little banner containing a tl;dr of the voting guidelines. Especially good if the banner appears when a user hovers their cursor over the voting buttons.
There is going to be a Netflix series on SBF titled The Altruists, so EA will be back in the media. I don't know how EA will be portrayed in the show, but regardless, now is a great time to improve EA communications. More specifically, we should be a lot louder about historical and current EA wins — we just don't talk about them enough!
A snippet from Netflix's official announcement post:
Are you ready to learn about crypto?
Julia Garner (Ozark, The Fantastic Four: First Steps, Inventing Anna) and Anthony Boyle (House of Guinness, Say Nothing, Masters of the Air) are set to star in The Altruists, a new eight-episode limited series about Sam Bankman-Fried and Caroline Ellison.
Graham Moore (The Imitation Game, The Outfit) and Jacqueline Hoyt (The Underground Railroad, Dietland, Leftovers) will co-showrun and executive produce the series, which tells the story of Sam Bankman-Fried and Caroline Ellison, two hyper-smart, ambitious young idealists who tried to remake the global financial system in the blink of an eye — and then seduced, coaxed, and teased each other into stealing $8 billion.
The next existential catastrophe is likelier than not to wipe off all animal sentience from the planet
Intuitively seems very unlikely.
Thanks, great post!
A few follow-up questions and pushbacks:
How would the introduction of cultivated meat affect flexitarian dietary choices? Flexitarians eat a combination of animal- and plant-based meat. When cultivated meat becomes commercially viable, would flexitarians replace the former or the latter with cultivated meat?
If the answer is yes to any of these, I think that is a point in favor of cultivated meat. I expect cultural change to be a significant driver of reduced animal consumption, and this cultural change will only be possible if there is a stable class of consumers who normalize consumption of animal-free products.
To draw a historical parallel, when industrial chicken farming developed in the second half of the 20th century, people didn't eat less of other meats; they just ate chicken in addition.
Is this true? It seems that chicken did displace beef consumption by around 40% (assuming consumption ~ supply), or am I grossly misunderstanding the chart above?
Pete Buttigieg just published a short blogpost called We Are Still Underreacting on AI.
He seems to believe that AI will cause major changes in the next 3-5 years and thinks that AI poses "terrifying challenges," which makes me wonder whether he is privately sympathetic to the transformative AI hypothesis. If so, he might also take catastrophic risks from AI quite seriously. While he doesn't say so explicitly, at the end of his piece, he diplomatically affirms:
Even if Buttigieg doesn't win, he will probably find himself in a presidential cabinet and could be quite influential on AI policy. The international response to AI depends a lot on which side wins the 2028 election.