
Aaron Bergman

1634 karma · Joined Nov 2017 · Working (0-5 years) · Maryland, USA
aaronbergman.neocities.org/

Bio


I graduated from Georgetown University in December 2021 with degrees in economics and mathematics and a minor in philosophy. There, I founded and helped to lead Georgetown Effective Altruism. Over the last few years, I've interned at the Department of the Interior, the Federal Deposit Insurance Corporation, and Nonlinear, a newish longtermist EA org.

I'm now doing research thanks to an EA Funds grant, trying to answer hard, important EA-relevant questions. My first big project (in addition to everything listed here) was helping to produce this team Red Teaming post.

Blog: aaronbergman.net

How others can help me

  • Suggest action-relevant, tractable research ideas for me to pursue
  • Give me honest, constructive feedback on any of my work
  • Introduce me to someone I might like to know :)
  • Convince me of a better marginal use of small-dollar donations than giving to the Fish Welfare Initiative, from the perspective of a suffering-focused hedonic utilitarian.
  • Offer me a job if you think I'd be a good fit
  • Send me recommended books, podcasts, or blog posts that there's like a >25% chance a pretty-online-and-into-EA-since-2017 person like me hasn't consumed
    • Rule-of-thumb standard: maybe "at least as good/interesting/useful as a random 80k podcast episode"

How I can help others

  • Open to research/writing collaboration :)
  • Would be excited to work on impactful data science/analysis/visualization projects
  • Can help with writing and/or editing
  • Discuss topics I might have some knowledge of
    • like: math, economics, philosophy (esp. philosophy of mind and ethics), psychopharmacology (hobby interest), helping to run a university EA group, data science, interning at government agencies

Comments (135)

I made a custom GPT that is just normal, fully functional ChatGPT-4, but I will donate any revenue this generates[1] to effective charities. 

Presenting: Donation Printer 

  1. ^

    OpenAI is rolling out monetization for custom GPTs:

    Builders can earn based on GPT usage

    In Q1 we will launch a GPT builder revenue program. As a first step, US builders will be paid based on user engagement with their GPTs. We'll provide details on the criteria for payments as we get closer.

Yeah you're right, not sure what I missed on the first read

This doesn't obviously point in the direction of relatively and absolutely fewer small grants, though. Like naively it would shrink and/or shift the distribution to the left - not reshape it.

[This comment is no longer endorsed by its author]

Yeah, but my (implicit, should have made explicit lol) question is “why is this the case?”

Like at a high level it’s not obvious that animal welfare as a cause/field should make less use of smaller projects than the others. I can imagine structural explanations (e.g. an older field -> better-developed organizations), but they’d all be post hoc.

Interesting that the Animal Welfare Fund gives out so few small grants relative to the Infrastructure and Long-Term Future funds (Global Health and Development has only given out 20 grants, all very large, so it seems to be a fundamentally different type of thing(?)). Data here.

A few stats:

  • The 25th percentile AWF grant was $24,250, compared to $5,802 for Infrastructure and $7,700 for LTFF (and the comparison of medians looks basically the same).
  • AWF has made just nine grants of less than $10k, compared to 163 (Infrastructure) and 132 (LTFF).

Proportions under $threshold 

fund                               | prop_under_1k | prop_under_2500 | prop_under_5k | prop_under_10k
Animal Welfare Fund                | 0.000         | 0.004           | 0.012         | 0.036
EA Infrastructure Fund             | 0.020         | 0.086           | 0.194         | 0.359
Global Health and Development Fund | 0.000         | 0.000           | 0.000         | 0.000
Long-Term Future Fund              | 0.007         | 0.068           | 0.163         | 0.308

Grants under $threshold 

fund                               | n   | under_2500 | under_5k | under_10k | under_25k | under_50k
Animal Welfare Fund                | 250 | 1          | 3        | 9         | 243       | 248
EA Infrastructure Fund             | 454 | 39         | 88       | 163       | 440       | 453
Global Health and Development Fund | 20  | 0          | 0        | 0         | 5         | 7
Long-Term Future Fund              | 429 | 29         | 70       | 132       | 419       | 429

Summary stats (rounded)

fund                               | n   | median   | mean       | q1       | q3         | total
Animal Welfare Fund                | 250 | $50,000  | $62,188    | $24,250  | $76,000    | $15,546,957
EA Infrastructure Fund             | 454 | $15,319  | $41,331    | $5,802   | $45,000    | $18,764,097
Global Health and Development Fund | 20  | $900,000 | $1,257,005 | $297,925 | $1,481,630 | $25,140,099
Long-Term Future Fund              | 429 | $23,544  | $44,624    | $7,700   | $52,000    | $19,143,527
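
For anyone who wants to reproduce or extend these numbers, here's a minimal sketch of the kind of pandas script that could generate the tables above from the EA Funds grants data. The file name (ea_funds_grants.csv) and the column names (fund, amount) are my assumptions about how the data might be exported, not the actual schema of the spreadsheet linked above.

```python
# Minimal sketch (not my original script): recompute the threshold proportions,
# threshold counts, and per-fund summary stats from a grants export.
# Assumed schema: one row per grant, with columns "fund" and "amount" (USD).
import pandas as pd

grants = pd.read_csv("ea_funds_grants.csv")  # hypothetical export file
by_fund = grants.groupby("fund")["amount"]

thresholds = [1_000, 2_500, 5_000, 10_000, 25_000, 50_000]

# Proportion and count of each fund's grants under each threshold
props = pd.DataFrame({
    f"prop_under_{t}": by_fund.apply(lambda s, t=t: (s < t).mean())
    for t in thresholds
})
counts = pd.DataFrame({
    f"under_{t}": by_fund.apply(lambda s, t=t: (s < t).sum())
    for t in thresholds
})

# Summary stats per fund (n, median, mean, quartiles, total)
summary = by_fund.agg(
    n="count",
    median="median",
    mean="mean",
    q1=lambda s: s.quantile(0.25),
    q3=lambda s: s.quantile(0.75),
    total="sum",
)

print(props.round(3))
print(counts)
print(summary.round(0))
```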

In their most straightforward form (“foundation models”), language models are a technology which naturally scales to something in the vicinity of human-level (because it’s about emulating human outputs), not one that naturally shoots way past human-level performance

  • i.e. it is a mistake-in-principle to imagine projecting out the GPT-2—GPT-3—GPT-4 capability trend into the far-superhuman range

Surprised to see no pushback on this yet. I do not think this is true; I've come around to thinking that Eliezer is basically right that the limit of next-token prediction on human-generated text is superintelligence. Now, how this latent ability manifests is a hard question, but it's there to be used by the model for its own ends or elicited by humans for ours, or both.

Also worth adding (guessing this point has been made before) that non-human-generated text (e.g. regression outputs from a program) is in the training data, so merely predicting it gets you superhuman performance in some domains.

For others considering whether/where to donate: RP is my current best guess of "single best charity to donate to all things considered (on the margin - say up to $1M)."

FWIW I have a Manifold market for this (which is just one source of evidence - not something I purely defer to; also, I bet in the market, so grain of salt etc.).

Strongly, strongly, strongly agree. I was in the process of writing essentially this exact post, but am very glad someone else got to it first. The more I thought about it and researched, the more it seemed like convincingly making this case would probably be the most important thing I would ever have done. Kudos to you.

A few points to add:

  1. Under standard EA “on the margin” reasoning, this shouldn't really matter, but I analyzed OP's grants data and found that human GCR has been consistently funded 6-7x more than animal welfare (here's my tweet thread this is from).
  2. @Laura Duffy's recently published risk aversion analysis (for Rethink Priorities) basically does a lot of the heavy lifting here (bolding mine):

Spending on corporate cage-free campaigns for egg-laying hens is robustly[8] cost-effective under nearly all reasonable types and levels of risk aversion considered here. 

  1. Using welfare ranges based roughly on Rethink Priorities’ results, spending on corporate cage-free campaigns averts over an order of magnitude more suffering than the most robust global health and development intervention, Against Malaria Foundation. This result holds for almost any level of risk aversion and under any model of risk aversion.

I also want to emphasize this part, because it's the kind of serious engagement with suffering that EA still fails to do enough of:

I experienced "disabling"-level pain for a couple of hours, by choice and with the freedom to stop whenever I want. This was a horrible experience that made everything else seem to not matter at all.

A single laying hen experiences hundreds of hours of this level of pain during their lifespan, which lasts perhaps a year and a half - and there are as many laying hens alive at any one time as there are humans. How would I feel if every single human were experiencing hundreds of hours of disabling pain? 

A single broiler chicken experiences fifty hours of this level of pain during their lifespan, which lasts 4-6 weeks. There are 69 billion broilers slaughtered each year. That is so many hours of pain that if you divided those hours among humanity, each human would experience about 400 hours (2.5 weeks) of disabling pain every year. Can you imagine if instead of getting, say, your regular fortnight vacation from work or study, you experienced disabling-level pain for a whole 2.5 weeks? And if every human on the planet - me, you, my friends and family and colleagues and the people living in every single country - had that same experience every year? How hard would I work in order to avert suffering that urgent?

Every single one of those chickens are experiencing pain as awful and all-consuming as I did for tens or hundreds of hours, without choice or the freedom to stop. They are also experiencing often minutes of 'excruciating'-level pain, which is an intensity that I literally cannot imagine. Billions upon billions of animals. The numbers would be even more immense if you consider farmed fish, or farmed shrimp, or farmed insects, or wild animals.

If there were a political regime or law responsible for this level of pain - which indeed there is - how hard would I work to overturn it? Surely that would tower well above my other priorities (equality, democracy, freedom, self-expression, and so on), which seem trivial and even borderline ridiculous in comparison.

[On mobile; sorry for the formatting]

Given my quick read and especially the bit below, it seems like the title is at least a bit misleading.

Quote: “To be clear: this document is not a detailed vindication of any particular class of philanthropic interventions. For example, although we think that contractualism supports a sunnier view of helping the global poor than funding x-risk projects, contractualism does not, for all our argument implies, entail that many EA-funded global poverty interventions are morally preferable to all other options (some of which are probably high-risk, high-reward longshots).”

I think a reasonable person would conclude from the title “If Contractualism, Then AMF” essentially the opposite of this more nuanced clarification.

Perhaps it’s reasonable to infer that “Then AMF” really means “then the cluster of beliefs that leads GiveWell to strongly recommend AMF are indeed true (even if ex post it turns out that deworming or something was better)” but even this doesn’t seem to be what you are arguing (given the quote above).

LessWrong has a new feature/type of post called "Dialogues". I'm pretty excited to use it, and hope that if it seems usable, reader friendly, and generally good the EA Forum will eventually adopt it as well.
