Aaron Bergman

2711 karma · Working (0-5 years) · Washington, DC, USA
aaronbergman.neocities.org/

Bio

Participation: 4

I graduated from Georgetown University in December 2021 with majors in economics and mathematics and a minor in philosophy. There, I founded and helped lead Georgetown Effective Altruism. Over the last few years, I've interned at the Department of the Interior, the Federal Deposit Insurance Corporation, and Nonlinear.

Blog: aaronbergman.net

How others can help me

  • Give me honest, constructive feedback on any of my work
  • Introduce me to someone I might like to know :)
  • Offer me a job if you think I'd be a good fit
  • Send me recommended books, podcasts, or blog posts that there's a >25% chance a pretty-online-and-into-EA-since-2017 person like me hasn't consumed
    • Rule of thumb: maybe "at least as good/interesting/useful as a random 80k podcast episode"

How I can help others

  • Open to research/writing collaboration :)
  • Would be excited to work on impactful data science/analysis/visualization projects
  • Can help with writing and/or editing
  • Discuss topics I might have some knowledge of
    • like: math, economics, philosophy (esp. philosophy of mind and ethics), psychopharmacology (hobby interest), helping to run a university EA group, data science, interning at government agencies

Comments (176)

Topic contributions (1)

Assuming we're not radically mistaken about our own subjective experience, it really seems like pleasure is good for the being experiencing it (aside from any function or causal effects it may have).

In fact, pleasure without goodness in some sense seems like an incoherent concept. If a person were to insist that they felt pleasure but that it was in no sense a good thing, I would say they are mistaken about something, whether it be the nature of their own experience or the usual meaning of words.

Some people, I think, concede the above but want to object that lower-case goodness in the sense described is distinct from some capital-G objective Goodness out there in the world.

But sentient beings are a perfectly valid element of the world/universe, and so goodness for a given being simply implies goodness at large (all else equal, of course). There's no spooky metaphysical sense in which it's written into the stars; it is simply directly implied by the facts about what some things are like for some sentient beings.

I'd add that the above logic holds fine, and with even more rhetorical and ethical force, in the case of suffering.

Now if you accept the above, here's a simple thought experiment: consider two states of the world, identical in every way except in world A you're experiencing a terrible stomach ache and in world B you're not.

The previous argument implies that there is simply more badness in world A, full stop.

Much more to be said ofc but I'll leave it there :)

I'm continually unsure how best to label or characterize my beliefs. I recently switched from calling myself a moral realist (usually with some "but it's complicated" pasted on) to an "axiological realist."

I think some states of the world are objectively better than others, pleasure is inherently good and suffering is inherently bad, and that we can say things like "objectively it would be better to promote happiness over suffering."

But I'm not sure I see the basis for making some additional leap to genuine normativity; I don't think things like objective ordering imply some additional property which is strongly associated with phrases like "one must" or "one should". 

Of course the label doesn't matter a ton, but I'm curious both what people think of as the appropriate label for such a set of beliefs and what people think of it on the merits.

(For those interested, I recorded a podcast on this with @sarahhw and @AbsurdlyMax a while back)

This is incredibly good and generous of you, but I suspect that even on purely altruistic grounds it makes more sense to save the money for yourself and become slightly less risk-averse as a result?

I don’t have a good model or rigorous justification for this, just an intuition 

Was sent a resource in response to this quick take on effectively opposing Trump that at a glance seems promising enough to share on its own: 

From A short to-do list by the Substack Make Trump Lose Again:

  1. Friends in CA, AZ, or NM: Ask your governor to activate the national guard (...)
  2. Friends in NC: Check to see if your vote in the NC Supreme Court race is being challenged (...)
  3. Friends everywhere: Call your senators and tell them to vote no on HR 22 (...)
  4. Friends everywhere: If you’d like to receive personalized guidance on what opportunities are best suited to your skills or geographic area, we are excited to announce that our personalized recommendation form has been reopened! Fill it in here!

Bolding is mine to highlight the 80k-like opportunity. I'm abusing the block quote a bit by taking out most of the text, so check out the actual post if interested! 

There's also a volunteering opportunities page advertising "A short list of high-impact election opportunities, continuously updated" which links to a notion page that's currently down.

Is there a good list of the highest leverage things a random US citizen (probably in a blue state) can do to cause Trump to either be removed from office or seriously constrained in some way? Anyone care to brainstorm?

Like the safe state/swing state vote swapping thing during the election was brilliant - what analogues are there for the current moment, if any?

~30 second ask: Please help @80000_Hours figure out who to partner with by sharing your list of Youtube subscriptions via this survey

Unfortunately this only works well on desktop, so if you're on a phone, consider sending this to yourself for later. Thanks!

Sharing https://earec.net, semantic search for the EA + rationality ecosystem. Not fully up to date, sadly (it doesn't have the last month or so of content). The current version is basically a minimum viable product!

On the results page there is also an option to see EA Forum-only results, which allows you to sort by a weighted combination of karma and semantic similarity, thanks to the API!

Final feature to note is that there's an option to have gpt-4o-mini "manually" read through the summary of each article on the current screen of results, which will give better evaluations of relevance to some query (e.g. "sources I can use for a project on X") than semantic similarity alone.
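For anyone curious what a karma-plus-similarity sort could look like under the hood, here's a minimal sketch. The actual weighting earec.net uses isn't specified, so the log-scaling of karma, the normalization constant, and the `weight` parameter are all illustrative assumptions, not the site's real implementation.

```python
import math

def rank_results(results, weight=0.5):
    """Sort results by a weighted blend of karma and semantic similarity.

    `results` is a list of dicts with 'karma' (int) and 'similarity'
    (cosine similarity in [0, 1]); `weight` trades karma off against
    similarity. This is a hypothetical scheme, not earec.net's actual one.
    """
    def score(r):
        # Log-scale karma so a single very-high-karma post doesn't
        # swamp the similarity signal; 1000 karma maps to ~1.0.
        karma_score = math.log1p(max(r["karma"], 0)) / math.log1p(1000)
        return weight * karma_score + (1 - weight) * r["similarity"]

    return sorted(results, key=score, reverse=True)
```

With `weight=0.3`, a highly relevant low-karma post can still outrank a high-karma but off-topic one.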

Still kinda janky - as I said, it's a minimum viable product right now. Enjoy, and feedback is welcome!

Thanks to @Nathan Young for commissioning this! 

Christ, why isn’t OpenPhil taking any action, even making a comment or filing an amicus curiae?

I certainly hope there’s some legitimate process going on behind the scenes; this seems like an awfully good time to spend whatever social/political/economic/human capital OP leadership wants to say is the binding constraint.

And OP is an independent entity. If the main constraint is "our main funder doesn't want to pick a fight," well, so be it. I guess Good Ventures won't sue as a donor the way Musk is, but OP can still submit some sort of non-litigant comment. Naively, at least, that could weigh non-trivially on a judge/AG.

Reranking universities by representation in the EA survey per undergraduate student, which seems relevant to figuring out which community-building strategies are working (obviously plenty of confounders). Data from 1 minute of googling + LLMs, so take it with a grain of salt.

There does seem to be a moderate positive correlation here so nothing shocking IMO.

Same chart as above but by original order 
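The per-student reranking above boils down to dividing each school's survey count by its undergraduate enrollment. A minimal sketch, where the function name and the example numbers are made up for illustration (not the actual EA survey data):

```python
def rerank_per_student(survey_counts, undergrad_enrollment):
    """Rank universities by EA-survey respondents per undergraduate.

    Both arguments map university name -> count; schools missing
    enrollment data are skipped rather than guessed at.
    """
    rates = {
        uni: survey_counts[uni] / undergrad_enrollment[uni]
        for uni in survey_counts
        if uni in undergrad_enrollment and undergrad_enrollment[uni] > 0
    }
    # Highest respondents-per-student first.
    return sorted(rates.items(), key=lambda kv: kv[1], reverse=True)
```

This is why a small school with a handful of respondents can outrank a huge school with more respondents in absolute terms.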
