I’m working on impact markets – markets to trade nonexcludable goods. (My profile.)
I have a conversation menu and a Calendly for you to pick from!
If you’re also interested in less directly optimific things – such as climbing around and on top of boulders or amateurish musings on psychology – then you may enjoy some of the posts I don’t cross-post from my blog, Impartial Priorities.
Pronouns: Ideally she or they. I also still go by Denis and Telofy in various venues.
GoodX needs advisors/collaborators for marketing, as well as funding. The funding can be for our operations or for retro funding of other impactful projects on our impact markets. We're a PBC and seek SAFE investments over donations.
I’m happy to do calls, give feedback, or go bouldering together (also virtually). You can book me on Calendly.
I used the comment field in the form to note that a field in the form was marked as optional when it was actually mandatory. That comment got automatically published here, and out of context it made no sense whatsoever. I think it would've been clearer to not automatically transfer this form feedback here (some people might've even assumed that it's private feedback).
Thanks for writing all of this up in one place!
One of my gripes with the community has long been that maximization is core to EA, yet we're still really clueless about what it implies, and most of the community (outside RP, QURI, etc., and some researchers) seems to have given up on figuring it out.
I feel like we're like this one computer science professor I had, who seemed a bit senile and only taught the sort of things that hadn't lost relevance in the past 30 years because he hadn't kept up with anything that had happened since the '80s. He probably had good personal and neurological reasons for that, but we don't, right?
I haven't read any EA introductory materials in a few years, but I hope they contain articles about expected value maximization along with articles on how EV is usually largely unknowable due to cluelessness and often undefined due to Pasadena games; on how stochastic dominance is arguably a much better approach to prioritization, though Christian Tarsney is so far more or less the only one who has bothered to look into it; and on how there is perhaps a way forward to figure out what's the best thing to do if we funded some big world-modeling efforts based on software like Squiggle, though hardly anyone outside RP and QURI (and Convergence?) currently bothers to do anything about it. (I've dabbled a bit in these fields, but my personal fit doesn't seem to be great.)
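To make the stochastic dominance idea a bit more concrete, here's a minimal sketch (my own toy illustration with made-up distributions, not Tarsney's formalism) of checking first-order stochastic dominance between two interventions using Monte Carlo samples:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical outcome distributions for two interventions (all numbers made up).
a = rng.lognormal(mean=1.0, sigma=0.5, size=100_000)  # intervention A
b = rng.lognormal(mean=0.8, sigma=1.5, size=100_000)  # intervention B

def first_order_dominates(x, y, grid_size=1_000):
    """True if x first-order stochastically dominates y, i.e.
    P(X >= t) >= P(Y >= t) at every threshold t, strictly at some t."""
    thresholds = np.linspace(min(x.min(), y.min()), max(x.max(), y.max()), grid_size)
    surv_x = np.array([(x >= t).mean() for t in thresholds])  # empirical survival fn
    surv_y = np.array([(y >= t).mean() for t in thresholds])
    return bool(np.all(surv_x >= surv_y) and np.any(surv_x > surv_y))

print(first_order_dominates(a, b))  # likely False: higher EV doesn't imply dominance
print(first_order_dominates(b, a))  # likely False too: the options are incomparable
```

Dominance is only a partial order, so many pairs of options remain incomparable; but where it does hold, the verdict doesn't hinge on an exact (and possibly undefined) expected value.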
Maybe there's even a way to scalably adjust whatever recommendations this big modeling effort might yield for personal fit. Maybe there are some common archetypes/personas plus quizzes that tell people which ones they are closest to.
Arguably this can wait until this whole AI thing is under control (if that's even possible), but few people will want to work on AI safety, so maybe it doesn't have to wait?
My takeaway has been mostly that I don't have a clue and so will go with some sort of momentary best guess that gives me enough fulfillment, enjoyment, and safety. I've written more about it here.
That said, EA has had a great effect on my mental health.
I used to be a crying wreck because of all the suffering in the world. I spread myself thin trying to help everyone. I felt guilty about the majority of terrible things in the world that I was powerless to prevent. (Suicidal too, except that would've been self-defeating.)
Then EA came along and gave me an excuse to “pick my battles,” i.e. focus on a few things where I could make a big difference, taking into account my skills and temperament. Now, if someone went, “Hey, you should become a politician to prevent X and Y,” I could go, “No, I wouldn't be good at that and hate every second of it and it would come at a great cost to A, which I'm already doing.” EA, for the first time, allowed me to set boundaries.
EA also gave me an appreciation for the (perhaps, plausibly, who knows really) great absolute impact that I can have despite the minimal impact that I have (perhaps, plausibly, who knows really) relative to the totality of the suffering in the world. That made it much easier to find fulfillment.
Hiii! I found this list of “Crucial questions for longtermists” to be quite impressive. It is also listed as part of “A central directory for open research questions,” which is broader than your question.
I met Marisa at EAG London in 2019. We had approximately weekly calls afterwards during the lockdown that I greatly enjoyed. That and all the virtual events helped me connect with the rest of the EA world – probably more so than in-person events. Sadly, I missed one of our calls, which prompted me to set up a comprehensive reminder solution that I use to this day.
When we supported each other again in the context of some job applications a few years later, I learned that she had just survived a very difficult phase of her life. Then, as now, I wish I had known and had been able to support her in some fashion.
When you lose hope in humanity and x-risk reduction seems pointless, she’s the sort of existence proof that keeps you going.
I love this research! Thank you so much for doing it!
My gut reaction to the results is that it's odd that humans are so high up in terms of their capacity for welfare. Just as an uninformative prior, I would've expected us to be somewhere in the middle. Less confidently, I would've expected a similar number of orders of magnitude of deviation from the human baseline in either direction, within reason, e.g., ±~0.5 OOM.
Plus, we are humans, so there's a risk that we're biased in our own favor. It could simply be a bias stemming from our ability to empathize with other humans. But it could also be the case that there are countless more markers of sentience that humans don't have (but many other sentient animals do) and that we are prone to overlook.
Have you investigated what the sources of this effect might be? Any number of biases might be at work, as I mentioned, but perhaps our lives have become so comfy most of the time that we perceive slight problems (e.g., a disapproving gaze) very strongly, so that when something really bad does happen, it feels enormously bad?
(I've in the past explicitly assumed that most beings with a few (million) neurons have a roughly human capacity for welfare – not because I thought that was likely but because I couldn't tell in which direction it was off. Do you maybe already have a defense of the results for people like me?)
In any case, I'll probably just adopt your results into my thinking now. I don't expect them to change my priorities much given all the other factors.
Thank you again! <3
Update: When I mentioned this to a friend on a hike, I came up with two ways in which the criteria might be amended to include nonhuman ones:
1. In many cases, we probably have a theory for why a particular behavior or feature is likely to be indicative of conscious experience. Understanding this mechanism, we can look for other systems that might implement the same mechanism, sort of like how the eyes of humans, eagles, and flies are very different, but we infer that they probably all serve the purpose of vision.
2. Maybe a number of animals that show certain known criteria for consciousness also share some other features with suspicious consistency. One could then investigate whether these features are also indicative of consciousness and whether there are other animals that have these new features at the expense of the older, known ones. (The analysis could cluster features that usually co-occur so as not to overweight causally related features in cases where many of them are observable; see the sketch below.)
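As a toy sketch of that last parenthetical (entirely hypothetical features, animals, and data of my own invention), one could cluster binary sentience markers by how often they co-occur across species and then down-weight the markers within each cluster:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# Hypothetical presence/absence of candidate sentience markers.
# Rows: animals, columns: features. All values are made up for illustration.
features = ["nociceptors", "opioid receptors", "wound tending", "play", "tool use"]
animals = ["pig", "chicken", "octopus", "bee", "zebrafish"]
X = np.array([
    [1, 1, 1, 1, 0],
    [1, 1, 1, 1, 0],
    [1, 1, 1, 1, 1],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 0, 0],
])

# Distance between features: Jaccard dissimilarity across animals, so features
# that almost always co-occur end up close together.
dist = pdist(X.T.astype(bool), metric="jaccard")
clusters = fcluster(linkage(dist, method="average"), t=0.5, criterion="distance")

# Down-weight each feature by the size of its cluster, so a block of causally
# related markers counts roughly as one line of evidence, not several.
weights = 1.0 / np.bincount(clusters)[clusters]

scores = X @ weights  # per-animal evidence score
for animal, score in zip(animals, scores):
    print(f"{animal}: {score:.2f}")
```

A real analysis would of course need graded rather than binary features and far more species, but the weighting idea carries over.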
We have sympathies towards both movements, and consider ourselves to take the middle path. We race forward and accelerate as quickly as possible while mentioning safety.
Mentioning safety is a waste of resources that you could direct toward attaching propulsion to asteroids to get them here faster.
In fact, asteroids will inevitably hit Earth sooner or later, and if they kill humanity, clearly they are superior to humanity. The true masters of our future lightcone are the asteroids. That which can be destroyed by asteroids ought to be destroyed by asteroids.
True progress is in speeding the inevitable. Resistance is futile.
This post is also a great info hazard. It risks causing impostors with sub-146 IQs (2009 LW survey) to feel adequate!
My current practical ethics
The question often comes up of how we should make decisions under epistemic uncertainty and normative diversity of opinion. Since I need to make such decisions every day, I've had to develop a personal system, however inchoate, to assist me.
A concrete (or granite) pyramid
My personal system can be thought of as a pyramid.
The ground floor
The ground floor of principles and heuristics is really the most interesting part for anyone who has to act in the world, so I won't further explain the top two floors.
The principles and heuristics should be expected to be messy. That is, I think, because they are by necessity the result of an intersubjective process of negotiation and moral trade (positive-sum compromise) with all the other agents and their preferences. (This should probably include acausal moral trades like Evidential Cooperation in Large Worlds.)
They should also be expected to be messy because they have to satisfy all sorts of awkward criteria:
Three types of freedom
But really, that still leaves us a lot of freedom (for better or worse):
These suggest a particular stance toward other activists:
Very few examples
In my experience, principles and heuristics are best identified by chatting with friends and generalizing from their various intuitions.
Various non-consequentialist ethical theories can come in handy here for generating further useful principles and heuristics. That is probably because they are attempts at generalizing from the intuitions of certain authors, which puts them almost on par (to the extent that these authors are relatable to you) with generalizations from the intuitions of your friends.
(If you find my writing style hard to read, you can ask Claude to rephrase the message into a style that works for you.)