I’m working on impact markets – markets to trade nonexcludable goods. (My profile.)
I have a conversation menu and a Calendly for you to pick from!
If you’re also interested in less directly optimific things – such as climbing around and on top of boulders or amateurish musings on psychology – then you may enjoy some of the posts I don’t cross-post from my blog, Impartial Priorities.
Pronouns: Ideally she or they. I also still go by Denis and Telofy in various venues.
GoodX needs: advisors/collaborators for marketing, and funding. The funding can be for our operation or for retro funding of other impactful projects on our impact markets. We're a PBC and seek SAFE investments over donations.
I’m happy to do calls, give feedback, or go bouldering together, also virtually. You can book me on Calendly.
They model that, and after, I think, 1661 iterations of the human-AI trade game, the humans have accumulated enough wealth that it would've been self-defeating for them to defect like that. I think it's still a Nash equilibrium but one where the humans give up perfectly good gains from trade. (Plus blockchain tech can make it hard to confiscate property.)
What has been your personal take-away from this line of thinking? This “standard case” is far from my own thinking, though I agree with the conclusion. Is it also far from your own thinking?
My take:
So what I'm afraid will happen is that an artificial RL agent will seek out resources first elsewhere in our solar system and then elsewhere in the galaxy (something that would be difficult for bio-humans), will run into communication delays due to the lightspeed limit, and will hence split into countless copies, each potentially capable of suffering. Soon they'll be separated so far that even updates on what it means to be value-aligned would travel for a long time, so there'll be moral “drift” in countless directions.
What I would find reassuring is:
Human extinction also seems bad on the basis that it contradicts the self-preservation drive that many/most humans have. Peaceful disenfranchisement may be less concerning depending on the details. But at the moment it seems random where we're headed in the coming years because hardly anyone in power is trying to steer these things in any sensible way. Again more time would be helpful.
Basic rights for AIs (and standing in court!) could also provide them with a legal recourse where they currently have to resort to threats, making the transition more likely to go smoothly, like you argue in another post. Currently we're nowhere close to having those. Again more time would be helpful.
I used the comment field in the form to note that a field in the form was marked as optional when it was actually mandatory. That comment got automatically published here, and out of context it made no sense whatsoever. I think it would've been clearer not to automatically transfer this form feedback here (some people might've even assumed it was private feedback).
Thanks for writing all of this up in one place!
One of my gripes with the community has long been that maximization is core to EA, yet we're still really clueless about what it implies, and most of the community (outside RP, QURI, etc., and some researchers) seems to have given up on figuring it out.
I feel like we're similar to this one computer science professor I had who seemed a bit senile and only taught the sort of things that haven't lost relevance in the last 30 years because he hadn't kept up with anything that happened since the 80s. He probably had good personal and neurological reasons for that, but we don't, right?
I haven't read any EA introductory materials in a few years, but I hope they contain articles about expected value maximization along with articles on how EV is usually largely unknowable due to cluelessness and often undefined due to Pasadena games; on how stochastic dominance is arguably a much better approach to prioritization, though Christian Tarsney is so far more or less the only one who has bothered to look into it; and on how there is perhaps a way forward to figure out what's the best thing to do if we funded some big world-modeling efforts based on software like Squiggle, though hardly anyone outside RP and QURI (and Convergence?) currently bothers to do anything about it. (I've dabbled a bit in these fields, but my personal fit doesn't seem to be great.)
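To make the stochastic dominance idea concrete: one intervention first-order stochastically dominates another if, at every outcome threshold, it is at least as likely to do that well or better, and strictly more likely somewhere. Here's a toy sketch of that check (my own illustration with made-up outcome samples, not anything from the materials mentioned above):

```python
def first_order_dominates(a, b, thresholds):
    """True if outcome samples `a` first-order stochastically dominate `b`:
    a's empirical CDF lies at or below b's everywhere (more probability mass
    on high outcomes), and strictly below it at some threshold."""
    def cdf(xs, t):
        # Empirical probability of an outcome at or below t.
        return sum(1 for x in xs if x <= t) / len(xs)

    pairs = [(cdf(a, t), cdf(b, t)) for t in thresholds]
    return all(ca <= cb for ca, cb in pairs) and any(ca < cb for ca, cb in pairs)

# Hypothetical utility samples for two interventions.
intervention_a = [2, 3, 4]
intervention_b = [1, 2, 3]
thresholds = [1, 2, 3, 4]

print(first_order_dominates(intervention_a, intervention_b, thresholds))  # True
print(first_order_dominates(intervention_b, intervention_a, thresholds))  # False
```

The appeal for prioritization is that a dominance verdict holds regardless of one's risk attitude, so it sidesteps some of the EV-undefinedness problems, though of course most real comparisons won't be so clear-cut.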
Maybe there's even a way to scalably adjust for personal fit whatever recommendations this big model effort might yield. Maybe there are some common archetypes/personas plus quizzes that tell people which ones they are closest to.
Arguably this can wait until this whole AI thing is under control (if that's even possible), but few people will want to work on AI safety, so maybe it doesn't have to wait?
My takeaway has been mostly that I don't have a clue and so will go with whatever momentary best guess gives me enough fulfillment, enjoyment, and safety. I've written more about it here.
That said, EA has had a great effect on my mental health.
I used to be a crying wreck because of all the suffering in the world. I spread myself thin trying to help everyone. I felt guilty about the majority of terrible things in the world that I was powerless to prevent. (Suicidal too, except that would've been self-defeating.)
Then EA came along and gave me an excuse to “pick my battles,” i.e. focus on a few things where I could make a big difference, taking into account my skills and temperament. Now, if someone went, “Hey, you should become a politician to prevent X and Y,” I could go, “No, I wouldn't be good at that and hate every second of it and it would come at a great cost to A, which I'm already doing.” EA, for the first time, allowed me to set boundaries.
EA also gave me an appreciation for the (perhaps, plausibly, who knows really) great absolute impact that I can have despite the minimal impact that I have (perhaps, plausibly, who knows really) relative to the totality of the suffering in the world. That made it much easier to find fulfillment.
Hiii! I found this list of “Crucial questions for longtermists” to be quite impressive. It is also listed as part of “A central directory for open research questions,” which is broader than your question.
My current practical ethics
The question often comes up of how we should make decisions under epistemic uncertainty and normative diversity of opinion. Since I need to make such decisions every day, I had to develop a personal system, however inchoate, to assist me.
A concrete (or granite) pyramid
My personal system can be thought of as a pyramid.
The ground floor
The ground floor of principles and heuristics is really the most interesting part for anyone who has to act in the world, so I won't further explain the top two floors.
The principles and heuristics should be expected to be messy. That is, I think, because they are by necessity the result of an intersubjective process of negotiation and moral trade (positive-sum compromise) with all the other agents and their preferences. (This should probably include acausal moral trades like Evidential Cooperation in Large Worlds.)
They should also be expected to be messy because these principles and heuristics have to satisfy all sorts of awkward criteria:
Three types of freedom
But that still leaves us a lot of freedom (for better or worse):
These suggest a particular stance toward other activists:
Very few examples
In my experience, principles and heuristics are best identified by chatting with friends and generalizing from their various intuitions.
Various non-consequentialist ethical theories can come in handy here to generate further useful principles and heuristics. That is probably because they are attempts at generalizing from the intuitions of certain authors, which puts them almost on par (to the extent that these authors are relatable to you) with generalizations from the intuitions of your friends.
(If you find my writing style hard to read, you can ask Claude to rephrase the message into a style that works for you.)