Bio

I’m working on impact markets – markets to trade nonexcludable goods. (My profile.)

I have a conversation menu and a Calendly for you to pick from! 

If you’re also interested in less directly optimific things – such as climbing around and on top of boulders or amateurish musings on psychology – then you may enjoy some of the posts I don’t cross-post from my blog, Impartial Priorities.

Pronouns: Ideally she or they. I also still go by Denis and Telofy in various venues.

How others can help me

GoodX needs advisors/collaborators for marketing, and funding. The funding can go toward our operations or toward retro funding of other impactful projects on our impact markets. We're a public benefit corporation (PBC) and seek SAFE investments over donations.

How I can help others

I’m happy to do calls, give feedback, or go bouldering together (also virtually). You can book me on Calendly.

Please check out my Conversation Menu!

Sequences

Impact Markets
Researchers Answering Questions

Comments

My current practical ethics

The question often comes up of how we should make decisions under epistemic uncertainty and normative diversity of opinion. Since I need to make such decisions every day, I've had to develop a personal system, however inchoate, to assist me.

A concrete (or granite) pyramid

My personal system can be thought of as a pyramid.

  1. At the top sits some sort of measurement of success. It's highly abstract and impractical. Let's call it the axiology. This is really a collection of all the axiologies I relate to, including ones that track the amount of frustrated preferences and suffering across our world history. It also deals with hairy questions such as how to weigh Everett branches morally, as well as infinite ethics.
  2. Below that sits a kind of mission statement. Let's call it the ethical theory. It's just as abstract, but it is opinionated about the direction in which to push our world history. For example, it may desire a reduction in suffering, but for others this floor needn't be consequentialist in flavor.
  3. Both of these abstract floors of the pyramid are held up by a mess of principles and heuristics at the ground-floor level that guide the actual implementation.

The ground floor

The ground floor of principles and heuristics is really the most interesting part for anyone who has to act in the world, so I won't further explain the top two floors. 

The principles and heuristics should be expected to be messy. That is, I think, because they are by necessity the result of an intersubjective process of negotiation and moral trade (positive-sum compromise) with all the other agents and their preferences. (This should probably include acausal moral trades like Evidential Cooperation in Large Worlds.)

They should also be expected to be messy because these principles and heuristics have to satisfy all sorts of awkward criteria:

  1. They have to inspire cooperation or at least not generate overwhelming opposition.
  2. They have to be easily communicable so people at least don't misunderstand what you're trying to achieve and call the police on you. Ideally so people will understand your goal well enough that they want to join you.
  3. They have to be rapidly actionable, sometimes for split-second decisions.
  4. They have to be viable under imperfect information.
  5. They have to be psychologically sustainable for a lifetime.
  6. They have to avoid violating laws.
  7. And many more.

Three types of freedom

But that still leaves us a lot of freedom (for better or worse):

  1. There are countless things that we can do that are highly impactful and hardly violate anyone's preferences or expectations.
  2. There are also plenty of things that don't violate any preferences or expectations once we get to explain them.
  3. Finally, there are many opportunities for positive-sum moral trade.

These suggest a particular stance toward other activists:

  1. If someone is trying to achieve the same thing you're trying to achieve, maybe you can collaborate.
  2. If someone is trying to achieve something other than what you're trying to achieve, but you think their goals are valuable, don't stand in their way. In particular, it may sometimes feel like doing nothing (to further or hinder their cause) is a form of “not standing in their way.” But if your peers are actually collaborating with them to some extent, doing nothing (or collaborating less) can cause others to also reduce their collaboration and can prevent key threshold effects from taking hold. So the truly neutral position is to figure out how much you need to collaborate toward the valuable goal so that it would not have been achieved any sooner without you. This is usually very cheap to do and has a chance of getting runaway threshold effects rolling.
  3. If someone is trying to achieve something that you consider neutral, the above may still apply to some extent because perhaps you can still be friends. And for reasons of Evidential Cooperation in Large Worlds. (Maybe you'll find that their (to you) neutral thing is easy to achieve here and that other agents like them will collaborate back elsewhere where your goal is easy to achieve.)
  4. Finally, if someone is trying to achieve something that you disapprove of… Well, that's not my metier, temperamentally, but this is where compromise can generate gains from moral trade.

Very few examples

In my experience, principles and heuristics are best identified by chatting with friends and generalizing from their various intuitions.

  1. Charitable donations are total anarchy. Mostly, you can just donate wherever the fluff you want, and (unless you're Open Phil) no one will throw stones through your windows in retaliation. You can just optimize directly for your goals – except that Evidential Cooperation in Large Worlds will still make strong recommendations here, though what they are is still a bit underexplored.
  2. Even if you're not an animal welfare activist yourself, you're still well-advised to cooperate with behavior change to avert animal suffering to the extent expected by your peers. (And certainly to avoid inventing phony reasons to excuse your violation of these expectations. Those might be even more detrimental to moral progress and the rationality waterline.)
  3. If you want to spend time with someone but they behave outrageously unempathetically toward you or someone else (e.g., say something like “Your suffering is nothing compared to the suffering of X” to their face), it may be better to cut all ties with them, even though, strictly speaking, this does not imply that no positive-sum trade is possible with them.
  4. Trying to systematically put people in powerful positions can arouse suspicion and actually make it harder to put people in powerful positions. Trying to systematically put people into the sorts of positions they find fulfilling might put as many people in powerful positions and make their lives easier too. (Or training highly conscientious people in how to dare to accept responsibility so it's not just those who don't care who self-select into powerful positions.)
  5. And hundreds more…

Various non-consequentialist ethical theories can come in handy here to generate further useful principles and heuristics. That is probably because they are attempts at generalizing from the intuitions of certain authors, which puts them almost on par (to the extent that these authors are relatable to you) with generalizations from the intuitions of your friends.

(If you find my writing style hard to read, you can ask Claude to rephrase the message into a style that works for you.)

Hiii! I found this list of “Crucial questions for longtermists” to be quite impressive. It is also listed as part of “A central directory for open research questions,” which is broader than your question.

I met Marisa at EAG London in 2019. We had approximately weekly calls afterwards during the lockdown that I greatly enjoyed. That and all the virtual events helped me connect with the rest of the EA world – probably more so than in-person events. Sadly, I missed one of our calls, which prompted me to set up a comprehensive reminder solution that I use to this day. 

When we supported each other again in the context of some job applications a few years later, I learned that she had just survived a very difficult phase of her life. Then, as now, I wish I had known and had been able to support her in some fashion.

When you lose hope in humanity and x-risk reduction seems pointless, she’s the sort of existence proof that keeps you going.

I love this research! Thank you so much for doing it!

My gut reaction to the results is that it's odd that humans are so high up in terms of their capacity for welfare. Just on an uninformative prior, I would've expected us to be somewhere in the middle. Less confidently, I would've expected deviations from the human baseline to span a similar number of orders of magnitude in either direction, within reason, e.g., roughly ±0.5 OOM.

Plus, we are humans, so there's a risk that we're biased in our own favor. It could simply be a bias from our ability to empathize with other humans. But it could also be the case that there are countless more markers of sentience that humans don't have (but many other sentient animals do) that we are prone to overlook.

Have you investigated what the sources of this effect might be? There might be any number of biases at work, as I mentioned, but perhaps our lives have become so comfy most of the time that we perceive slight problems very strongly (e.g., a disapproving gaze). If something really bad then happens, it feels enormously bad?

(I've in the past explicitly assumed that most beings with a few (million) neurons have a roughly human capacity for welfare – not because I thought that was likely but because I couldn't tell in which direction it was off. Do you maybe already have a defense of the results for people like me?)

In any case, I'll probably just adopt your results into my thinking now. I don't expect them to change my priorities much given all the other factors.

Thank you again! <3

Update: When I mentioned this to a friend on a hike, I came up with two ways in which the criteria might be amended to include nonhuman ones: (1) In many cases, we probably have a theory for why a particular behavior or feature is likely to be indicative of conscious experience. Understanding this mechanism, we can look for other systems that might implement the same mechanism, sort of like how the eyes of humans, eagles, and flies are very different but we infer that they are probably all for the purpose of vision. (2) Maybe a number of animals that show certain known criteria for consciousness also suspiciously consistently share some other features. One could then investigate whether these features are also indicative of consciousness and whether there are other animals that have these new features at the expense of the older, known ones. (The analysis could cluster features that usually co-occur so as not to overweight causally related features in cases where many of them are observable; see the sketch below.)
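To make the clustering idea in (2) concrete, here is a minimal sketch – with entirely made-up data, hypothetical feature names, and an arbitrary cutoff, not anything from the original study – of grouping candidate markers by how often they co-occur across species and then downweighting each marker by the size of its cluster:

```python
# Sketch: cluster co-occurring consciousness markers so that a bundle of
# causally related features counts roughly once instead of many times.
# All data below is invented for illustration.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# Rows: species; columns: candidate markers (1 = observed).
# Hypothetical markers: nociceptors, opioid receptors, play, mirror test.
X = np.array([
    [1, 1, 1, 1],  # e.g., a great ape
    [1, 1, 1, 0],  # e.g., a corvid
    [1, 1, 0, 0],  # e.g., a fish
    [1, 0, 0, 0],  # e.g., an insect
], dtype=bool)

# Jaccard distance between marker columns: markers that co-occur across
# species end up close together.
dist = pdist(X.T, metric="jaccard")

# Average-linkage hierarchical clustering, cut at an (arbitrary) threshold.
labels = fcluster(linkage(dist, method="average"), t=0.5, criterion="distance")

# Weight each marker by 1 / (size of its cluster) so a cluster of related
# markers contributes about as much as one independent marker.
weights = np.array([1.0 / np.sum(labels == c) for c in labels])
print(labels)   # markers sharing a label form one co-occurrence cluster
print(weights)  # e.g., three correlated markers each get weight 1/3
```

Jaccard distance is a natural choice for binary presence/absence data, but the clustering threshold here is arbitrary and would need sensitivity checks in any real analysis.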

Only half a person per sandal, I think!

Even scandal-prone individuals can't survive in a vacuum. (You may be thinking of sandals, not scandals?)

We have sympathies towards both movements, and consider ourselves to take the middle path. We race forward and accelerate as quickly as possible while mentioning safety.

Mentioning safety is a waste of resources that you could direct toward attaching propulsion to asteroids to get them here faster.

In fact, asteroids will inevitably hit Earth sooner or later, and if they kill humanity, clearly they are superior to humanity. The true masters of our future lightcone are the asteroids. That which can be destroyed by asteroids ought to be destroyed by asteroids.

True progress is in speeding the inevitable. Resistance is futile.

This post is also a great info hazard. It risks causing impostors with sub-146 IQs (2009 LW survey) to feel adequate!

That's a good point. Time discounting (a “time premium” for the past) has also made me very concerned about the welfare of Boltzmann brains in the first moments after the Big Bang. It might seem intractable to still improve their welfare, but if there is any nonzero chance, the EV will be overwhelming!

But I also see the value in longtermism because if these Boltzmann brains had positive welfare, it'll be even more phenomenally positive from the vantage points of our descendants millions of years from now!

There is a Swiss canton called Appenzell Innerrhoden (AI). Maybe we can hide there and trick the AI into thinking it has already invaded it?
