Benton 🔹

Software Engineer
92 karma · Joined · Working (0-5 years)

Posts: 1


Comments: 6

The current administration is definitely not predisposed to cooperate with China, Russia, Iran, or North Korea. The US started a trade war with the rest of the world. If anything, this is the least cooperative administration toward China that the US has seen in recent memory.

It seems plausible to me that protecting liberal democracy in America is the most important issue. If America falls to authoritarian rule, what hope is there of international cooperation on existential issues like AI safety, pandemic risk, etc.? But, probably like many EAs, I worry that this is not a very tractable issue. Maybe it would be a good idea to read some history and learn how authoritarian regimes can be combated.

I recently (May 2024) graduated with two bachelor's degrees, in computer science and philosophy. Currently, I work as a software engineer for a utility company in my home state of Arkansas. I would like to do what I can to build a career that does good effectively. However, as someone born and raised in Arkansas, with all my family and friends here, I'm not really considering moving anytime soon. On the 80,000 Hours job board, most of the positions prefer relocation, at least for openings at my experience level (even the ones listed as remote). How can someone working in middle America contribute to effective causes without relocating?

Yes, I think consequences are very important, but I am not a consequentialist. Consequentialists claim that only consequences matter, morally speaking. I disagree. I think things like virtue, autonomy, justice, fidelity, and so on also matter, in addition to consequences. 

Concerning the case against EA: I was a moral antirealist for a while, and since I thought there were no moral truths, I concluded that we are not obligated to donate to charity, pick a highly impactful career, etc. I also thought that even if there were objective moral truths, the true moral theory would certainly not be utilitarianism (given all the counterexamples, such as the utility monster and the experience machine). I mistakenly thought this would completely disqualify Peter Singer's pond analogy/argument.

My journey in three steps:

  1. About a year ago, I read Michael Huemer's Knowledge, Reality, and Value, then his Ethical Intuitionism, since his ethical arguments sparked my curiosity. This convinced me of moral realism, specifically moral intuitionism: roughly, the metaethical view that we come to know moral truths through moral intuition.
  2. Using my moral intuition, the case against utilitarianism (and consequentialism generally) seems very strong. There are cases (the utility monster, the experience machine, the sheriff who sacrifices one innocent person to save the town, etc.) against which I have very strong moral intuitions. Some kind of deontology (such as Ross' theory of prima facie duties) makes much more sense.
  3. Revisiting Peter Singer's pond analogy/argument made me realize that Singer does not have utilitarianism or even consequentialism as a premise. The idea that one ought to prevent suffering without significant sacrifice is one that any plausible moral view will accept, consequentialist or not. For example, the principle of beneficence is one of Ross' prima facie duties. And that principle is all one needs to agree with for effective altruism to get off the ground, so to speak.

Hello there, my name is Benton. I have a bachelor's degree in philosophy (and one in computer science), which is where I first heard about effective altruism (I believe it was in an ethics class). I initially rejected it, as I believed it presupposed two claims that I was very skeptical of: moral realism and utilitarianism. It took several years for me to come back around to the movement. Through reading the work of Michael Huemer and others, I became convinced of moral realism, and I also realized that the arguments of Peter Singer and others apply even to non-utilitarians. All it takes is the recognition that preventing unnecessary suffering is a good thing (and probably obligatory). And that brings me here.

I don't actively work on any projects; however, I am split between four priorities: global health and wellbeing, animal welfare, AI safety (including x-risk and s-risk), and politics/systemic change. I am not a consequentialist, as I think which causes to prioritize depends on more than the expected value of a given amount of time or money put into a cause.

I generally have the intuition that we should end extreme poverty before prioritizing other issues (yes, I think I am speciesist). However, factory farming is so abhorrent that I can't help but think it should also be a priority, especially since ending extreme poverty may lead to an increase in the consumption of animal products from factory farms. And AI safety is not only important for longtermist reasons; it may be the most important neartermist cause if AGI is only a few years away, as many experts claim. That's why I am split between those first three issues.

But I really want more research and focus on politics and systemic change within the EA community. As far as I can tell, no research is being done on alternative economic systems (market socialism, participatory socialism, etc.) within the community. One could argue that implementing an alternative system is not very tractable; however, one could also argue that the current economic system is what gives rise to the other three problems in the first place, so changing it could be the most effective solution to many issues even though success is improbable.

Fun facts about me:

I love philosophy and do it in my spare time; I have a Substack that I may or may not keep posting to: https://thepurpleturtle.substack.com/

I work as a software engineer.

As I said before, I am very much not a utilitarian (though I think consequences are very important).

I recently decided to go vegetarian.

I am considering a trial pledge of 5% of my income.