
I have not found any Effective Altruist literature on free will debates and their implications, which surprised me, as it seems a topic of potentially great moral importance.  Can anyone point me to existing work?

If free will doesn't exist, does that ruin/render void the EA endeavour?  If so, are most EAs libertarians about free will?

In light of work such as Sam Harris's dismantling of free will (https://samharris.org/the-illusion-of-free-will/), which I find compelling, and given the ought-implies-can principle, can morality be salvaged?  E.g., how could I 'ought' to choose an impactful career if my actions are all predetermined?

4 Answers

If skepticism about free will renders the EA endeavor void, then wouldn't it also render any action-guiding principles void (including principles about what's best to do out of self-interest)? In which case, it seems odd to single out its consequences for EA.

You sometimes see some (implicit) moving between "we did this good thing, but there's a sense in which we can't take credit, because it was determined before we chose to do it" and "we did this good thing, but there's a sense in which we can't take credit, because it would have happened whether or not we chose to do it", where the latter can be untrue even if the former is always true. The former doesn't imply anything about what you should have done instead, while the latter does, but it has nothing to do with skepticism about free will. So even if determinism undermines certain kinds of "you ought to x" claims, it doesn't imply "you ought not to bother doing x"; it does not justify resignation. There is a parallel (though maybe more problematic) discussion about what to do about the possibility of nihilism.

Anyway, even skeptics about free will can agree that, ex post, it was good that the good thing happened (compared to it not happening), and they can agree that certain choices were instrumental in it happening (if the choices hadn't been made, it wouldn't have happened). Looking forward, the skeptic could also understand "you ought to x" claims as saying "the world where you do x will be better than the world where you don't, and I don't have enough information to know which world we're in". They also don't need to deny that people are and will continue to be sensitive to "ought" claims, in the sense that explaining to people why they ought to do something can make them more likely to do it compared to the world where you don't explain why. Basically, counterfactual talk can still make sense for determinists. And all this seems like more than enough for anything worth caring about; I don't think any part of EA requires our choices to be undetermined or freely made in some especially deep way.

Some things you might be interested in reading:

I think maybe this free will stuff does matter in a more practical way when it comes to prison reform and punishment, since (plausibly) support for 'retributive' punishment over rehabilitation comes from attitudes about free will and responsibility that are either incoherent or wrong in an influenceable way.

Thanks finm, I agree: EA is far from uniquely vulnerable to determinism; as you say, all action-guiding principles would be affected. I was just contextualising for the forum.

Yes, I think that's a useful distinction. Harris labels these 'determinism' and 'fatalism' respectively, and so still believes our decisions matter in the sense that they will affect the value of future world-states.

That could work to reformulate the meaning of ought statements, though I still feel something important is lost from ethics if determinism is true.

Will have a look at the resources :)

According to the PhilPapers survey, over half of philosophers favour a compatibilist approach to free will - i.e. that free will is compatible with determinism.

I also recommend the LessWrong writing on the subject.

Thanks, I am quite sceptical of compatibilism as a work-around, as it still seems unreasonable to say I ought to have done something I metaphysically could not have done.  But yes, given epistemic modesty, I can't dismiss it entirely when so many professional philosophers support it.  I'll have a look through LessWrong.

“If free will doesn’t exist, does that ruin/render void the EA endeavor?”

 

Well, what does it matter if free will exists? Even if free will doesn’t exist, my life circumstances have led to me becoming invested in improving the world by engaging in altruism. My brain’s reward circuitry is still aligned with doing the most good that I can do for as long as I am able. I think for most of us who identify as altruists, the tendency to help those who need help is not tied to the idea of free will. I suppose that there are people who would take the absence of free will to be a pass to stray from altruism, but I doubt you’ll find them in the EA community.

 

Personally, losing my belief in free will has made a big, big difference in how I see the world. Because I believe free will doesn't exist, I no longer judge those who are on the bottom rungs of our society. I have a deeper compassion for people who have addictions, who have committed crimes, who are not the easiest to care about. I have more patience with those who have differing opinions, even with flat-earthers and religious fundamentalists.

 

Shedding my belief in free will also helped me be kinder to myself. I am more patient whenever I face challenges arising from my shortcomings. I forgive myself for my failures and try to be humble even in my triumphs. My prime motivation to make the world a better place is no longer guilt but rather a genuine pleasure in spreading kindness. 

 

In so many different ways, not believing in free will has made me a better altruist and a kinder friend to myself. I hope questioning free will does the same for you!

Thanks for that personal perspective, good to hear.  For me too, I think doubting free will is beneficial to my perception of others; as you say, it makes judgementalism impossible.  I have yet to reconcile myself emotionally to my own lack of freedom, though, and perhaps never will.

Yes, perhaps some people will be demotivated by disbelieving in free will and choose to be less altruistic, which is itself determined, as is how much I will try to break them out of it.  My moral system would take a lot of adjusting if I could no longer use 'ought' statements (given the ought-implies-can conception).

I'm no expert in this topic and haven't read Sam Harris's argument, but there are a couple of things I usually bear in mind:

1. If you're uncertain whether hard determinism is true (that is, the probability you assign to it is less than 1), then it seems you should still act as though you are not determined.  We can then apply reasoning like Pascal's Wager: if determinism is false, then sadistic torture is terrible; if it's true, then we are indifferent.  Hence it seems we should still act on the side of morality having bearing (a rough numerical sketch of this reasoning follows below the list).

2. A more compelling response (although still contentious) is compatibilism.  I leave you to explore it here.
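To make the wager-style reasoning in point 1 concrete, here is a minimal sketch with made-up numbers. The credence and payoff values are purely illustrative assumptions, not figures from this thread:

```python
# Illustrative only: the credence `p_morality_real` and the payoff numbers
# below are made-up assumptions, not anything from the comment above.

def expected_moral_value(act_morally: bool, p_morality_real: float) -> float:
    """Expected moral value of a policy, given uncertainty about whether
    'ought' claims have any force at all."""
    value_if_real = 100.0 if act_morally else 0.0  # acting morally pays off only if morality is real
    value_if_not_real = 0.0                        # if it isn't, nothing is at stake either way
    return p_morality_real * value_if_real + (1 - p_morality_real) * value_if_not_real

# Even at a low credence that morality is real, acting morally weakly dominates:
print(expected_moral_value(True, 0.1))   # 10.0
print(expected_moral_value(False, 0.1))  # 0.0
```

The point is just dominance: if hard determinism means nothing is at stake either way, then any nonzero credence that morality is real is enough to favour acting as though it has bearing.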

Exactly, 1 has been the approach I have taken: as long as I am unsure, I err on the side of safety and believe in morally large universes, including those with free will.  That said, it would be interesting if many EAs were similar and thought something like "there's only a ~10% chance free will, and hence morality, is real, so very likely my life is useless, but I am trying anyway".  I think that is a good approach, but it would be an odd outcome.

Comments (4)

If free will doesn't exist, does that ruin/render void the EA endeavour?

Can you say more about why free will not existing is relevant to morality? 

My personal take is that free will seems like a pretty meaningless and confused concept, and probably doesn't exist (whatever that means). But I still want to do what I can to make the world a better place, in the same way that I clearly want and value things in my normal life, regardless of whether I'm doing any of this with free will.

Sure, I think that makes sense if we see EA as just another preference like any other. If we were 100% certain there was no free will, though, I think it would greatly reduce the moral force of the argument supporting EA (and any decision-guiding framework), as I couldn't reasonably tell someone, or myself, 'you ought to do X over and above Y'.

As a strong free will sceptic I agree that you can never reasonably tell someone “you ought to do X over and above Y”.

However, it makes complete sense to me in a purely deterministic world to make one small addition to the phrase: “you ought to do X over and above Y in order to achieve Z”. The ought has no meaning without the Z, with the Z representing the ideal world you are deterministically programmed to want to live in.

Thanks for the comment (and welcome to the Forum! :) ). Yeah, using conditional oughts seems like a pretty reasonable approach to me, though it does have some convenience cost when the Z is very widely shared ('you ought to fix your brakes rather than drive without brakes in order not to crash'), so it can perhaps then be left implied.
