I'm pleased to announce that Jeff Jonson and I will be co-editing an issue of Essays in Philosophy focused on effective altruism. Possible topics include, but are not limited to:

  • Cause selection and prioritization
  • The ethics of career choice
  • Effective altruism and existential risk
  • Effective animal activism
  • Systemic change vs. alternative paths to impact
  • Proven vs. speculative causes
  • The demandingness of effective altruism
  • The moral foundations of effective altruism

The submission deadline is September 30, 2016. You can find more information here.


Thanks, Will!

I have several ideas in mind but wouldn't feel confident submitting right now, because I don't know the norms of philosophy publication. I'd love to have someone who's written for philosophy journals (preferably an EA) provide a guide to EAs who might want to submit articles. Is there anyone who might be able to address these kinds of questions?

Hi Scott. I've had one paper published in philosophy, and I've had several others accepted to conferences. I'm certainly not as credentialed as Will, but I might be able to give some tips. My guess is that many of these are not particularly unique to philosophy.

First, it's always good to reference other relevant philosophical work. We all know what hedonistic utilitarianism is, but if you're going to write a paper about the implications of effective altruism for a hedonistic utilitarian, you should still clearly define the concept and cite major works on the topic.

Second, clear writing is always preferred over convoluted writing. Sometimes people think philosophers want to sound smart and intentionally use complicated language, but the reverse is true. Sure, philosophy sometimes does legitimately require an understanding of technical terms, but good philosophical writing aims to be as clear as possible.

Third, a good format to follow is abstract, introduction, argument, conclusion. Abstracts are extremely useful because they allow people to get the gist of your argument very quickly.

Fourth, it is often better to make a genuine contribution to a narrow problem than to not really contribute anything to a broad topic.

Finally, a good practice is probably to just read some published philosophy work. That is the best way to get an idea of the writing quality and organizational nature of publishable papers. I believe Will has some of his papers posted on his site. I've read some of his work, and I think it's a good example of clear writing. That's probably a good place to start.

Most CFPs request papers that have been prepared for blind review as well, so be sure to do that.

Thanks, Zack!

This is really a great idea. I like it.

Very nice idea.

Can anybody submit an essay or do authors have to meet certain qualifications?

Would it be a good idea to create an open access journal dedicated specifically to effective altruism? From what I understand, it would cost relatively little to run a website where papers could be submitted by authors, assigned to referees, evaluated by editors, and published for anyone to read. There also seems to be enough technical expertise in the community to design a website like that if there are volunteers interested in doing it. Of course, it would be a big time commitment on the part of whoever serves as the editor, but it could have significant benefits to the movement including:

- increasing dialogue between the EA community and academic philosophers
- creating a formal mechanism for receiving thoughtful feedback on new ideas
- allowing readers to find the most important new contributions in one location
- incentivizing serious research on topics that are important to the movement
- signalling the openness of the community to changing its mind on key issues

The journal could complement the various ways that new ideas are currently shared. An EA could still write a blog post to share her views, and online forums could still be considered legitimate places for serious discussion. It's just that the author of a post would now have the option of developing the idea in greater depth, a process during which she may significantly improve her argument.

The journal could differ slightly from standard philosophy journals in its willingness to publish articles by authors outside the academy and to select referees from outside the academy. As long as the editor is an experienced academic, she should be able to ensure that the papers still meet the standards of normal philosophy journals.

Since there is currently a large unpublished literature on effective altruism, there should be enough material for the first several issues. After that, you would probably have enough submissions from people who devise arguments with the intent of getting them published in the journal.


What is the desired range of length, if any? And are there any originality requirements for submissions? I want to avoid writing something similar to work that has already been published. I'll double-check myself, of course, but I could still miss something.

Are you SURE that's the deadline? ;^)
