Hello! 

I’m Toby, the new Content Manager @ CEA. 

Before working at CEA, I studied Philosophy at the University of Warwick, and worked for a couple of years on a range of writing and editing projects in the EA space. Recently, I helped run the Amplify Creative Grants program, which encourages more impactful podcasting and YouTube projects (such as the podcast in this Forum post). You can find a bit of my own creative output on my more-handwavey-than-the-ea-forum blog, and my (now inactive) podcast feed.

I’ll be doing some combination of: moderating, running events on the Forum, making changes to the Forum based on user feedback, writing announcements, writing the Forum Digest and/or the EA Newsletter, participating in the Forum a lot, etc. I’ll be doubling the capacity of the content team (the team formerly known as Lizka).

I’m here because the Forum is great in itself, and safeguards parts of EA culture I care about preserving. The Forum is the first place I found online where people would respond to what I wrote and actually understand it. Often they understood it better than I did. They wanted to help me (and each other) understand the content better. They actually cared about there being an answer. 

The EA community is uniquely committed to thinking seriously about how to do good. The Forum does a lot to maintain that commitment, by platforming critiques, encouraging careful, high-context conversations, and sharing relevant information. I’m excited that I get to be a part of sustaining and improving this space. 

I’d love to hear in the comments about why you value the Forum (or, alternatively, anything we could work on to make it better!)

This is the image I'm using for my profile picture. It's a linoprint I made of one of my favourite statues, The Rites of Dionysus.


Comments



Just to be clear, Lizka isn't being replaced and you're a new, additional content manager? Or does Lizka have a new role now?

Yep, Lizka is still Content Specialist, and I'm additive. There were a lot of great content-related ideas being left on the table because Lizka can't do everything at once. So once I'm up to speed, we should be able to get even more projects done.

What's the difference between a Content Specialist and a Content Manager?

The difference in role titles reflects the fact that Lizka is the team lead (of our team of two). From what I understand, the titles needn't make much difference in practice.

PS: I'm presuming there is a disagree react on my above comment because Lizka can in fact do everything at once. Fair enough.

FWIW I would've expected the Content Manager manages the Content Specialist, not the other way around.

FWIW I would have guessed the reverse re role titles

Yes I am also curious about the difference. I’ve been using them interchangeably.

(I'd guess the different titles mostly just reflect the difference in seniority? cf. "program officer" vs "program associate")

Wow, HILTS is hands down my favorite podcast, so I'm quite excited to see what new and exciting content will come from the forum. Welcome to the EA Forum team!

Thank you Constance! I'm glad to hear you like the podcast. To be very clear: everything you like about the podcast is down to James and Amy; we just chose to fund them.

The only thing that comes to mind for me regarding "make it better" would be to change the wording on the tooltips for voting to clarify (or to police?) what they are for. I somewhat regularly see people agree vote or disagree vote with comments that don't contain any claims or arguments.

Interesting! Let me know if any examples come up (feel free to post here or dm). Ideally we wouldn't have the disagree button playing the same role as the karma button. 

Sure. The silly and simplified cliché is something like this: a comment describes someone's feelings (or internal state) and then gets some agree votes and disagree votes, as if Person A says "this makes me happy" and Person B wants to argue that point.

(to be clear, this is a very small flaw/issue with the EA Forum, and I wouldn't really object if the people running the forum decide that this is too minor of an issue to spend time on)

A few little examples:

  • Peter Wildeford's comment on this post "What's the difference between a Content Specialist and a Content Manager?" currently has two agree votes. There isn't any argument or stance there; it is merely asking a question. So I assume people are using the agree vote to indicate something like "I also have this question" or "I am glad that you are asking this question."
  • I made a comment a few days ago about being glad that I am not the only one who wants to have financial runway before donating. It currently has a few agree votes and disagree votes, and I can't for the life of me figure out why. There aren't really any stances or claims being made in that comment.
  • Ben West made a comment about lab-grown meat that currently has 27 agree votes, even though the comment has nothing to agree with: "Congratulations to Upside Foods, Good Meat, and everyone who worked on this technology!" I guess that people are using the agree vote to indicate something like "I like this, and I want to express the same gratitude."

Is this a problem? It seems fine to me, because the meaning is often clear, as in two of your examples, and I think it adds value in those contexts. And if it's not clear, it doesn't seem like a big loss compared to the counterfactual of not having these types of vote available.

Thanks for putting these together. This doesn't currently seem obviously bad to me, for (I think) the same reasons as Isaac Dunn: those examples don't show valueless reacts, and most cases are much clearer. However, your cases are interesting.

I agree with your read of the reactions to Ben West's comment. 

In the question about my role, perhaps it is slightly less clear, because "I agree that this is a good question" or "I have this question as well" could probably be adequately expressed with Karma. But I also doubt that this has led to significant confusion. 

In the reaction to your comment, I'd go with the agrees saying that they echo the statement in your tl;dr. The disagree is weirder: perhaps they are signalling discouragement of your encouraging Lizka's sentiment?


(Perhaps how perplexing people find agree/disagree reacts to comments which don't straightforwardly contain propositions maps to how habitually the reader decouples propositional content from context.) 


I'll keep an eye out for issues with this; my view is loosely held. Thanks again for raising the issue.
 

Congratulations on the new role! :)

Welcome! Glad to have you here, Toby.

Thanks Joseph!

Welcome Toby :)

Thank you Max!

Congrats Toby, excited to see what you get up to in the new role! And thanks for all your work on Amplify.
