Hello! 

I’m Toby, the new Content Manager @ CEA. 

Before working at CEA, I studied Philosophy at the University of Warwick and worked for a couple of years on a range of writing and editing projects in the EA space. Recently, I helped run the Amplify Creative Grants program, which aims to encourage more impactful podcasting and YouTube projects (such as the podcast in this Forum post). You can find a bit of my own creative output on my more-handwavey-than-the-ea-forum blog and my (now inactive) podcast feed.

I’ll be doing some combination of: moderating, running events on the Forum, making changes to the Forum based on user feedback, writing announcements, writing the Forum Digest and/or the EA Newsletter, and generally participating in the Forum a lot. I’ll be doubling the capacity of the content team (the team formerly known as Lizka).

I’m here because the Forum is great in itself, and safeguards parts of EA culture I care about preserving. The Forum is the first place I found online where people would respond to what I wrote and actually understand it. Often they understood it better than I did. They wanted to help me (and each other) understand the content better. They actually cared about there being an answer. 

The EA community is uniquely committed to thinking seriously about how to do good. The Forum does a lot to maintain that commitment, by platforming critiques, encouraging careful, high-context conversations, and sharing relevant information. I’m excited that I get to be a part of sustaining and improving this space. 

I’d love to hear in the comments about why you value the Forum (or, alternatively, anything we could work on to make it better!).

This is the image I'm using for my profile picture. It's a linoprint I made of one of my favourite statues, The Rites of Dionysus.


Just to be clear, Lizka isn't being replaced and you're a new, additional content manager? Or does Lizka have a new role now?

Yep, Lizka is still Content Specialist, and I'm additive. There were a lot of great content-related ideas being left on the table because Lizka can't do everything at once. So once I'm up to speed, we should be able to get even more projects done.

What's the difference between a Content Specialist and a Content Manager?

The difference in role titles reflects the fact that Lizka is the team lead (of our team of two). From what I understand, the titles needn't make much difference in practice.

PS: I'm presuming there is a disagree react on my above comment because Lizka can in fact do everything at once. Fair enough.

FWIW I would've expected the Content Manager manages the Content Specialist, not the other way around.

FWIW I would have guessed the reverse re role titles

Yes, I am also curious about the difference. I’ve been using them interchangeably.

(I'd guess the different titles mostly just reflect the difference in seniority? cf. "program officer" vs "program associate")

Wow, HILTS is hands down my favorite podcast, so I’m quite excited to see what new and exciting content will come from the forum. Welcome to the EA Forum team!

Thank you Constance! I'm glad to hear you like the podcast. To be very clear: everything you like about the podcast is down to James and Amy; we just chose to fund them.

The only thing that comes to mind for me regarding "make it better" would be to change the wording on the tooltips for voting to clarify (or to police?) what they are for. I somewhat regularly see people agree vote or disagree vote with comments that don't contain any claims or arguments.

Interesting! Let me know if any examples come up (feel free to post here or dm). Ideally we wouldn't have the disagree button playing the same role as the karma button. 

Sure. The silly and simplified cliché is something like this: a comment describes someone's feelings (or internal state) and then gets some agree votes and disagree votes, as if Person A says "this makes me happy" and Person B wants to argue that point.

(to be clear, this is a very small flaw/issue with the EA Forum, and I wouldn't really object if the people running the forum decide that this is too minor of an issue to spend time on)

A few little examples:

  • Peter Wildeford's comment on this post "What's the difference between a Content Specialist and a Content Manager?" currently has two agree votes. There isn't any argument or stance there; it is merely asking a question. So I assume people are using the agree vote to indicate something like "I also have this question" or "I am glad that you are asking this question."
  • I made a comment a few days ago about being glad that I am not the only one who wants to have financial runway before donating. It currently has a few agree votes and disagree votes, and I can't for the life of me figure out why. There aren't really any stances or claims being made in that comment.
  • Ben West made a comment about lab grown meat that currently has 27 agree votes, even though the comment has nothing to agree with: "Congratulations to Upside Foods, Good Meat, and everyone who worked on this technology!" I guess that people are using the agree vote to indicate something like "I like this, and I want to express the same gratitude."

Is this a problem? Seems fine to me, because the meaning is often clear, as in two of your examples, and I think it adds value in those contexts. And if it's not clear, doesn't seem like a big loss compared to a counterfactual of having none of these types of vote available.

Thanks for putting these together. This doesn't currently seem obviously bad to me, for (I think) the same reasons Isaac Dunn gives (those examples don't show valueless reacts, and most cases are much clearer). However, your cases are interesting.

I agree with your read of the reactions to Ben West's comment. 

In the question about my role, perhaps it is slightly less clear, because "I agree that this is a good question" or "I have this question as well" could probably be adequately expressed with Karma. But I also doubt that this has led to significant confusion. 

In the reaction to your comment, I'd go with the agrees saying that they echo the statement in your tl;dr. The disagree is weirder: perhaps they are signalling discouragement of your echoing Lizka's sentiment?


(Perhaps how perplexing people find agree/disagree reacts to comments which don't straightforwardly contain propositions maps to how habitually the reader decouples propositional content from context.) 


I'll keep an eye out for issues with this; my view is loosely held. Thanks again for raising the issue.
 

Congratulations on the new role! :)

Welcome! Glad to have you here, Toby.

Congrats Toby, excited to see what you get up to in the new role! And thanks for all your work on Amplify.
