Hello! 

I’m Toby, the new Content Manager @ CEA. 

Before working at CEA, I studied Philosophy at the University of Warwick, and worked for a couple of years on a range of writing and editing projects in the EA space. Recently I helped run the Amplify Creative Grants program, in order to encourage more impactful podcasting and YouTube projects (such as the podcast in this Forum post). You can find a bit of my own creative output on my more-handwavey-than-the-ea-forum blog, and my (now inactive) podcast feed.

I’ll be doing some combination of: moderating, running events on the Forum, making changes to the Forum based on user feedback, writing announcements, writing the Forum Digest and/or the EA Newsletter, participating in the Forum a lot, etc. I’ll be doubling the capacity of the content team (the team formerly known as Lizka). 

I’m here because the Forum is great in itself, and safeguards parts of EA culture I care about preserving. The Forum is the first place I found online where people would respond to what I wrote and actually understand it. Often they understood it better than I did. They wanted to help me (and each other) understand the content better. They actually cared about there being an answer. 

The EA community is uniquely committed to thinking seriously about how to do good. The Forum does a lot to maintain that commitment, by platforming critiques, encouraging careful, high-context conversations, and sharing relevant information. I’m excited that I get to be a part of sustaining and improving this space. 

I’d love to hear more about why you value the Forum in the comments (or, alternatively, anything we could work on to make it better!)

This is the image I'm using for my profile picture. It's a linoprint I made of one of my favourite statues, The Rites of Dionysus.


 


Just to be clear, Lizka isn't being replaced and you're a new, additional content manager? Or does Lizka have a new role now?

Yep, Lizka is still Content Specialist, and I'm additive. There were a lot of great content-related ideas being left on the table because Lizka can't do everything at once. So once I'm up to speed we should be able to get even more projects done. 

What's the difference between a Content Specialist and a Content Manager?

The difference in role titles reflects the fact that Lizka is the team lead (of our team of two). From what I understand, the titles needn't make much difference in practice.

PS: I'm presuming there is a disagree react on my above comment because Lizka can in fact do everything at once. Fair enough. 

FWIW I would've expected the Content Manager manages the Content Specialist, not the other way around.

FWIW I would have guessed the reverse re role titles

Yes I am also curious about the difference. I’ve been using them interchangeably.

(I'd guess the different titles mostly just reflect the difference in seniority? cf. "program officer" vs "program associate")

Wow, HILTS is hands down my favorite podcast, so I’m quite excited to see what new and exciting content will come from the Forum. Welcome to the EA Forum team!

Thank you Constance! I'm glad to hear you like the podcast. To be very clear: everything you like about the podcast is down to James and Amy; we just chose to fund them. 

The only thing that comes to mind for me regarding "make it better" would be to change the wording on the tooltips for voting to clarify (or to police?) what they are for. I somewhat regularly see people agree vote or disagree vote with comments that don't contain any claims or arguments.

Interesting! Let me know if any examples come up (feel free to post here or dm). Ideally we wouldn't have the disagree button playing the same role as the karma button. 

Sure. The silly and simplified cliché is something like this: a comment describes someone's feelings (or internal state) and then gets some agree votes and disagree votes, as if Person A says "this makes me happy" and Person B wants to argue that point.

(to be clear, this is a very small flaw/issue with the EA Forum, and I wouldn't really object if the people running the forum decide that this is too minor of an issue to spend time on)

A few little examples:

  • Peter Wildeford's comment on this post "What's the difference between a Content Specialist and a Content Manager?" currently has two agree votes. There isn't any argument or stance there; it is merely asking a question. So I assume people are using the agree vote to indicate something like "I also have this question" or "I am glad that you are asking this question."
  • I made a comment a few days ago about being glad that I am not the only one who wants to have financial runway before donating. It currently has a few agree votes and disagree votes, and I can't for the life of me figure out why. There aren't really any stances or claims being made in that comment.
  • Ben West made a comment about lab grown meat that currently has 27 agree votes, even though the comment has nothing to agree with: "Congratulations to Upside Foods, Good Meat, and everyone who worked on this technology!" I guess that people are using the agree vote to indicate something like "I like this, and I want to express the same gratitude."

Is this a problem? It seems fine to me, because the meaning is often clear, as in two of your examples, and I think it adds value in those contexts. And when it's not clear, that doesn't seem like a big loss compared to a counterfactual of having none of these types of vote available.

Thanks for putting these together. This doesn't currently seem obviously bad to me, for (I think) the same reasons Isaac Dunn gives: those examples don't show valueless reacts, and most cases are much clearer. However, your cases are interesting. 

I agree with your read of the reactions to Ben West's comment. 

In the question about my role, perhaps it is slightly less clear, because "I agree that this is a good question" or "I have this question as well" could probably be adequately expressed with karma. But I also doubt that this has led to significant confusion. 

In the reaction to your comment, I'd guess the agree votes are echoing the statement in your tl;dr. The disagree is weirder: perhaps they are signalling discouragement of your endorsing Lizka's sentiment? 


(Perhaps how perplexing people find agree/disagree reacts to comments which don't straightforwardly contain propositions maps to how habitually the reader decouples propositional content from context.) 


I'll keep an eye out for issues with this; my view is loosely held. Thanks again for raising the issue. 
 

Congratulations on the new role! :)

Welcome! Glad to have you here, Toby.

Thanks Joseph!

Welcome Toby :)

Thank you Max!

Congrats Toby, excited to see what you get up to in the new role! And thanks for all your work on Amplify.
