Senior Content Specialist @ Centre for Effective Altruism
15,989 karma · Joined Nov 2019 · Working (0-5 years)


I run the non-engineering side of the EA Forum (this platform), run the EA Newsletter, and work on some other content-related tasks at CEA. Please feel free to reach out! You can email me. [More about my job.]

Some of my favorite posts of my own:

I finished my undergraduate studies with a double major in mathematics and comparative literature in 2021. I was a research fellow at Rethink Priorities in the summer of 2021 and was then hired by the Events Team at CEA. I've since switched to the Online Team. In the past, I've also done some (math) research and worked at Canada/USA Mathcamp.

Some links I think people should see more frequently:


Celebrating Benjamin Lay (1682 - 1759)
Donation Debate Week (Giving Season 2023)
Marginal Funding Week (Giving Season 2023)
Effective giving spotlight - classic posts
Selected Forum posts (Lizka)
Classic posts (from the Forum Digest)
Forum updates and new features
Winners of the Creative Writing Contest
Winners of the First Decade Review


Topic contributions

I know Grace has seen this already, but in case others reading this thread are interested: I've shared some thoughts on not taking the pledge (yet) here.[1]

Adding to the post: part of the value of pledges like this comes from their role as a commitment mechanism to prevent yourself from drifting away from values and behaviors that you endorse. I'm not currently worried about drifting in this way, partly because I work for CEA and have lots of social connections to extremely altruistic people. If I started working somewhere that isn't explicitly EA-oriented and/or lost my connections to the EA community, I think I'd worry a lot more about drift and the usefulness of the pledge would jump for me. (I plan on thinking about taking some kind of pledge if/when that happens.)

I'll also note that I've recently seen multiple people ~dunking on folks in EA who haven't taken the pledge (or making fun of arguments against taking the pledge), and I think this is pretty unhelpful. I'm really grateful to the GWWC Pledge community, but I really don't think the pledge is right for everyone (and neither does GWWC). Even if you think almost all the people who aren't pledging are wrong and/or biased, dunking is probably a bad way to argue. Additionally, it disincentivizes people from coming out and answering Grace's question, since they might worry that they'll (indirectly) get ridiculed for it. So if you see someone you know ~dunking, consider asking them to avoid doing that (especially if you already know them and/or have been sharing arguments for taking the pledge).

  1. ^

    To be clear: I totally believe my conclusion could be wrong, and I'm happy to see (more) arguments about why that could be. (Having said that, I should flag that I don't plan on spending time on this decision right now because I think I have more pressing decisions at the moment, but it's something I want to think more about in the future. So e.g. I might not respond to comments.)

As a quick update: I did not in fact share two posts during the week. I'll try to post another "DAW post" (i.e. something from my drafts, without spending too much time polishing it) sometime soon, but I don't endorse prioritizing this right now and didn't meet my commitment. 

Answer by Lizka · Mar 11, 2024

Not sure if this already exists somewhere (would love recommendations!), but I'd be really excited to see a clear and carefully linked/referenced overview or summary of what various agriculture/farming ~lobby groups do to influence laws and public opinion, and how they do it (with a focus on anything related to animal welfare concerns). This seems relevant.

Just chiming in with a quick note: I collected some tips on what could make criticism more productive in this post: "Productive criticism: what could help?"

I'll also add a suggestion from Aaron: If you like a post, tell the author! (And if you're not sure about commenting with something you think isn't substantive, you can message the author a quick note of appreciation or even just heart-react on the post.) I know that I get a lot out of appreciative comments/messages related to my posts (and I want to do more of this myself). 

I'll commit to posting a couple of drafts. Y'all can look at me with disapproval (or downvote this comment) if I fail to share two posts during Draft Amnesty Week. 

Answer by Lizka · Feb 28, 2024

I'm basically always interested in potential lessons for EA/EA-related projects from various social movements/fields/projects.

Note that you can find existing research that hasn't been discussed (much) on the Forum and link-post it (I bet there's a lot of useful stuff out there), maybe with some notes on your takeaways. 

Example movements/fields/topics: 

  • Environmentalism — I've heard people bring up the environmentalist/climate movement a bunch in informal discussions as an example for various hypotheses, including "movements splinter/develop highly counterproductive & influential factions" or "movements can get widespread interest and make policy progress" etc. 
  • The effectiveness of protest — I'm interested in more research/work on this (see e.g. this and this).
  • Modern academia (maybe specific fields) — seems like there are probably various successes/failures/ideas we could learn from. 
  • Animal welfare
  • Mohism (see also)
  • Medicine/psychology in different time periods

Some resources, examples, etc. (not exhaustive or even a coherent category): 

Answer by Lizka · Feb 28, 2024

I'd love to see two types of posts that were already requested in the last version of this thread:

  • From Aaron: "More journalistic articles about EA projects. [...] Telling an interesting story about the work of a person/organization, while mixing in the origin story, interesting details about the people involved, photos, etc."
  • From Ben: "More accessible summaries of technical work." (I might share some ideas for technical work I'd love to see summarized later.)

I really like this post and am curating it (I might be biased in my assessment, but I endorse it and Toby can't curate his own post). 

A personal note: the opportunity framing has never quite resonated with me (neither has the "joy in righteousness" framing), but I don't think I can articulate what does motivate me. Some of my motivations end up routing through something ~social. For instance, one (quite imperfect, I think!) approach I take[1] is to imagine some people (sometimes fictional or historical) I respect and feel a strong urge to be the kind of person they would respect or understand; I want to be able to look them in the eye and say that I did what I could and what I thought was right. (Another thing I do is try to surround myself with people[2] I'm happy to become more similar to, because I think I will often end up seeking their approval at least a bit, whether I endorse doing it or not.)

I also want to highlight a couple of related things: 

  1. "Staring into the abyss as a core life skill"
    1. "Recently I’ve been thinking about how all my favorite people are great at a skill I’ve labeled in my head as “staring into the abyss.” 
      Staring into the abyss means thinking reasonably about things that are uncomfortable to contemplate, like arguments against your religious beliefs, or in favor of breaking up with your partner. It’s common to procrastinate on thinking hard about these things because it might require you to acknowledge that you were very wrong about something in the past, and perhaps wasted a bunch of time based on that (e.g. dating the wrong person or praying to the wrong god)."
    2. (The post discusses how we could get better at the skill.)
  2. I like this line from Benjamin Lay's book: "For custom in sin hides, covers, as it were takes away the guilt of sin." It feels relevant.
  1. ^

    both explicitly/on purpose (sometimes) and often accidentally/implicitly (I don't notice that I've started thinking about whether I could face Lay or Karel Capek or whoever else until later, when I find myself reflecting on it)

  2. ^

    I'm mostly talking about something like my social circle, but I also find this holds for fictional characters, people I follow online, etc. 

Thanks for sharing this! I'm going to use this thread as a chance to flag some other recent updates (no particular order or selection criteria — just what I've recently thought was notable or recently mentioned to people): 

  1. California proposes sweeping safety measure for AI — State Sen. Scott Wiener wants to require companies to run safety tests before deploying AI models. (link goes to "Politico Pro"; I only see the top half)
    1. Here's also Senator Scott Wiener's Twitter thread on the topic (note the endorsements)
    2. See also the California effect
  2. Trump: AI ‘maybe the most dangerous thing out there’ (seems mostly focused on voting-related robocalls/deepfakes and digital currency)
  3. Jacobin publishes an article on AI existential risk (Twitter)

I don't actually think you need to retract your comment — most of the teams they used did have (at least some) biological expertise, and it's really unclear how much info the addition of the crimson cells adds. (You could add a note saying that they did try to evaluate this with the addition of two crimson cells? In any case, up to you.)

(I will also say that I don't actually know anything about what we should expect about the expertise that we might see on terrorist cells planning biological attacks — i.e. I don't know which of these is actually appropriate.)
