
Hello! 

The tl;dr: 

  • I’m the new Content Specialist at the Centre for Effective Altruism, taking over from Aaron Gertler
  • I’m taking on a lot of his old responsibilities, including running the non-engineering side of the EA Forum, and you can reach out to me if you have any questions about or issues with the Forum. 
  • More below.

My name is Lizka.[1] You may have seen some of my old posts. You’ll likely be seeing me post more frequently now. :)

Some things I’ll be doing in my new role[2]

  1. Running the EA Newsletter and the Forum Digest
  2. Encouraging people to post on the Forum
    1. Or to sign up and add a bio
    2. Or to comment, or upvote
    3. Etc.
  3. Offering Forum workshops (e.g. at EAGx Oxford)
  4. Offering feedback on drafts
  5. Some moderation
  6. Setting up AMAs and contests
  7. General Forum support — if you’re experiencing an issue with the Forum, please reach out!
  8. Generally trying to make the Forum as great and welcoming as possible

Some things I won’t be doing

  1. Awesome software engineering that makes the Forum actually exist
  2. Reading and responding to every post, sadly (because, happily, there are too many)

Feel free to get in touch with me!

DM me on the Forum or email me at lizka.vaintrob@centreforeffectivealtruism.org. (And you can always just comment on this post.)

Somewhat random image[3]

[Image: a visualization of points' orbits under the action of a Kleinian group.]

  1. ^

    I have yet to meet another “Lizka” in the EA community. Please let me know if your name is also Lizka! (And as a minor point of clarification, my name is pronounced: "lease-kah.")

  2. ^

    At least for now. We're keeping the role somewhat flexible, so we'll see exactly how it evolves as things go.

  3. ^

    I think more posts should have pictures. This is a visualization I made of certain points' orbits under the action of a Kleinian group.

Comments (25)

Congrats! I'm excited for you and for the future of our Forum!

Thanks :)

Congrats on the new role! I'd be keen to hear about your strategy for the forum when you've had time to formulate it.

(This is one place where I disagree with the LW team that tends to deemphasise meta-posts. I agree that it's important to prevent a forum from being overrun with meta, but I believe that it's important to very occasionally promote certain meta-posts in order to create the sense of a community. In particular, I'm in favour of occasionally promoting posts that provide the community with a sense of where things are headed and allow input into major decisions).

Thank you! 

I'll be working on this sort of question, and I'm always curious to hear what people think! 

Welcome to your new role Lizka!

Thank you! 

Congrats! Really exciting to a) see how much skill development and progress you've made in the <1 year that I've known you, and b) imagine the future of the Forum now that you're at the helm!

Thanks for the sentiment and for your part in that! :)

I intend to take 10% of the credit for all of your future impact. :P 

Woot, congratulations! I struggled to imagine a good successor to Aaron but I'm genuinely excited to see how the Forum will flourish under you. : )

<3 Thank you!

Crazy forum idea:

Add a checkbox which is something like "employers may contact me"

Thanks for sharing this idea! 

Hi! Congratulations on the new position! 

 

I was curious: Metaculus and similar sites have been really useful recently for many EAs. Would there be any way to create a similar system here on the EA Forum to do crowdsourced elicitation of possible ethical outcomes? It might supercharge the EA Forum's ability to make rational comments! 

Thanks! I don't have a very clear understanding of what you're proposing, but generally I'm excited for things like Metaculus :)

Hi Lizka! Thank you for the great introduction. It would be great to communicate and discuss lots of interesting/important topics here on the EA Forum! 

Thanks! 

Welcome Lizka! Hope the role is enjoyable for you for quite some time

Thanks! (I hope so, too!)

Congrats! Loving the random image

Yay, thank you! 

Congrats!

Thanks! 

Oh, very exciting – looking forward to attending a Forum workshop! :)

Awesome, looking forward to seeing you at one! 
