
It’s been four months since our last forum update post! Here are some things the dev team has been working on.

We launched V1 of a new Events page

It shows you upcoming events near you, as well as global events open to everyone! We think it’s now the most complete overview of EA events that exists anywhere.

Some improvements we’ve made over the last few months:

  1. Anyone can post an event on the new Events page by clicking on “New Event” in their username dropdown menu in the Forum. (We also have a contractor who cross-posts many events).
  2. You can easily add an event to your calendar, or be notified about events near you.
  3. Events now have images, which we think makes the page more engaging and easier to parse.
  4. We’ve improved the individual event pages to show key information more clearly and make it obvious how you can attend an event.
  5. You can see upcoming events in the Forum sidebar.

If you think the new Events page is useful, please share it widely! :)

We also made a number of small improvements to the Community page, and we’re working on a significant redesign to make it more visual and groups-focused.

Update: We launched the redesigned Community page! This will eventually replace the EA Hub groups list. (If you would like to be assigned as a group organizer to one of the groups on the Forum, or if you know of groups that are missing, please let me know.)

100+ karma users can add co-authors

It’s now possible for users to add co-authors to their posts. As a precaution against spam, this is currently only available to users with 100+ karma. If you have less than 100 karma, feel free to contact us and we’ll add co-authors for you.
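For illustration, here’s a minimal TypeScript sketch of what a karma-gated check like this could look like. The types, names, and threshold constant are hypothetical and not taken from the Forum’s actual code; only the 100-karma cutoff comes from the post.

```typescript
// Hypothetical sketch of a karma-gated permission check (not the Forum's
// actual implementation): only authors at or above the threshold can add
// co-authors directly.

interface User {
  displayName: string;
  karma: number;
}

// Threshold taken from the post; the constant name is illustrative.
const COAUTHOR_KARMA_THRESHOLD = 100;

function canAddCoauthors(user: User): boolean {
  return user.karma >= COAUTHOR_KARMA_THRESHOLD;
}

// Users below the threshold would contact the Forum team instead.
console.log(canAddCoauthors({ displayName: "Alice", karma: 150 })); // true
console.log(canAddCoauthors({ displayName: "Bob", karma: 20 }));    // false
```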

We updated the Sequences page

We renamed it to “Library” and highlighted some core sequences, like the Most Important Century series by Holden Karnofsky.

We merged our codebase with LessWrong

Now we share a GitHub repo: ForumMagnum. Feel free to check out what we’re working on, and do let us know of any issues you see.

We ran the EA Decade Review

Thanks to everyone who participated! The Review ran from December 1 to February 1. Our team has been busy since then, but we should be posting about the results soon - I know I’m looking forward to reading them! :)

We started reworking tag subscriptions

Currently, “subscribing” to a tag on the Forum means you get notifications for new posts with that tag. However, we are moving more toward the YouTube model, where “subscribing” weights posts with that tag more heavily on the frontpage, and you can separately sign up for tag notifications via the bell icon. See more details here.
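To make the model concrete, here’s a minimal TypeScript sketch of how subscriptions could reweight frontpage scoring. The interfaces, boost factor, and function names are purely illustrative assumptions, not the actual ForumMagnum scoring code.

```typescript
// Minimal sketch of the "YouTube model" described above (hypothetical, not
// the actual ForumMagnum scoring code): subscribing to a tag boosts matching
// posts on the frontpage instead of triggering notifications.

interface Post {
  title: string;
  baseScore: number;   // whatever karma/recency score the frontpage already uses
  tagIds: string[];
}

// Illustrative multiplier applied once per subscribed tag a post carries.
const SUBSCRIBED_TAG_BOOST = 1.5;

function frontpageScore(post: Post, subscribedTagIds: Set<string>): number {
  const matches = post.tagIds.filter((id) => subscribedTagIds.has(id)).length;
  // Subscribed topics float toward the top without hiding everything else.
  return post.baseScore * Math.pow(SUBSCRIBED_TAG_BOOST, matches);
}

// Example: a user subscribed to "global-health" sees matching posts ranked higher.
const score = frontpageScore(
  { title: "Example post", baseScore: 40, tagIds: ["global-health"] },
  new Set(["global-health"])
);
console.log(score); // 60
```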

Right now this new version is behind the “experimental features” flag, so if you want to play with it you’ll have to enable the experimental features checkbox in your account settings.

You can now change your username

You can now change your username (i.e. display name) yourself via your account settings page. However, you only get one change - after that, you’ll need to contact us to change it again. 

We can also hide your profile from search engines and change the URL associated with your profile. Please contact us if you’d like to do this.

We added footnotes support to our default text editor

Last but certainly not least, we deployed one of the most requested Forum features: footnotes! See the standalone post for more details.

Questions? Suggestions?

We welcome feedback! Feel free to comment on this post, leave a note on the EA Forum feature suggestion thread, or contact us directly.

Join our team! :)

We’ve built a lot these past few months, but there’s much more to be done. We’re currently hiring software developers to join our team and help us make the EA Forum the best that it can be. If you’re interested, you can apply here.

Comments

100+ karma users can add co-authors

Much appreciated!

Currently, “subscribing” to a tag on the Forum means you get notifications for new posts with that tag. However, we are moving more toward the YouTube model, where “subscribing” weights posts with that tag more heavily on the frontpage, and you can separately sign up for tag notifications via the bell icon.

This is great. It would also be valuable (though probably not high priority) to have a Wikipedia-style "watchlist" where users could see all the activity related to the entries they are subscribed to, including new edits to those articles.

Separately, in order to avoid needless jargon, I vote for calling the "sequences" collections.

Thanks for the suggestion! I've added it to our list for triage. Also I agree that "sequence" is unclear - personally I'm a fan of "series", since it still implies that there is an order, but I haven't put that much thought into it. :)

Also [very minor]: the "load more" button loads 10 additional posts, but because of the three-column layout, this means that two out of three times the final line of posts will be incomplete. I think the "load more" button should instead load nine posts, or a multiple of three.

Yeah, I agree "series" would be more appropriate if the collected posts are ordered, though it seems that some of the "sequences" in the library are not meant to be read in any particular order.
