Hey all,

There have been a few minor updates to the forum over the past few weeks.

First, new comments are now highlighted. Each time you visit an article, any comments posted since your last login will be displayed with a bright blue outline. This should make it easier to find new thoughts on an article, especially when they're buried deep in a comment thread. A nice side effect is that when you write a comment, you can be confident that other users will actually see and read it. Since the feature needs a previous visit to compare against, it will start working from your second login onwards.

Second, under 'Recent on EA Blogs', Jeff Kaufman's posts about effective altruism are now displayed.

Third, Jeff Kaufman has made us an awesome favicon, which now identifies the site:

It's great to see improvements like these. If you have more feature suggestions, rather than scattering them all over the site, please concentrate them in this thread or in this Google form (responses here).

To facilitate more improvements to the Effective Altruism Forum, its developers, Trike Apps, have shared its code repository on GitHub, and they will welcome contributions there over the coming months and years.

In a couple of days, we will also welcome some new users from The Life You Can Save mailing list. If you're reading this, LYCS readers, then we hope you enjoy your stay here. 

In one week, the karma requirement will be lowered so that more users can post articles on the forum.

Ryan


I just wanted to say I think the forum is working really well so far - big kudos to the team behind it!

Yes indeed! Who is that team? Am I right in thinking it's Ryan Carey who owns the site?

Thanks Ben!

Hi Toron, Mihai Badic designed the site, giving away some of his time for free, and Trike Apps built the site from the LessWrong codebase entirely for free. I've done what's left of managing the overall project, and am administrating :)

I hope you continue to enjoy it, Toron.

Excellent! The new comment highlighting makes this forum much more readable.

One thing that I'd like to see here, and have wished for on LW for a long time, is an option to sort threads by the most recent posting, so that commenting in a thread would "bump" it to the top, like on ordinary forums. People haven't been very enthusiastic about this proposal on LW for whatever reason, but the lack of that feature contributes to what I feel is the largest problem of the site: valuable and semi-active threads get quickly buried below more recent ones, so that e.g. new Open Threads need to be continually reposted rather than old ones organically rising to the top when they have new activity. This also disincentivizes people from commenting in older threads, since their comments won't be seen by as many people.

The standard objection to why this feature isn't needed is that a lot of people follow the "all comments" section of the site, which also shows comments on old threads, and it's true that this sometimes allows new discussion in an old thread. But I still feel that the number of people who follow "all comments" is much smaller than the number who read the site by more "normal" means, and that the psychological disincentive persists even if some people do read "all comments".

I agree with your characterisation of the problem - good new comments on old threads can get missed. A 'sort by new comments' option would probably help. An alternative would be to use a mixed algorithm to build the default frontpage: a blend of recent articles, recent comments, and highly upvoted posts. I'm happy to discuss this more.

It would be nice to have some place (like the LW-FAQ) where things about the forum are explained such as:

  • How to post links (I think I once tried it as described in the LW-FAQ, but it didn't work)
  • How to quote
  • Other site-mechanics: Here's a list from LW:

5 Site Mechanics

  • 5.1 How do I make a comment?
  • 5.2 Is it worth commenting on ancient posts and long-dead threads?
  • 5.3 How does voting work?
  • 5.4 How is karma calculated?
  • 5.5 Why do I want high karma?
  • 5.6 How do I make a submission?
  • 5.7 What is shown on the "New" page?
  • 5.8 How do I get my post on the front page?
  • 5.9 I deleted an article. Can I undelete it?

Those are handy updates Ryan, especially having new comments be highlighted - great work!

One feature I miss from Facebook is the ability to draw people's attention to comments and threads by tagging them. I realise this may be tricky to implement, though (you'd probably want to use a program like sendmail on your server to deliver the notifications).
