This is a special post for quick takes by yanni. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Question: I've noticed CE is investing in tobacco regulation. This has made me wonder whether alcohol regulation has been considered as a cause area. In some ways its externalities are worse (e.g. domestic violence). I'm very uncertain about its tractability and neglectedness compared to tobacco, though.

GiveWell has funded Vital Strategies' alcohol work, OP has a global health policy focus area (inclusive of alcohol), and CE has incubated the Centre for Alcohol Policy Solutions (though I have limited visibility into their success since incubation a few years ago).

Check out CE's report on alcohol and tobacco for a short primer; you can also compare their assessment of success rates and neglectedness.

I think it would be great to have the option to listen to comments on the forum (i.e. audio comments).

The EA Forum has some very long comments, sometimes longer than the original post. This is a good thing, but for reasons I think are obvious (LMK if they aren't) it would be good to be able to listen to them.

I subscribe to (best $90 I ever spent FYI), and it plugs into the desktop version of ChatGPT. I am suggesting something similar for the forum.

Would it be interesting to gather a representative sample of EA's personalities?

The ClearerThinking team has released a new tool: "The Ultimate Personality Test".

We believe it is important to understand diversity in EA across a variety of dimensions, so why not this one?

Would people eat factory farmed animals if they knew what they were ~~screaming~~ saying?

Interesting goal, but the initial plan of recording and playing back animal audio doesn't inspire confidence that they'll make much progress anytime soon.

"Should we push for an AI pause?" might be the wrong question

A quick thought on the recent discussion of whether pushing for a pause on frontier AI models is a good idea.

It seems obvious to me that within the next 3 years the top AI labs will be producing AI that causes large swaths of the public to push for a pause. 

Is it therefore more prudent to ask the following question instead: when much of the public wants a pause, what should our (the EA community's) response be?

Interesting framing.

It's unclear to me how to integrate that theory with our decisions today given how much the strategic situation is likely to have shifted in that time.

Which public? Each country in this AI race has a different view on this, and some consult their public less than others. The EA community should ideally take this into account. If the other countries aren't going to pause, and they will not, what should the USA do?

(The historical precedent would be AI progress ceasing to be publicly discussed, with all the current experts drafted into secret labs racing to reach AGI first.)

What are the animal welfare interventions that (1) have potential for high impact and (2) are very short-term (i.e. if they work, they work within 10 years)? Basically, my AGI timelines are something like 40% ≤ 10 years and 40% ≤ 15 years, and I believe there isn't much point worrying about much beyond these timelines.

I think there is an argument that animal welfare intervention prioritisation should consider an AGI timeline of ~5 years, but not put too much stock in it.
