Reposting this comment from the CEO of Open Philanthropy 12 days ago, as I think some people missed it:
...A quick update on this: Good Ventures is now open to supporting work that Open Phil recommends on digital minds/AI moral patienthood. We're still figuring out where that work should slot in (including whether we'd open a public call for applications) and will update people working in the field when we do. Additionally, Good Ventures is now open to considering a wider range of recommendations in right-of-center AI policy and a couple of other smaller areas (
I'm at EAG NYC right now, and as part of our Forum writing session, we're asking participants to respond to this quick take with their Forum post ideas.
Encourage ideas you'd like to see! It's Draft Amnesty next week...
Thanks for bringing this up, Camille! Noting that we at Probably Good would be happy to speak to you, or others like you transitioning from USAID, about where you might best use your talent and experience.
I am going to be running an effective giving event about malaria. For this I need three effective anti-malaria charities that attendees can split a funding pool between (and thereby learn things about what malaria is and how to tackle it, which is the main purpose of the event). I have the Against Malaria Foundation and the Malaria Consortium. I would like a charity that is focused on the roll-out of malaria vaccines, or on better malaria vaccine research, that can at least be argued to be near EA's effectiveness threshold (possibly only when downstream effects of research or advocacy are taken into account). Ideally it should have "Malaria" in its name. Anyone have any ideas?
It seems plausible to me that protecting liberal democracy in America is the most important issue. If America falls to authoritarian rule, what hope is there of international cooperation on existential issues like AI safety, pandemic risk, etc? But, probably like many EAs, I worry that this is not a very tractable issue. Maybe it would be a good idea to read some history and learn how authoritarian regimes can be combated.
A lot of AI racing is driven by the idea that the US has to stop China from getting AI because China is authoritarian. If the US was authoritarian as well, that motive for AI racing would go away. Furthermore, authoritarian countries seem predisposed to cooperate: see the China/Russia/Iran/North Korea axis. If the US became authoritarian, that could usher in a new era of US/China cooperation, to the benefit of the world as a whole.
Poll: What effect have protests calling for a pause, stop, or deceleration of AI development had on the world?
a) Net positive
b) None / Negligible
c) Net negative
This poll will be used to resolve this prediction market.
Vote by agree reacting to one of my comments below. Only vote once!
At times, it feels difficult to choose between survival and ambition. Survival indirectly forces me not to look at my ambitions because the latter seem too distant. Hence there's no intersection of survival and ambition. This kind of saddens me and sucks the life out of life at times.
I sometimes think of this idea and haven't found anyone mentioning it with a quick AI search: a tax on suffering.
EDIT: there's a paper on this but specific to animal welfare that was shared on the forum earlier this year.
A suffering tax would function as a Pigouvian tax on negative externalities—specifically, the suffering imposed on sentient beings. The core logic: activities that cause suffering create costs not borne by the actor, so taxation internalizes these costs and incentivizes reduction.
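The internalization logic above can be shown with toy arithmetic (all numbers here are invented for illustration, not from the proposal):

```python
# Toy Pigouvian-tax arithmetic: if an activity imposes an external suffering
# cost on others, a per-unit tax equal to that externality makes the actor's
# private cost match the social cost, so reducing the activity becomes
# privately rational exactly when it is socially beneficial.

private_cost = 5.0        # actor's own cost per unit of activity
external_suffering = 3.0  # cost imposed on sentient beings per unit
tax = external_suffering  # Pigouvian tax set equal to the externality

social_cost = private_cost + external_suffering
taxed_private_cost = private_cost + tax

print(taxed_private_cost)                 # 8.0
print(taxed_private_cost == social_cost)  # True: the externality is internalized
```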
This differs from existing approaches (animal welfare regula...
Ajeya Cotra writes:
...I bet a number of generalist EAs (people who are good at operations, conceptual research / analysis, writing, generally getting shit done) should probably switch from working on AI safety and policy to working on biosecurity on the current margin.
While AI risk is a lot more important overall (on my views there's ~20-30% x-risk from AI vs ~1-3% from bio), it seems like bio is a lot more neglected right now and there's a lot of pretty straightforward object-level work to do that could take a big bite out of the problem (something that's mu
Pivotal is hiring a Research Manager for Technical AI Safety. This could be a great fit if you're technical, enjoy being in the weeds of multiple research projects, love 1:1s, or could see yourself as a coach. You don't need to be an experienced RM already, we want to make you one! Apply here – we evaluate on a rolling basis. Recommend people here or to me directly.
FYI - I'm excited to be adding @kuhanj as a judge for the 'Essays on Longtermism' competition.
Little bit of a bio: Kuhan researches and implements cost-effective ways to safeguard democracy at Movement Labs. Previously he founded two non-profits focused on AI and existential risk mitigation research, advocacy and university student engagement - the Stanford Existential Risks Initiative, which incubated MATS and EA Virtual Programs, and the Cambridge Boston Alignment Initiative. He's also worked with CEA's events and groups teams, helping to grow...
Given Netflix is working on "The Altruists" about SBF, I think we need to dispute his being EA. If I claimed to be a vegan but was then caught eating meat, no one would continue to see me as a vegan; they'd see me as a meat eater who lied about being vegan. The same logic needs to be applied to SBF. He didn't make a mistake, he made a choice, one that is firmly incompatible with believing in or being EA. He wasn't an imperfect EA; he was repeatedly dishonest and lied for his own gain.
I bet a number of generalist EAs (people who are good at operations, conceptual research / analysis, writing, generally getting shit done) should probably switch from working on AI safety and policy to working on biosecurity on the current margin.
While AI risk is a lot more important overall (on my views there's ~20-30% x-risk from AI vs ~1-3% from bio), it seems like bio is a lot more neglected right now and there's a lot of pretty straightforward object-level work to do that could take a big bite out of the problem (something that's much harder to come b...
Is the 1-3% x-risk from bio including bio catastrophes mediated by AI (via misuse and/or misalignment)? Is it taking into account ASI timelines?
I'm largely deferring to ASB on these numbers, so he can potentially speak in more detail, but my guess is this includes AI-mediated misuse and accident (people using LLMs or bio design tools to invent nastier bioweapons and then either deliberately or accidentally releasing them), but excludes misaligned AIs using bioweapons as a tactic in an AI takeover attempt. Since the biodefenses work could also help with t...
Mostly for fun I vibecoded an API to easily parse EA Forum posts as markdown with full comment details based on post URL (I think helpful mostly for complex/nested comment sections where basic copy and paste doesn't work great)
I have tested it on about three posts and every possible disclaimer applies
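For anyone curious what "parse with full comment details" might involve, here is a minimal sketch of rendering a nested comment tree as markdown. The dict shape (`author`, `body`, `replies`) is hypothetical, not the real EA Forum API schema:

```python
# Sketch: render a nested comment thread as markdown, using blockquote
# depth to show reply nesting. The comment structure here is invented
# for illustration; a real parser would map the forum's actual API
# response into this shape first.

def render_comments(comments, depth=0):
    """Flatten a comment tree into markdown lines, one comment per line."""
    lines = []
    prefix = "> " * depth  # deeper replies get more blockquote markers
    for c in comments:
        lines.append(f"{prefix}**{c['author']}**: {c['body']}")
        lines.extend(render_comments(c.get("replies", []), depth + 1))
    return lines

thread = [
    {"author": "alice", "body": "Top-level comment.",
     "replies": [{"author": "bob", "body": "A nested reply."}]},
]
print("\n".join(render_comments(thread)))
# **alice**: Top-level comment.
# > **bob**: A nested reply.
```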
Here is an endpoint that takes a Google Doc and turns it into a markdown file, including the comments: https://docs.nunosempere.com. Useful for automation; e.g., I downloaded my browser history, extracted all Google Docs, summarized them, and asked for a summary & blindspots.
Hey yall, does there exist some platform that allows you to post a donation matching campaign?
I would match all donations my friends/family make to a charity. I want the platform to verify that I am actually matching my friends' donations, and vice versa.
Does anything like this exist? And if not, would you be interested in using one? I am thinking of making such a platform.
Interesting! Thank you so much for replying!
I'm interested in making it automated. It's a real pain to get the proper permissioning to hold funds in escrow so my initial pass would be to just have the campaign starter fully donate their amount at the start (since most people who make matching campaigns will ultimately donate the same max amount of money) and then have an 'unlock' that occurs as other people donate (with integration to every.org to verify the charity etc).
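The "pre-donate then unlock" mechanic described above can be sketched in a few lines. Everything here is a hypothetical illustration of the idea, not a real integration (the every.org verification step is omitted):

```python
# Sketch of the matching-campaign unlock logic: the campaign starter donates
# their full pledge cap up front, and as peers donate, an equal amount of
# that pre-donation is marked as "unlocked" matching, capped at the pledge.

def matched_unlocked(pledge_cap, peer_donations):
    """Return how much of the starter's up-front donation counts as matching.

    pledge_cap: the maximum the starter committed (donated at launch).
    peer_donations: amounts donated by friends/family so far.
    """
    return min(pledge_cap, sum(peer_donations))

print(matched_unlocked(500, [100, 250]))  # 350 unlocked so far
print(matched_unlocked(500, [400, 300]))  # 500: capped at the pledge
```

A nice property of this design is that no escrow is needed: the money has already gone to the charity, and the "unlock" is purely an accounting signal shown to donors.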
However the authorization holds is an interesting idea. Good if you want to actu...
As AI models get better at generating art (music, visual arts, videography, writing, etc.), with "better" defined as less discernible from human-made output, the value of live performances and of the act of creating art will dwarf the value of media accessible to AI (e.g. books, recorded music, YouTube videos).
Writing seems to me the art form that will be least affected by this, though that is likely due to my skewed perspective: I've engaged with LLMs far more than with AI models of other modalities.
TL;DR: $100,000 for insights into an EA's unsolved medical mystery
(Sharing on behalf of the patient to preserve their anonymity)
The Medical Mystery Prize is a patient-funded initiative offering a $100,000 grand prize (plus smaller awards) for ideas that help advance a difficult, unresolved medical case.
The patient works in AI safety. The goal is to solve his health issue so that he can do his best work.
All patient records are fully anonymized and HIPAA-compliant. Submissions for the prize will be reviewed by a licensed healthcare provider before reac...
AI governance could be much more relevant in the EU if the EU were willing to regulate ASML. Tell ASML it can only service compliant semiconductor foundries, where a "compliant semiconductor foundry" is defined as a foundry which only allows its chips to be used by compliant AI companies.
I think this is a really promising path for slower, more responsible AI development globally. The EU is known for its cautious approach to regulation. Many EAs believe that a cautious, risk-averse approach to AI development is appropriate. Yet EU regulations are oft...
Let’s create example trial tasks to strengthen EA hiring?
EA orgs use trial tasks quite a lot in hiring, which is great: candidates can demonstrate their skills, which is what truly matters, regardless of their background. However, trial tasks outside of EA are often quite different, and it usually takes the average candidate several rejections before they learn how to show their best in that setting.
It would be great if we had example trial tasks for different roles (research, operations, etc.) so that people could practice before applying to real jobs. This way, strong candidates would not get lost in the hiring process simply due to inexperience with trial tasks.