I think some of the AI safety policy community has over-indexed on the visual model of the "Overton Window" and under-indexed on alternatives like the "ratchet effect," "poisoning the well," "clown attacks," and other models where proposing radical changes can make you, your allies, and your ideas look unreasonable.
I'm not familiar with a lot of systematic empirical evidence on either side, but it seems to me like the more effective actors in the DC establishment overall are much more in the habit of looking for small wins that are both good in themselves ...
There have been multiple occasions where I've copy and pasted email threads into an LLM and asked it things like:
I really want an email plugin that basically brute forces rationality INTO email conversations.
Tangentially - I wonder if LLMs can reliably convert people's claims into a % through sentiment analysis? This would be useful for forecasters, I believe (and for rationality in general)
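A toy, non-LLM sketch of the idea: map hedging phrases in a claim to rough probabilities, the kind of structured output you could prompt an LLM to produce. The phrase list and numbers here are purely illustrative assumptions, not from any calibration study.

```python
# Hypothetical hedge-word scale (illustrative numbers only).
HEDGE_SCALE = {
    "definitely": 0.97,
    "almost certainly": 0.95,
    "probably": 0.75,
    "likely": 0.70,
    "maybe": 0.50,
    "unlikely": 0.25,
    "almost certainly not": 0.05,
}

def claim_to_probability(claim: str):
    """Return the probability implied by the strongest hedge found, or None."""
    text = claim.lower()
    # Check longer phrases first so "almost certainly not" beats "almost certainly".
    for phrase in sorted(HEDGE_SCALE, key=len, reverse=True):
        if phrase in text:
            return HEDGE_SCALE[phrase]
    return None

print(claim_to_probability("We will probably ship this by Friday"))  # 0.75
```

A real version would need an LLM to handle negation, context, and speaker calibration, but even a crude mapping like this makes disagreements more legible.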
Trump recently said in an interview (https://time.com/6972973/biden-trump-bird-flu-covid/) that he would seek to disband the White House office for pandemic preparedness. Given that he usually doesn't give specifics on his policy positions, this seems like something he is particularly interested in.
I know politics is discouraged on the EA forum, but I thought I would post this to say: EA should really be preparing for a Trump presidency. He's up in the polls and IMO has a >50% chance of winning the election. Right now politicians seem relatively receptive to EA ideas; this may change under a Trump administration.
The full quote suggests this is because he classifies Operation Warp Speed (reactive, targeted) as very different from the Office (wasteful, impossible to predict what you'll need, didn't work last time). I would classify this as a disagreement about means rather than ends.
...One last question, Mr. President, because I know that your time is limited, and I appreciate your generosity. We have just reached the four-year anniversary of the COVID pandemic. One of your historic accomplishments was Operation Warp Speed. If we were to have another pandemic, wo
Is EA as a bait and switch a compelling argument for it being bad?
I don't really think so
Is there any research on the gap between AI safety research and reality? I wanted to read Eric Drexler's report on R&D automation in AI development, but it was too long so I put it on hold.
It is very doubtful whether such things are even controllable.
(1) The OpenAI incident
(2) Open-source projects such as Stockfish make their development process public. Even so, it remains quite unclear and opaque (despite their best efforts).
Overall, I feel strongly that research on AI safety is disconnected from reality.
While we're taking a short break from writing criticisms, I (the non-technical author) was wondering if people would find it valuable for us to share (brief) thoughts on what we've learnt so far from writing these first two critiques - such as how to get feedback, balance considerations, anonymity concerns, things we wish were different in the ecosystem to make it easier for people to provide criticisms, etc.
I love this series and I'm sorry to see that you haven't continued it. The rapid growth of AI Safety organizations and the amount of insider information and conflicts of interest is kind of mind boggling. There should be more of this type of informed reporting, not less.
I intend to strong downvote any article about EA that someone posts on here that they themselves have no positive takes on.
If I post an article, I have some reason I liked it. Even a single line. Being critical isn't enough on its own. If someone posts an article, without a single quote they like, with the implication that it's a bad article, I am minded to strong downvote so that no one else has to waste their time on it.
I can't find a better place to ask this, but I was wondering whether/where there is a good explanation of the scepticism of leading rationalists about animal consciousness/moral patienthood. I am thinking in particular of Zvi and Yudkowsky. In the recent podcast with Zvi Mowshowitz on 80K, the question came up a bit, and I know he is also very sceptical of interventions for non-human animals on his blog, but I had a hard time finding a clear explanation of where this belief comes from.
I really like Zvi's work, and he has been right about a lot of things I ...
This is an interesting #OpenPhil grant. $230K for a cyber threat intelligence researcher to create a database that tracks instances of users attempting to misuse large language models.
https://www.openphilanthropy.org/grants/lee-foster-llm-misuse-database/
Will user data be shared with the user's permission? How will an LLM determine the intent of the user when it comes to differentiating between purposeful harmful entries versus user error, safety testing, independent red-teaming, playful entries, etc. If a user is placed on the database, is she notif...
Quick poll [✅ / ❌]: Do you feel like you don't have a good grasp of Shapley values, despite wanting to?
(Context for after voting: I'm trying to figure out if more explainers of this would be helpful. I still feel confused about some of its implications, despite having spent significant time trying to understand it)
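For anyone who voted ✅, one thing that helped me was computing a tiny example by hand: a player's Shapley value is their marginal contribution averaged over all orders in which players could join. A minimal sketch, with a made-up two-player value function (the dollar figures are purely illustrative):

```python
from itertools import permutations
from math import factorial

def shapley_values(players, value):
    """Average each player's marginal contribution over all join orders."""
    totals = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            totals[p] += value(frozenset(coalition)) - before
    return {p: t / factorial(len(players)) for p, t in totals.items()}

# Toy example: two fundraisers raise $100 each alone but $300 together.
v = lambda s: {frozenset(): 0,
               frozenset({"A"}): 100,
               frozenset({"B"}): 100,
               frozenset({"A", "B"})
               : 300}[s]
print(shapley_values(["A", "B"], v))  # {'A': 150.0, 'B': 150.0}
```

The $100 of synergy gets split evenly, and the values sum to the total ($300), which is the "efficiency" property people usually cite when arguing Shapley values beat naive counterfactual impact.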
Time for the Shrimp Welfare Project to do a Taylor Swift crossover?
https://www.instagram.com/p/C59D5p1PgNm/?igsh=MXZ5d3pjeHAxeHR2dw==
Not a wholly-unserious suggestion. SWP could do a tie-in with the artist creating these fun knock-offs, capitalise on Swift madness, rehabilitate shrimp as cute in the process.
Excerpt from the most recent update from the ALERT team:
Highly pathogenic avian influenza (HPAI) H5N1: What a week! The news, data, and analyses are coming in fast and furious.
Overall, ALERT team members feel that the risk of an H5N1 pandemic emerging over the coming decade is increasing. Team members estimate that the chance that the WHO will declare a Public Health Emergency of International Concern (PHEIC) within 1 year from now because of an H5N1 virus, in whole or in part, is 0.9% (range 0.5%-1.3%). The team sees the chance going up substantiall...
Thanks, here's the link for others: https://forecasting.substack.com/p/alert-minutes-for-week-172024
Given how bird flu is progressing (spread in many cows, virologists believing rumors that humans are getting infected but no human-to-human spread yet), this would be a good time to start a protest movement for biosafety/against factory farming in the US.
Btw, I don't think the virus has a high mortality rate in its current form, based on these reported rumors
How to communicate EA to the commonsense Christian: has it been done before?
I'm considering writing a series of posts exploring the connection between EA and the common-sense Christianity you might encounter on the street if you asked someone about their 'faith.'
I've looked into EA for Christians a bit, and haven't done a deep dive into their articles yet. I'm wondering what the consensus is on this group, and if anyone involved can give me a synopsis on how that's been going. Has it been effective?
I'm posting this quick take as a means of feeling ou...
How do you deal with the frustration of trying to find an Entry-level Machine Learning job as a Software Engineer not based near Bay Area or London?
Dustin Moskovitz claims "Tesla has committed consumer fraud on a massive scale", and "people are going to jail at the end"
https://www.threads.net/@moskov/post/C6KW_Odvky0/
Not super EA relevant, but I guess relevant inasmuch as Moskovitz funds us and Musk has in the past too. I think if this were just some random commentator I wouldn't take it seriously at all, but I'm a bit more inclined to believe Dustin will take some concrete action. I'm not sure I've read everything he's said about it; I'm not used to how Threads works.
A lot of policy research seems to be written with an agenda in mind, to shape the narrative. And this kind of destroys the point of policy research, which is supposed to inform stakeholders, not actively convince or nudge them.
This might cause polarization on some topics and is, in itself, probably snatching legitimacy away from the space.
I have seen similar concerning parallels in the non-profit space, where some third-sector actors endorse/do things which they see as good but which destroy trust in the whole space.
This gives me scary unilateralist's curse vibes...
In case you're interested in supporting my EA-aligned YouTube channel A Happier World:
I've lowered the minimum funding goal from $10,000 to $2,500 to give donors confidence that their money will directly support the project. If the minimum funding goal isn't reached, you won't get your money back directly; instead it will go back into your Manifund balance for you to spend on a different project. I understand this may have been a barrier for some, which is why I lowered the minimum funding goal.
At this point, I'd be willing to buy out credit from anyone who obtains credit on Manifund, applies said credit to this project, and the project doesn't fund. Hopefully Manifund will find a more elegant solution for this kind of issue (there was a discussion on Discord last week) but this should work as a stopgap.
(Offer limited to $240, which is the current funding gap between current offers and the $2500 minimum.)
First in-ovo sexing in the US
Egg Innovations announced that they are "on track to adopt the technology in early 2025." Approximately 300 million male chicks are ground up alive in the US each year (since only female chicks are valuable) and in-ovo sexing would prevent this.
UEP originally promised to eliminate male chick culling by 2020; needless to say, they didn't keep that commitment. But better late than never!
Congrats to everyone working on this, including @Robert - Innovate Animal Ag, who founded an organization devoted to pushing this tec...
I asked Google when chicken embryos start to feel pain and this was the first result (i.e. I didn't look hard and I didn't anchor on a figure):
A recent study by the Technical University of Munich in Germany measured chicken embryos' heart rate, brain activity, blood pressure and movements in response to potentially painful stimuli like heat and electricity and concluded that they didn't seem to feel them until at least day 13. (14 Oct 2023)
Do you believe that altruism actually makes people happy? Peter Singer's book argues that people become happier by behaving altruistically, and psychoanalysis also classifies altruism as a mature defense mechanism. However, there are also concerns about pathological altruism and people pleasers. In-depth research data on this is desperately needed.
Good question I also think about!
After only a few months of being deeply into EA, I already realise that discussing it with non-EA people makes me emotional, since I "cannot understand" why they aren't easily convinced of it as well. How can something so logical not be followed by everyone? At least by donating? I think there is a danger of becoming pathetic if you don't reflect on it and aren't aware that you cannot convince everybody.
On the other hand, EA is already having a big impact on how I donate and how I act in my job - so in this ...