Pause AI / Veganish
Let's do a bunch of good stuff and have fun, gang!
I am always looking for opportunities to contribute directly to big problems and to build my skills. Especially skills related to research, science communication, and project management.
Also, I have a hard time coping with some of the implications of topics like existential risk, the strangeness of the near term future, and the negative experiences of many non-human animals. So, it might be nice to talk to more people about that sort of thing and how they cope.
I have taken BlueDot Impact's AI Alignment Fundamentals course. I have also lurked around EA for a few years now. I would be happy to share what I know about EA and AI Safety.
I also like brainstorming and discussing charity entrepreneurship opportunities.
The Distillation for Robust Unlearning paper (https://arxiv.org/abs/2506.06278) makes me re-interested in the idea of using distillation to absorb the benefits of a Control Protocol (https://arxiv.org/abs/2312.06942).
I had already thought that was a natural "Distillation and Amplification" next step for control anyway, but the empirical results for unlearning make me excited about how this might work for control again.
Like, I guess I am just saying that if you are actually in a regime where you are using the Trusted model some nontrivial fraction of the time, you might be able to distill off of that.
I relate it to the idea of iterated amplification and distillation: the control protocol is the scaffold/amplification. Plus, it seems natural that your most troubling outputs would get special attention from bot/human/cyborg overseers and therefore higher-quality training feedback.
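To make that concrete, here is a minimal sketch of what "distilling off of the control protocol" could look like, assuming a defer-to-trusted style setup. All the function names, the threshold, and the stub behavior below are hypothetical placeholders I made up for illustration, not anything from either paper.

```python
# Hypothetical sketch: generate distillation data by running a
# defer-to-trusted control protocol, then fine-tune a fresh student
# on the transcripts the protocol actually endorsed.
# Every model call here is a placeholder stub, not a real API.

SUSPICION_THRESHOLD = 0.7  # made-up number; would be tuned per protocol

def untrusted_generate(task: str) -> str:
    return f"[untrusted model answer to: {task}]"  # placeholder

def trusted_generate(task: str) -> str:
    return f"[trusted model answer to: {task}]"  # placeholder

def monitor_suspicion(task: str, output: str) -> float:
    return 0.0  # placeholder: a trusted monitor would score this output

def run_control_protocol(tasks: list[str]) -> list[tuple[str, str]]:
    """Collect (task, output) pairs that the protocol endorsed."""
    transcripts = []
    for task in tasks:
        candidate = untrusted_generate(task)
        if monitor_suspicion(task, candidate) > SUSPICION_THRESHOLD:
            # Defer to the Trusted model on suspicious outputs, so the
            # distillation data inherits the protocol's filtering.
            candidate = trusted_generate(task)
        transcripts.append((task, candidate))
    return transcripts

def build_sft_dataset(transcripts: list[tuple[str, str]]) -> list[dict]:
    # Stand-in for the distillation step: format the endorsed
    # transcripts as a supervised fine-tuning dataset for a student.
    return [{"prompt": task, "completion": output} for task, output in transcripts]

if __name__ == "__main__":
    data = build_sft_dataset(run_control_protocol(["write a sorting function"]))
    print(data)
```

The point is just that the student only ever trains on transcripts the protocol endorsed, including the cases where oversight swapped in the Trusted model's output.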
Then again, training off of control might make no sense at all if you think of that model as just one brain playing a game with itself, a game it can always rig/fake easily. And since a lot of the concern is scheming, this might basically make the "control protocol distill" dead on arrival: any worthwhile distill would still need to be smart enough that it might be sneak-attacking us for roughly the same reasons the original model was, and even extremely harmless training data doesn't help us with that.
Still, it seems good to make the model tend to be more cool and less sketchy, even if it would only be ~"trusted model level good" at some stuff. Idk though, I am divided here.
Here's a question that comes to mind: if local EA communities make people 3x more motivated to pursue high-impact careers, or make it much easier for newcomers to engage with EA ideas, then even if these local groups are only operating at 75% efficiency compared to some theoretical global optimum, don't you still get a significant net benefit? (Naively, 3 × 0.75 = 2.25, so still more than double the counterfactual impact per person.)
I am sympathetic to this argument vibes-wise, and I thought this was an elegant numerate utilitarian case for it. Part of my motivation is that I think it would be good if a lot of EA-ish values were a lot more mainstream. Like, I would even say that you probably get non-linear returns to scale in some important ways. You kind of need a critical mass of people to do certain things.
It feels like, necessarily, these organizations would also be about providing value to their members. That is a good thing.
I think there is something like a "but what if we get watered down too much" concern latent here. I can kind of see how this would happen, but I am also not that worried about it. The tent is already pretty big in some ways. Stuff like numerate utilitarianism, empiricism, broad moral circles, thoughtfulness, tough trade-offs doesn't seem in danger of going away soon. Probably EA growing would spread these ideas rather than shrink them.
Also, I just think that societies/people all over the world could significantly benefit from stronger third pillars and that the ideal versions of these sorts of community spaces would tend to share a lot of things in common with EA.
Picture it. The year is 2035 (9 years after the RSI near-miss event triggered the first Great Revolt). You ride your bitchin' electric scooter to the EA-adjacent community center where you and your friends co-work on a local voter awareness campaign, startup idea, or just a fun painting or whatever. An intentional community.
That sounds like a step towards the glorious transhumanist future to me, but maybe the margins on that are bad in practice and the community centers of my daydreams will remain merely EA-adjacent. Perhaps I just need to move to a town with cooler libraries. I am really not sure what the Dao here is or where the official EA brand really fits into any of this.
Ya, maybe. This concern/way of thinking just seems kind of niche; probably only a very small demographic overlaps with me here. So I guess I wouldn't expect it to add up to a consequential amount of money for, e.g., Anthropic or OpenAI.
That checkbox would be really cool though. It might ease the friction/dissonance for people who buy into high p(doom) or relatively non-accelerationist perspectives. My views are not representative of anyone but me, but a checkbox like that would be a killer feature for me and would certainly win my $20/mo :) . And maybe, y'know, all 100 people or whatever who would care and see it that way.
Ya, really sad to hear that!
I mean, if they were going to fire him for that, maybe just don't hire him. It feels kind of mercurial that they were shamed into canning him so easily/soon.
Ofc, I can understand nonviolence being, like, an important institutional messaging north star lol. But the vibes are kind of off when you fire a recently hired podcaster bc of a cringe compilation of their show. Seriously, what the hell?
I do not think listening to that podcast made me more violent. In fact, the thought experiments and ideation he was touching on are, like, perfectly reasonable given the stakes he is laying out and the urgency he is advocating. Like, it's not crazy ground to cover; he talks for hours. Whatever lol, at least I think it was understandable in context.
Part of it feels like the equivalent of having to lobby against "civilian nuclear warheads". You say, "I wish that only a small nuclear detonation would occur when an accidental detonation does happen, but the fact is there's likely to be a massive nuclear chain reaction," and then you get absolutely BLASTED after someone clips you saying that you actually WISH a "small nuclear detonation" would occur. What a sadist you must be!
This feels like such stupid politics. I really want John Sherman to land on his feet. I was quite excited to see him take the role. Maybe that org was just not the right fit though...
I think it might be cool if an AI Safety research organization ran a copy of an open model or something and I could pay them a subscription to use it. That way I would know my LLM subscription money is going to good AI stuff and not towards AI companies that I don't really like or want more of on net.
Idk, existing independent orgs might not be the best place to do this bc it might "damn them" or "corrupt them" over time. Like, it could lead them to "selling out" in a variety of ways you could imagine.
Still, I guess I am saying that, to the extent anyone is going to actually "make money" off of my LLM usage subscriptions, it would be awesome if it were just a cool independent AIS lab I personally liked or similar. (I don't really know the margins and unit economics, which seems like an important part of this pitch lol.)
Like, if "GoodGuy AIS Lab" set up a little website and inference server (running Qwen or Llama or whatever), then I could pay them the $15-25 a month I might have otherwise paid to an AI company. The selling point would be that less "moral hazard" is better vibes. Probably only some people would care about this at all, so it would be a small thing, but it's hardly like a felt sense of moral hazard around AI is a terribly niche issue.
This isn't the "final form" of this I have in mind necessarily; I enjoy picking at ideas in the space of "what would a good guy AGI project do" or "how can you do neglected AIS / 'AI go well' research in a for-profit way".
I also like the idea of an explicitly fast-follower project for AI capabilities. Like, accelerate safety/security-relevant stuff and stay comfortably middle of the pack on everything else. I think improving GUIs is probably fair game too, but maybe not once it starts to shade into scaffolding? I wouldn't know all of the right lines to draw here, but I really like this vibe.
This might not work well if you expect gaps to widen as RSI becomes a more important input. I would argue that expectation seems too galaxy-brained given that, as of writing, we do live in a world with a lot of mediocre AI companies that I believe can all provide products of ~comparable quality.
It is also just kind of a bet that, in practice, it is probably going to remain a lot less expensive to stay a little behind the frontier than to be at the frontier, and that the gap may continue not to matter in a lot of cases.
Thanks for linking "Line Goes Up? Inherent Limitations of Benchmarks for Evaluating Large Language Models". Also, I agree with:
> MacAskill and Moorhouse argue that increases in training compute, inference compute and algorithmic efficiency have been increasing at a rate of 25 times per year, compared to the number of human researchers which increases 0.04 times per year, hence the 500x faster rate of growth. This is an inapt comparison, because in the calculation the capabilities of ‘AI researchers’ are based on their access to compute and other performance improvements, while no such adjustment is made for human researchers, who also have access to more compute and other productivity enhancements each year.
That comparison seems simplistic and inapt for at least a few reasons. That does seem like pretty "trust me bro" justification for the intelligence explosion lol. Granted, I only listened to the accompanying podcast, so I can't speak too much to the paper.
Still, I am of two minds. I still buy into a lot of the premise of "Preparing for the Intelligence Explosion". I find the idea of getting collectively blindsided by rapid, uneven AI progress ~eminently plausible. There didn't even need to be that much of a fig leaf.
Don't get me wrong, I am not personally very confident in "expert level AI researcher for arbitrary domains" within the next few decades. Even so, it does seem like the sort of thing worth thinking about and preparing for.
From one perspective, AI coding tools are just recursive self improvement gradually coming online. I think I understand some of the urgency, but I appreciate the skepticism a lot too.
Preparing for an intelligence explosion is a worthwhile thought experiment at the very least. It seems like it would probably be good to know what we would do in a world with "a lot of powerful AI", given that we are in a world where all sorts of people are trying to research/make/sell ~"a lot of powerful AI". Like, just in case.
I think I see multiple sides. Lots to think about.
I think the focus is generally placed on the cognitive capacities of AIs because that is expected to be a bigger deal overall.
There is at least one 80,000 Hours podcast episode on robotics. It tries to explain why robotics is hard to do ML on, but I didn't understand it.
Also, I think Max Tegmark wrote some stuff on slaughterbots in Life 3.0. Yikes!
You could try looking for other differential development stuff too if you want. I recently liked "AI Tools for Existential Security"; I think it's a good conceptual framework for emerging tech / applied ethics stuff. Of course, it still leaves you with a lot of questions :)
I love to see stuff like this!
It has been a pleasure reading this, listening to your podcast episode, and trying to really think it through.
This reminds me of a few other things I have seen lately, like Superalignment, Joe Carlsmith's recent "AI for AI Safety", and the recent 80,000 Hours Podcast with Will MacAskill.
I really appreciate the "Tools for Existential Security" framing. Your example applications were on point and many of them brought up things I hadn't even considered. I enjoy the idea of rapidly solving lots of coordination failures.
This sort of DAID approach feels like an interesting continuation of other ideas about differential acceleration and the vulnerable world hypothesis. Trying to get this right can feel like some combination of applied ethics and technology forecasting.
Probably one of the weirdest or most exciting applications you suggest is AI for philosophy. You put it under the "Epistemics" category. I usually think of epistemics as a sub-branch of philosophy, but I think I get what you mean. AI for this sort of thing remains exciting, but very abstract to me.
What a heady thing to think about; really exciting stuff! There is something very cosmic about the idea of using AI research and cognition for ethics, philosophy, and automated wisdom. (I have been meaning to read "Winners of the Essay competition on the Automation of Wisdom and Philosophy"). I strongly agree that since AI comes with many new philosophically difficult and ethically complex questions, it would be amazing if we could use AI to face these.
The section on how to accelerate helpful AI tools was nice too.
Appendix 4 was gold. The DPD framing is really complementary to the rest of the essay. I can totally appreciate the distinction you are making, but I also see DPD as bleeding into AI for Existential Safety a lot as well. Such mixed feelings. Like, for one thing, you certainly wouldn't want to be deploying whack AI in your "save the world" cutting-edge AI startup.
And it seems like there is a good case for thinking about doing better pre-training and finding better paradigms if you are going to be thinking a lot about safer AI development and deployment anyway. Maybe I am missing something about the sheer economics of never wanting to actually do pre-training.
In any case, I thought your suggestions around aiming for interpretable, robust, safe paradigms were solid. Paradigm-shaping and application-shaping are both interesting.
***
I really appreciate that this proposal is talking about building stuff! And that it can be done ~unilaterally. I think that's just an important vibe and an important type of project to have going.
I also appreciate that you said in the podcast that this was only one possible framing/clustering, although you also say "we guess that the highest priority applications will fall into the categories listed above", which seems like a potentially strong claim.
I have also spent some time thinking about which forms of ~research / cognitive labor would be broadly good to accelerate for similar existential security reasons, and I tried to retrospectively sort some notes I had made into your categories. Some of my ideas were hard to place cleanly into epistemics, coordination, or direct risk targeting.
I included a few more ideas for areas where AI tools, marginal automated research, and cognitive abundance might be well applied. I was going for a similar vibe, so I'm sorry if I overlap a lot. I will try to only mention things you didn't explicitly suggest:
Epistemics:
Coordination-enabling:
Risk-targeting:
I know it is not the main thrust of "existential security", but I think it is worth considering the potential of applying "abundant cognition" to welfare/sentience research (e.g. biological and AI). This seems really important from a lot of perspectives, for a lot of reasons.
That said, I have not really considered the offense/defense balance here. We may discover how to simulate suffering for much cheaper than pleasure, or something horrendous like that. Or there might be info hazards. This space seems so high-stakes and hard to chart.
Some mix:
I know I included some moonshots. This all depends on what AI systems we are talking about and what they are actually helpful with I guess. I would hate for EA to bet too hard on any of this stuff and accidentally flood the zone of key areas with LLM "slop" or whatever.
Also, to state the obvious, there may be some risk of correlated exposure if you pin too much of your existential security on the crucial aid of unreliable, untrustworthy AIs. Maybe HAL 9000 isn't always the entity to trust with your most critical security.
Lots to think about here! Thanks!
Joe Carlsmith: "Risk evaluation tracks the safety range and the capability frontier, and it forecasts where a given form of AI development/deployment will put them."
I think the real-life scenarios where AI kills the most people today are governance stuff and military stuff.
I feel like I have heard about the most unhinged, haunted uses of LLMs in government and policy spaces. I think that certain people have just "learned to stop worrying and love the hallucination". They are living like it is the future already, getting people killed with their ignorance, and spreading/using AI BS in bad faith.
Plus, there is already a lot of slaughterbot stuff going on, e.g. the "Robots First" war in Ukraine.
Maybe job automation is worth mentioning too. I believe Andrew Yang's stance, for example, is that it is already largely here and that most people just do have less labor power already, but I could be mischaracterizing this. I think "jobs stuff" plausibly shades right into doom via "industrial dehumanization" / gradual disempowerment. In the meantime, it hurts people too.
Thanks for everything Holly! Really cool to have people like you actively calling for an international pause on ASI!
Hot take: even if most people hear a really loud-ass warning shot, it is just going to fuck with them a lot, but not drive change. What are you even expecting typical poor and middle-class nobodies to do?
March in the street and become activists themselves? Donate somewhere? Post on social media? Call representatives? Buy ads (likely from Google or Meta)? Divest from risky AI projects? Boycott LLMs/companies?
Ya, okay, I feel like the pathway from "worry" to any of that is generally very windy, but sure. I still feel like that is just a long way from the kind of galvanized political will and real change you would need for, e.g., major AI companies with huge market caps to get nationalized or wiped off the market or whatever.
I don't even know how to picture a transition to an intelligence-explosion-resistant world, and I am pretty knee-deep in this stuff. I think the road from here to a good outcome is just too blurry a lot of the time. It is easy to feel and be disempowered here.