Bio

Pause AI / Veganish

Let's do a bunch of good stuff and have fun, gang!

How others can help me

I am always looking for opportunities to contribute directly to big problems and to build my skills, especially skills related to research, science communication, and project management.

Also, I have a hard time coping with some of the implications of topics like existential risk, the strangeness of the near-term future, and the negative experiences of many non-human animals. So it might be nice to talk to more people about that sort of thing and how they cope.

How I can help others

I have taken BlueDot Impact's AI Alignment Fundamentals course. I have also lurked around EA for a few years now. I would be happy to share what I know about EA and AI Safety.

I also like brainstorming and discussing charity entrepreneurship opportunities.

Comments (37)

Ya, I think that's right. I think making bad stuff more salient can make it more likely in certain contexts. 

For example, I can imagine it being naive to constantly transmit all sorts of detailed information, media, and discussion about specific weapons platforms, raising awareness of capabilities you really hope the bad guys don't develop because they might make them too strong. I just read "Power to the People: How Open Technological Innovation Is Arming Tomorrow's Terrorists" by Audrey Kurth Cronin, and I think it has a really relevant vibe here. Sometimes I worry about EAs doing unintentional advertisement for, e.g., bioweapons and superintelligence.

On the other hand, I think topics like s-risk are already salient enough for other reasons. Extreme cruelty and torture have arisen independently many times throughout history and nature, and there are already ages' worth of pretty unhinged torture-porn material that people have written on other parts of the internet, for example the Christian conception of hell or horror fiction.

This seems sufficient to say we are unlikely to significantly increase the likelihood of "blind grabs from the memeplex" leading to mass suffering. Even cruel torture is already pretty salient. And suffering is in some sense simple if it is just "the opposite of pleasure" or whatever. Utilitarians commonly talk in these terms already.

I will agree that it's sometimes not good to carelessly spread memes about specific bad stuff. I don't always know how to navigate the trade-offs here; probably there is at least some stuff broadly related to GCRs and s-risks that is better left unsaid. But a lot of stuff related to s-risk is there whether you acknowledge it or not. I submit that surely some level of "raise awareness so that more people and resources can go to mitigation" is necessary/good?
 

What dynamics do you have in mind specifically?

Always a strong unilateralist curse with infohazard stuff haha.

I think it is reasonably based, and there is a lot to be said about hype, infohazards, and the strange "futurist x-risk warning to product company" pipeline. It may even be especially potent, or especially likely to bite, in exactly the EA milieu.

I find the Waluigi idea a bit of a stretch given that "what if the robot became evil" is already a trope, as is the Christian devil, for example. "Evil" seems at least adjacent to "strong value pessimization".

Maybe the thought is that a literal bit-flip utility minimizer is rare (outside of, e.g., extortion), and talking about it would spread the meme and some cultist or confused billionaire would try to build it, that sort of thing?

Thanks for sharing, good to read. I got most excited about 3, 6, 7, and 8. 

As far as 6 goes, I would add that it would probably be good if AI Safety had a more mature academic publishing scene in general and some more legit journals. There is a place for the Alignment Forum, arXiv, conference papers, and such, but where is Nature AI Safety or its equivalent?

I think there is a lot to be said for basically raising the waterline there. I know there is plenty of AI Safety stuff that has been published for decades in perfectly respectable academic journals and such. I personally like the part in "Computing Machinery and Intelligence" where Turing says that we may need to rise up against the machines to prevent them from taking control. 

Still, it is a space I want to see grow and flourish big time. In general, big ups to more and better journals, forums, and conferences within such fields as AI Safety / Robustly Beneficial AI Research, Emerging Technologies Studies, Pandemic Prevention, and Existential Security. 

EA forum, LW, and the Alignment Forum have their place, but these ideas ofc need to germinate out past this particular clique/bubble/subculture. I think more and better venues for publishing are probably very net good in that sense as well.

7 is hard to think about but sounds potentially very high impact. If any billionaires ever have a scary ChatGPT interaction or a similar come-to-Jesus moment and google "how to spend 10 billion dollars to make AI safe" (or even ask Deep Research), then you could bias/frame the whole discussion/investigation heavily from the outset. I am sure there is plenty of equivalent googling by staffers and congresspeople in the process of making legislation right now.

8 is right there with AI tools for existential security. I mostly agree that an AI product that didn't push forward AGI but did increase fact-checking would be good. This stuff is so hard to think about. There is so much moral hazard in the water, and I feel like I am "vibe captured" by all the Silicon Valley money in the AI x-risk subculture.

Like, for example, I am pretty sure I don't think it is ethical to be an AGI scaling/racing company, even if Anthropic has better PR and vibes than Meta. Is it okay to be a fast follower, though? Compete on fact-checking, sure, but is making agents more reliable or teaching Claude to run a vending machine "safety", or is that merely equivocation?

Should I found a synthetic virology unicorn, but one that is way chiller than the other synthetic virology companies? The analogy isn't completely off, because there are medical uses for synthetic virology, and pharma companies are also huge, capital-intensive, high-tech operations that spend hundreds of millions on a single product. Still, that sounds awful.

Maybe you think an armed balance of power with nuclear weapons is a legitimate use case. It would still be bad to run a nuclear bomb research company that scales up production and reduces costs for nuclear weapons. But idk. What if you really could build a better control system than the other guy? Should hippies start military tech startups now?

Should I start a competing plantation that, in order to stay profitable and competitive with other slave plantations, uses slave labor and does a lot of bad stuff? If I assume that market demand is fixed and this is pretty much the only profitable way to farm at scale, then so long as I grow my wares at a lower cruelty-per-bushel than the average of my competitors, am I racing to the top? It gets bad. The same thing could apply to factory farming.

(edit: I reread this comment and wanted to go more out of my way to say that I don't think this represents a real argument made presently or historically for chattel slavery. It was merely an offhand insensitive example of a horrific tension b/w deontology and simple goodness on the one hand and a slice of galaxy brained utilitarian reasoning on the other.)

Like I said, there is so much moral hazard in the idea of an "AGI company for good stuff", but I think I am very much in favor of "AI for AI Safety" and "AI tools for existential security". I like "fact-checking" as a paradigm example of a prosocial use case.

 

Hey, cool stuff! I have ideated and read a lot on similar topics and proposals. Love to see it!
 

Is the "Thinking Tools" concept worth exploring further as a direction for building a more trustworthy AI core?

I am agnostic about whether you will hit technical paydirt. I don't really understand what you are proposing at a "gears level", I guess, and I'm not sure I could make a good guess even if I did. But I will say that the vibe of your approach sounded pleasant and empowering. It was a little abstract to me, I guess I'm saying, but that need not be a bad thing; maybe you're just visionary.

It reminds me of the idea of using RAG or Toolformer to get LLMs to "show their work" and "cite their sources" and stuff. There is surely a lot of room for improvement there because Claude bullshits me with links on the regular.

This also reminds me of Conjecture's Cognitive Emulation work, and even just Max Tegmark and Steve Omohundro's emphasis on making inscrutable LLMs lean heavily on deterministic proof checkers to win back certain guarantees.
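
To make the "cite their sources" idea concrete, here is a toy sketch of my own (not anything from your proposal; the tiny corpus, the naive `retrieve` scoring, and the citation check are all made up for illustration):

```python
# Toy sketch of "grounded answers with citations" (illustrative only).
# The corpus, retrieval scoring, and citation check are all hypothetical.

CORPUS = {
    "doc1": "The 1950 Turing paper proposes the imitation game.",
    "doc2": "RAG systems retrieve passages and condition generation on them.",
}

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank documents by naive word overlap with the query (toy scoring)."""
    def overlap(text: str) -> int:
        return len(set(query.lower().split()) & set(text.lower().split()))
    return sorted(CORPUS, key=lambda d: overlap(CORPUS[d]), reverse=True)[:k]

def answer_with_citations(query: str, generate) -> str:
    """Ask the model to answer using only retrieved docs, then check that
    every [docN] citation it emits refers to a doc that was actually retrieved."""
    doc_ids = retrieve(query)
    context = "\n".join(f"[{d}] {CORPUS[d]}" for d in doc_ids)
    draft = generate(f"Answer using only these sources:\n{context}\n\nQ: {query}")
    cited = {tok.strip("[].,") for tok in draft.split() if tok.startswith("[doc")}
    if not cited or not cited.issubset(doc_ids):
        return "I can't support an answer from the retrieved sources."
    return draft

# `generate` stands in for an LLM call; here it is just a stub.
print(answer_with_citations(
    "What do RAG systems do?",
    generate=lambda prompt: "They retrieve passages and condition on them [doc2].",
))
```

Obviously a real system would use an actual retriever and an actual model call where the `generate` stub is, but the shape of "retrieve, then refuse to answer anything you can't pin to a retrieved source" is the part I find appealing.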
 

  • Is the "LED Layer" a potentially feasible and effective approach to maintain transparency within a hybrid AI system, or are there inherent limitations?

I don't have a clear enough sense of what you're even talking about, but there are definitely at least some additional interventions you could run in addition to the thinking tools: e.g., monitoring, faithful-CoT techniques for marginally truer reasoning traces, probes, classifiers like the one Anthropic runs to robustly catch jailbreaks aimed at misuse, etc.

I think something like "defense in depth" is more or less the current slogan of AI Safety. So, sure, I can imagine all sorts of stuff you could try to run for more transparency beyond deterministic tool use, but without a clearer conception of the finer points, I'd say there are quite a lot of inherent limitations, but plenty of options and things to try as well (toy sketch below).

Like, "robustly managing interpretability" is more like a holy grail than a design spec in some ways lol.
 

  • What are the biggest practical hurdles in considering the implementation of CCACS, and what potential avenues might exist to overcome them?

I think a lot of what it is shooting for is aspirational and ambitious, and it correctly points out limitations in the current approaches to and designs of AI. All of that is spot on, and there is a lot to like here.

However, I think the problem of interpreting, and building appropriate trust in, complex learned algorithmic systems like LLMs is a tall order. "Transparency by design" is truly one of the great technological mandates of our era, but without more context it can feel like a buzzword, like "security by design".

I think the biggest "barrier" I can see is that this framing just isn't sticky enough to survive memetically, and people will keep pursuing transparency, tool use, control, reasoning, etc. under different frames.

But still, I think there is a lot of value in this space, and you would get paid big bucks if you could even marginally improve our current ability to get trustworthy, interpretable work out of LLMs. So, y'know, keep up the good work!

Thanks, it's not that original. I am sure I have heard them talk about AIs negotiating and forgetting stuff on the 80,000 Hours Podcast and David Brin has a book that touches on this a lot called "The Transparent Society". I haven't actually read it, but I heard a talk he gave.

Maybe technological surveillance and enforcement requirements will actually be really intense at technological maturity, and whatever does the enforcing will need to be really powerful, really local, and steeped in context about what's going on. In that case, some value like privacy or "being alone" might be really hard to save.

Hopefully, even in that case, you could have other forms of restraint. Like, I can still imagine that if something like the orthogonality thesis is true, then you could maybe have a really elegant, light-touch, special-focus anti-superweapons system that is reliably limited to that goal. If we understood the cognitive elements well enough that it felt like physics or programming, then we could even say that the system meaningfully COULD NOT do certain things (violate the prime directive or whatever), and then it wouldn't feel so much like an omnipotent overlord as a special-purpose tool deployed by local law enforcement (because a place would be bombed or invaded if it could not prove it had established such a system).

If you are a world of poor peasant farmers, then maybe nobody needs to know what your people are writing in their diaries. But if you are the head of fast prototyping and automated research at some relevant dual-use technology firm, then maybe there should be much more oversight. Idk, there seems to be lots of room for gradation, nuance, and context awareness here, so I guess I agree with you that the "problem of liberty" is interesting.

There was a lot to this that was worth responding to. Great work.

I think making God would actually be a bad way to handle this. You could probably stop it with superior forms of limited-knowledge surveillance, and there are likely socio-technical remedies that would dampen some of the harsher liberty-related tradeoffs here considerably.

Imagine, for example, a more distributed machine intelligence system. Perhaps it's really not all that invasive to monitor that you're not making a false vacuum or whatever, and it uses futuristic auto-secure hyper-delete technology to instantly delete everything it sees that isn't relevant.

Also, the system itself isn't all that powerful; rather, it can alert others and draw attention to important things. And the system's implementation, as well as the actual violent/forceful enforcement that goes along with it, probably can and should be handled in a generally more cool, chill, and fair way than I associate with Christian-God-style centralized surveillance and control systems.

Also, a lot of these problems are already extremely salient for the "how do we stop civilization-ending superweapons from being created" style of problem we are in the midst of here on 2025 Earth. It seems basically true that, if you want to stay alive indefinitely, you do ~need to maintain some level of coordination with, or dominance over, anything that might make a superweapon that could kill you.

Ya, idk, I am just saying that the tradeoff framing feels unnatural. Or, like, maybe that's one lens, but I don't actually generally think in terms of tradeoffs b/w my moral efforts.

Like, I get tired of various things ofc, but my effort isn't usually cleanly fungible between the different ethical actions I might plausibly take like that. To the extent it really does work this way for you or people you know on this particular tradeoff, then yep; I would say power to ya for the scope sensitivity.

I agree that the quantitative aspect of donation means that even marginal internal tradeoffs here matter, and I don't think I was really thinking about it as necessarily binary.

I agree with 1, but I think the framing feels forced for point #2.

I don't think it's obvious that these actions would be strongly in tension with each other. Donating to effective animal charities would correlate quite strongly with being vegan.

Homo economicus deciding what to eat for dinner or something lol.

I actually totally agree that donations are an important part of personal ethics! Also, I am all aboard for the social ripple effects theory of change for effective donation. Hell yes to both of those points. I might have missed it, but I don't know that OP really argues against those contentions? I guess they don't frame it like that though.

I appreciate this survey, and I found many of your questions to be charming probes. I would like to register that I object to the "is elitism good actually?" framing here. There is a very common way of defining "elitism" that is just straightforwardly negative. Like, "elitism" implies classist, inegalitarian stuff that goes beyond using it as an edgelord-libertarian way of saying "meritocracy".

I think there is a lot of conceptual tension between EA as a literal mass movement and EA as an unusually talent-dense clique / professional network. Probably there is room in the world for both high-skill professional networks and broad ethical movements, but y'know ...

I think the real-life scenarios where AI kills the most people today are governance stuff and military stuff.

I feel like I have heard about the most unhinged, haunted uses of LLMs in government and policy spaces. I think certain people have just "learned to stop worrying and love the hallucination". They are living like it is the future already, getting people killed with their ignorance, and spreading/using AI bs in bad faith.

Plus, there is already a lot of slaughterbot stuff going on, e.g., the "Robots First" war in Ukraine.

Maybe job automation is worth mentioning too. I believe Andrew Yang's stance, for example, is that it is largely already here and most people simply have less labor power now, but I could be mischaracterizing this. I think "jobs stuff" plausibly shades right into doom via "industrial dehumanization" / gradual disempowerment. In the meantime it hurts people too.

Thanks for everything, Holly! Really cool to have people like you actively calling for an international pause on ASI!

Hot take: even if most people hear a really loud-ass warning shot, it is just going to fuck with them a lot but not drive change. What are you even expecting typical poor and middle-class nobodies to do?

March in the street and become activists themselves? Donate somewhere? Post on social media? Call representatives? Buy ads (likely from Google or Meta)? Divest from risky AI projects? Boycott LLMs/companies?

Ya, okay, I feel like the pathway from "worry" to any of that is generally very windy, but sure. I still feel like that is just a long way from the kind of galvanized political will and real change you would need for, e.g., major AI companies with huge market caps to get nationalized or wiped off the market or whatever.

I don't even know how to picture a transition to an intelligence-explosion-resistant world, and I am pretty knee-deep in this stuff. I think the road from here to a good outcome is just too blurry to make out a lot of the time. It is easy to feel, and to be, disempowered here.
 
