Bio


Pause AI / Veganish

Let's do a bunch of good stuff and have fun gang!

How others can help me

I am always looking for opportunities to contribute directly to big problems and to build my skills, especially skills related to research, science communication, and project management.

Also, I have a hard time coping with some of the implications of topics like existential risk, the strangeness of the near term future, and the negative experiences of many non-human animals. So, it might be nice to talk to more people about that sort of thing and how they cope.

How I can help others

I have taken BlueDot Impact's AI Alignment Fundamentals course. I have also lurked around EA for a few years now. I would be happy to share what I know about EA and AI Safety.

I also like brainstorming and discussing charity entrepreneurship opportunities.

Comments

There was a lot in here that felt insightful and well considered. 

I agree that thinking about the end state and humanity in the limit is a fruitful area of philosophy with potentially quite important implications. I wrestle with this sort of thing a lot.

One perspective I would note here (I associate this line of thinking with Will MacAskill) is that we ought to be immediately aiming for a wiser, more stable sort of middle-ground and then aim for the "end state" from there. I think that can make sense for a lot of practical reasons. I think there is enough of a complex truth to what is and isn't morally good that I am inclined to believe the "moral error as an x-risk" framing and, as such, I tend to place a high premium on option value. Given the practical uncertainties of the situation, I feel pretty comfortable aiming for / punting to some more general "process of wise deliberation" over directly locking my current best guess into the cosmos.

That said, y'know, we make decisions every day and it is still definitely worth tracking what my current best guess is for what ought actually be done with the physical matter and energy extant in the cosmos. I am partial to much of the substance that you put forward here. 
 

  1. "ensuring the ongoing existence of sentience"

    "sentience" is a bit tricky for me to parse, but I will put in for positively valenced subjective experience :) 

  2. "gaining total knowledge except that knowledge which requires inducing suffering"

    I mean, sure, why not? I think that sort of thing is cool and inspiring for the most part. There are probably things that would count as "knowledge" to me but which are so trivial that I wouldn't necessarily care about them much. But, y'know, I will put in for the practical necessity of learning more about the universe as well as the aesthetic / profound beauty of discovering the rules of the universe and the nature of nature.

  3. "ending all suffering"

    Fuck ya dude! I'm against evil, and suffering seems like a central example of that. There may even be more aesthetic or injustice-like things that I would consider evil even in the absence of negatively valenced experience per se, and I might entertain abolishing those too.

There is a lot to be said about the "end state" which you don't really mention here. Like, for example, I think it is good for people to be really, exceptionally happy if we can swing it. I don't know how to think about population ethics honestly. 

One issue that really bites for me when I try to picture the end of the struggle and the steady end state is:

  1. people often intrinsically value reproducing
  2. I want immortality
  3. Each person may require a minimum subsistence amount of stuff to live happily (even if we shrink everyone or make provably morally relevant simulations or something)
  4. Finite materials / scarcity

I have no reasonable way out of this conundrum and I hate biting the "population control" bullet. That reeks of, like, "one-child policy" and overpopulation-motivated genocides (cf. The Legacy of India's Quest to Sterilize Millions of Men / the Uttawar forced sterilizations). I think concerns in this general vein about the resources people use and the limits to growth are also pretty closely tied to the not uncommon concerns people have around overpopulation / climate heads not wanting to have kids.

Also, to make it less abstract, I will admit that my morals / impulses are fundamentally quite natalist and I would quite like to be a Dad some day. Even if we grant that resource growth exceeds population growth for now, it seems hard to escape the Malthusian trap forever and I think this is a very fundamental tension in the limit.

Wow, I love that you ended your post in questions. I found your thesis compelling; it reminded me of how much value I used to get from more actively networking with and reaching out to people in online EA spaces. Also, I loved that it was short and salient. 

  • What helps you ask for help when it feels uncomfortable?

Knowing relevant people who have signaled they are okay being asked for help on a given topic. Having a personalish connection to people. A lack of fear of stigma or social consequence for asking a dumb question that I shouldn't have needed help with. A sense of worthiness that I am even allowed to ask things of other people in this context.

  • When was the last time you asked for help, and what happened?

I ask for help multiple times every day. I am a working stiff and my day job is bench work as a technician in a clinical diagnostics lab (microbiology department). I ask the more senior technicians and medical directors for advice constantly, multiple times a day. That usually goes well and people either give me some kind of answer or at least tell me who to ask. The main downside is that it can take up my time and tbh sometimes they don't give me great advice.

Also I ask my wife for help all the time and that goes great because they are an amazing partner that I am lucky to have! :) I love my wife!

Hey nice! AGI and improvements to representative democracy systems are both right up my alley!

That said, I think the AGI tie-in might seem kind of superficial in that having more functional governance and societal coordination mechanisms would help with all sorts of stuff, so I think it makes sense to frame this in a reasonably AGI-timeline-agnostic sort of way. That said, ya, I see your point that this sort of thing is made all the more dire when thrown into relief by our "time of troubles" and "longtermists on the precipice" style thinking. Your call here, but I am sure it is not necessary to believe random "LLMs will change the world" predictions to believe that certain democratic reforms make sense.

In my experience, a lot of people in online EA spaces are pretty willing to talk to you if you reach out, so I think you'll have decent luck there if that's what you're after. Not as confident about how to find more serious collaborators for a project like this.

A few ideas I would throw out there for the sake of brainstorming (many or all of which you may already be familiar with):

  • independent redistricting / anti gerrymandering schemes
  • merging voting districts and allocating seats proportionally rather than requiring majorities (i.e. mixed-member proportional representation) to counteract "winner take all" / minority underrepresentation
  • ranked choice / transferable voting to diminish the spoiler effect (see the toy tally sketch after this list)
  • open primaries might be a good idea to disincentivize the party system from filtering for radical candidates as hard
  • liquid democracy to let people vote directly on issues that matter to them instead of going through their rep at all (eg. imagine being able to disagree with your senator whenever you want and cast your individual .0000002% of a vote directly on whatever issue)
  • People talk about quadratic voting too, which is probably worth knowing something about from a mechanism design standpoint, but in my opinion it doesn't really stand out as a solution to anything on its own without a better way of defining what each actor's budget of voting credits would actually need to be applied to / split between in any given round.
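
To make the ranked-choice item a bit more concrete, here is a minimal instant-runoff tally sketch; the candidates and ballot counts are made up purely for illustration:

```python
from collections import Counter

def instant_runoff(ballots):
    """Toy instant-runoff (ranked-choice) count for a single seat.

    Each ballot is a list of candidates in preference order. The weakest
    candidate is repeatedly eliminated and their ballots transfer to the
    next surviving preference until someone holds a majority.
    """
    surviving = {c for ballot in ballots for c in ballot}
    while True:
        # Count each ballot toward its highest-ranked surviving candidate.
        tally = Counter(
            next(c for c in ballot if c in surviving)
            for ballot in ballots
            if any(c in surviving for c in ballot)
        )
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > sum(tally.values()) or len(surviving) == 1:
            return leader
        surviving.discard(min(tally, key=tally.get))

# Hypothetical ballots: the third-party candidate is eliminated first and
# those votes transfer instead of splitting the majority (no spoiler).
ballots = (
    [["Green", "Dem", "GOP"]] * 15
    + [["Dem", "Green", "GOP"]] * 40
    + [["GOP", "Dem", "Green"]] * 45
)
print(instant_runoff(ballots))  # plurality would pick "GOP"; IRV picks "Dem"
```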

Also, I definitely second the idea of using a citizens' assembly. In my opinion, the power of random sampling + time to learn about and focus on an issue is really OP and really underutilized by representative democracies. The statistics of approximating large populations with small random samples are working in our favor here (quick back-of-the-envelope below). Honestly, there is tons of adverse selection in the electoral process (eg. this book deals with some elements of that).
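
By "quick back-of-the-envelope" I just mean the standard worst-case binomial margin of error for a simple random sample; the sample sizes below are arbitrary examples, not anything from the post:

```python
import math

def margin_of_error(sample_size, z=1.96):
    """Worst-case 95% margin of error for estimating a proportion from a
    simple random sample (p = 0.5 maximizes the variance). Note that it
    depends on the sample size, not on the population size."""
    return z * math.sqrt(0.25 / sample_size)

for n in (100, 500, 1000):
    print(f"n = {n:4d}: about +/- {margin_of_error(n):.1%}")
# n =  100: about +/- 9.8%
# n =  500: about +/- 4.4%
# n = 1000: about +/- 3.1%
```

That's the sense in which a ~1,000-person assembly can stand in for a polity of hundreds of millions about as well as it can for a single city.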

If you haven't seen CGP Grey's "Politics in the Animal Kingdom" series, you might love it! Also the Forward Party in the US tends to push for similar ideas / platforms, so they might be worth checking out.

I think this kind of work is very valuable! Nation states might yet be the death of us. It has been terrible to watch the democratic backsliding and corruption in my own US of A (in fact, I will be one of the protestors this 10/18 No Kings Day). Plus, I agree with your sentiment that there is a lot of headroom. Personally, I think this has less to do with the rise of cyberspace and more to do with the fact that existing polities were just never particularly optimized around the sorts of ideals we are aspiring towards here. Classical Age Greece and the revolutionary United States were both slave states with a lot of backwards ideas, after all.

In the case of a government locking in their own power, it seems like you are holding the motivations constant and just saying "power lets you accumulate more power" or something right?

The obvious dis-analogy, which I am sure you are aware of on some level but which I didn't really see you foreground here, is that in the case of either the pause bootstrap or the constitutional deliberation bootstrap, the motivations of the actors are themselves in flux for this period. There isn't as clear of a story you can tell here about why acceleration should occur at all, but I take it the implied accelerant to our explosion is something like "additional deliberation / pausing is factually correct and good" and "additional deliberation / pausing will improve epistemic conditions".

Also, let me just flag that the "constitutional conventions of ever greater length" example you gave illustrates a world that is gradually locked in for larger and larger stretches of time, not merely one where there is an ever-increasing amount of deliberation. Like, plausibly, that is an account of gradually sliding into lock-in, first for a one-month interval, then for a one-year interval, etc.

Good stuff though. I've been wrestling with this kind of morality-laden futurology and "what victory looks like" a lot lately, not all in the context of AI but also just against Malthusian traps and the wild state of nature. I tend to agree that viatopia, the great reflection, and really any scenario where wise deliberation will occur and be acted upon are beautiful and desirable waystations.

Ambitious stuff indeed! There's a lot going on here. 

I really appreciate discussions about "big picture strategy about avoiding misalignment". 

For starters, in my opinion, solving technical alignment and control well enough to elicit the main benefits of having a "superintelligent servant" is merely one threat model / AGI-driven challenge. That said, ofc, getting that sort of thing right basically means the rest of the planning is better left to someone else, and if you are willing to additionally postulate a strong "decisive strategic advantage", it is basically also a win condition for whatever else you could want.

I would point to eg.

  • robo powered ultra tyranny
  • gradual disempowerment / full unemployment / the intelligence curse
  • misinformation slop /  mass psychosis
  • terrorism, scammers, and a flood of many competent robo psychopaths
  • machines feeling pain and robot rights
  • accelerated R&D and needing to adapt at machine speeds

as issues that can all still bite more or less even in worlds where you get some level of "alignment", esp. if you operationalize alignment as more ~"robust instruction tuning++" rather than ~"optimizing for the true moral law itself".

That said, takeover by rogue models or systems of models is a super salient threat model in any world where machines are being made to "think" more and better.

I found your list of competing framings which cut against AI Safety quite compelling. Safety washing is indeed all over the place. One thing I didn't see noted specifically is that a pretty significant contingent within EA / AI Safety works pretty actively on apologetics for hyperscalers because they directly financially benefit and/or they have kind of groomed themselves into being the kind of person who can work at a top AI Safety lab.

To draw a contrast with how this might have gone: you don't, for example, see many EAs working at and founding hot new synthetic virology companies in order to "do biocontainment better than the competitors". Ostensibly, there could be a similar grim logic of inevitability and a sense that "we ought to do all of the virology experiments first and more responsibly". Then, idk, once we've built really powerful AIs or learned everything about virology, we can use this to exit the time of troubles. I don't actually know what eg. Anthropic's plan is for this, but in the case of synthetic virology / gain-of-function research you might imagine that once you've learned the right stuff about all the potential pathogens, you would be super duper prepared to stop them with all your new medical interventions.

Like, I guess I am just noting my surprise at not seeing good old "keeping safety at the frontier" / "racing through a minefield" Anthropic show up more in a screed about safety washing. The EA/rat space in general is one of the few places where catastrophic risk from AI is a priority and the conflicts of interest here literally could not run deeper. This whole place is largely funded by one of the Meta cofounders and there are a lot of very influential EAs with a lot of personal connection to and complete financial exposure to existing AI companies. This place was a safety adjacent trade show before it was cool lol. 

Lots of loving people here who really care on some level, I'm sure, but if we are talking about mixed signals, then I would reconsider the beam in our own team's eye lol.

***

Beyond that, I guess there is the matter of timelines. 

I do not share your confidence in short timelines and think interventions that take a while to pay off can be super worthwhile. 

Also, idk, I feel like the assumption that it is all right around the corner and that any day now the singularity is about to happen is really central to the views of a lot of people into x-safety in a way that might explain part of why the worldview kind of struggles to spread outside the relatively limited pool of people who are open to that. 

I don't know what you'd call marginal or just fiddling around the edges, because I would agree that it is bad if we don't do enough soon enough and someone builds a lethally intelligent super mind that rises up, and it's game over.

Maybe the only way to really push for x-safety is with If Anyone Builds It style "you too should believe in and seek to stop the impending singularity" outreach. That just feels like such a tough sell even if people would believe in the x-safety conditional on believing in the singularity. Agh. I'm conflicted here. No idea.

I would love it if we could do more to ally with people who do not see the singularity as being particularly near, without things descending into idle "safety washing" or "trust and safety"-level corporate bullshit.
Like the "AI is insane hype" contingency has some real stuff going for them too. I don't think they are all just blind. In my humble opinion, I also think Sam Altman looks like an asshole when he calls ChatGPT "PhD level" and talks about it doing "new science". You know, in some sense, if we're just being cute, then Wikipedia has been PhD level for a while now and it makes less shit up. There is a lot of hype. These people are marketing and sometimes they get excited.
 

Plus, it gives me bad vibes when I am trying to push for x-safety and I encounter (often quite justified) skepticism about the power levels of current LLMs and I end up basically just having to do marketing work or whatever for model providers. Idk. 

I'm pretty sure LLM providers aren't even profitable at this point, and general robotics isn't obviously much more "right around the corner" than it would've seemed to a disinterested layperson over the past few decades. I'm conflicted on this stuff; idk how much effort should go into "singularity is near" vs "if singularity, then doom by default".

Red lines and RSPs are actually probably a pretty good way of unifying "singularity near" x-safety people with "singularity far" or even "singularity who?" x-safety allies.

***

As far as strategic takeaways:

I do think it is good sense to "be ready" and have good ideas "sitting around" for when they are needed. I believe there was a recent UN General Assembly where world leaders were literally asking around for, like, ideas for AI red lines. If this is a world where intelligent machines are rising, then there is a good chance we continue to see signs of that (until we don't). The natural tide of "oh shit guys" and "wow this is real" may be attenuated somewhat by frog-boiling effects, but still. Also, the weirdness of the AI Safety regulation and such under consideration will benefit from frog boiling.

Preparedness seems like a great idle time activity when the space isn't receiving the love/attention it deserves :) .

"I dont think its undemocratic for Trump to be elected for a 3rd term, so long as proper procedures are followed here and he wins the election fairly."

I can kind of see where you are coming from. I would invite you to consider that sometimes even that sort of thing could be bullshit / tyranny, cf. the Enabling Act of 1933.
 

Also, for resolution criteria:


"Other markets i would suggest would be on imprisonment/murder of political opponents and judges. I would suggest markets like "will at least 4 of the following 10 people be imprisoned or murdered by Dec 31 2028", etc."

Do you think specific targets would generally have been easy enough to call in advance for other autocracies / self coups? That seems non-obvious to me?

Ya, I think that's right. I think making bad stuff more salient can make it more likely in certain contexts. 

For example, I can imagine it to be naive to be constantly transmitting all sorts of detailed information, media, and discussion about specific weapons platforms, raising awareness of capabilities that you really hope the bad guys don't develop because they might make them too strong. I just read "Power to the People: How Open Technological Innovation Is Arming Tomorrow's Terrorists" by Audrey Kurth Cronin and I think it has a really relevant vibe here. Sometimes I worry about EAs doing unintentional advertisement for eg. bioweapons and superintelligence.

On the other hand, I think that topics like s-risk are already salient enough for other reasons. Like, I think extreme cruelty and torture have arisen independently a lot of times throughout history and nature. And there are already ages' worth of pretty unhinged torture-porn stuff that people have written on a lot of other parts of the internet, for example the Christian conception of hell or horror fiction.

This seems sufficient to say we are unlikely to significantly increase the likelihood of "blind grabs from the memeplex" leading to mass suffering. Even cruel torture is already pretty salient. And suffering is in some sense simple if it is just "the opposite of pleasure" or whatever. Utilitarians commonly talk in these terms already.

I will agree that it's sometimes not good to carelessly spread memes about specific bad stuff. I don't always know how to navigate the trade-offs here; probably there is at least some stuff broadly related to GCRs and s-risks which is better left unsaid. But a lot of stuff related to s-risk is there whether you acknowledge it or not. I submit to you that surely some level of "raise awareness so that more people and resources can be used on mitigation" is necessary/good?
 

What dynamics do you have in mind specifically?

Always a strong unilateralist curse with infohazard stuff haha.

I think it is reasonably based, and there is a lot to be said about hype, infohazards, and the strange futurist-x-risk-warning-to-product-company pipeline. It may even be especially potent or likely to bite in exactly the EA milieu.

I find the idea of Waluigi a bit of a stretch given that "what if the robot became evil" is a trope. And so is the Christian devil for example. "Evil" seems at least adjacent to "strong value pessimization". 

Maybe a literal bit flip utility minimizer is rare (outside of eg extortion) and talking about it would spread the memes and some cultist or confused billionaire would try to build it sort of thing?

Thanks for sharing, good to read. I got most excited about 3, 6, 7, and 8. 

As far as 6 goes, I would add that I think it would probably be good if AI Safety had a more mature academic publishing scene in general and some more legit journals. There is a place for the Alignment Forum, arXiv, conference papers, and such, but where is Nature AI Safety or the equivalent?

I think there is a lot to be said for basically raising the waterline there. I know there is plenty of AI Safety stuff that has been published for decades in perfectly respectable academic journals and such. I personally like the part in "Computing Machinery and Intelligence" where Turing says that we may need to rise up against the machines to prevent them from taking control. 

Still, it is a space I want to see grow and flourish big time. In general, big ups to more and better journals, forums, and conferences within such fields as AI Safety / Robustly Beneficial AI Research, Emerging Technologies Studies, Pandemic Prevention, and Existential Security. 

EA forum, LW, and the Alignment Forum have their place, but these ideas ofc need to germinate out past this particular clique/bubble/subculture. I think more and better venues for publishing are probably very net good in that sense as well.

7 is hard to think about but sounds potentially very high impact. If any billionaires ever have a scary ChatGPT interaction or a similar come to Jesus moment and google "how to spend 10 billion dollars to make AI safe" (or even ask Deep Research), then you could bias/ frame the whole discussion / investigation heavily from the outset. I am sure there is plenty of equivalent googling by staffers and congresspeople in the process of making legislation now.

8 is right there with AI tools for existential security. I mostly agree that an AI product which didn't push forward AGI, but did increase fact checking would be good. This stuff is so hard to think about. There is so much moral hazard in the water and I feel like I am "vibe captured" by all the Silicon Valley money in the AI x-risk subculture.

Like, for example, I am pretty sure I don't think it is ethical to be an AGI scaling/racing company even if Anthropic has better PR and vibes than Meta. Is it okay to be a fast follower though? Compete in terms of fact checking, sure, but is making agents more reliable or teaching Claude to run a vending machine "safety", or is that merely equivocation?

Should I found a synthetic virology unicorn, but we will be way chiller than the other synthetic virology companies? And it's not completely dis-analogous, because there are medical uses for synthetic virology, and pharma companies are also huge, capital-intensive, high-tech operations that spend hundreds of millions on a single product. Still, that sounds awful.

Maybe you think armed balance of power with nuclear weapons is a legitimate use case. It would still be bad to run a nuclear bomb research company that scales up and reduces costs etc. for nuclear weapons. But idk. What if you really could put in a better control system than the other guy? Should hippies start military tech startups now?

Should I start a competing plantation that, in order to stay profitable and competitive with other slave plantations, uses slave labor and does a lot of bad stuff? And if I assume that the demands of the market are fixed and this is pretty much the only profitable way to farm at scale, then as long as I grow my wares at a lower cruelty-per-bushel than the average of my competitors, am I racing to the top? It gets bad. The same thing could apply to factory farming.

(edit: I reread this comment and wanted to go more out of my way to say that I don't think this represents a real argument made presently or historically for chattel slavery. It was merely an offhand insensitive example of a horrific tension b/w deontology and simple goodness on the one hand and a slice of galaxy brained utilitarian reasoning on the other.)

Like I said, so much moral hazard in the idea of an "AGI company for good stuff", but I think I am very much in favor of "AI for AI Safety" and "AI tools for existential security". I like "fact checking" as a paradigm example of a prosocial use case.

 

Hey, cool stuff! I have ideated and read a lot on similar topics and proposals. Love to see it!
 

Is the "Thinking Tools" concept worth exploring further as a direction for building a more trustworthy AI core?

I am agnostic about whether you will hit technical paydirt. I don't really understand what you are proposing on a "gears level", I guess, and I'm not sure I could make a good guess even if I did. But I will say that I think the vibe of your approach sounded pleasant and empowering. It was a little abstract to me, I guess I'm saying, but that need not be a bad thing; maybe you're just visionary.

It reminds me of the idea of using RAG or Toolformer to get LLMs to "show their work" and "cite their sources" and stuff. There is surely a lot of room for improvement there bc Claude bullshits me with links on the regular.

This also reminds me of Conjecture's Cognitive Emulation work, and even just Max Tegmark and Steve Omohundro's emphasis on making inscrutable LLMs lean heavily on deterministic proof checkers to win back certain guarantees.
 

  • Is the "LED Layer" a potentially feasible and effective approach to maintain transparency within a hybrid AI system, or are there inherent limitations?

I don't have a clear enough sense of what you're even talking about, but there are definitely at least some additional interventions you could run in addition to the thinking tools... eg. monitoring, faithful-CoT techniques for marginally truer reasoning traces, you could run probes, Anthropic runs classifiers to help robustly guard against jailbreaks / misuse, etc.

I think "defense in depth" is something like the current slogan of AI Safety. So, sure, I can imagine all sorts of stuff you could try to run for more transparency beyond deterministic tool use, but w/o a clearer conception of the finer points it feels like I should say that there are quite an awful lot of inherent limitations, but plenty of options / things to try as well.
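
To gesture at why stacking imperfect layers still buys a lot, here's a toy bit of arithmetic; the per-layer catch rates are invented for illustration and assume the layers fail independently, which real jailbreaks often violate:

```python
# Toy "defense in depth" arithmetic: if independent layers each catch some
# fraction of bad outputs, the chance something slips past all of them is
# the product of the per-layer miss rates. Catch rates here are made up.
layers = {
    "output monitor": 0.90,
    "reasoning-trace (CoT) check": 0.70,
    "activation probe": 0.60,
    "misuse classifier": 0.80,
}

miss_rate = 1.0
for name, catch_rate in layers.items():
    miss_rate *= 1 - catch_rate
    print(f"after {name:28s}: {miss_rate:.2%} of bad outputs slip through")

# Correlated failures (one exploit that fools every layer at once) erase
# most of this benefit, which is why the inherent-limitations caveat above
# still applies.
```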

Like, "robustly managing interpretability" is more like a holy grail than a design spec in some ways lol.
 

  • What are the biggest practical hurdles in considering the implementation of CCACS, and what potential avenues might exist to overcome them?

I think that a lot of what it is shooting for is aspirational and ambitious and correctly points out limitations in the current approaches and designs of AI. All of that is spot on and there is a lot to like here. 

However, I think the problem of interpreting and building appropriate trust in complex learned algorithmic systems like LLMs is a tall order. "Transparency by design" is truly one of the great technological mandates of our era, but without more context it can feel like a buzzword, like "security by design".

I think the biggest "barrier" I can see is just that this framing isn't sticky enough to survive memetically, and people keep trying to do transparency, tool use, control, reasoning, etc. under different frames.

But still, I think there is a lot of value in this space and you would get paid big bucks if you could even marginally improve the current ability to get trustworthy, interpretable work out of LLMs. So, y'know, keep up the good work!
