
Sharmake

1025 karma · Joined

Comments: 337

Topic contributions: 2

The main unremovable advantages of AIs over humans will probably be in the following 2 areas:

  1. A serial speed advantage of roughly 50-1000x, with my median in the 100-500x range, and more generally the ability to be run slower or faster to do proportionally more work, although there are tradeoffs at either extreme of running very slow or very fast.

  2. The ability for compute/software improvements to convert directly into more researchers with essentially zero serial time required, unlike basically all biological reproduction (about the only cases that even come close are the hours-to-days doubling times of flies and some bacteria/viruses, but these are doing much simpler jobs, and it's unclear whether you could add more compute/learning capability without slowing their doubling time).

This is the mechanism by which you can get far more AI researchers very quickly, while the number of human researchers doesn't increase proportionally.

Human researchers probably do benefit, assuming AI is useful enough to automate, say, AI research, but these 2 unremovable limitations fundamentally prevent anything like an explosion in human research, unlike AI research.
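To make the two advantages concrete, here is a minimal back-of-the-envelope sketch (my own illustration; the researcher counts, the one-year compute doubling time, and the 100x speedup picked from the range above are all assumed values, not figures from the comment):

```python
# Back-of-the-envelope sketch: how a serial speed advantage plus
# compute-proportional growth in AI instance counts compound,
# while the human researcher population stays roughly flat.
# All numbers are illustrative assumptions.

def effective_researcher_years(n_researchers: float, serial_speedup: float,
                               years: float) -> float:
    """Human-equivalent researcher-years produced over a wall-clock period."""
    return n_researchers * serial_speedup * years

# Assumed scenario parameters (hypothetical).
human_researchers = 30_000        # roughly fixed; training new humans is slow
ai_researchers_start = 1_000      # AI instances doing research at t = 0
compute_doubling_years = 1.0      # instance count doubles with compute each year
serial_speedup = 100              # within the 100-500x median range above
horizon_years = 5

ai_researchers_end = ai_researchers_start * 2 ** (horizon_years / compute_doubling_years)

human_output = effective_researcher_years(human_researchers, 1, horizon_years)
# Crude approximation: use the average AI population over the period.
ai_output = effective_researcher_years(
    (ai_researchers_start + ai_researchers_end) / 2, serial_speedup, horizon_years)

print(f"Human researcher-years over {horizon_years} years: {human_output:,.0f}")
print(f"AI human-equivalent researcher-years:              {ai_output:,.0f}")
```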

I'm trying to identify why the trend has lasted, so that we can predict when the trend will break down.

That was the purpose of my comment.

Sharmake
100% disagree: "Consequentialists should be strong longtermists"

 

I disagree, mostly because of the "should" wording: believing in consequentialism doesn't obligate you to adopt any particular discount rate or discount function. These are essentially free parameters, so discount rates are independent of consequentialism.
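As a rough formalization of the "free parameter" point (my own notation, not anything from the original exchange): a consequentialist ranks outcomes by something like

$$V = \sum_{t=0}^{\infty} d(t)\, u_t,$$

where $u_t$ is the value realized at time $t$, and nothing in consequentialism itself pins down the discount function $d(t)$: both $d(t) = 1$ (no discounting, the strong-longtermist case) and $d(t) = \delta^t$ with $\delta < 1$ are admissible.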

Sharmake
50% agree: "Bioweapons are an existential risk"


I'll just repeat @weeatquince's comment, since he already covered the issue better than I did:

With current technology probably not an x-risk. With future technology I don’t think we can rule out the possibility of bio-sciences reaching the point where extinction is possible. It is a very rapidly evolving field with huge potential.

I mean the trend of very fast increases in compute dedicated to AI; specifically, fabs and chip manufacturers have shifted their customer base toward AI companies.

Sharmake
40% disagree: "AGI by 2028 is more likely than not"

 

While I think AGI by 2028 is reasonably plausible, too many factors have to go right in order to get there by then, and this is true even if AI timelines are short.

 

To be clear, I do agree that if we don't get AGI by the early 2030s at the latest, AI progress will slow down, but I don't have nearly enough credence in the supporting arguments to put my median at 2028.

The basic reason for the trend continuing so far is that NVIDIA et al have diverted normal compute expenditures into the AI boom.

I agree that the trend will stop, probably around 2027-2033 (this is where my uncertainty is widest), and once that happens the probability of getting AGI soon will go down quite a bit (if it hasn't happened by then).

@Vasco Grilo🔸's comment is reproduced here for posterity:

Thanks for sharing, Sharmake! Have you considered crossposting the full post? I tend to think this is worth it for short posts.

My own take is that while I don't want to defend the "find a correct utility function" approach to alignment as sufficient at this time, I do think it is actually necessary, and that the modern era is an anomaly in how much we can get away with misalignment being checked by institutions that go beyond the individual.

The basic reason why we can get away with not solving the alignment problem is that humans depend on other humans, and in particular you cannot replace humans with much cheaper workers whose preferences can be set arbitrarily.

AI threatens to remove the need to depend on other humans, which is a critical part of how we can get away with not having the correct utility function.

I like the Intelligence Curse series because it points out that when an elite doesn't need the commoners for anything, and the commoners have no selfish value to the elite, then by default the elites starve the commoners to death unless the elites are value-aligned.

The Intelligence Curse series is below:

https://intelligence-curse.ai/defining/

In this analogy, the AIs are the elites and the rest of humanity are the commoners.

My own take on the AI Safety Classic arguments is that o3/Sonnet 3.7 have convinced me that the "alignment is very easy" hypothesis is looking a lot shakier than it used to, and I suspect future capabilities progress will be at best neutral, and probably negative, for the case that alignment is very easy.

I do think you can still remain optimistic based on other considerations, but a pretty core crux for me is that alignment does need to be solved if AIs are able to automate the economy, and this holds across most variations on what happens with AI.

The big reason for this is that once your labor is valueless, but your land/capital isn't, you have fundamentally knocked out a load-bearing pillar of the argument that expropriation is less useful than trade.

This is, to a first approximation, why we do not trade with most non-human species, and instead enslave or kill them.

(For farm animals, their labor is useful, but much of what humans want from animals fundamentally requires expropriation/violating the animals' property rights.)
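To make the trade-vs-expropriation point concrete, here is a toy payoff comparison (entirely my own illustration; the payoff functions and all the numbers are made up, not from the post):

```python
# Toy model: an actor chooses between trading with a group or expropriating
# its land/capital. As the value of the group's labor goes to zero while its
# capital stays valuable, expropriation starts to dominate trade.
# All parameter values are hypothetical.

def trade_payoff(labor_value_per_year: float, years: float) -> float:
    """Ongoing surplus from trading with the group (they keep their capital)."""
    return labor_value_per_year * years

def expropriation_payoff(capital_stock: float, enforcement_cost: float) -> float:
    """One-off gain from seizing the group's land/capital, minus the cost of doing so."""
    return capital_stock - enforcement_cost

capital_stock = 100.0
enforcement_cost = 20.0
years = 30

for labor_value in (10.0, 1.0, 0.0):   # value of the group's labor per year
    trade = trade_payoff(labor_value, years)
    grab = expropriation_payoff(capital_stock, enforcement_cost)
    better = "trade" if trade > grab else "expropriate"
    print(f"labor value {labor_value:>4}: trade={trade:6.1f}, expropriate={grab:6.1f} -> {better}")
```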

A good picture of what happens if we fail is, at minimum, the intelligence curse scenario elaborated on by Rudolf Laine and Luke Drago below:

https://intelligence-curse.ai/defining/
