JM

Joseph Miller

623 karmaJoined

Comments
40

As a technical person: AI is scary but this paper in particular is a nothing-burger. See my other comments.

No, there is no interesting new method here, it's using LLM scaffolding to copy some files and run a script. It can only duplicate itself within the machine it has been given access to.

In order for AI to spread like a virus it would have to have some way to access new sources of compute, for which it would need be able to get money or the ability to hack into other servers. Neither of which current LLMs appear to be capable of.

Successful self-replication under no human assistance is the essential step for
AI to outsmart the human beings

This seems clearly false. Replication (under their operationalization) is just another programming task that is not especially difficult. There's no clear link between this task and self improvement, which would be a much harder ML task requiring very different types of knowledge and actions.

However, I do separately think we have passed the level of capabilities where it is responsible to keep improving AIs.

I'm confused why the comments aren't more about cause prioritization as that's the primary choice here. Maybe that's too big of a discussion for this comment section.

This just seems like another annoying spam / marketing email. I basically never want any unnecessary emails from any company ever.

EA co-working spaces are the most impactful EA infrastructure that I'm aware of. And they are mostly underfunded.

This is particularly relevant given the recent letter from Anthropic on SB-1047.

I would like to see a steelman of the letter since it appears to me to significantly undermine Anthropic's entire raison d'etre (which I understood to be: "have a seat at the table by being one of the big players - use this power to advocate for safer AI policies"). And I haven't yet heard anyone in the AI Safety community defending it.

https://www.lesswrong.com/posts/s58hDHX2GkFDbpGKD/linch-s-shortform?commentId=RfJsudqwEMwTR5S5q
TL;DR

Anthropic are pushing for two key changes

  • not to be accountable for "pre-harm" enforcement of AI Safety standards (ie. wait for a catastrophe before enforcing any liability).
  • "if a catastrophic event does occur ... the quality of the company’s SSP should be a factor in determining whether the developer exercised 'reasonable care.'". (ie. if your safety protocols look good, you can be let off the hook for the consequences of catastrophe).

Also significantly weakening whistleblower protections.

Ok thanks, I didn't know that.

Nit: Beff Jezos was doxxed and repeating him name seems uncool, even if you don't like him.

Load more