StevenKaas

Comments

Since somebody was wondering if it's still possible to participate without having signed up through alignmentjam.com:

Yes, people are definitely still welcome to participate today and tomorrow, and are invited to head over to Discord to get up to speed.

Note that Severin is a coauthor on this post, though I haven't found a way to add his EA Forum account as a coauthor on a crosspost from LessWrong.

We tried to write a related answer on Stampy's AI Safety Info:

How could a superintelligent AI use the internet to take over the physical world?

We're interested in any feedback on improving it, since this is a question a lot of people ask. For example, are there major gaps in the argument that could be addressed without giving useful information to bad actors?

Thanks for reporting the broken links. It looks like a problem with the way Stampy is importing the LessWrong tag. Until the Stampy page is fixed, following the links from LessWrong should work.

There's an article on Stampy's AI Safety Info that discusses the differences between FOOM and some related concepts. FOOM seems to be used synonymously with "hard takeoff", or perhaps with "hard takeoff driven by recursive self-improvement"; I don't think it has a technical definition separate from that. At the time of the FOOM debate, it was taken more for granted that a hard takeoff would involve recursive self-improvement, whereas now MIRI people seem to put more emphasis on the possibility that ordinary "other-improvement" (scaling up and improving AI systems) could produce large performance leaps before recursive self-improvement becomes important.

OK, thanks for the link. People can now use this form instead and I've edited the post to point at it.

Like you say, people who are interested in AI existential risk tend to be secular atheists, which makes them uninterested in these questions. Conversely, people who see religion as an important part of their lives tend not to be interested in AI safety or in technological futurism generally. I think people have been averse to mixing AI existential risk ideas with religious ideas, for both epistemic reasons (worries that predictions and concepts would start being driven by meaning-making motives) and reputational reasons (worries that it would become easier for critics to dismiss the predictions and concepts as being driven by meaning-making motives).

(I'm happy to be asked questions, but just so people don't get the wrong idea, the general intent of the thread is for questions to be answerable by whoever feels like answering them.)

Thank you! I linked this from the post (last bullet point under "guidelines for questioners"). Let me know if you'd prefer that I change or remove that.

As I understand it, the overestimation of climate sensitivity tails has been understood for a long time, arguably longer than EA has existed, and sources like Wagner & Weitzman were knowably inaccurate even when they were published. Also, as I understand it, RCP8.5 has been regarded as much worse than the expected no-policy outcome since the beginning (and has become more so over time), despite often being presented as the expected no-policy outcome. It seems to me that referring to most of the information presented by this post as "news" fails to adequately blame the EA movement and others for not having looked below the surface earlier.

What does an eventual warming of six degrees imply for the amount of warming that will take place in (as opposed to due to emissions in), say, the next century? The amount of global catastrophic risk seems like it depends more on whether warming outpaces humanity's ability to adapt than on how long warming continues.
