A month ago, I predicted that AI systems would be able to access safety plans posted on the Internet and use them for their own purposes. If so, a misaligned-by-default AGI could exploit our safety plans, likely to our detriment.
The post was controversial. On the EA Forum, it received only 13 net upvotes from 23 voters, and the top comment (which disagreed with the post) received 25 net upvotes from 17 voters.
On LessWrong, my post received only 3 net upvotes from 13 votes, while the top comment (which also disagreed with the post) received 9 upvotes from 3 votes.
I'm writing to report that OpenAI's recently released ChatGPT system has corroborated my prediction. Even this non-AGI-scale system was able to access and meaningfully process various safety plans posted on the Internet (and even personal information about individual AI safety researchers).
Below are some examples:

ChatGPT's ability to list detailed interpretability plans, along with how to exploit each of them, is especially concerning. A disproportionate share of AI safety researchers work on interpretability, despite the fact that it may have low scientific upside and may even be net-negative due to its dual-use nature: specifically, an AGI may be able to exploit the interpretability channels it knows about in advance.

ChatGPT was even able to develop strategies for deceptive misalignment tailored to specific AI safety researchers.

Thank you so much for the clarification, Jay! It is extremely fair and valuable.
The underlying question is: in expected value, does the increase in the number of AI safety plans that results from coordinating openly on the Internet outweigh the decrease in the secrecy value of those plans? If the former effect is larger, we should continue the status-quo strategy. If the latter effect is larger, we should consider keeping safety plans secret (especially those whose value lies primarily in secrecy, such as plans relevant to monitoring).
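As a rough sketch of the comparison (the notation here is my own, not anything from the original discussion): let $G$ be the safety gained from the additional plans that open coordination produces, and $L$ the safety lost from an AI system being able to read and exploit those plans. The status-quo strategy is preferable roughly when

$$\mathbb{E}[G] > \mathbb{E}[L],$$

and secrecy is preferable (at least for plans, like monitoring schemes, whose value depends on not being known) when the inequality is reversed.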
The disagreeing commenters generally argued that the former effect is larger, and therefore that we should continue the status-quo strategy. This is likely because they estimated the latter effect to be quite small and perhaps only relevant far in the future.
I think ChatGPT provides evidence that the latter effect should be a larger concern than many people's priors suggested. Even current-scale models are capable of nontrivial analysis of how specific safety plans can be exploited, and even of how specific alignment researchers' idiosyncrasies can be exploited for deceptive misalignment.
I am uncertain about whether the line between 2 and 3 will be narrow. The argument that it will be narrow often assumes fast takeoff, but I think there is a strong empirical case that takeoff will be slow and constrained by scaling, which suggests the gap between 2 and 3 may be wider than one might expect. But I think this is a scientific question that we should continue to probe to reduce our uncertainty!