And yes, "weird" has negative connotations to most people. Self flagellation once helped highlight areas needing improvement. Now overcorrection has created hesitation among responsible, cautious, and credible people who might otherwise publicly identify as effective altruists. As a result, the label increasingly belongs to those willing to accept high reputational risks or use it opportunistically, weakening the movement’s overall credibility.
If you're aligned with EA’s core principles, thoughtful in your actions, and have no significant reputational risks, then identifying openly as an EA is especially important. Normalising the term matters. When credible and responsible people embrace the label, they anchor it positively and prevent misuse.
Offline, I was early to criticise Effective Altruism’s branding and messaging, and admittedly the name itself is imperfect. At this point, though, it is established and carries public recognition; we can't discard it without losing valuable continuity and trust. If you genuinely believe in the core ideas and engage thoughtfully with EA’s work, openly identifying yourself as an effective altruist is the logical next step.
Specifically, if you already have a strong public image, align privately with EA values, and have no significant hidden issues, then you're precisely the person who should step forward and put skin in the game. Quiet alignment isn’t enough. The movement’s strength and reputation depend on credible voices publicly standing behind it.
Thanks to your post, I see HLI's 2023 pilot study ("Can we trust wellbeing surveys?") explores methods to correct for interpersonal differences in scale use. These methods don't appear to have been incorporated into HLI’s cost-effectiveness models yet, but perhaps in time we’ll see how much scale norming alters results such as WELLBYs per dollar.
A universally provably safe artificial general intelligence is not possible, and the reasoning begins with the halting problem. In 1936, Alan Turing proved that no algorithm can determine, for every possible program and input, whether that program will eventually stop running or run forever. The importance of the halting problem is that it shows there are limits on what can be predicted about the future behavior of general computational systems.
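To make this concrete, here is a minimal sketch of Turing's diagonalization argument in Python. The oracle `halts(program, argument)` is purely hypothetical, an assumption introduced only to be contradicted; it is not something that can actually be implemented.

```python
# A minimal sketch of Turing's diagonalization argument, assuming a
# hypothetical oracle `halts(program, argument)` that returns True iff
# `program(argument)` eventually terminates.  No such oracle can exist.

def halts(program, argument) -> bool:
    """Hypothetical halting oracle -- assumed, not implementable."""
    raise NotImplementedError("No general implementation can exist.")

def paradox(program):
    # Ask the oracle about the program applied to itself.
    if halts(program, program):
        while True:        # if the oracle says "halts", loop forever
            pass
    else:
        return             # if the oracle says "loops", halt immediately

# Now consider paradox(paradox):
#   - if halts(paradox, paradox) is True, paradox(paradox) loops forever;
#   - if it is False, paradox(paradox) halts.
# Either answer contradicts the oracle, so `halts` cannot exist.
```

The contradiction spelled out in the comments is the whole proof: whichever answer the oracle gives about `paradox(paradox)`, the answer turns out to be wrong.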
The next key result is Rice’s theorem, which states that any non-trivial question about what a program will eventually do is undecidable, provided the programming model is powerful enough to express arbitrary computation. This includes questions such as whether a program will ever produce a certain output, ever enter a certain state, or always avoid a specific class of behaviors.
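As an illustration, here is a hedged sketch of the standard reduction behind Rice's theorem: if we had a decider for one non-trivial semantic property (the hypothetical `ever_outputs_42` below is an illustrative name, not a real function), we could use it to solve the halting problem, which we already know is impossible.

```python
# A sketch of why Rice's theorem holds: a decider for the non-trivial
# semantic property "program() ever returns 42" (the hypothetical
# `ever_outputs_42`) would let us solve the halting problem.

def ever_outputs_42(program) -> bool:
    """Hypothetical decider for 'program() ever returns 42' -- assumed."""
    raise NotImplementedError

def halts_via_rice(program, argument) -> bool:
    # Build a new program whose only way to return 42 is for
    # program(argument) to finish first.
    def gadget():
        program(argument)   # may run forever
        return 42           # reached only if the call above halts
    # Deciding the output property would decide halting, a contradiction.
    return ever_outputs_42(gadget)
```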
A highly capable artificial intelligence system, particularly one with general reasoning ability, falls into this category. Such a system is computationally expressive enough to learn new strategies, modify its own internal structure, and operate in environments that cannot be fully anticipated. Asking whether it will always behave safely is mathematically equivalent to asking whether a general program will always avoid certain behaviors. Rice’s theorem shows that there is no universal method to answer such questions correctly in all cases.
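The same reduction can be phrased directly in terms of safety. The verifier `always_safe` below is hypothetical and the agent wrapper is only an illustration, but it shows why a universal safety check for fully general programs would amount to a halting oracle.

```python
# An illustrative reduction, assuming a hypothetical verifier
# `always_safe(agent)` that decides whether an agent ever performs an
# unsafe action.  Such a verifier would solve the halting problem,
# so it cannot exist for fully general agents.

def always_safe(agent) -> bool:
    """Hypothetical universal safety verifier -- assumed, not real."""
    raise NotImplementedError

def halts_via_safety(program, argument) -> bool:
    def wrapped_agent(observation):
        program(argument)          # may run forever
        return "UNSAFE_ACTION"     # reached only if program(argument) halts
    # The wrapped agent is always safe exactly when program(argument)
    # never halts, so a safety verdict answers the halting question.
    return not always_safe(wrapped_agent)
```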
Quantum computing does not change this conclusion. Although quantum computation can accelerate certain classes of algorithms, it does not convert undecidable problems into decidable ones. The halting problem and Rice’s theorem apply to quantum computers just as they apply to classical computers.
Provable safety is possible only when artificial intelligence systems are restricted. If the system cannot self-modify, if its environment is fully defined, or if its components are formally verified, then proofs can be constructed. These proofs apply only within the specific boundaries that can be modeled and checked.
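For contrast, here is a small runnable sketch of the restricted case: a toy finite-state controller whose entire state space can be enumerated, so a safety invariant can be checked exhaustively. The transition table and state names are invented for illustration.

```python
# A minimal sketch of why restricted systems are verifiable: when the
# agent is a finite-state controller, we can enumerate every reachable
# state and check a safety invariant exhaustively (a toy model checker).

from collections import deque

# Toy finite-state controller: state -> {action: next_state}
TRANSITIONS = {
    "idle":    {"start": "working"},
    "working": {"pause": "idle", "finish": "done"},
    "done":    {},
}
UNSAFE_STATES = {"overheated"}   # not reachable in this toy model

def verify_safety(start: str) -> bool:
    """Breadth-first search over all reachable states; decidable
    because the state space is finite and fixed in advance."""
    seen, frontier = {start}, deque([start])
    while frontier:
        state = frontier.popleft()
        if state in UNSAFE_STATES:
            return False
        for next_state in TRANSITIONS.get(state, {}).values():
            if next_state not in seen:
                seen.add(next_state)
                frontier.append(next_state)
    return True

print(verify_safety("idle"))   # True: no unsafe state is reachable
```

Exhaustive search works here only because the model is finite and known ahead of time; once the system can rewrite its own transition table or face an unmodeled environment, this kind of guarantee no longer applies.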
The logical conclusion is clear. The halting problem shows that prediction has limits. Rice’s theorem shows that behavioral guarantees are undecidable for general systems. Quantum computing does not remove these limits. Therefore, a fully general artificial intelligence cannot be proven safe in every possible situation. Only constrained systems can receive formal safety guarantees.