I am the director of Tlön, an organization that translates content related to effective altruism, existential risk, and global priorities research into multiple languages.
After living nomadically for many years, I recently moved back to my native Buenos Aires. Feel free to get in touch if you are visiting BA and would like to grab a coffee or need a place to stay.
Every post, comment, or wiki edit I authored is hereby licensed under a Creative Commons Attribution 4.0 International License.
@RobBensinger had a useful chart depicting how EA was influenced by various communities, including the rationalist community.
I think it is undeniable that the rationality community played a significant part in the development of EA in the early days. I’m surprised to see people denying this.
What seems more debatable is whether this influence is best characterized as "rationalism influenced EA" rather than "both rationalism and EA emerged, to a significant degree, from an earlier and broader community that included sizeable numbers of both proto-EAs and proto-rationalists".
An increasing number of people believe that developing powerful AI systems is very dangerous, so companies might want to show that they are being “safe” in their work on AI.
Being safe with AI is hard and potentially costly, so if you’re a company working on AI capabilities, you might want to overstate the extent to which you focus on “safety.”
> I think if you think there's a major difference between the candidates, you might put a value on the election in the billions -- let's say $10B for the sake of calculation.
You don't need to think there's a major difference between the candidates to conclude that the election of one candidate adds billions in value. The US discretionary budget over the next four years is roughly three orders of magnitude larger than your $10B figure, and a president can have an impact of the sort EAs care about in ways that go beyond influencing the budget, such as regulating AI, setting immigration policy, eroding government institutions, and waging war.
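A quick back-of-the-envelope check of the orders-of-magnitude claim; the ~$1.7T/year discretionary budget figure is an assumption for illustration, not a number from the original comment:

```python
import math

# Assumed annual US discretionary budget (illustrative figure, USD)
annual_discretionary_budget = 1.7e12
four_year_budget = 4 * annual_discretionary_budget  # ~$6.8T

election_value = 10e9  # the $10B figure from the quoted comment

ratio = four_year_budget / election_value
print(f"Four-year discretionary budget: ${four_year_budget:.1e}")
print(f"Ratio to $10B: {ratio:.0f}x "
      f"(~{math.log10(ratio):.1f} orders of magnitude)")
```

Under that assumption the ratio comes out to roughly 680x, i.e. close to three orders of magnitude.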
Couldn't secretive agreements be mostly circumvented simply by asking the person directly whether they signed such an agreement? If they decline to answer, the answer is very likely 'Yes', especially if one expects that they would have answered 'Yes' to the parallel question had they signed a non-secretive agreement.
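A minimal Bayesian sketch of why a refusal to answer is so informative; all the probabilities below are made-up assumptions chosen only to illustrate the update:

```python
# Illustrative Bayes update: how much does silence reveal?
# All numbers are assumptions, not estimates from the original comment.

p_signed = 0.3              # assumed prior that the person signed a secretive agreement
p_silent_if_signed = 0.95   # signers are (assumed) almost always unable to answer
p_silent_if_not = 0.05      # non-signers rarely stay silent, since they can just say 'No'

# Law of total probability, then Bayes' rule
p_silent = p_signed * p_silent_if_signed + (1 - p_signed) * p_silent_if_not
p_signed_given_silent = p_signed * p_silent_if_signed / p_silent

print(f"P(signed | no answer) = {p_signed_given_silent:.2f}")  # ~0.89
```

Even starting from a 30% prior, silence pushes the posterior close to 90% under these assumed response rates.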
I’d be very surprised (and very impressed) if the Carl Shulman episodes did not add much to your knowledge of the topic (relative to how much you learned from the listed episodes).