OscarD🔸

1742 karmaJoined Working (0-5 years)Oxford, UK

Very interesting, and props in particular for assembling the cosmic threats dataset - that does seem like a lot of work!

I tend to agree with you and Joseph that there isn't anything on the object level to be done about these things yet, beyond trying to ensure we get a long reflection before interstellar colonisation, as you suggest.

On hot take 2, this relies on the risks from each star system being roughly independent, so breaking that assumption seems like a good solution. But making star systems highly correlated with each other maybe seems bad for liberalism and for diversity of forms of flourishing. Perhaps some amount of regularity and conformity is the price we need to pay for galactic security, though.

Acausal trade/cooperation may also end up being crucial here once civilisation is spread across distances where it is hard or impossible to interact normally.

This seems right to me - personally, I am more likely to read a post if it is by someone I know (in person or by reputation). Selfishly, I think this is the right choice, as those posts are more likely to be interesting/valuable to me. But it is also perhaps a bad norm, since we want new writers to have an easy route in, even if no one recognises their name. So I try not to index too heavily on whether I know the author.

60% disagree

Should EA avoid using AI art for non-research purposes?

It seems somewhat epistemically toxic to give in to a populist backlash against AI art when I don't buy the arguments for it being bad myself.

I just remembered another sub-category that seems important to me: AI-enabled, very accurate lie detection. This could be useful for many things, but most of all for helping make credible commitments in high-stakes US-China ASI negotiations.

Thanks Caleb, very useful. @ConnorA I'm interested in your thoughts on how to balance comms on catastrophic/existential risks against things like deepfakes. (I don't know about the particular past efforts Caleb mentioned, and I am fairly open to comms on deepfakes being useful for developing a broader coalition, even though deepfakes are a tiny fraction of what I care about wrt AI.)

Have you applied to LTFF? Seems like the sort of thing they would/should fund. @Linch @calebp if you have actually already evaluated this project I would be interested in your thoughts as would others I imagine! (Of course, if you decided not to fund it, I'm not saying the rest of us should defer to you, but it would be interesting to know and take into account.)

Unclear - as they note early on, many people have even shorter timelines than Ege, so it is not representative in that sense. But probably many of the debates are at least relevant axes on which people disagree.

If these people weren't really helping the companies, it would seem surprising that their salaries are so high.
