I do independent research on EA topics. I write about whatever seems important, tractable, and interesting (to me).
I have a website: https://mdickens.me/. Much of its content gets cross-posted to the EA Forum, but I also write about some non-EA topics like [investing](https://mdickens.me/category/finance/) and [fitness](https://mdickens.me/category/fitness/).
My favorite things that I've written: https://mdickens.me/favorite-posts/
I used to work as a software developer at Affirm.
> Harangue old-hand EA types to (i) talk about and engage with EA (at least a bit) if they are doing podcasts, etc; (ii) post on Forum (esp if posting to LW anyway), twitter, etc, engaging in EA ideas; (iii) more generally own their EA affiliation.
I think the carrot is better than the stick. Rather than (or in addition to) haranguing people who don't engage, what if we reward people who do engage? (Although I'm not sure what "reward" means exactly.)
You could say I'm an old-hand EA type (I've been involved since 2012) and I still actively engage in the EA Forum. I wouldn't mind a carrot.
Will, I think you deserve a carrot, too. You've written 11 EAF posts in the past year! Most of them were long, too! I've probably cited your "moral error" post about a dozen times since you wrote it. I don't know how exactly I can reward you for your contributions but at a minimum I can give you a well-deserved compliment.
I see many other long-time EAs in this comment thread, most of whom I see regularly commenting/posting on EAF. They're doing a good job, too!
(I feel like this post sounds goofy, but I'm trying to make it come across as genuine. I've been up since 4am, so I'm not doing my best work right now.)
I don't think it's possible to get the PDF because the publisher owns distribution rights. But if you haven't seen it already, you may be interested in this: https://intelligence.org/the-problem/
It's an article explaining MIRI's views on AI risk. It's not as detailed as the book, but the basic concepts are the same.
What changed between yesterday and today? How did you manage to overcome the 5th obstacle? What I get from section 5 is that you overcame social pressure essentially by deciding to. But why did you decide to, and why now rather than (say) a month ago? Do you think there are any lessons others could take from your experience about how to overcome social pressure?
A list of ideas:
Separately from the debate over veganism vs. eating meat, we have strong evidence that high intake of fiber, low intake of saturated fat, and low intake of red and/or processed meat are all correlated with better health outcomes.
There is also causal evidence, e.g. the Cochrane review *Reduction in saturated fat intake for cardiovascular disease*.
My rough impression is that there are indeed some "AI safety" orgs that operate in the way you describe, where they are focused more on promoting US hegemony and less on preventing AI from killing everyone.* But CAIS is more on the notkilleveryoneism side of things.
*From what I've seen, the biggest offenders are CSET, the Horizon Institute, and Fathom.
You have a list of "learn to learn" methods, and then you said "Can we haz nice thingss? Futureburger n real organk lief maybs?" I'm not sure I'm interpreting you correctly, but it sounds like you're saying something like
If that's what you mean, then I disagree: I don't think our current understanding of the science of learning is remotely near where it would need to be to keep up with ASI. In fact, I would guess that even a perfect-learner human brain could never keep up with ASI, no matter how well it learns. Human brains still have physical limits; an ASI need not, because it can (e.g.) add more transistors to its brain.