I do independent research on EA topics. I write about whatever seems important, tractable, and interesting (to me).
I have a website: https://mdickens.me/. Most of the content on my website gets cross-posted to the EA Forum.
My favorite things that I've written: https://mdickens.me/favorite-posts/
I used to work as a software developer at Affirm.
We should try to make some EA sentiments and principles (e.g., scope sensitivity, thinking hard about ethics) a core part of the AI safety field
On a literal interpretation of this statement, I disagree, because I don't think trying to inject those principles will be cost-effective. But I do think people should adopt those principles in AI safety (and also in every other cause area).
Please correct me if I'm misunderstanding you, but this idea seems to follow from a chain of logic that goes like this:
I disagree with #2. It's sufficient to make a smaller amount of really good content and distribute it widely. I think right now the bottleneck isn't a lack of content for public consumption, it's a lack of high-quality content.
And I appreciate some of the efforts to fix this: for example, Existential Risk Observatory has written articles in national magazines, MIRI is developing new public materials, and there's a documentary in the works. Those are the sorts of things we need. I don't think AI is good enough to produce content at the level of quality that I expect/hope those groups will achieve.
(Take this comment as a weak endorsement of those three things but not a strong endorsement. I think they're doing the right kinds of things; I'm not strongly confident that the results will be high quality, but I hope they will be.)
I do agree with you, though, that LLMs can speed up writing, and the writing can be high quality as long as there's enough human oversight. (TBH, I'm not sure how to do this myself; I've tried, but I always end up writing ~everything by hand. Many people have had success with LLM-assisted writing, though.)
I am somewhat against a norm of reaching out to people before criticizing their work. Dynomight's Arguing Without Warning has IMO the strongest arguments on this topic.
First: Criticism is difficult; requiring more effort on the part of critics makes criticism less likely to happen. OP did acknowledge this with:
The best reason not to involve the criticised person or org is if doing so would in practice stop you from posting your criticism.
But realistically, there is a social penalty to saying "I would not have posted this criticism if I'd been required to reach out to orgs first." It makes you look lazy. A norm of "reach out to people, unless it would stop you from posting your criticism" is not viable, because critics who would have been stopped from posting are unable to defend themselves. In some cases, I expect this norm to stop criticisms from getting written. So I think a better norm is "you don't have to reach out to people."
Second: A big reason to reach out to people is to resolve misunderstandings. But it's even better to resolve misunderstandings in public, after publishing the criticism. Readers may have the same misunderstandings, and a public back-and-forth serves them better than a private exchange.
(Dynomight's post also gives a couple other arguments that I don't think are quite as important.)
My preferred norms are:
That's what I did for my recent critical review of one of Social Change Lab's reports.
There are some circumstances where it makes sense to reach out to people before publishing, but I don't think that should be the norm, and I don't think we should have any expectation that critics do it.
Independent as in not affiliated with any org? If that's what it means, then I probably agree.