
Dylan Richardson

Bio


Graduate student at Johns Hopkins SAIS. Looking for part-time work.

How others can help me

If you can direct me to any open jobs, internships or entry-level work that you know of, that would be very helpful!

Comments


While I don't entirely disregard x-risks, I have been unimpressed by the tractability of most interventions, except perhaps biosecurity ones.

The prevalent notion of "solving" the alignment problem as though it were a particularly hard math problem strikes me as over-represented; it leads to neglecting other, more indirect safety measures, like stable, transparent and trustworthy institutions, whether political or geopolitical (what would a US-China war mean for AI?).

Relatedly, the harm-aversion/moral-purity signaling around working at AI companies (especially Anthropic!) seems counterproductive, and yet it has received ~no pushback. It seems obvious to me that having concerned individuals (rather than e/acc ideologues) in high positions at these companies is very important! I suspect that the dominance of doomerism in AI safety over more epistemically sound concerns is to blame.

This will be controversial, but I think another consideration for this question has to be interrogating why we consider our future selves deserving of current sacrifice. If you accept the reductionist account of selfhood as merely psychological continuity, rather than a constant, the case for actions that affect your future self being justified on self-interested grounds becomes less tenable. Instead, something like saving for retirement becomes more and more like saving for someone else's retirement, the greater the gap.

I think the instinctive, common-sense case for retirement savings is something like "prudence", which isn't really a moral concept. It's more "you'll regret it if you don't". So, sure, maybe save if you are retiring in 5-10 years. But beyond that? No.

Just something to consider in addition to TAI.

I actually like that you did this; there's so little information in the news "firehose" right now that a possible accuracy/content tradeoff is entirely reasonable!

I'd love to read a deep dive into a non-PEPFAR USAID program. This Future Perfect article mentioned a few. It doesn't even have to be an especially great program; there are probably plenty of examples which don't come near the 100-fold improvement over the average charity (or the marginal government expenditure) but are still very respectable.

There's a bit of a knowledge gap in EA generally on the subject of more typical good-doing endeavors. Everyone knows about PlayPumps and malaria nets, but what about all the stuff in between? This likely biases our understanding of non-fungible good-doing.

I second this, mostly because I have doubts about the 80,000 Hours cause area. I love their podcast, but I suspect they get shielded from criticism a bit, in a way other cause areas aren't, by virtue of being such a core EA organization. A more extensive and critical inquiry into "replaceability" would be welcome, whatever the conclusion.

Much of the debate on this topic comes down to questions about risk aversion and the relevant psychology and decision theory thereof, e.g.: https://forum.effectivealtruism.org/posts/vnsoy47psQ5KaeHhB/difference-making-risk-aversion-an-exploration

Although there are other considerations, of course.

How about grantmaking to support investigative journalism on USAID recipients and USAID workers in the developing world? Is anyone already doing this?

There would have to be (at the very least) a worse counterfactual here, where they have a hard time finding a replacement, and I don't see that happening. I worked in the restaurant industry for a time during undergrad, despite animal welfare concerns. If anything, this was an improvement, since I used the meal discounts to purchase meat-free food. I think this fits analogously: you just have paychecks and a resume builder instead of a meal discount.

It has been my impression that the AI safety community had, over time, shifted somewhat against seeing a US-China AI race dynamic as a concern. But with the recent success of DeepSeek, it seems to me that the race is back on.

Has anyone not updated accordingly? If so, why? One implication of this development would seem to be that a merely domestic AI Pause is no longer a good idea. Is there agreement on this?

I'm still a bit confused - that's a lot of books, especially since they are all in Russian! And 18k hardcover! I'm a bit more credulous about the impact of such an effort than others are - actual insight in the books matters less than providing a fun attraction to adjacent ideas. It's worked before: the growth of LessWrong may be partly attributable to this, and analogously, nuclear-doom conceptions from films and sci-fi novels (e.g. The China Syndrome) may have had a significant impact in molding public attitudes.

But still, that's a lot of books! And if I understand correctly, they have no connection to the ones which were (or weren't?) successfully distributed with the 28k in grant money before the project ended.

Why so many? What fraction of the copies originally made have been successfully distributed? I understand that this wasn't from grant money; I'm just curious what the story here is.

Edit: saw this. So apparently 68k originally. Wow! 
