
Dylan Richardson

221 karma · Seeking work · Pursuing a graduate degree (e.g. Master's) · San Diego, CA, USA
substack.com/profile/46244575-dylan-richardson

Bio


Graduate student at Johns Hopkins. Looking for entry-level work; feel free to message me about any opportunities!

Comments (73)

I may be missing important context, but I think you are mistaken about the norms at hand here. I do applaud you for helping your friend out; that makes you a good friend. But opportunities to be altruistic are completely unbounded; I could find hundreds of similar asks for help in a five-minute Google search, most of which aren't distinctively "good opportunities". If this weren't a personal request, but instead a call for donations to a related cause you were making a case for, that would be fine.

I think highlighting personal requests for help is permissible, and even virtuous, between friends and family. People reach out on Facebook pages like this all the time. But it reads as spam or emotional manipulation when posted on an online forum dedicated to other purposes, among colleagues or strangers.

Hopefully this helps! This discourse norm can definitely be confusing in context.

For context: Clara is right that there is good experimental evidence this occurs in online comment forums. That is on top of the simple mechanism that more highly upvoted content is more likely to be seen in the first place, for various reasons.

I'd assume this holds true for EA Forum content. I do the same thing @Toby Tremlett🔹 describes to some extent, but I'd be surprised if my System 2 thinking outweighs my System 1 on net here. I suspect I personally do this most with very low-karma posts, which I neglect to upvote out of a vague embarrassment at the possibility of promoting content with some flaw I missed.
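To make the feedback loop concrete, here is a minimal toy sketch assuming a simple rich-get-richer model; the post count, view count, and upvote probability are made-up illustrative numbers, not anything drawn from the experiments:

```python
import random

random.seed(0)

N_POSTS = 20
N_VIEWS = 5_000
UPVOTE_PROB = 0.3       # identical for every post: quality is equal by design
scores = [1] * N_POSTS  # one point of starting visibility each

for _ in range(N_VIEWS):
    # A viewer lands on a post with probability proportional to its score:
    # the "more upvoted content is more likely to be seen" mechanism.
    post = random.choices(range(N_POSTS), weights=scores)[0]
    # The viewer upvotes with a fixed probability, independent of the post.
    if random.random() < UPVOTE_PROB:
        scores[post] += 1

scores.sort(reverse=True)
print("top 5 karma:   ", scores[:5])
print("bottom 5 karma:", scores[-5:])
```

Even with identical underlying quality, posts that get lucky early snowball well ahead of the rest, which is roughly the herding pattern the experiments found.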

60% disagree

Due to Value Lock-in, TAI poses a time constraint for farmed animal social progress.

I do not expect most issues to be resolved before this time, due to technological limitations, heightened barriers to social change relative to historic movements, and increasing meat consumption in the developing world.

If we open this up to wild animals rather than just farmed ones, net-negative outcomes become much more likely.

I do tend to favor longer AGI/TAI timelines than many people do, for roughly these reasons. But I don't think you are exactly right about the AI data-access trend. For one, whether or not I or Americans at large are "happy to give an ASI full autonomous power to gather such biomedical data", China will be.

I tentatively expect capabilities with real-world economic importance to arrive to some extent in the US as well, even if the most radical and transformative advances require further integration into the physical world for modeling. At that point there may simply be an iterative process of greater and greater integration, as public perception improves and dependence increases. The complication here is moral backlash of some sort, which I note you've written about before. I agree that this is plausible; I simply wouldn't call it probable. Things look more bimodal to me: most likely we get the outcome I've described above (mild harms could still be disregarded by China), or we get a longer slowdown before curing aging.


Semantic quibble: I think most people, myself included, simply define ASI as either already encompassing those capabilities or as being sufficiently capable of recursive self-improvement that it will possess them in short order.

If your point is primarily that the existing AI paradigm is inadequate, I would tend to agree. There's also a distinct question of what an intelligence explosion looks like; it may well be that tedious real-world experimentation is necessary for these sorts of biomedical advances, and that takes time. That too is a compelling possibility, but even then I would expect these advances within a decade at most, and certainly quicker than human R&D alone could deliver them.

It might genuinely be time to boycott ChatGPT and start campaigns targeting corporate partners. But this isn't yet obvious. Even if it is, what would be the appropriate concrete and reasonable asks? I think a bit of an epistemic crisis is emerging at the moment. If there's a case to be made, it needs to be made sooner rather than later. And then we need coordination.

I found this Peter Wildeford piece helpful. My rough understanding now is that it was the (implicit?) rejection of "lawful use", especially within classified contexts, that was the contentious bit all along.

But I'm still uncertain about the extent to which these contracts can be renegotiated in the future, as capabilities evolve, and about the extent to which black-swan-type future capabilities could be "lawfully" used in secret, under classification. And presumably the nature of classified uses will be kept secret from OpenAI as well?

I am still confused about what exactly OpenAI is requiring here and how (or whether) it diverges substantively from Anthropic's contract. Is this merely a symbolic victory for the DOW? Or is the language about "lawful use" a back door somehow?

Kudos for writing maybe the best article I've seen making this argument. I'll focus on the "catastrophic replacement" idea. I endorse what @Charlie_Guthmann said, but I'd go further.

We don't have reason to be especially confident in a yes/no binary on AI sentience (I agree it is quite plausible, but definitely not as probable as you seem to claim). But you are also way overconfident that they will have minds roughly analogous to our own, rather than something far stranger. They would not "likely go on to build their own civilization", let alone "colonize the cosmos", if there is (random guess) a 50% chance that they have only episodic mental states that arise, persist, and end with discrete goals. Or only fleeting bursts of qualia. Or just spurts of horrible agony that subside only with positive human feedback, in which scheming is not even conceivable. Or the AI might constitute many discrete minds, one enormous utility-monster mind, or a single mind roughly analogous to the human pleasure/suffering scale.

It could nonetheless turn out that once "catastrophic replacement" happens, the ASI(s) fortuitously adopt the correct moral theory (total hedonistic utilitarianism, btw!) and go on to maximize value, but I consider this unlikely to come about, whether through rationality alone or through the nature of the ASI technology in question. The reason, roughly, is that human moral progress draws on many of us with different minds, in constant flux from changing culture and technology. A tentative analogy: human moral progress is like sand in an hourglass; eventually every grain falls to the bottom. AIs may come in all shapes and sizes, like sand grains mixed with pebbles, and the pebbles may never fall through. They may never converge on the correct moral theory by whatever process it is that could (I hope) eventually drive human moral progress to a utopian conclusion.

