Henry Stanley 🔸

1840 karma · Joined
henrystanley.com

Bio

Former CTO and co-founder of earn-to-give fintech Mast.

Comments (204)

Ferrous sulphate is also common, but it's a bit nauseating and poorly absorbed in any case. Ferrous bisglycinate is also sold under the brand name “gentle iron”.

For those very deficient in iron, an iron infusion will give you ~two years’ worth of iron in one go - and skips all the issues with oral bioavailability of iron. You will need to test your iron levels first to avoid iron overload.

I write a bit about iron supplementation in my guide to treating restless leg syndrome (RLS) for which iron deficiency is a common cause: https://henryaj.substack.com/p/how-to-treat-restless-legs-syndrome

This is wonderful – thank you so much for writing it.

Mutual dedication to one another’s ends seems like a thing commonly present in religious and ethnic communities. But it seems quite uncommon to the demographic of secular idealists, like me. Such idealists tend to form and join single-focus communities like effective altruism, which serve only a subset of our eudaemonic needs.

Agree about secular, single-purpose communities – but I'm not sure EA is quite the same.

I've found my relationships with other EAs tend to blossom into more than just EA; those principles provide a good set of shared values from which to build other things, like a sense of community, shared houses, group meals, playing music together and just supporting each other generally. Then again, I don't consider EA to be the core of my identity, so YMMV.

Don’t have much to add except that this sounds exceptionally fucked-up and I’m sorry you had to go through it.

I once had a conversation with a friend who felt that Anthropic advancing the AI frontier (despite their explicit commitment not to) was fine because they’re “leading from the front” in terms of their ethical stance.

It seems like that might not actually work? Advancing the frontier presumably encourages other labs to compete - and if those labs don’t have the same ethical strictures then leading from the front has no effect except to have moved the frontier forward faster than it would have otherwise…

(Referencing OpenAI’s deal with the Pentagon announced shortly after the Anthropic sanctions)

I don’t think this is meaningfully different from previous admins (not sure about autonomous weapons but certainly mass surveillance of Americans at home has been going on since the 2010s).

Broadly agree but:

The current problem is the lack of good training programs in impact-focused thinking, so it's hard for people with tons of experience and great credentials to get to the required EA-ness stage (impact-focused mindset, landscape familiarity) quickly enough to get the positions on offer, when they join EA.

Aren’t we mitigating this with things like MATS, BlueDot, et al.? These should be producing useful hires at a high rate, so training may not be the bottleneck it seems.

Let me write something up and come back to you.

Broadly, in order of safety it’s probably caffeine > modafinil > amphetamines (Vyvanse, Ritalin, Concerta, dexamfetamine, etc.). But amphetamines are very commonly prescribed for ADHD/narcolepsy (usually with an ECG and occasional blood pressure checks). I think the risk-reward works out very much positively, but obviously I’m eliding a lot of detail.

Great post. Two things come to mind:

  1. One way to just be able to do more stuff is to take stimulants. I think there are cases where being on them can dent your intelligence in some subtle ways, but broadly they can drastically increase your ability to do more, work through when you're fatigued, etc. Maybe it's still a sufficiently edgy position that you didn't mention it here, but the absence was interesting. People at college are all taking modafinil for a reason.

  2. I worry that some incredibly ambitious people in the EA world have gone on to pursue paths that have actually been harmful. Early employees at the frontier AI labs seem like the obvious example - Anthropic was founded as an "AI safety lab" with commitments not to push the frontier but they obviously forgot about that along the way, and it seems hard to justify continuing to work there on capabilities imo. I suspect there's a lot of motivated reasoning going on among this group. Perhaps it's a cautionary tale about ambition unmoored from reflection as other people point out here, or that if your ambition leads to filthy lucre then it's very hard to course correct later on.

(Agree with the other commenters here that maybe the rate-limiting step isn't just pushing harder but co-ordination, taking more individual risks, etc)

(Reposted from my comment on the original Substack article.)

Is there a risk of boiling the ocean here?

The 'community notes everywhere' proposal seems easy enough to build (I've been hacking away at a Chrome extension version of it). I'm not sure it makes sense to wait for personal computing to change fundamentally before attempting this.

I agree that distribution is an issue, which I'm not sure how to solve. One approach might be to have a core group of users onboarded who annotate a specific subset of pages - like the top 20 posts on Hacker News - so that there's some chance of your notes being seen if you're a contributor. But I suppose this relies on getting that rather large core group of users (e.g. HN readers) to start using the product.

Alternatively you build the thing and hope that it gets adopted in some larger way, say it gets acquired by X if they want to roll out community notes to the whole web.

You do address the FTX comparison (by pointing out that it won't make funding dry up), that's fair. My bad.

But I do think you're making an accusation of some epistemic impropriety that seems very different from FTX - getting FTX wrong (by not predicting its collapse) was a catastrophe, and I don't think the same is true for AI timelines. Am I missing the point?
