Head of the CEA Online Team, which runs this Forum.
A bit about me, to help you get to know me: Prior to CEA, I was a data engineer at an aerospace startup. I got into EA through reading the entire archive of Slate Star Codex in 2015. I found EA naturally compelling, and donated to AMF, then GFI, before settling on my current cause prioritization of meta-EA, with AI x-risk as my object-level preference. I try to have a wholehearted approach to morality, rather than thinking of it as an obligation or opportunity. You can see my LessWrong profile here.
I love this Forum a bunch. I've been working on it for 5 years as of this writing, and founded the EA Forum 2.0. (Remember 1.0?) I have an intellectual belief in it as an impactful project, but also a deep love for it as an open platform where anyone can come participate in the project of effective altruism. We're open 24/7, anywhere there is an internet connection.
In my personal life, I hang out in the Boston EA and Gaymer communities, enjoy houseplants, table tennis, and playing co-op games with my partner, who has more karma than me.
Fair, I was probably too loose there. I believe specifically that posts which were copied from Google Docs[1] failed to wrap at the screen width. But I wasn't much of a mobile reader at the time.
Also thank you! I didn't realize you were the one who added mobile support.
IIRC, it may have been some other cause that affected a subset of posts.
Nice, I like this. Have you considered crossposting the full content? Full crossposts usually get a lot more readers and visibility, though do note the CC-BY restriction.
We're issuing @NobodyInteresting a warning for the above comment. The comment does not meet the expectations for civility and engaging in good faith. I would not recommend a public warning for this comment on its own (though I would downvote it for the above reasons, and recommend a discreet DM), but we have less tolerance for users who have not yet shown that they can engage productively.
I expect that "labs" usefully communicates to most of my interlocutors that I'm talking about the companies developing frontier models and not something like Palantir. There's a lot of hype-based incentive for companies to claim to be "AI companies", which creates confusion. (Indeed, I didn't know this before I chose Palantir as an example, but of course they're marketing themselves as an AI company.)
That said, I agree with the consideration in your post. I don't claim to know which consideration is bigger, only that the two trade off.
Much of the credit belongs to LessWrong!