huh. I recalled "xyz ways to become unstoppably agentic", an old EA forum post that some friends and I liked quite a lot.
https://forum.effectivealtruism.org/posts/Pc3CFbYxPXgyjoDpB/seven-ways-to-become-unstoppably-agentic
it appears the author has retracted it, or otherwise made it unreadable? I'm curious about their take on why they did that.
(in general when this happens to one of your posts, seems much better to edit the title from xyz to [retracted] xyz and leave a comment explaining why your mind changed)
the Aerolamp rental system is up and running for events like EAG, as evidenced by their service at EAGxDC this weekend--- thought you'd want to know!
obligatory https://thingofthings.substack.com/p/movie-review-the-story-of-louis-pasteur (i've watched this like 5 times i adore it, thanks ozy for the rec)
i'm confused about tithing. I yearn for the diamond emoji from GWWC, but I'm not comfortable enough to commit, since I took like a 50% pay cut to do AI safety nonprofit stuff. Seems weird to make such a financial commitment, which also implicates my future wife (whom I presumably haven't met yet), especially when I'm scraping by without much in savings per paycheck.
Is there a sense in which I already am diamond emoji eligible, because I'm "donating 50% of my income" in the sense of opportunity cost? 50 is, famously, greater than 10.
Some people think FTX not collapsing would've been net worse for EA than FTX collapsing, cuz the continued free flow of money would've made the grifter problem even worse. You can find people who saw early signs of folks getting into EA just cuz of that money.
I'm pretty prepared to be worried about this: if we get another couple of foundations out of Anthropic alums, it could be FTX all over again (without the gambling, which makes it better, but with the AI-race accelerant, which makes it worse).
Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.
weird that one of their "red lines" is a moral line in the sand based on convictions in political philosophy, while the other is a "not wrong but early" point about reliability. I read this as Dario pretty clearly saying that once AIs are reliable enough to run human-out-of-the-loop kill chains, Anthropic will be happy to power them.
And I'm worried this is a nuance that not all Anthropic employees or https://notdivided.org/ signers have noticed, and that some of them would disagree with if they did.
Lean synthesis capabilities aren't maximally elicited right now, because a lot of people view it as a pure text-to-text problem, which leads to a bunch of stress about the small amount of Lean syntax in the pretraining data and the high code velocity (until about a year ago, language models still hadn't fully internalized the migration from Lean 3 to Lean 4). Techniques like Cobblestone, or the logic programming / language model API call hybrid architecture that Higher Order Company talks about, seem really promising to me (and, as HOC points out, so much cheaper!).
Thanks for your comment. I had broken my ankle in three places and was on too much oxycodone to engage the first time I read it. I continue to recommend your essay a lot.