Former Reliability Engineer with expertise in data analysis, facilitation, incident investigation, technical writing, and more. Currently working with the Communications team at MIRI. Former Teaching Fellow and current facilitator at BlueDot Impact. I volunteer with AI Safety Quest, giving Navigation Calls and running the MAGIS mentorship program.
I have, and am willing to offer to EA members and organizations, the following generalist skills:
Specifying impact. In Reliability Engineering we often have to spell out exactly how a proposed expenditure will improve an organization's bottom line. I have gotten very good at asking and answering questions like "how many people might this affect?" and "how likely is this outcome?" This could be especially useful for people writing grant proposals for their project ideas.
Facilitation. Organize and run a meeting, take notes, send follow-up emails and reminders, whatever you need. I don't need to be an expert in the topic, and I don't need to know the participants personally. I do need a clear picture of the meeting's purpose and what contributions you hope to elicit from the participants.
Technical writing. More specifically, editing and proofreading, which don't require that I fully understand the subject matter. I am a human Hemingway Editor: I have been known to cut a third of the text from a corporate document while retaining all the relevant information, to the owner's satisfaction. I viciously stamp out typos.
Presentation review and speech coaching. I used to be terrified of public speaking. I still am, but now I'm pretty good at it anyway. I have given prepared and impromptu talks to audiences of dozens to hundreds, and I have coached speakers giving company TED talks to thousands. A friend who reached out to me for input said my feedback was "exceedingly helpful". If you plan to give a talk and want feedback on your content, slides, or technique, I would be delighted to advise.
I am willing to take one-off or recurring requests. I reserve the right to start charging if this starts taking up more than a couple hours a week, but for now I'm volunteering my time and the first consult will always be free (so you can gauge my awesomeness for yourself). Contact me at optimiser.joe@gmail.com if you're interested.
E.g. Ajeya’s median estimate is 99% automation of fully-remote jobs in roughly 6-8 years, 5+ years earlier than her 2023 estimate.
This seems more extreme than the linked comment suggests? I can't find anything in the comment justifying "99% automation of fully-remote jobs".
Frankly I think we get ASI and everyone dies before we get anything like 99% automation of current remote jobs, due to bureaucratic inertia and slow adoption. Automation of AI research comes first on the jagged frontier. I don't think Ajeya disagrees?
It's often in the nature of thought experiments to reduce complicated things to simple choices. In reality, humans rarely know enough to do an explicit EV calculation about a decision correctly. Expected value can still be an ideal that helps guide our decisions: "this seems like a poor trade of EV" is a red flag in the same way that "oh, I notice this set of preferences could be Dutch-booked" is, both signaling that there may be a flaw in our thinking somewhere.
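For concreteness, a toy illustration (numbers invented for the example): expected value is just the probability-weighted sum of outcomes,

$$\mathbb{E}[V] = \sum_i p_i \, v_i$$

So an option with a 10% chance of producing 100 units of value and a 90% chance of producing nothing has an EV of 0.1 × 100 + 0.9 × 0 = 10; trading it away for a guaranteed 5 units is the kind of "poor trade of EV" that should raise a red flag, even when we can't calculate everything at stake explicitly.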
Impact Colabs started something similar but then abandoned it. They have a forum post and a more detailed write-up on why. Our aim is less ambitious (for now, just listing and ranking project ideas with some filtering options), though we do hope to eventually expand the list to include more active volunteer management options. Of note, this database is divided into "quick wins" - roughly, things someone could do with less than a week's work without being part of a particular organization - and "larger projects" - which typically involve starting a full-time group or supporting an existing one.
If you know of a project not listed, feel free to add it!
I'd like to discuss a similar "metaproject" I have in the works. My current goal for a "minimum viable product" is just the list, with volunteer matching added later if that works out, but I also want to include smaller "quick win" projects and immediate ways to contribute. Would you be willing to share more and discuss lessons learned from this one?
Not sure if prewritten material counts, but I'd like to enter my Trial of the Automaton if it qualifies. I can transfer it to Google Docs if need be.
Yes, "let's not fail with abandon" is a good summary of my argument to fellow omnivores.
That's a really good overview by Rethink Priorities. The Invertebrate Sentience Table shifted my credence a little in favor of insect sentience, but I tend to give more weight to the argument that some sentience criteria prove too much. I'm not super impressed by a criterion that shares a "Yes" answer with plants and/or prokaryotes. In the same vein, contextual learning sounds impressive, but if I'm understanding that description correctly, it also applies to the recommendation feature of Google Search. I do, however, agree we should take the possibility seriously and keep looking for hard evidence either way.
Here's a thought: is anyone currently testing where language models like GPT-4 fall on the sentience table?
Thanks, those are some great resources! I can read the post on insect sentience but the link to the paper throws an error. I'd love to read the definitions they use for their criteria.
Honestly, this writeup did update me somewhat in favor of having at least a few competent safety-conscious people working at major labs, if only so the safety movement retains some access to what's going on inside the labs if/when secrecy grows. The marginal extra researcher going to Anthropic, though? Probably not.