Bio

I hope you've smiled today :) 

I really want to experience and learn about as much of the world as I can, and pride myself on working to become a sort of modern-day Renaissance man, a bridge-builder between very different people, if you will. Some not-commonly-seen-in-the-same-person things: I've slaughtered pigs on my family farm and become a vegan, done HVAC (manual labor) work and academic research, and been a member of both the Republican and Democratic clubs at my university.

Discovering EA has been one of the best things to happen to me in my life. I think I likely share something really important with all the people who consider themselves under this umbrella. EA can be a question, sure, but more than that, I hope EA can be a community, one that really works towards making the world a little better than it was.

Below are some random interests of mine. I'm happy to connect over any of them, or over anything EA; please feel free to book a time at whatever slot is open on my Calendly.

  • Philosophy (anything Plato is up my alley, but also most interested in ethical and political texts)
  • Psychology (not a big fan of psychotropic medication; also writing a paper on an interesting, niche brand of therapy called logotherapy that analyzes its overlap with religion, and thinking about how religion, specifically Judaism, could itself be considered a therapeutic practice)
  • Music (Lastfm, Spotify, Rateyourmusic; have deep interests in all genres but especially electronic and indie, have been to Bonnaroo and have plans to attend more festivals)
  • Politics (especially American)
  • Drug Policy (currently reading Drugs Without the Hot Air by David Nutt)
  • Gaming (mostly League these days, but shamefully still Fortnite and COD from time to time)
  • Cooking (have been a head chef, have experience with vegan food too, and like to cook a lot)
  • Photography (recently completed a project, so far just the text, on community with older people, arguing that the way we treat the elderly in the US is fairly alarming)
  • Meditation (specifically mindfulness, which I have both practiced and looked at in my RA work, which involved trying to set forth a categorization scheme for the meditative literature)
  • Home (writing a book on different conceptions of it and how relationships intertwine, with a fairly long side endeavor into what forms of relationships should be open to us)
  • Speaking Spanish (I'm going to Spain for a year to teach English, because I want to speak Spanish fluently)
  • Traveling (have hit a fair bit of Europe and the US, as well as some random other places like Morocco)
  • Reading (I think I currently have over 200 books to read, and have recently been struggling to get through fantasy, finding myself continually pulled toward non-fiction, largely due to EA reasoning I think)

How others can help me

How I can help others

I don't have domain expertise by any means, but I have thought a good bit about AI policy and the next best steps, which I'd be happy to share (e.g. how bad is the risk from AI misinformation, really?). Beyond EA-related things, I have deep knowledge in Philosophy, Psychology, and Meditation, and can potentially help with questions generally related to these disciplines. I'd say the best thing I can offer is a strong desire to dive deeper into EA, preferably with others who are also interested. I can also offer my experience with personal cause prioritization and help others on that journey (as well as connect with those trying to find work).

Comments

What sort of things did the AIS group do that gave the impression they were taking ideas more seriously? Was it more events surrounding taking action (e.g. Hackathons)? Members engaging more with the ideas in the time outside of the club meetings? More seriousness in reorienting their careers based on the ideas? 

At a previous EAG, one talk was basically GovAI fellows summarizing their work, and I really enjoyed it. Given that there are tons of fellowships slated to start in the coming months, I wonder if there's a way to have fellows effectively communicate about their work on the forum? A lot of the content will be more focused on traditional AIS topics, but I expect some of the work to focus on topics more amenable to a post-AGI governance framing, and that work could be particularly encouraged.

A light touch version might just be reaching out to those running the fellowships and having them encourage fellows to post their final product (or some insights from their work, if they're not working on a single research piece) to the forum, ideally with a high-quality executive summary. The medium touch would be having someone curate the projects, e.g. highlighting the 10 best from across the fellowships. The full version could take many different forms, but one might be having the authors then engage with one another's work, encouraging disagreement and public reasoning about why certain paths might be more promising.

Just as one random bit of advice, I'd say don't feel like you have to be sure you can make the 10% every year before you take the pledge. I took it as a student, and have had some years where I've fallen behind, but will work to compensate for those in the years where I make more.

Really well put, Elliot, and great to hear someone else's thoughts on this.

I wonder, do you wish you had thought about this sooner, and potentially moved back sooner, or do you think your mid-30s is a good time to take action on it?

Ah okay yeah, the idea is that the success of the business itself is something they'll be apt to really care about, and on top of that there's a huge upside for positive impact if there's financial success because they can then deploy that towards further charitable ends. 

Do you know off the top of your head how big a stake Dustin has in Anthropic? I think the amount would play a significant role here. 

I hadn't seen this before, thanks for sharing Patrick! From what I understand, this seems to be aimed at a broader version of advocacy, one that includes work like journalism and tries to influence the public's opinion on AI more generally, right? And is it meant to apply across contexts, or is it more Europe-focused?

Either way, I see that you've got a week on engaging with policymakers and grassroots advocacy, which is great and definitely a super significant improvement over the status quo. Super interested in hearing more; I'll reach out.

Two things here. When I use risky, I'm using it in the sense of "this action could cause net-negative consequences in the world" rather than risky in the "risky bet" sort of sense where risky means something like "high odds this doesn't actually work out". In the first sense I think most (but not all) technical work seems to be less risky than advocacy work, but I totally agree with you that it's not clear in the second sense. 

The other thing is that I'm using advocacy in this post to mean a narrower version of the word, what's also known as direct lobbying or "arguing for a cause or policy and trying to influence decision-makers towards that policy". MIRI's current work is certainly advocacy in the broad sense, but (to my knowledge) they're not engaging much in advocacy in this narrower sense that I'm focused on here.

Ah okay, I better understand what you mean when you say Natsec now. On the China front, do you think there's any advantage at all to delaying their capacity for developing AI? To put that another way, is there any degree of increased risk of a US-China conflict that you'd be willing to accept for delaying China's AI development? 

As a small point, the world in which China has developed TAI before we have, and has taken back Taiwan, doesn't seem stable at all to me. There is a sense in which what will happen is less clear just by virtue of the world now having TAI, but it seems fairly clear to me that China taking Taiwan would raise tensions, and arguably at a worse time, because the US would already be contending with the fact that it lost the AI race. So I don't think you can dismiss the risks such a situation would pose so quickly, and it's not clear to me that, even if opposing war between the two powers were your only objective, it would be safer not to take any protective action now.

On the last point, I wonder if your definition of nat sec is broader than mine? I think most of the examples you cite fit much more squarely into reducing the spread of communism, which I see as fairly distinct from what I traditionally view as nat sec policy. The inherently global and non-isolationist actions taken to prevent the spread of communism even feel like a partially opposed approach, because nat sec policy often seems to involve walking back the US's connections to other parts of the world. At the very least, I think current nat sec people would look fairly distinct from those pushing these past policies.

Thanks for the multiple sources though; I think some of the citations there do paint a very negative picture of US actions in those places, though it's only properly clear in the case of the Indonesian mass killings and seems a bit more uncertain elsewhere. Do you know of a good book or other single resource that really digs into these cases (and similar ones) that you'd recommend?

That's a really interesting example; it does seem plausible to me that there's some selection pressure not just for more researchers but for more AI-company-friendly views. What other visible effects would you expect from a bias towards being friendly to the AI companies?

I'll flag that the actual amount is potentially a bit larger (the 2% is my quick estimate based just on public, rather than private, reports), but yeah, either way it's likely quite small.

FWIW I don't think it's likely that potential profit is playing a role per se, but, put slightly differently, that some major players in the space are more bought into the idea that the AI companies can be responsible, and thus that we might be jumping the gun by beginning to lobby for safety measures they don't see as productive.
