Bio

I have received funding from the LTFF and the SFF and am also doing work for an EA-adjacent organization.

My EA journey started in 2007, when I considered switching from a Wall Street career to helping tackle climate change by making wind energy cheaper – unfortunately, the University of Pennsylvania did not have an EA chapter back then! A few years later, I started having doubts about my conclusion that climate change was the best use of my time. After reading a few books on philosophy and psychology, I decided that moral circle expansion was important but neglected, and donated a few thousand pounds sterling of my modest income to a somewhat evidence-based organisation. Serendipitously, around 2014 my boss stumbled upon EA in a thread on Stack Exchange and sent me a link. After reading up on EA, I pursued earning to give (E2G) with my modest income, donating ~USD 35k to AMF.

I have done some limited volunteering to build the EA community here in Stockholm, Sweden. Additionally, I set up and was an admin of the ~1k-member EA system change Facebook group (apologies for not having time to make more of it!). Lastly (and I am leaving out a lot of smaller stuff, like giving career guidance), I have coordinated with others interested in doing EA community building in UWC high schools and have even run a couple of EA events at these schools.

How others can help me

Lately, and in consultation with 80,000 Hours and some “EA veterans”, I have concluded that I should consider working directly on EA priority causes instead. I am therefore determined to keep seeking opportunities for entrepreneurship within EA, especially ways I could contribute to launching new projects. So if you have a project where you think I could contribute, please do not hesitate to reach out (even if I am engaged in a current project – my time might be better spent getting another project up and running and handing over the reins of my current project to a successor)!

How I can help others

I can share my experience working at the intersection of people and technology while deploying a new technology – wind energy and its infrastructure – globally. I can also share my experience of coming from "industry" into EA entrepreneurship/direct work. Or anything else you think I can help with.

I am also concerned about the "Diversity and Inclusion" aspects of EA and would be keen to help make EA a place where even more people from all walks of life feel safe and at home. Please DM me if you think there is any way I can help. Currently, I expect to have ~5 hrs/month to contribute to this (a number that will grow as my kids become older and more independent).

Comments

No, my comments are completely novice and naïve. I think I am just baffled that all of the funding of AI safety comes from individuals who will profit massively from accelerating AI. Or rather, what baffles me most is how little focus there is on this – I listen to a few AI podcasts and browse the Forum now and then – so why am I only hearing about it now, after a couple of years? I am not sure what to think of it; my main feeling is that the relative silence about this is somehow strange, especially in an environment that places importance on epistemics and biases.

Thanks, that is super helpful – although some downvotes could have come from what might be perceived as a slightly infantilizing tone, haha! (No offense taken, as you are right that the information is really accessible; I guess I am just a bit surprised that this is not mentioned more often on the podcasts I listen to, or perhaps I have just missed several EAF posts on this.)

Ok, so all major funders of AI safety are personally, and probably quite significantly, going to profit from the large AI companies making AI powerful and pervasive.

I guess the good thing is that as AI grows, they will have more money to put towards making it safe – it might not be all bad.

I really liked this episode. Fewer big ideas and more interesting and helpful stories from "the field". More of this, please!

Yeah, apologies for the vague wording – I am just trying to say this is something I know very little about. Perhaps I am biased from my work on climate change, where there is a track record of those who would lose economically (or fail to profit) from climate action making attempts to slow down progress on solving it. If mechanisms like this might be at play in AI safety (and that is a big if!), I feel there would be value in directing a minimal stream of funding towards having someone simply pay attention to the chance that such mechanisms are beginning to play out in AI safety as well – though this should be looked into more deeply. I would not say conflicts of interest make people's impact bad or make them untrustworthy, but they might point at gaps in what goes unfunded. This was all inspired by the OP's observation that Pause AI seems to struggle to get funding. Maybe it is true that Pause AI is not the best use of marginal money. But it could also be true that, at least partially, such funding decisions are due to incentives playing out in subtle ways. I am really unsure about all this, but I think it is worth looking into funding someone with "no strings attached" to pay attention to it, especially given the stakes and how EA has previously suffered from too much trust, most notably in the FTX scandal.

Not sure why this is tagged Community? Ticking one of these makes it EA Community:

  • The post is about EA as a cultural phenomenon (as opposed to EA as a project of doing good)
    • I think this post is clearly about doing good; it does not rely on EA at all, only AI safety.
  • The post is about norms, attitudes or practices you'd like to see more or less of within the EA community
    • This is a practice that might be relevant to AI safety independent of EA.
  • The post would be irrelevant to someone who was interested in doing good effectively, but NOT interested in the effective altruism community
    • If this is indeed something that would help AI safety, I think the post would be highly relevant to someone interested in this topic but without any knowledge of or interest in the EA community. Given this, I would welcome any explanation of why this question is about community.
  • The post concerns an ongoing conversation, scandal or discourse that would not be relevant to someone who doesn't care about the EA community.
    • Again, this should be relevant to people who have no interest in EA but an interest in AI safety.

I don't know if this analogy holds, but that sounds a bit like how, in certain news organizations, "lower down" journalists self-censor: they do not need to be told what not to publish. Instead, they independently anticipate what they can and cannot say based on how their careers might be affected by their superiors' reactions to their work. And if that is actually going on, I think it might not even be conscious.

I also saw some pretty strong downvotes on my comment above. Just to be clear, in case this is the reason for the downvotes: I am not insinuating anything – I really hope, and want to believe, that there are no big conflicts of interest. I might have been scarred by working on climate change, where for years, if not decades, the polluters really spent time and money slowing down action on cutting CO2 emissions. Hopefully these patterns will not be repeated with AI. Also, I have much less knowledge about AI and have only heard a few times that Google etc. are sponsoring safety conferences and the like.

In any case, I believe that in addition to technical and policy work, it would be really valuable to fund someone to pay close attention to, and dig into the details of, any conflicts of interest and skewed incentives. Such incentives set action on climate change back significantly – something we might not be able to afford with AI, as it might be more binary in terms of the onset of a catastrophe. Regarding funding week: if the big donors are not currently sponsoring anyone to do this, I think this is an excellent opportunity for smaller donors to put in place a crucially missing piece of the puzzle – I would be keen to support something like this myself.

Maybe this is a cop-out, but I am thinking more and more of a pluralistic and mutually respectful future. Some people might take off on a spaceship to settle a nearby solar system. Others might live lower-tech in eco-villages. Animals will be free to pursue their goals. And each of these groups will pursue their version of a worthwhile future with minimal reduction in the potential of others to fulfill theirs. I think anything else will just lead to oppression of everyone who is not on board with some specific wild project – and most people's dreams of the future are pretty wild and not something I would want for myself!

If this is true, or even just likely to be, and someone has data on it, making that data public, even in anonymous form, would be extremely high impact. I recognize that such moves could come at great personal cost, but in case it is true, I just wanted to put it out there that such a disclosure could be a single action that might far outstrip even the lifetime impact of almost any other person working to reduce x-risk from AI. Also, my impression is that any evidence of this going on is absent from public information. I really hope the absence of such information is simply because nothing of this sort is actually happening, but it is worth being vigilant.

Just the other day I had a question while looking back at some older documents, and as I was wondering who could answer it, Seb came to mind. So in a sense he is still teaching me lessons – this time, that my work builds on the work of so many others. At the same time, it also shows that when one is doing potentially important work, it is highly likely that one's legacy will live on in a myriad of ways with real-world impact.

Not sure what to make of it, but one of 80,000 Hours' top recommendations is government and policy – there it seems like several top careers would need to consider which party to work for. I might agree that discussions about whom to vote for are not high priority (although I think Rob Wiblin made a really good point that in "swing districts" voting might be really high value in expectation). That said, there might be a trade-off for many people, perhaps even for EA as a whole, between trying to keep issues that are still non-partisan (like AI, perhaps) non-partisan and using our resources to galvanize a political faction around issues that are already partisan.