Bio

I have received funding from the LTFF and the SFF and am also doing work for an EA-adjacent organization.

My EA journey started in 2007, when I considered switching from a Wall Street career to helping tackle climate change by making wind energy cheaper – unfortunately, the University of Pennsylvania did not have an EA chapter back then! A few years later, I started having doubts about whether climate change was the best use of my time. After reading a few books on philosophy and psychology, I decided that moral circle expansion was neglected but important, and donated a few thousand pounds sterling of my modest income to a somewhat evidence-based organisation.

Serendipitously, my boss stumbled upon EA in a thread on Stack Exchange around 2014 and sent me a link. After reading up on EA, I pursued earning to give (E2G) on my modest income, donating ~USD 35k to the Against Malaria Foundation (AMF). I have done some limited volunteering to help build the EA community here in Stockholm, Sweden. Additionally, I set up and was an admin of the ~1k-member EA system change Facebook group (apologies for not having time to make more of it!). Lastly (leaving out a lot of smaller things like giving career guidance), I have coordinated with other people interested in doing EA community building in UWC high schools and have even run a couple of EA events at these schools.

How others can help me

Lately, and in consultation with 80,000 Hours and some “EA veterans”, I have concluded that I should instead work directly on EA priority causes. I am therefore determined to keep seeking opportunities for entrepreneurship within EA, especially where I could contribute to launching new projects. If you have a project where you think I could contribute, please do not hesitate to reach out (even if I am engaged in a current project - my time might be better used getting another project up and running and handing over the reins of my current project to a successor)!

How I can help others

I can share my experience working at the intersection of people and technology, deploying new infrastructure and technology (wind energy) globally. I can also share my experience coming from "industry" into EA entrepreneurship and direct work. Or reach out about anything else you think I can help with.

I am also concerned about the "Diversity and Inclusion" aspects of EA and would be keen to help make EA a place where even more people from all walks of life feel safe and at home. Please DM me if you think there is any way I can help. Currently, I expect to have ~5 hrs/month to contribute to this (a number that will grow as my kids become older and more independent).

Comments

Sorry for posting this here too, but an FLI podcast episode just dropped that seems relevant: around 24 minutes in, it mentions a push by several actors to use China to motivate action.

FYI, a weirdly timely podcast episode is out from FLI that includes discussion of conflicts of interest (COIs) in AI safety.

Thanks for doing that, and I look forward to you hopefully publishing your findings. It would be valuable, at least to me, if the doc clearly showed (if you have time for that) whether there might be biases in funding - what is not funded might be as important as what is funded. For example, if some collection of smaller donors put 40% of funding towards considering slowing down AI while a larger donor spends less than 2%, that might be interesting, at least as a pointer towards investigating such disparities in more detail (I noticed that Pause AI was a bit higher up in the donation election results, for example).

Strong upvote. I feel like the China-US AI race debate is laden with ideology and a confused lack of specificity. It is like how "capitalism" is used: people throw the term around and it can mean a hundred different things. Every mention of China in relation to AI should be specific. Are we talking about AI-enabled cyberattacks against civilian infrastructure in the West? Are we talking about some weird pathway where they will create a communist super-bot that will rule both China and the rest of the world? Are we talking about only the GDP impact? Or how increased GDP will allow them to build a larger military? Or some subset of the multitude of ways in which "China winning on AI" might matter?

Currently, given this lack of specificity, I feel that the China AI debate is less about any specific threat and more about creating an appealing, overly simplified narrative that can lubricate the bureaucratic machine, get buy-in from a range of private sector stakeholders, and fabricate a sense of urgency for people to push through various agendas. My fear is that this sounds eerily similar to the "threat of the communists" during the Cold War, which arguably led to the disastrous outcome of thousands of nuclear warheads still being pointed at some of the densest clusters of civilians around the world.

I have read far from everything about AI, so if someone has pointers to material on why China as a one-word concept is a useful thing to point to, I would be grateful. I see this issue has been raised many times on the Forum, and while I have not read everything, I decided to comment anyway, as I think signal boosting here is important - especially as the China one-worder is being thrown around quite casually by people with lots of influence.

No, my comments are completely novice and naïve. I think I am just baffled that all of the funding of AI safety is done by individuals who will profit massively from accelerating AI. Or rather, what baffles me most is how little focus there is on this peculiar combination of incentives - I listen to a few AI podcasts and browse the Forum now and then, so why am I only hearing about it now, after a couple of years? I am not sure what to think of it - my main feeling is just that the relative silence about this is somehow strange, especially in an environment that places importance on epistemics and biases.

Thanks, that is super helpful, although some downvotes could have come from what might be perceived as a slightly infantilizing tone - haha! (No offense taken, as you are right that the information is readily accessible; I guess I am just a bit surprised that this is not more often mentioned on the podcasts I listen to - or perhaps I have just missed several EAF posts on this.)

Ok, so all major funders of AI safety are personally, and probably quite significantly, going to profit from the large AI companies making AI powerful and pervasive.

I guess the good thing, then, is that as AI grows they will have more money to put towards making it safe - it might not be all bad.

I really liked this episode. Fewer big ideas and more interesting and helpful stories from "the field". More of this, please!

Yeah, apologies for the vague wording - I am just trying to say this is something I know very little about. Perhaps I am biased from my work on climate change, where there is a track record of those who would lose economically (or not profit) from climate action attempting to slow down progress on solving it. If mechanisms like this might be at play in AI safety (and that is a big if!), I feel there is value in directing a minimal stream of funding towards someone who simply pays attention to the chance that such mechanisms are beginning to play out in AI safety as well (though this should be looked into more deeply). I would not say COIs make people's impact bad or make them untrustworthy, but they might point at gaps in what is not funded.

I mean, this was all inspired by the OP's point that Pause AI seems to struggle to get funding. Maybe it is true that Pause AI is not the best use of marginal money. But at the same time, I think it could be true that, at least partially, such funding decisions are due to incentives playing out in subtle ways. I am really unsure about all this, but I think it is worth looking into funding someone "no strings attached" to pay attention to it, especially given the stakes and how EA has previously suffered from too much trust, especially with the FTX scandal.

Not sure why this is tagged Community? Ticking one of these makes it EA Community:

  • The post is about EA as a cultural phenomenon (as opposed to EA as a project of doing good)
    • I think this is clearly about doing good; it does not rely on EA at all, only AI safety.
  • The post is about norms, attitudes or practices you'd like to see more or less of within the EA community
    • This is a practice that might be relevant to AI safety independent of EA.
  • The post would be irrelevant to someone who was interested in doing good effectively, but NOT interested in the effective altruism community
    • If this is indeed something that would help AI safety, I think it would be highly relevant to someone interested in the topic but with no knowledge of or interest in the EA community.
  • The post concerns an ongoing conversation, scandal or discourse that would not be relevant to someone who doesn't care about the EA community.
    • Again, this should be relevant to people who have no interest in EA but an interest in AI safety.

Given all this, I would welcome any explanation of why this post counts as Community.
