Ulrik Horn

1044 karma · Joined Apr 2021 · Working (6-15 years)

Bio


I have received funding from the LTFF and the SFF and am also doing work for an EA-adjacent organization.

My EA journey started in 2007, when I considered switching from a Wall Street career to helping tackle climate change by making wind energy cheaper – unfortunately, the University of Pennsylvania did not have an EA chapter back then! A few years later, I started having doubts about my conclusion that climate change was the best use of my time. After reading a few books on philosophy and psychology, I decided that moral circle expansion was neglected but important and donated a few thousand pounds sterling of my modest income to a somewhat evidence-based organisation. Serendipitously, around 2014 my boss stumbled upon EA in a thread on Stack Exchange and sent me a link. After reading up on EA, I pursued earning to give with my modest income, donating ~USD 35k to AMF. I have done some limited volunteering to build the EA community here in Stockholm, Sweden. Additionally, I set up and was an admin of the ~1k-member EA systems change Facebook group (apologies for not having time to make more of it!). Lastly (and I am leaving out a lot of smaller stuff, like giving career guidance), I have coordinated with other people interested in doing EA community building in UWC high schools and have even run a couple of EA events at these schools.

How others can help me

Lately, and in consultation with 80,000 Hours and some “EA veterans”, I have concluded that I should consider working directly on EA priority causes instead. I am therefore determined to keep seeking opportunities for entrepreneurship within EA, especially ones where I could contribute to launching new projects. If you have a project where you think I could contribute, please do not hesitate to reach out (even if I am engaged in a current project – my time might be better used getting another project up and running and handing over the reins of my current project to a successor)!

How I can help others

I can share my experience working at the intersection of people and technology while deploying a new infrastructure technology (wind energy) globally. I can also share my experience of coming from "industry" into EA entrepreneurship/direct work. Or anything else you think I can help with.

I am also concerned about the "Diversity and Inclusion" aspects of EA and would be keen to contribute to making EA a place where even more people from all walks of life feel safe and at home. Please DM me if you think there is any way I can help. Currently, I expect to have ~5 hrs/month to contribute to this (a number that will grow as my kids become older and more independent).

Comments (272)

I think your observation about the Western feel of most of EA is important. Being born in a Western country myself, I can see that everything from the choice of music on podcasts to, perhaps more importantly, the philosophers and ideologies referenced is very Western-centric. I think there are many other philosophical traditions and historical communities we can draw inspiration from beyond Europe – it is not as if EA is the first attempt at doing the most good in the world (I have some familiarity with Tibetan Buddhism, which has fairly strong opinions on everything from machine consciousness to how to help the most people most effectively). I like how many EA organizations use the concept of ikigai, for example, but think we can do more. This matters both for talent like yourself and for engaging effectively on global AI policy, animal welfare in the Global South, and of course global health and poverty alleviation efforts. I also think there might be lessons worth highlighting on podcasts, in talks, etc. from the many current EA-associated organizations interacting with stakeholders across a variety of cultures – given their success, it feels like they must have found ways to be culturally sensitive and to accommodate non-Western viewpoints. Perhaps we simply do not highlight this part of EA enough and instead focus on intellectually interesting meta ideas which are a bit more distant from EA's "contact surface" across the globe. Sorry for the rant – I hope this comment might be useful!

I am also curious whether you think the field of anthropology (and perhaps linguistics and other similar fields) might have something to offer AI safety/alignment. Caveat: my understanding of both AI safety and anthropology is that of an informed layperson.

Perhaps a bit of a poor analogy: the movie "Arrival" features a linguist and/or anthropologist as the main character, and I think that might have been a good observation on the part of the screenwriter. Thus, one example of output I could imagine anthropologists contributing would be to push back on the binary framing of "AIs" versus humans. It might be that, in terms of culture, the difference between different AIs is larger than the difference between humans and the most "human-like" AI.

I love this work, especially because you investigated something I have been curious about for a while: the impact that diversity might have on AI safety. I have a few reactions, so I thought I would provide them in separate comments (not sure what the forum norm is).

I am curious whether you think there are dimensions of diversity you have not captured that might be important. One thought that came to mind when reading this post is geographic/cultural diversity. I am not 100% sure it is important, but reasons it might be include both:

1 - That different cultures might have different views on what is important to focus on (a bit like how women might focus more on coexistence and less on control).

2 - That it is a global problem, and international policy initiatives might be more successful if one can anticipate how various stakeholders will react to them.

I also had a bit of a harder time following along than with "pro" podcasts, but I think that is because I listen at a default 1.8x speed with aggressive trimming of silences. That works fine for typical podcast sound and cadence, but I agree it got a bit intense with these episodes (sorry, I could not be bothered to change the playback speed).

I am surprised I only now discovered this paper. In addition to Jeff's excellent points above, what stood out to me was that the paper contained both likelihoods of different scenarios and what I think is some of the more transparent reasoning behind those likelihood numbers. And the numbers are uncomfortably high!

There is more detail in the paper itself on how the likelihoods were arrived at – the last column is only a summary.

I really liked your latest, especially because you discussed how you think about careers, the uncertainties you have, etc. I felt that was super helpful and gave me new perspectives and confidence in making career choices.

Ok, that's good to know – I will probably be pretty vegan going forward. By the way, I love all the hard evidence here on the EAF about animal welfare. It really makes me viscerally upset about the scale of abuse we currently inflict on our feathered and four-legged friends. So thanks to you and everyone else for further opening my eyes and heart to this.

Thanks for writing this; it drives home to me the importance of taking a broad perspective when making ethical choices. I am wondering whether you take animal product consumption a step further and only eat animal products where you know both of the following are true:

  1. The animals have a very high degree of welfare (think small, local farms you can visit, you know the farmer, etc.)
  2. The way they are slaughtered is the most humane possible – ideally on-farm, etc., so they more or less have no idea what is coming until they are gone. In my mind this involves more or less no suffering from a utilitarian perspective (unless the animals are somehow able to anticipate the slaughter and have increased anxiety throughout their lives because of it).

I have been pretty vegan so far, but people around me argue for the type of animal products described above, and I have a hard time pushing back on it.

Perhaps this is covered in existing comments, but there is a forum post on climate change vs global development showing that one should hesitate before always prioritizing the former. Then, as I understand it, if one gives only some weight to animals compared to people, it very roughly follows that one should definitely be cautious about prioritizing climate change over animal welfare. Hopefully we can find a solution that lets us avoid this trade-off, though!

I must admit I did not have time to re-read your post carefully, but I thought it worth pointing out that after reading it I am left a bit confused by the multiple "culture wars" references. Could you please expand on this a bit?

I guess my confusion is that "culture wars" seems to be an "attention-grabbing" phrase you used at the beginning of your post, but I feel it was not fully addressed by the end. I would be keen to understand whether you only meant it as a rhetorical device to make the reading more captivating, or whether you have opinions on the frequent "white boys" criticisms of EA. It is fine if it is the former; I just felt a bit left hanging after reading the post, which I think otherwise did some good analysis of the financial motives behind criticism, comparing AI to e.g. climate change.

I think others might be interested in this topic as well, especially as JEID concerns were raised by many EAs, particularly women and non-binary EAs. I also think some EAs might believe that the "white boys"/culture wars criticisms of EA are actually criticisms we should take seriously, even though the tone in which they are made is often not the most conducive to fruitful dialogue (though I can understand that people with bad experiences can find it hard to suppress their anger – and perhaps sometimes anger is appropriate).
