If you have something to share that doesn't feel like a full post, add it here! 

(You can also create a Shortform post.)


If you're new to the EA Forum, consider using this thread to introduce yourself! 

You could talk about how you found effective altruism, what causes you work on and care about, or personal details that aren't EA-related at all. 

(You can also put this info into your Forum bio.)


Open threads are also a place to share good news, big or small. See this post for ideas.

Caro

I've found the EA Forum really lively and thriving these last few months. It's a real pleasure hanging out here! I also feel more at ease commenting and posting thanks to the liveliness and welcoming community. Congrats to the CEA team for doing an awesome job of developing a great space for EA discussions!

Hello! For as long as I can remember, I have been interested in the long-term future and have asked myself whether there is any way to direct the future of humankind in a positive direction. Every once in a while I searched the internet for a community of like-minded people. A few months ago I discovered that many effective altruists are interested in longtermism.

Since then, I have often looked at this forum, and I have read 'The Precipice' by Toby Ord. I am not quite sure whether I agree with every belief that is common among EAs. Nevertheless, I think we can agree on many things.

My highest priorities are avoiding existential risks and improving decision-making. I also think about the consequences of technological stagnation, and about whether there are events far in the future that can only be influenced positively if we start working soon. At the moment my time is very constrained, but I hope I will be able to participate in the discussion.

Hello! I'm one of the folks who only recently found out about EA but have been hacking my own independent version for years... It's incredible to find what I had hoped existed after so long. (My portal key was this podcast episode: https://samharris.org/podcasts/228-doing-good/)

It feels like walking into a room of strangers and realizing, 'Now these are my people'. The most parallel experience I've had was being invited to an Ecoversities gathering a few years back, arriving in Costa Rica and within a few hours feeling a sense of soul belonging. (If you believe in rooted, whole-self development, here they are: http://ecoversities.org/)

My passion is supporting young people in finding their path and their autonomy, and I'm in the process of interviewing folks who feel they've found a way to live an inspired, impactful life about how they got there. Anyone here want to participate?*

*Disclaimer: You don't have to have everything figured out; I'd love to talk to almost anyone who's found their way here.

While in my last year of high school, I independently came up with the idea that we should try to maximize aggregate utility over time, which I wrote down as an equation. A few weeks later, I heard about EA from a teacher.
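For readers curious what such an equation might look like, one standard way of formalising "maximize aggregate utility over time" (my own sketch, not necessarily the equation the commenter wrote) is:

```latex
% Total utilitarianism: choose actions a to maximize the
% (optionally discounted) sum of every individual's utility over time.
\max_{a} \; \sum_{t=0}^{\infty} \delta^{t} \sum_{i \in P_t} u_i(a, t)
```

where \(P_t\) is the population alive at time \(t\), \(u_i\) is individual \(i\)'s utility, and \(\delta \le 1\) is an optional discount factor (set \(\delta = 1\) for the undiscounted version longtermists typically favour).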

I would love to see how you'd solve that equation now, compared to when you first wrote it. Glad your teacher knew where to point you!

I felt the same thing when I discovered (and met) EAs :-). Welcome!

Hi everyone, I'm Marta (she/her) and I'm a PhD student at the University of Groningen in the Netherlands. I study the psychology of creativity and the mechanisms through which people generate creative ideas. I write about how effective altruists, advocates and activists can use creativity to make the world a better place, and I share findings from creativity and animal advocacy research on my website, www.bullshitfreecreativity.com.

I discovered EA almost four years ago, when I started my PhD. I wanted to find out how my research could contribute to decreasing animal suffering and ending factory farming, so I joined a local EA group here in Groningen and started reading books and listening to podcasts. I also gave a talk about creativity and factory farming at the Conference on Animal Rights in Europe 2019: https://www.youtube.com/watch?v=KK7113XPkLg&t=1405s

My WANBAM mentor suggested that the creativity-related knowledge might be important in the EA community, so I decided to join the forum! I'm also curious to read more about topics related to factory farming, social change and behavioural change, and perhaps get some inspiration for my future research :) 

Welcome Marta! :)

Bill Gates has been under fire for inappropriate behavior toward women. While I admire Bill Gates as an entrepreneur and philanthropist, I don't condone those actions and I hope this community doesn't either.

Personal views, not speaking for/about CEA.

Epistemic status: I haven't read much about this story and I don't have a considered opinion about the allegations. My prior is that these things usually turn out to be true after more investigation, and the below was written from the perspective of "I assume that Gates did in fact behave inappropriately in some way."

The link is paywalled for me, but I'm disappointed by the news. (Though I'm happy to see that Bill and Melinda say they plan to continue the Gates Foundation's work.)

This kind of incident often makes me think of this quote from Holden Karnofsky:

In general, I try to behave as I would like others to behave: I try to perform very well on “standard” generosity and ethics, and overlay my more personal, debatable, potentially-biased agenda on top of that rather than in replacement of it. I wouldn’t steal money to give it to our top charities; I wouldn’t skip an important family event (even one that had little meaning for me) in order to save time for GiveWell work.

I think this is a very common position within EA — that we should behave ethically in "standard" ways and avoid using altruistic work to cover or excuse unethical behavior. (See this great comment from Julia Wise or "Everyday Longtermism" for more on that view.)

I don't remember seeing anyone in the community condone someone's unethical behavior on the basis of their impact (vs. contesting whether the behavior itself was unethical, as in debates over Peter Singer's most controversial views). Are there any examples I'm missing? 

*****

The story also makes me think of Thomas Pogge, who was involved in EA early on but doesn't seem to have been involved after being accused of sexual harassment. I'd guess that wasn't a coincidence, though I only know my own story: the Yale EA group, which I led at the time, dropped him as an advisor after this happened. (It never occurred to us to defend his behavior.) 

This isn't to say that EA should avoid future contact with Gates. But I don't expect to see anyone say "it's fine he did that stuff, because he saved so many lives".

First, I'm not condoning Bill's behavior. My intuition is that it is good to be trustworthy, not to sexually harass anyone, etc. That said, I didn't find any of the linked arguments particularly convincing.

"In general, I try to behave as I would like others to behave: I try to perform very well on “standard” generosity and ethics, and overlay my more personal, debatable, potentially-biased agenda on top of that rather than in replacement of it." 

Sure, generally you shouldn't be a jerk, but generally being kind isn't mutually exclusive with achieving goals. Beyond that, what does 'overlay' mean? The statement is quite vague, and I'm actually sure there is some bar of family event that he would skip. I'm sure 99%+ of his work with GiveWell is not time-sensitive in the way a family event is, so this statement somewhat amounts to a perversion of opportunity cost. In fact, Holden even says in the blog that nothing is absolute. It's potentially presentist too, because I would love for people to treat me with respect and kindness, but I would probably prefer if past people had just built infrastructure.

And again with Julia's statement, she's just saying "Because we believe that trust, cooperation, and accurate information are essential to doing good". OK, that could be true, but isn't that the core of the question we are asking? When we talk about these types of situations, we are to some extent asking: is it possible that some person or group did more good by not being trustworthy, cooperative, etc.? Maybe this feels less relevant for EA research, but what about EAs running businesses? Microsoft got to the top with extremely scummy tactics, and now we think Bill Gates may be one of the greatest EAs ever. That isn't supposed to be a steel counterargument; I'm just pointing out it's not that hard to spin a sentence that contradicts that point.

And to swing back to the original topic: it seems extremely unlikely that sexually harassing people is ever essential, or even helpful, to having more impact, so it seems fair to say "don't sexually harass people" — but not on the grounds that "you should always default to standard generosity, only overlaying your biased agenda on top of the first level of generosity." However, what about having an affair? What if he was miserable and looking for love? If the affair made him 0.5% more productive, there is at least a surface-level utilitarian argument in its favor. The same goes for his money manager: if he thought Larson would make 0.5% higher returns than the next best person, most of which goes to high-impact charity, you can once again spin a (potentially nuance-lacking) argument in favor. And what is the nuance here? The nuance is about how not being standardly good affects your reputation, culture, and institutions, and hurts people's feelings.

*I also want to point out that Julia is making a utilitarian-backed claim, that trust etc. are instrumentally important, while Holden is backing some sort of moral pluralism (though maybe also endorsing the hypothesis that kindness/standard goodness is instrumental).

So while I generally agree with Holden and Julia on an intuitive level, I think it would be nice if someone actually presented a steelmanned argument (maybe someone has) for what types of unethical behavior could be condoned, or where the edges of these decisions lie. The EA brand may not want to be associated with that essay, though.

It feels a bit to me like EAs are often naturally not 'standardly kind', or at least are not utility-maximizing, because they are so awkward/bad at socializing (in part due to the standard complaints about dark-web, rationalist types), which has bad effects on our connections and careers as well as on EA's general reputation. So Central EA is saying: let's push people in this direction so that we have a reputation for being nice, rather than thinking critically about the edge cases, because it will move our group closer to the correct value of not being weirdos and not getting cancelled (plus there are potentially more important topics to explore when you consider that being kind is a fairly safe bet).

This is a good comment! Upvoted for making a reasonable challenge to a point that often goes unchallenged.

There are trade-offs to honesty and cooperation, and sometimes those virtues won't be worth the loss of impact or potential risk. I suspect that Holden!2013 would endorse this; he may come off as fairly absolutist here, but I think you could imagine scenarios where he would, in fact, miss a family event to accomplish some work-related objective (e.g. if a billion-dollar grant were at stake).

I don't know how relevant this fact is to the Gates case, though.

While I don't have the time to respond point-by-point, I'll share some related thoughts:

  • My initial comment was meant to be descriptive rather than prescriptive: in my experience, most people in EA seem to be aligned with Holden's view. Whether they should be is a different question. 
    • I include myself in the list of those aligned, but like anyone, I have my own sense of what constitutes "standard", and my own rules for when a trade-off is worthwhile or when I've hit the limit of "trying". 
    • Still, I think I ascribe a higher value than most people to "EA being an unusually kind and honest community, even outside its direct impact".
  • I don't understand what would result from an analysis of "what types of unethical behavior could be condoned":
    • Whatever result someone comes up with, their view is unlikely to be widely adopted, even within EA (given differences in people's ethical standards)
    • In cases where someone behaves unethically within the EA community, there are so many small details we'll know about that trying to argue for any kind of general rule seems foolhardy. (Especially since "not condoning" can mean so many different things -- whether someone is fired, whether they speak at a given event, whether a given org decides to fund them...)
    • In cases outside EA (e.g. that of Gates), the opinion of some random people in EA has effectively no impact.

All in all, I'd rather replace questions like "should we condone person/behavior X?" with "should this person X be invited to speak at a conference?" or "should an organization still take grant money from a person who did X?" Or, in a broader sense, "is it acceptable to lie in a situation like X if the likely impact is Y?"

As a very little boy I learned of my patron saint's story: the child saint Dominic Savio intervened between two warring families (think of "Romeo and Juliet") and brought them to sensible dialogue.
That touched me. Our soon-to-be Prime Minister had been awarded the Nobel Peace Prize for having organized UN military forces to intervene in the Suez Crisis. I was very young, but that made sense. What I couldn't explain to myself was why France was inserting itself to violently re-impose colonialism in Indochina after WWII. (As a French Canadian, that came home to me. Also, I was born 5 May 1954; Dien Bien Phu surrendered 2 days after my birth.) It foreshadowed the ghastly war to come.

Hungary ... Soviet tanks rolling in to crush democracy.
Chile ... no assistance in overthrowing the mafia regime, but an invasion to crush the new administration, which effectively put that people's history into the Soviet sphere.

1960s ... murderous in every way.

I turned to the Canadian military as a way of turning away from bourgeois society and culture. (My thinking was simply this: perhaps our society would be less bloody-minded if we effectively interdicted Soviet assets and drove them back.)

I didn't tangle with consumerism and the abuse of Freudian psychiatry. Above my pay grade!
I did tangle with "culture wars" ... as far back as 1970s.
All I could think of was how everyone around me, schoolyard and later, always indulged or at least ignored bullies and other villains.

"Malicious" might be rare. (Trump, however charismatic, is just a gifted psychopath. Nothing mystical here.) But "malignance" is not. Sick cultures produce sick individuals.

Bodhisattva aspiration is never other than simply sensible!

mangalam
--KC
