
A week ago @Rockwell wrote this list of things she thought EAs shouldn't do. I am glad this discussion is happening. I ran a poll (results here) and here is my attempt at a list more people can agree with.

We can do better, so someone should feel free to write version 3.

The list

  • These are not norms for what it is to be a good EA, but rather some boundaries around things that would damage trust. When someone crosses these boundaries, we widely agree it is a bad sign.
  • EAs should report relevant conflicts of interest
  • EAs should not date coworkers they report to or who report to them
  • EAs should not use sexist or racist epithets
  • EAs should not date their funders/grantees
  • EAs should not retain someone as a full-time contractor or grant recipient for the long term, where this is illegal
  • EAs should not promote illegal drug use to their colleagues who report to them

Commentary

Beyond racism, crime, and conflicts of interest, the clear theme is "take employment power relations seriously".

Some people might want other things on this list, but I don't think there is widespread enough agreement to push those things as norms. Some examples:

  • Illegal drugs - "EAs should not promote illegal drug use to their colleagues" - 41% agreed, 20% disagreed, 35% said "it's complicated", 4% skipped
  • Romance during business hours - "EA should, in general, be as romanceless a place as possible during business hours" - 40% agreed, 21% disagreed, 36% said "it's complicated", 2% skipped
  • Housing - "EAs should not offer employer-provided housing for more than a predefined and very short period of time" - 27% agreed, 37% disagreed, 31% said "it's complicated", 6% skipped

I know not everyone loves my use of polls or my vibes as a person. But consensus is a really useful tool for moving forward. Sure, we can push aside those who disagree, but if we find things that are 70%+ agreed, those tend to move forward much more quickly and painlessly. And it builds trust that we don't steamroll opposition.

So I suggest that rather than a big list of things that some parts of the community think are obvious and others think are awful, we try to get a short list of things that most people think are pretty good/fine/obvious.

Once we have a "checkpoint" that is widely agreed, we can tackle some thornier questions.

Full poll results

Comments



What was the sample size?

59 people.

Thanks! And do you think the sample was representative?

Probably not that representative, no. I guess like 3-6/10

I think these polls would benefit from a clause along the lines of "On balance, EAs should X", because a lot of the discourse collapses into examples and corner cases about when the behaviour is acceptable (e.g. the discussion over illegal actions ending up being about melatonin). I think it is important to centre the conversation on where the probability mass of these phenomena actually lies.

I dunno; we're talking about risks worth monitoring or things that are bad signals. That's about the edge cases.

I don't want a big list of rules of things EAs should or shouldn't do.

I think the should/shouldn't do list is too binary. To make it onto this list, a behaviour needs to be bad in almost all circumstances, which necessarily makes the list short and narrow.

A list of things that you should think carefully before doing, and attempt to mitigate the downside risks if you decide to proceed, is more useful IMO. This can be broader and cover more of the grey area issues.

Can you suggest how you'd word it?

"Here is a list of behaviors/circumstances that tend to be risky. You should give serious consideration to avoiding these circumstances unless you have reason to believe that the risks don't apply to you. Be very careful if you choose to engage with these."

In my mind I'm thinking that it is roughly parallel to certain sports or certain financial investments: plenty of people come out fine, but the risks are much more elevated compared to the average/norm in that field (compared to the sports that people normally play, or compared to similar investments). I think that the personal circumstances matter a lot: to continue the financial and sport analogy, some people have the discipline to not pull money out of a bear market, or have years of practice walking a tightrope, and thus they are less likely to be hurt/damaged from certain behaviors.

Something like the following (I don't like this wording, but the vibe I'm going for):

EAs and EA organizations taking actions on this list should perform a risk analysis. If they decide to proceed, they should put mitigations in place where reasonable and appropriate, and review the risk analysis if necessary; for example, if circumstances change or the situation lasts longer than expected.

I think Rockwell's list was a good basis for discussion, and this poll and post can help move that discussion - but a priori consensus is just one (albeit important) criterion for choosing norms. The expected effect from their adoption or rejection is another.

There should probably be some place and time where this can be discussed with more focus. Something akin to a conditional constitutional convention.

What would that convention look like?

Barring autocorrect (see edit to my comment), I imagine it'd be some collection of EAs who have discussion groups for a week or two on specific topics, and at the same time try to reach consensus in the full group on a set of norms.

I think until we choose that group this is a non-awful way of doing that?

The "not dating funders/grantees" item is a little strange to me as phrased, although I certainly strongly agree in the cases most people are imagining.

As phrased it sounds like there is a problem (for example) of paying a girlfriend/boyfriend with your own funds to do an extended project. Which is sort of weird and unusual but what exactly is the problem with that? I think what this is getting at is you shouldn't date a grantee that you are deciding to pay with someone else's money or on behalf of a larger organization. Correct?

What "EAs think EAs should do" might not be a great way of dealing with these questions, but it is valuable information. People might also get thrown off by the title when the post seems to care more about signals pointing towards risks (as opposed to monitoring EA behaviour).

Sorry, what would you title it? I was trying to be in the same vein as the first post.

I’d be keen to hear your views and whether they differed from the poll results in any aspects.

I found the 17% of people who agreed that there shouldn't be discussion of polyamory a little upsetting. I doubt they really meant it the way it came across but it felt judgemental. 

I think in general I dislike much of the "EA is too weird" discussion tonally. As if weirdness is something that's cheap to change rather than very expensive.

I think it is extremely ambiguous what "talk about polyamory" means. For example, I imagine many people (tbh I'd guess more than 17% of EAs) would find it unpleasant if there were regular and unavoidable discussions of whether polyamory* is net bad for society, EA, etc. in their workplace. I'd personally be fine with it if other people are, but there's always going to be a part of me tracking whether people are likely to be non-visibly upset.

Now whether a non-work topic being upsetting means people shouldn't discuss it in the workplace is debatable. I think it'd be too draconian to have workplace rules against it (at least by what I understood to be coastal American norms), but having soft norms against it seems probably preferable.

*other examples that might fall into this category: monogamy, body positivity, feminism, Christianity; I'm sure people can generate other examples.

To be clear, when I voted that talking about polyamory in the workplace is OK, I meant someone telling a coworker about their own life/preferences/experiences.

For context on my own vote: I’d give the same answer for talking about monogamy.

  • People should clearly be able to say “my partner(s) and I are celebrating my birthday tonight”, “it’s my anniversary!”, and “look at this cute picture of my metamour’s dog!”, and then answer questions if a colleague says, “what’s a metamour?” Just like all colleagues should be able to talk about their families at work.

  • People should be aware that it’s risky to spend work time nerding out about dating, romantic issues, sex, hitting on people, etc. People should be aware that mono people in the Bay have often reported feeling pressured or judged for not being poly. But just like with any relationship type, discussing romance at work is very likely to make someone feel uncomfortable, and junior people often won’t feel like they can say so.

Maybe this would provide a little more context. Politics, sexual and romantic relationships, money, and religion are topics that are traditionally considered somewhat private in the USA, and are widely viewed as somewhat rude to talk about in public. I would feel fine talking about any of these topics with a close friend, but I wouldn't want to hear a colleague discuss the details of their romantic relationship any more than I want to hear the particulars of their money issues or their faith. Naturally, these norms can vary across cultures, but there is a fairly strong norm not to discuss these topics in a workplace in the USA, at least.

The other big factor that comes to mind for me is the difference between a mere mention in passing and a repeated/regular topic of conversation. On a very superficial level, we are there to work, not to talk about relationships. On a more social/conversational level, I don't want to be repeatedly badgered with someone else's relationship status or romantic adventures. I don't think that polyamory should be a prohibited topic any more than "do you want to have kids someday" or "I'm excited for a date this weekend" should be prohibited. But if any of those are repeatedly brought up in the workplace... Well, I'd like to have a workplace free from that type of annoyance. So (for me at least) it is less about "there shouldn't be discussion of polyamory in the workplace" and more about "there shouldn't be regular and extended discussions of people's personal relationships in the workplace."

  1. I'm assuming that the colleague is an acquaintance, rather than a friend.

I think this is something to be careful of, but I think putting it on a risk register or saying people shouldn't do it is a big step, and not what people do with other relationships.

Seems more of a post hoc justification than a coherent position regardless of relationship type.

Talking about a partner's existence or day-to-day life with them is not widely considered private or rude (source: an American). Getting specific about feelings or sex is private, but serious partners come up in a lot of casual ways (what'd you do this weekend? Went roller skating with my girlfriend).

Elizabeth, if the meaning coming across is that I am proposing the mere acknowledgement of a partner's existence as rude, then I have phrased my writing poorly. I agree that talking about a partner's existence or day-to-day life with them is not widely considered private or rude. It seems that we both agree that mentioning it (What'd you do this weekend? Went roller skating with my girlfriend) is fine, and getting into specifics is more private.

I think the misunderstanding might be focused on what "talking" means.
