This is a special post for quick takes by Camille. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

We should prepare for a hypothetical generalized EA-bashing.

As time goes by, we should expect EA to be the target of more and more criticism. More than that, we should probably also plan for periods during which EA will, by default, be considered an evil thing. This scenario does not seem far-fetched to me: it already seems to be materializing in France.

We need a plan: building one is cheap, and it is plausible enough that EA's reputation will keep degrading over the next three years for time spent on this in local groups to have net-positive expected value.

1-Cultivate resilience

I think that the best thing we can do is to never, never abandon the principles of charity, respect, and rationality that inhabit the EA space. Some people will try to make us abandon them; they will try to provoke us into anger, into saying things that are unwarranted. We should never commit this mistake. Yann LeCun is a good example of how someone can end up exploiting (voluntarily or not) another person's anger: on Twitter he is borderline violent, while in real life he backs off and discusses calmly. One can imagine interlocutors appearing violent in real life, facing his calm in-person version, because they still resent the near-violence he displayed online. This would be disastrous.

On all sides and with all interlocutors, even the most abhorrent ones, we should strive to be calm and respectful. I think that Eliezer Yudkowsky's exchanges with Yann LeCun are, sadly, an example of the opposite. Maybe Eliezer sounds like a calm person to you, but I can very easily empathize with LeCun on why the replies sound arrogant and dismissive. You cannot say the same about someone like, e.g., Anthony Magnabosco, who is a better model to strive towards in this setting (I'm not talking about his method, but about his general tone and gentleness).

2-Do not lose sight of the purpose

Something worth noting is that, as EA becomes the center of many critiques, some of them might have a point. We should always keep a clear eye and remind ourselves that what we're trying to do here is to have true beliefs and act morally. If someone states "A", you should not reflexively state "not A". You should instead ask yourself: "What kind of evidence would be more plausible if A were true than if A were false? Does such evidence exist?"

Ideally, you'd want the observation of a third party to be:

"Wow, this person seems mad and angry, yet the EA in front of them is so nice, constructive, respectful, and empathetic. Maybe EAs are wrong, but you have to admit that they're outstanding conversation partners."

3-Know when to answer

I think that the biggest blind spot right now is that no one has a clear model of when to answer. We shouldn't be relying only on gut instinct here; there is surely some evidence on when, what, and how to answer false statements. It is also important to know the conditions under which not answering is clearly a dominated move. In some circumstances, someone can decline to answer because the point is unimportant and inconsequential, and answering would just pollute the debate (say, a flat-earther disagrees with an astrophysicist; the astrophysicist has better things to do). But sometimes, someone declines to answer because the point is completely right (say, a flat-earther who has just been debunked by an astrophysicist stays silent because they know they'll lose).

I think that EAs have no idea what the public perceives each time a critique goes unanswered. Does the public think that EA is admitting it is wrong and pretending to ignore the critique, or that the critique is too ridiculous to deserve a reply? No one knows, yet we should make an effort to find out.

4-Know how to answer
If someone is angry, we should listen to them and help them calm down.

One of the biggest mental blocks I meet when I talk about answering critiques is the presumption that an answer has to be a rebuttal, or even a four-page debunking published in the Times. It doesn't. There exist several evidence-based techniques that are quite apt at managing tense situations, and none of them involve active counter-argumentation or publishing in mass media; they even actively recommend against it. An answer can sometimes be as simple as sending a DM and offering to meet, or checking that you have understood the other person correctly. I think a lot more people should consider aligning a large share of their interactions with these models.

5-One failure and we're done

I think it is acceptable to assign some probability to the possibility that EA being perceived negatively in even one powerful country would be enough to hamper all efforts on EA-related topics, specifically because those efforts require so much coordination. Currently, France is headed towards becoming an anti-safety hub. Many people in the US might think this is inconsequential, but remember that it takes no more than one country refusing to slow down AGI progress to restart the race on a global scale, and no more than one powerful country refusing to ban gain-of-function research to give other countries reasons not to shut down their own labs. If the world were to meet to sign a convention on AI Safety, I currently expect France to refuse to sign it, to negotiate over it until it is useless, or even to treat it as a hostile and unfair proposal.

More than that, since tense discussions get vastly more media coverage than calm ones, I suspect it wouldn't take more than a 1:20 ratio of bad discussions to good ones to depict EA in a very negative light, possibly even less.
 

6-Summary: See yourself as a peace moderator

The coming times might turn out to be dark. Please do not let yourself merely counter-argue on social media. Engage amicably with critics, genuinely discuss whether their hypothesis is right and how to test it, and build friendly, trusting bonds with them.

AI Safety Audience Dialog Initiative: Call for Alpha Testers

AISADI is a potential online program aiming to teach effective discussion techniques to AI Safety workers so that they can handle disagreement effectively.
The program is currently at an alpha stage and needs testers, both to determine the length of the program (estimated at 1h30) and to measure whether its effects are significant. The test will consist of following a presentation and completing exercises about various conversational methods. If interested, please consider emailing camille.berger@psl.eu. Your help will be incredibly appreciated!

FAQ:
1-What is AISADI, exactly?
AISADI aims to teach conversational methods that improve epistemic rationality and rapport, e.g., drawing on Street Epistemology, Deep Canvassing, Cooling Conversations, and Principled Negotiation. This teaching is delivered through a Deliberate Practice framework, with timely, feedback-rich exercises.

2-How developed is the program?
The program consists of an introduction to the general phases of an effective dialog, plus fast-feedback exercises. By the beta stage, the exercises will have been selected for their effectiveness and discussed with scientific experts on each of the techniques.

3-Is it manipulative?
No. The program will eventually be open to all sides of the AI Safety debate; its goal is to maximize epistemic rationality for the duration of the discussion, on the topic of the discussion. I believe this requires handling rapport well, which in turn requires the technique not to be manipulative.

4-Why do you want to do this?
With AI Safety becoming mainstream, I believe that the skills needed to have a rational discussion with non-rationalist, non-EA people will soon be required of a large proportion of AI Safety workers (rather than a few communicators), and that the community currently lacks those skills.

5-Why "potential"?
The program is subject to funding and will be evaluated empirically. If the empirical results are not convincing, or if funders identify a core issue with the program, it will be abandoned.

6-Is this massive outreach?
No. The program targets AI Safety workers and teaches them to respond appropriately to live, non-media criticism; it is not outreach aimed at raising public awareness of AI Safety.
