Denis

306 karma · Joined Apr 2023

Comments (92)

These are great posts, thank you for writing them!

IMHO, EAs can vastly improve our effectiveness by focusing more on effective communication. Your articles are definitely a step in the right direction.

There is an opportunity to go a lot further if we can also do more to adjust the style of our communication. 

To me, and to most EAs, your articles are beautifully written, with crystal-clear reasoning. However, we need to keep reminding ourselves that, in a sense, we who like to communicate in this way are outliers. We focus on content and precision and logic and data.

We can learn a lot from people who communicate very effectively in very different ways. Look at beer commercials. Look at TikTok influencers. Look at Donald Trump (really - he is obnoxious and wrong, but he is a very effective communicator in the sense that his communication serves his cynical, obnoxious purpose very well within his target audience; the fact that so many liberals refuse to admit this and to fight fire with fire is a big reason why he could still win).

Most people hated maths in school, and they didn't study philosophy or logic. When we communicate only in the style that we feel comfortable with, we're almost excluding them - while allowing others to communicate to them. So they end up believing the wrong people.

We want to be reasonable and logical, and convince people one by one. But most great communicators (including many good people, like MLK or JFK or Obama) realise that many people want to feel part of something bigger than themselves. It is part of what we are as humans. Trump knows this. His ardent supporters value loyalty to their group more than they value truth or logic or science. If you've ever been a fan of a sports team, you will know this feeling. Obama's "Yes we can" was also a movement. It allowed adherents to answer "yes we can" when they faced obstacles, even obstacles that seemed impossible to overcome.

As EAs, we're not comfortable with this kind of talk. Every article starts with reasons why it might not be correct. This is great from a philosophical POV, but not great for mass audiences.

Recently Daniel Kahneman died. He wrote the wonderful book "Thinking, Fast and Slow", which describes how, most of the time, people will jump to an immediately obvious conclusion - which is often wrong - rather than analysing a question in detail. Great mass-communicators realise this - they do not depend on people making the mental effort to study an issue, but rather look for ways to manipulate their fast-thinking mode. Beer commercials create a mental link between drinking beer and being surrounded by fun, attractive people in exciting locations. Laundry commercials create a mental link between using their products and having a nice suburban home and a happy family. And so on.

The SBF story is a perfect example. Millions of people form the easy connection between SBF, fraud, opulence, and EA. They conclude that EA is an excuse for rich people to justify getting really rich while making themselves feel good about themselves. This is based on exactly one data point - but it's the one that the public knows. Most people are not interested enough in EA to invest time in reading complex arguments about why this is wrong. It may even make them feel happy to see "self-righteous do-gooders" taken down a peg.**

Ironically, most charities are seen positively. This is because they communicate in a very different way from EAs. They show pictures of individuals suffering; they present themselves as caring and empathetic and emotional. They show the sacrifices they make to help others. Most EAs would not be comfortable communicating about EA in this way, but maybe we need to focus on the word "Effective" in our title and get out of our comfort zone. Because this kind of communication can be much more powerful than our logic / facts / data-based communication. Certainly, it can powerfully complement it.

I always cite climate denial as the great example of our time. There is no doubt that the scientists are right, that the IPCC recommendations are correct. Even the oil companies and the vehement climate deniers know this. But still, in terms of communication, they beat the scientists hands down.

We scientists focus on facts and logic and data, and that makes us lazy. It's convenient for us; it's our language. Deniers know that they don't have logic on their side, so they are forced to optimise their communication. They find ways to make it about tradition, about pride, about emotions. They find stories of individuals who will be harmed by climate action, and turn them into victim-heroes fighting against the cynical scientists. They obfuscate the data, not randomly, but in a way that they have learned from focus groups will create just enough doubt among most people. They strategically do not deny things that can easily be proven to non-scientists, but instead propose things like "let's wait until we have more evidence", which sound reasonable to anyone who doesn't have the time and energy to delve deeply into what it really means (more climate damage).

There is a certain irony in making this point while writing, badly, in exactly the style I'm saying isn't very effective for mass communication. But it's what I'm comfortable with too. My point is: there are people out there who have studied, scientifically, which methods of communication are effective with "the public". Politicians learn from them. The EA movement could do so too.

Ultimately, we are right (I think) on most of the points we argue; it would be very valuable to get more and more people thinking the way EAs do. We should not limit this to people who also like to communicate the way EAs do.

 

**The very existence of the term "do-gooder" is proof of this - there is no conceivable logical reason why people should hate a person who does good, but they do. Bono is consistently considered the most hated person in Ireland, in a close contest with Bob Geldof - because both are classed as "do-gooders" who need to get off their high horses. It's not about people thinking deeply about the good they actually do, or questioning whether they truly add value. It's about people not being comfortable with people who make them feel uncomfortable about themselves. By criticising them, we allow ourselves to feel better about not doing anything. I think that sometimes EAs could be seen in a similar way.

 

I upvoted this comment because it is a very valuable contribution to the debate. However, I also gave it an "x" vote (what is that called? disagree?) because I strongly disagree with the conclusion and recommendation. 

Very briefly, everything you write here is factually (as best I know) true. There are serious obstacles to creating and to enforcing strict liability. And to do so would probably be unfair to some AI researchers who do not intend harm. 

However, we need to think in a slightly more utilitarian manner. Maybe being unfair to some AI developers is the lesser of two evils in an imperfect world. 

I come from the world of chemical engineering, and I've spent some time working in pharma. In these areas, there is no "strict liability" as such, in the sense that you typically do not go to jail if you can demonstrate that you have done everything by the book.

BUT - the "book" for chemical engineering or pharma is a much, much longer book, based on many decades of harsh lessons. Whatever project I might want to do, I would have to follow very strict, detailed guidelines every step of the way. If I develop a new drug, it might require more than a decade of testing before I can put it on the market, and if I make one flaw in that decade, I can be held criminally liable. If I build a factory and there is an accident, they can check every detail of every pump and pipe and reactor, they can check every calculation and every assumption I've made, and if just one of them is mistaken, or if just one time (even with a very valid reason) I have chosen not to follow the recommended standards, I can be criminally and civilly liable. 

We have far more knowledge about how to create and test drugs than we have on how to create and test AI models. And in our wisdom, we believe it takes a decade to prove that a drug is safe to be released on the market. 

We don't have anything analogous to this for AI. So nobody (credible) is arguing that strict liability is an ideal solution or a fair solution. The argument is that, until we have a much better AI Governance system in place, with standards and protocols and monitoring systems and so on, then strict liability is one of the best ways we can ensure that people act responsibly in developing, testing and releasing models. 

The AI developers like to argue that we're stifling innovation if we don't give them totally free rein to do whatever they find interesting or promising. But this is not how the world works. There are thousands of frustrated pharmacologists who have ideas for drugs that might do wonders for some patients, but which are 3 years into a 10-year testing cycle instead of already saving lives. But they understand that this is necessary to create a world in which patients know that any drug prescribed by their doctor is safe for them (or that its potential risks are understood).

Strict liability is, in a way, telling AI model developers: "You say that your model is safe. OK, put your money where your mouth is. If you're so sure that it's safe, then you shouldn't have any worries about strict-liability. If you're not sure that it's safe, then you shouldn't be releasing it."

This feels to me like a reasonable starting point. If AI-labs have a model which they believe is valuable but flawed (e.g. risk of bias), they do have the option to release it with that warning - for example to refuse to accept liability for certain identified risks. Lawmakers can then decide if that's OK or not. It may take time, but eventually we'll move forward. 

Right now, it's the Wild West. I can understand the frustration of people with brilliant models which could do much good in the world, but we need to apply the same safety standards that we apply to everyone else. 

Strict liability is neither ideal nor fair. It's just, right now, the best option until we find a better one.

This is awesome. If every recruiter gave feedback like that, it would help so much. Thanks for setting such a great example!

This is a great article. It is really unfortunate when a good candidate puts a lot of work into an application and it is rejected for a reason that doesn't reflect their ability to do the job. 

That said, we all need to accept that we live in a bizarre world in which we say we want engaged, motivated, qualified people working on impactful areas, but then, when they choose to do so, it can be extremely difficult for those engaged, motivated people to actually find impactful roles.

It seems like many EA roles get hundreds of applications (literally). And because hirers are open-minded, they encourage everyone to apply, even if they're not sure they're a good fit.

One result of this is that a vast amount of the energy and commitment of EAs is invested in the task of searching for work (on one side) or in evaluating and selecting applicants (on the other).

It just feels unfortunate, in the sense that if this energy could be invested in something impactful, it would be better. Ultimately, a great CV and cover letter don't help any humans or animals.

I don't have a solution. Obviously there are just not so many roles out there, and we can't just create roles without funding and organisations and managers and so on. And we don't want to discourage people from applying for roles they think they could do well. 

This has been a pet peeve of mine since my pre-EA days. I wrote about it from the perspective of a recruiter on Quora, and more than 1000 people upvoted my answer. So it's definitely not an EA-specific problem. 

In fact, I would go further and say that EA organisations do a lot of things far better than most organisations:

  1. They often put a lot of emphasis on work-tests, which are far better than interviews at assessing a person's fit for a role - and which are also a great learning experience even for the people who don't get hired. 
  2. Many recruiters do give feedback. Useful, tangible feedback. Often this only happens after the initial screening. 
  3. Some recruiters even go out of their way to help applicants find an impactful role, because, unlike corporations, we're all rooting for each other to succeed. 

But even still, it would be great if there were a better way to get more people into roles (even if initially low-paid roles, with the potential for upgrading) in which they learn and get experience they can put on their CVs, rather than have them desperately trying to find a role.

I kind of imagine that in some EA-hub locations, this is what happens: lots of people know each other and can recommend roles to each other. I see something like this in the Brussels EU bubble, where once you're part of the community, it seems like there are always roles opening up for people who need or want to move. So maybe what I'm writing refers more to people living away from EA hubs, who would like to switch to more impactful roles but struggle to find one. Unfortunately, if we don't find a way to include these people, the potential growth of EA will be limited.

For now, all I can do is strongly encourage any recruiter to provide any critical feedback they can. Maybe not to everyone, but if there is someone who is clearly doing something wrong (several typos on their CV, for example), please tell them. I've reviewed a lot of CVs and job applications, and I can say that I've never had a negative reaction when I sent someone a quick note to explain how they could improve their chances of getting other roles (always phrased this way to avoid suggesting that was the reason they weren't hired by us).


I am also very curiously and closely following the new Moral Circles created by Rutger Bregman in the Netherlands, which try to convince highly experienced professionals to move to more impactful roles, to see if they have a good solution to this. There seem to be a lot of people hearing his message, and I want to see how they manage the challenge of making sure that all the very capable people who want to do something more impactful actually find a role where they can do so.

 

Thank you for this comment. 

I really appreciate when someone puts an explanation for why they down-voted something I wrote :D 

Indeed, I knew that what I wrote would be unpopular when I wrote it. And maybe it just looks like I'm an old cynic polluting the idealism of youth. But I don't agree that it's naive. If anything, the naivete lies on the other side. 

How can an EA not realise that damaging the EA movement is damaging to the world? 

So you need to balance the potential damage to the world through damage to EA against the potential of avoiding damage to the world via the investigation. I have not seen any comments mentioning this, so I wrote about it, because it is important.

I'm not clear in what sense anything the EA movement did with SBF has damaged the world, unless you believe that SBF would have behaved ethically were it not for the EA movement, and that EAs actively egged him on to commit fraud. I presume that when you refer to "naive-consequentialist reasoning", you are referring to what happened within FTX (in addition to my own reasoning, of course!), rather than to something that someone in the EA movement (other than SBF) did?

I don't know the details, but I would expect that the donations that we received from him were spent very effectively and had a positive impact on many people. (If that is not the case, that should be investigated, I'd agree!). So it is highly likely that the impact of the EA movement was to make the net impact of SBF's fraud significantly less negative overall. 

Of course, I may be wrong - I am interested to hear any specific ways in which people believe that the EA movement might be responsible for the damage SBF caused to investors, or to anyone other than the EA movement itself. 

But my reading of this is that SBF caused damage to EA, and not the other way round. And there was very little that EA could have done to prevent that damage other than somehow realising, unlike plenty of very experienced investors, that he was committing fraud. 

So (and again I may be wrong) I don't see how an EA investigation will prevent harm to the world. 

But I do very clearly see how an investigation could cause damage to the EA movement. The notion that we can do an investigation of what we did wrong in the SBF case and not have it perceived externally as a validation of the negative stereotype that the SBF case has projected on the EA movement is optimistic at best. 

I'm not sure if this position comes from people who mostly associate with other EAs and are just unaware of the PR problems that SBF has caused the EA movement.

Remember that there has been a long and very public trial, so all the facts are out there and public. People are already convinced that SBF did bad things.

The EA movement just needs to keep doing what we can to minimise the public's connection between SBF and EA. 

Again, to finish, I do appreciate that many people disagree with this perspective. It seems like ethically we should investigate, especially if we believe we have nothing to hide. But that's just not how the world works. 

And I really appreciate that you explained your disagreement. 
 

Wow, I expected to disagree with a lot of what you wrote, but instead I loved it, and especially I appreciated how you applied the more general concept of making good use of your time to language-learning. 

I really liked your list of reasons to learn a language, and that you didn't limit it to when it is "useful", which is so often the flaw I see in articles about language, which focus on how many dollars more you could earn if you spoke Mandarin or Spanish. 

I fully agree that if you do not get energized by learning languages, if it's a chore that leaves you tired and frustrated, then maybe your energy is better spent on other vital tasks. 

One way to look at this is on a spectrum. On the left are things that are vitally important and that you do even if they are no fun, like taxes, workouts, or dental visits. On the right are things that energize or relax you, like watching football or doing Wordle, where you don't look for any "value" in them; you just enjoy them.

The secret of a happy, successful life is to find as many activities as possible that you could fit at both ends of the spectrum. Like playing soccer, which is both fun and healthy. 

For some of us, learning foreign languages is in this category. I started learning for fun, out of intellectual curiosity, but languages have turned out to help me in many tangible ways that I hadn't expected.

But for many people, learning languages doesn't fit at either end. You don't enjoy it, and, at least at the level you're reaching, it doesn't add much value to your life. For those, it probably isn't a good use of your time compared to the many opportunities out there. 

It would be great to get more people to read your article and think about how it applies to them - maybe not just in relation to languages, but to all the things that we're encouraged to do because they are "good" in some abstract sense.

Wow, Sarah, what a wonderful essay!

(don't feel obliged to read or reply to this long and convoluted comment, just sharing as I've been pondering this since our discussion)

As I said when we spoke, there are some ideas I don’t agree with, but here you have made a very clear and compelling case, which is highly valuable and thought-provoking. 

Let me first say that I agree with a lot of what you write, and my only objection regarding the parts I agree with would be that those who do not agree are perhaps doing very simplistic analyses. For example, anyone who thinks that being a great teacher cannot be a super-impactful role is just wrong. But if you do a very simplistic analysis, you could conclude that. It's only when you follow the whole complex chain of influences that the teacher has on the pupils, and that the pupils have on others, and so on, that you see the potential impact. So I would agree that someone who claims that, in their role, they are 100x more impactful than a great teacher is making a case that is at best empirically impossible to demonstrate. And so, a person who believes they can make the world better by becoming a great teacher should probably become a teacher.

And I’d probably generalise that to many other professions. If you’re doing a good job and helping other people, you’re probably having an above-average impact. 

I also agree with you that the impacts of any one individual are necessarily the result of not just that individual, but also of all the influences that have made the impact possible (societal things) and of all the individuals who have enabled that person to become who they are (parents, teachers, friends, ...). But I don't think most EAs would disagree with this.

The real question, even if not always posed very precisely, is: for individuals who, for whatever reason, find themselves in a particular situation, are there choices or actions that might make them 100x more impactful?

And maybe if I disagree on this, it's because I've spent my career doing upstream research, and in upstream research, it's often not about incremental progress, but rather about 9 failures (which add very little value) and one huge success which has a huge impact. And there are tangible choices which affect both the likelihood of success and the potential impact of that success. You can make a choice between working on a cure for cancer or on a cure for baldness. You can make a choice between following a safe route with a good chance of incremental success, or a low-probability, untested route with a high risk but the potential for a major impact.

I also think there is some confusion between the questions "can one choice make a huge impact?" and "who deserves credit for the impact?" On the latter question, I would totally agree that we would be wrong to attribute all the credit to one individual. But this is different from saying that there are no cases where one individual can have an outsized impact in the tangible sense that, in the counterfactual situation where this individual did not exist, the situation would be much worse for many people.

When we talked about this before (after you had given Sam and me your 30-second version of the argument you present here 😊), I think I focused on scientific research (my area of expertise). I agreed that most scientists have at best an incremental impact. Often one scientist gets the public credit for the work of hundreds of scientists, technicians, teachers and others, maybe because they happened to be the ones to take the last step. Even Nobel prize-winners are sometimes just in the right place at the right time.

But I also argued that there were cases, with Einstein being the most famous one, where there was a broad consensus that one individual had had an outsized impact. That the counterfactual case (Einstein was never born) would lead to a very different world. This is not to say that Einstein did not build on the work of many others, like Lorentz, which he himself acknowledged, or that his work was not greatly enhanced by the experimental and theoretical work of other scientists who came later, or even that some of the patents he evaluated in his patent-office role did not majorly impact his thinking. But it still remains that his impact was massive, and that if he had decided to give up physics and become a lumberjack, physics might have developed much more slowly, and we might still be struggling with technical challenges that have now been resolved for decades - like how to manage the relativistic time differences we observe on satellites, which we now use for so many routine things, from TV to car navigation.
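(As an aside, that satellite example is easy to make concrete. Here is a minimal back-of-envelope sketch of my own, in Python with round textbook constants - an illustration, not anything from Sarah's essay - of the clock corrections GPS needs because of Einstein's two relativity theories:)

```python
import math

# GPS clock drift from relativity. Round constants; the orbit is the
# ~20,200 km altitude GPS orbit (illustrative assumptions).
G_M = 3.986e14      # Earth's gravitational parameter, m^3/s^2
C = 2.998e8         # speed of light, m/s
R_EARTH = 6.371e6   # Earth radius, m
R_ORBIT = 2.657e7   # GPS orbital radius, m
DAY = 86_400        # seconds per day

v = math.sqrt(G_M / R_ORBIT)                          # orbital speed, ~3.9 km/s
sr = -v**2 / (2 * C**2) * DAY                         # special relativity: moving clock runs slow
gr = G_M * (1 / R_EARTH - 1 / R_ORBIT) / C**2 * DAY   # general relativity: weaker gravity, clock runs fast
print(f"SR: {sr * 1e6:+.1f} us/day, GR: {gr * 1e6:+.1f} us/day, net: {(sr + gr) * 1e6:+.1f} us/day")
# -> net ~ +38 us/day; left uncorrected, that is ~11 km/day of ranging error.
```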

For a famous, non-scientific (well, kind of scientific) example: one of the most famous people I almost interacted with online was Dick Fosbury. One of my friends worked with him on the US Olympic committee, and one time he replied to one of my comments on Facebook, which is about my greatest claim to fame! It is possible (though unlikely) that if he hadn't existed, humans might still be doing the high jump the way they did before him. Maybe it wasn't him specifically but one of his coaches, or maybe some random physics student, who got the idea of the Fosbury flop, but it was likely one person with one idea, or a small group of people working on a very simple question (how to maximise the height that a jumper can clear, given a fixed maximum height of the centre of gravity). Of course, people jumping higher doesn't really impact the world greatly, but it's a very clear example of one individual having an outsized influence on future generations.

I would argue that there are many more mundane examples of outsize impact compared to the counterfactual case. 

A great teacher, compared to a merely "good" teacher, can have an outsize impact - maybe inspiring pupils to change the world rather than just to succeed in their careers, or maybe teaching them statistics in a way that they can actually understand, enabling them to teach others.

A great boss compared to a good boss is another example. I was lucky enough to work for one boss who almost single-handedly changed the way people were managed across a massive corporation. In a 20th-century culture of command & control, of bosses taking credit for subordinates' work but not taking the blame, of micromanaging, and of many other now-outdated styles, he was the first one to come in and manage like an enlightened 21st-century manager, as a "servant leader". He would always take the blame personally and pass on the credit, which at the time was unheard of. At first this hurt his career, but he persevered, and suddenly the senior managers noticed that his projects always did better, his teams were more motivated, his reports were more honest (without "positioning"), and so on. And suddenly many others realised that his was the way forward. And in literally a few years, there was a major change in the organisation's culture. Senior old-style managers were basically told to change their ways or to leave.

This was one individual with an outsized influence. It was not obvious to most people that he personally had had that much impact, but I just happened to be right there in the middle (in the right place at the right time) and got to observe the impact he was having, to hear the conversations with him and about him, and to see how people started first to respect and then to imitate him. 

So I’m not convinced in general that one person cannot have outsized impact, or that one role or one decision cannot have outsized impact. 

However, maybe our views are not totally disparate. Because in many cases, I would agree that those who have outsized impact could not have predicted that they would have outsize impact, and in many cases weren’t even trying to have outsize impact. My boss was just a person who believed in treating everyone with respect and trust, and could not imagine doing differently even if it had been better for his career. Einstein was a physicist who was passionately curious, he wasn’t trying to change the world as much as to answer questions that bothered him. Fosbury wanted to win competitions, he didn’t care whether others copied him or not. 

And maybe when people do have outsize impact, it's less about their being strategic outliers (who chose to have outsize impact) and more that they are statistical outliers. In some fields, if 1000 people work on something, then each one moves it forward a bit. In other fields, if 1000 people set out to work on a problem, maybe one of them will solve it, without any help from the others. You could argue that that one person has had 1000x the impact of the others. But maybe it's fairer to say that the impact was the result of "1000 people worked on it", rather than to focus on the one person who found the solution, even if that solution was unrelated to what the other 999 people were doing. In the same way, if you buy 1000 lottery tickets you have 1000x the chance of winning, but there is no meaningful sense in which the winning lottery ticket was strategically better than the others before the draw was made.

And yet, it feels like there are choices we make which can greatly increase or decrease the odds that we make a positive and even an outsize contribution. And I'm not convinced by (what I understand to be) your position that just doing good without thinking too much about potential impact is the best strategy. Right now, I could choose to take a typical project-management job, or I could choose to lead R&D for a climate start-up, or I could work on AI governance. There is no way I can be sure that one role will be much more impactful, but it is pretty clear that at least two of those roles have strong potential to be very impactful in a direct way, while for the project-management role, unless the project itself is impactful, it's much less likely I could have a major impact.

I'm pretty sure that by now I'm writing for myself, having long since lost anyone still trying to follow my circuitous reasoning. But let me finish (I beg myself, and graciously accede).

I come away with the following conclusions:

  1. It is true that we often credit individuals with impacts that were in fact the result of contributions from many people, often over long periods. 
  2. However, there are still cases where individuals can have outsize impact compared to the counterfactual case where they do not exist. 
  3. It is not easy to say in advance which choices or which individuals will have these outsize influences …
  4. … but there are some choices which seem to greatly increase the chance of being impactful. 

Other than that, I broadly agree with the general principle that we should all look to do good in our own way, and that if you’re doing good and helping people, it’s likely that you are being impactful in a positive way, and probably you don’t need to stress about trying to find a more impactful role. 

I know. :( 

But as a scientist, I feel it's valuable to speak the truth sometimes, to put my personal credibility on the line in service of the greater good. Venus is an Earth-sized planet which is roughly 400 °C warmer than Earth, and only a small fraction of this is due to it being closer to the sun. The majority is due to its thick CO2 atmosphere trapping the heat it absorbs - an extreme greenhouse effect. It is an extreme case of global warming. I'm not saying that Earth can be like Venus anytime soon; I'm saying that we have the illusion that Earth has a natural, "stable" temperature, and that while it might vary, eventually we'll return to that temperature. But there is absolutely no scientific or empirical evidence for this.

Earth's temperature is like a ball balanced in a shallow groove on the top of a steep hill. We've never experienced anything outside the narrow groove, so we imagine that leaving it is impossible. But we've also never dramatically changed the atmosphere the way we're doing now. There is, like I said, no fundamental reason why global warming could not go totally out of control, way beyond 1.5 °C or 3 °C or even 20 °C.
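The "ball in a groove" picture can be made concrete with a toy model. Below is a minimal sketch of my own (in Python) of the classic zero-dimensional energy-balance model with ice-albedo feedback that climate textbooks use to illustrate multiple equilibria; all parameter values are illustrative assumptions, not a forecast. It finds two stable temperature states with an unstable ridge between them - push the ball past the ridge, and it rolls to a very different climate:

```python
import numpy as np

# Toy zero-dimensional energy-balance model with ice-albedo feedback.
# Parameter values are illustrative, tuned to give Earth-like numbers.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0       # solar constant, W m^-2
EPS = 0.61        # effective emissivity (a crude greenhouse factor)

def albedo(T):
    """Colder planet = more ice = more reflection: ramp from 0.7 down to 0.3."""
    return np.clip(0.7 - 0.4 * (T - 250.0) / 30.0, 0.3, 0.7)

def net_flux(T):
    """Absorbed sunlight minus emitted infrared, W m^-2 (positive = warming)."""
    return S0 / 4.0 * (1.0 - albedo(T)) - EPS * SIGMA * T**4

T = np.linspace(200.0, 320.0, 100_000)
f = net_flux(T)
for i in np.where(np.sign(f[:-1]) != np.sign(f[1:]))[0]:
    stable = f[i] > 0 > f[i + 1]  # warming below, cooling above -> self-correcting
    print(f"equilibrium near {T[i]:.0f} K ({'stable' if stable else 'unstable'})")
# -> a cold stable state (~233 K), an unstable ridge (~265 K),
#    and a warm stable state (~288 K, roughly today's Earth).
```

The warm equilibrium is "stable" only against small nudges; change the greenhouse factor enough and the ridge moves under the ball.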

I have struggled to explain this concept, even to very educated, open-minded people who fundamentally agree with my concerns about climate change. So I don't expect many people to believe me. But intellectually, I want to be honest. 

I think it is valuable to keep trying to explain this, even knowing the low probability of success, because right now, statements like "a 1.5 °C temperature increase" are just not having enough impact to change people's habits. And if we do cross a tipping point, it will be too late to start realising this.

 

I'm not sure. IMHO a major disaster is happening with the climate. Essentially, people have a false belief that there is some kind of set-point, and that after a while the temperature will return to it, but this isn't the case. Venus is an extreme example of an Earth-like planet with a very different climate. There is nothing in physics or chemistry that says Earth's temperature could not one day exceed 100 °C.

It's always interesting to ask people how high they think sea level might rise if all the ice melted. This is an uncontroversial calculation which involves no modelling - just looking at how much ice there is and how much ocean surface area there is. People tend to think it would be maybe a couple of metres. It would actually be about 60 m (200 feet). That will take time, but very little time on a cosmic scale - maybe a couple of thousand years.
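For anyone who wants to check, here is the back-of-envelope version (my own sketch, in Python, using round published figures for the ice volumes and ocean area - assumptions worth double-checking):

```python
# Back-of-envelope: sea-level rise if all land ice melted.
ANTARCTICA_KM3 = 26.5e6   # Antarctic ice sheet volume, km^3
GREENLAND_KM3 = 2.9e6     # Greenland ice sheet volume, km^3
GLACIERS_KM3 = 0.2e6      # other glaciers and ice caps, km^3
OCEAN_AREA_KM2 = 361e6    # global ocean surface area, km^2
ICE_TO_WATER = 0.917      # ice is less dense than liquid water

water_km3 = (ANTARCTICA_KM3 + GREENLAND_KM3 + GLACIERS_KM3) * ICE_TO_WATER
rise_m = water_km3 / OCEAN_AREA_KM2 * 1000  # km -> m
print(f"Sea-level rise: ~{rise_m:.0f} m")
# -> ~75 m raw; correcting for ice already sitting below sea level and for
#    the ocean spreading onto land brings published estimates to ~60-70 m.
```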

Right now, if anything, what we're seeing is worse than the average prediction. The glaciers and ice sheets are melting faster. The temperature is increasing faster. Etc. Feedback loops are starting to become powerful. There's a real chance that the Gulf Stream system will weaken or shut down, which would be a disaster for Europe, ironically freezing us as a result of global warming ...

Among serious climate scientists, the feeling of doom is palpable. I wouldn't say they are exaggerating. But we, as a global society, have decided that we'd rather have our oil and gas and steaks than prevent the climate disaster. The US seems likely to elect a president who makes it a point of honour to support climate-damaging technologies, just to piss off the scientists and liberals. 
