Denis

346 karma · Joined Apr 2023

Comments: 95

Is it a coincidence that this appeared in the New York Times yesterday, or are the two related?

How Do We Know What Animals Are Really Feeling? - The New York Times (nytimes.com)

Regardless, it is great to see more realisation and communication around this topic. Most people just do not make any mental association between "food" and "animal suffering". One day this will all appear utterly barbaric, the way slavery appears barbaric to us today even though some highly reputed figures throughout history owned slaves. 

The more communication we have around animal consciousness and suffering, the faster that will happen. 

The best kind of communication may well be the kind that is not "accusatory" - just informative. Let people think about it for themselves rather than telling them what to think. 

Ultimately, maybe the best hope for ending animal suffering is alternative protein, and it is shocking how little money and effort are committed to it, given that it's also critical for climate, for hunger reduction, for resilience. Alternative protein offers the potential to tell people: "here is a cheaper, healthier, tastier, climate-friendlier... alternative to meat, which also avoids animal suffering." 

There are thousands of people who would jump on that statement and say it's unrealistic, but it's absolutely not. It's just that we're not treating it like the emergency that it is; we're not putting the same resources into it that we're putting into making more powerful iPhones. We could choose to. 

 

I've had quite a few disagreements with other EAs about this, but I will repeat it here, even if it earns more downvotes. I've worked for 20 years in a multinational, I know how companies deal with potential reputational damage, and I think we need to at least ask ourselves whether it would be wise for us to act differently. 

EA is part of a real world which isn't necessarily fair and logical. Our reputation in this real world is vitally important to the good work we plan to do - it impacts our ability to get donations, to carry out projects, to influence policy. 

We all believe we're willing to make sacrifices to help EA succeed. 

Here's the hard part: Sometimes the sacrifice we have to make is to go against our own natural desire to do what feels right. 

It feels right that Will and other people from EA should make public statements about how bad we feel about FTX and how we'll try to do better in future and so on. 

But the legal advice Will got was correct, and was also what was best for EA. 

There was zero chance that the FTX scandal could reflect positively on EA. But there were steps Will and others could take to minimise the damage to the EA movement. 

The most important of these is to distance ourselves from the crimes that SBF committed. He committed those crimes. Not EA. Not Will. SBF caused massive harm to EA and to Will. 

I see a lot of EAs soul-searching and asking what we could have done differently. Which is good, in a way. But we need to be very careful. Admitting that we (the EA movement) should have done better is tantamount to admitting that we did something wrong, which is quickly conflated in public opinion with "SBF and EA are closely intertwined, one and the same". (Remember how low public awareness of EA is in general.) 

The communication needs to be: EA was defrauded by SBF. He has done us massive harm. We want to make sure nobody will ever do that to EA again. We need to ensure that any public communication puts SBF on one side, and EA on the other side, a victim of his crimes just like the millions of investors. 

The fact that he saw himself as an EA is not the point. Nobody in EA encouraged him to commit fraud. People in EA may have been a bit naive, but nobody in EA was guilty of defrauding millions of investors. That was SBF. 

So Will's legal advice was spot on. Any immediate statement would have seemed defensive, as if he had something to feel guilty about, which would have resulted in more harm to the public perception of EA because of association with SBF.  

  • SBF committed crimes. 
  • Neither Will nor EA committed crimes or contributed to SBF's crimes. 
  • SBF defrauded and harmed millions of investors.
  • SBF also defrauded and harmed the EA movement. 
  • The EA movement is angry with SBF. We want to make sure that nobody ever does that to us again. 

As "good people", we all want to look back and ask if there was something we could have done differently that would have prevented Sam from harming those millions of innocent investors. It is natural to wonder, the same way we see any tragedy and wonder if we could have prevented it. But we need to be very careful about the PR aspects of this (and yes, we all hate PR, but it is reality - read Pirandello if you don't believe me!). If we start making statements that suggest that we did something wrong, we're just going to be directing some of the public anger away from SBF and towards EA. I don't think that's helpful. 

There is one caveat: if someone acting on behalf of an EA organisation truly did something wrong which contributed to this fraud, then obviously we need to investigate that. But I am not aware of any evidence to suggest that happened. 

Answer by Denis · Apr 17, 2024

This is a brilliant and necessary post - as is the link you share to the 2019 post. Thank you! 

When I first became interested in EA, the message I saw everywhere was "pivot! devote your career to being impactful!"

The implication was that EA is massively talent-limited. I now know that this was not the case. 

 

There are a lot of people who would like to do impactful work

And it's not just typical EA work. The same holds true for wanting to work on climate change - an area which includes many people who have never heard of EA. Or animal welfare. Or whatever. I am on a Slack of 29,000 people, many highly qualified and motivated, who want to work on climate. 

I suppose we should not be surprised - indeed we should be encouraged - to find that impactful careers are much in demand. It's a sign that there are many people out in this world who are not as cynical and self-centred as some politicians would have us believe. 

An economist might look at it this way: the satisfaction of knowing that you are doing good is a form of payment, which makes the job more appealing and/or enables it to be filled at a lower salary and/or with tougher requirements. If you have a very impactful position for a role that would normally command $100K/year on the normal job market, you can probably offer $60K and get lots of great candidates, and you could even insist that they come to the office every day by 8.00 a.m. (please don't!)

 

Impactful roles are resource-limited, not talent-limited

Looked at from a broader perspective, we can all see that impactful roles are resource-limited. If we compare the number of people working on climate change, or alternative protein, or AI governance, to the number of people who should be working in these areas in a world in which resources were distributed according to the value of the work being done, there might be 100x as many EA roles as there are today. 

If there were a carbon market which reflected the true cost of carbon, then work to reduce or eliminate CO2 emissions, or to capture carbon, would be highly lucrative, and many more roles would be funded. If governments truly understood the dangers of AI (or if public opinion forced them to understand), it's likely that much more funding would be put into work in this field. And so on. 

But it's not happening right now. And so the majority of EAs who would like to work in impactful roles just don't have that opportunity. So what should we do? 

 

What should we do? 

One option is to give up. Very few EAs will do that: because EA comes from caring about the world's problems, you can't just decide to stop caring. 

Two very practical options are earning-to-give and/or volunteering. Both of these involve "separating" your career-path from your EA role, but still using your career to enable you to help EA by freeing up your time or your money. [A good analogy here is how many people who dream of being writers, actors, musicians, etc. find much more happiness and freedom when they decide that this need not be their primary source of income. They sometimes end up writing better books and making better music too.] 

But, in parallel with this, there are areas where commitment and grit are far more critical than brilliance, and these are areas where we could all focus - perhaps while earning in a normal job. 

 

Policy / Political Agitation / Grassroots work

Maybe the most promising area (IMHO) for us mediocre EAs to focus on is policy, and even politics. This can be done in parallel with a "real" job. It can be about joining local groups (EA or not) and pushing for policy changes. It can be about writing to our local politicians and attending their meetings. It can be about getting involved at grassroots level. None of these things require not being mediocre. 

For example, any sensible analysis tells me that we should be investing huge resources into alternative protein, for so many reasons. But not only is this not happening; there are places where the agriculture lobby is pushing for alternative-protein foods to be banned, or to be forced to carry off-putting labels. And they are winning. It's absurd. Maybe 1000 mediocre EAs, even without tractors, could protest in Brussels or London or Washington to fight this short-sighted type of policy-making. Maybe one or two passionate mediocre EAs could start a movement, or join a political party and start something within it. If 5 mediocre EAs who are struggling to find roles within EA were to form a group in their own country, get some advice from groups like GFI who work in the area, and just agitate for policy change - more support and fewer misguided restrictions for alternative protein - it could be hugely impactful. 

I'm sure there are plenty of other examples. But my point is: success in this area is probably much more related to commitment and grit than it is to brilliance.

 

Counterfactuals

Before concluding that anything that isn't a direct EA job is somehow less impactful, it's important to consider the counterfactual value you add. 

Maybe in the current situation, with so many brilliant people wanting to work in EA, the counterfactual value I might add by working in an "EA job" could even be negative, if the person who would have taken that job instead would have been better than me in that role. 

On the other hand, the counterfactual value of taking the GWWC pledge and earning to give while doing valuable work (even if not super-impactful by EA standards) is definitely very positive. And the counterfactual value of doing the unglamorous work of pushing your local politicians towards voting for better policies on vital issues like alternative protein might be huge - even if nobody (not even you) will ever realise or recognise how much value you've added. 

Add to this that many roles (teaching, health care, public service, ...) have a great capacity for doing good, for being impactful. And even in roles which are seen as the most mundane (think of a "middle manager" in a soap company), there can be huge potential to help individuals, to improve sustainability, to coach young employees to be better members of society, to promote more inclusive policies, or whatever. There is so much potential to do good and have an impact if we choose to.

 

Apologies 

  1. I've been thinking about this a lot, so the above is a rather incoherent attempt to put together some thoughts after midnight, and it's ended up longer than I intended. 
  2. I've used the word "mediocre" as it was used in the title. I think both the post's author and I fully realise that nobody is mediocre; I appreciated the use of a provocative term to make us think more deeply about it. So, at least in my usage, I was being intentionally ironic (in case that wasn't obvious). And even among people who are not mediocre, because none of us are, people who choose to devote their careers to EA are even less mediocre than others - in a sense that is illogical but still kind of says what I mean to say. Whether they have the specific, narrow skill-set for a specific role, or happen to be in the right place at the right time to get that role, etc., are just details. 

These are great posts, thank you for writing them!

IMHO EAs can vastly improve our effectiveness by focusing more on effective communication. Your articles are definitely a step in the right direction. 

There is an opportunity to go a lot further if we can also do more to adjust the style of our communication. 

To me, and to most EAs, your articles are beautifully written, with crystal-clear reasoning. However, we need to keep reminding ourselves that we who like to communicate in this way are, in a sense, outliers. We focus on content and precision and logic and data. 

We can learn a lot from people who communicate very effectively in very different ways. Look at beer commercials. Look at TikTok influencers. Look at Donald Trump (really - he is obnoxious and wrong, but he is a very effective communicator, in the sense that his communication serves his cynical, obnoxious purpose very well within his target audience; the fact that so many liberals refuse to admit this, and to fight fire with fire, is a big reason why he could still win). 

Most people hated maths in school; they didn't study philosophy or logic. When we communicate only in the style that we feel comfortable with, we're almost excluding them - while allowing others to communicate to them. So they end up believing the wrong people. 

We want to be reasonable and logical, and to convince people one by one. But most great communicators (including many good people, like MLK or JFK or Obama) realise that many people want to feel part of something bigger than themselves. It is part of what we are as humans. Trump knows this. His ardent supporters value loyalty to their group more than they value truth or logic or science. If you've ever been a fan of a sports team, you will know this feeling. Obama's "Yes we can" was also a movement. It allowed adherents to answer "yes we can" when they faced obstacles, even obstacles that seemed impossible to overcome. 

As EAs, we're not comfortable with this kind of talk. Every article starts with reasons why it might not be correct. This is great from a philosophical POV, but not great for mass audiences. 

Daniel Kahneman died recently. He wrote the wonderful book "Thinking, Fast and Slow", which describes how, most of the time, people jump to an immediately obvious conclusion - which is often wrong - rather than analysing a question in detail. Great mass-communicators realise this: they do not depend on people making the mental effort to study an issue, but rather look for ways to manipulate their fast-thinking mode. Beer commercials create a mental link between drinking beer and being surrounded by fun, attractive people in exciting locations. Laundry commercials create a mental link between using their products and having a nice suburban home and a happy family. And so on. 

The SBF story is a perfect example. Millions of people form the easy connection between SBF, fraud, opulence, and EA. They conclude that EA is an excuse for rich people to justify getting really rich while making themselves feel good about themselves. This is based on exactly one data point - but it's the one the public knows. Most people are not interested enough in EA to invest time reading complex arguments about why this is wrong. It may even make them happy to see "self-righteous do-gooders" taken down a peg.** 

Ironically, most charities are seen positively. This is because they communicate in a very different way from EAs. They show pictures of individuals suffering; they present themselves as caring and empathetic and emotional; they show the sacrifices they make to help others. Most EAs would not be comfortable communicating about EA in this way, but maybe we need to focus on the word "Effective" in our title and get out of our comfort zone. Because this kind of communication can be much more powerful than our logic/facts/data-based communication. Certainly, it can powerfully complement it. 

I always cite climate denial as the great example of our time. There is no doubt that the scientists are right, that the IPCC recommendations are correct. Even the oil companies and the vehement climate deniers know this. But still, in terms of communication, they beat the scientists hands down. 

We scientists focus on facts and logic and data, and that makes us lazy. It's convenient for us; it's our language. Deniers know that they don't have logic on their side, so they are forced to optimise their communication. They find ways to make it about tradition, about pride, about emotions. They find stories of individuals who will be harmed by climate action, and turn them into victim-heroes fighting against the cynical scientists. They obfuscate the data, not randomly, but in ways that they have learned from focus groups will create just enough doubt among most people. They strategically do not deny things that can easily be proven to non-scientists, but instead propose things like "let's wait until we have more evidence", which sound reasonable to anyone who doesn't have the time and energy to delve deeply into what they really mean (more climate damage). 

There is a certain irony in making this point while writing, badly, in the very style I'm saying isn't effective for mass communication. But it's what I'm comfortable with too. My point is: there are people out there who have studied, scientifically, which methods of communication are effective with "the public". Politicians learn from them. The EA movement could do so too. 

Ultimately, we are right (I think) on most of the points we argue; it would be very valuable to get more and more people thinking the way EAs do. We should not limit this to people who also like to communicate the way EAs do. 

 

**The very existence of the term "do-gooder" is proof of this - there is no conceivable logical reason why people should hate a person who does good, but they do. Bono is consistently considered the most hated person in Ireland, in a close contest with Bob Geldof, because both are classed as "do-gooders" who need to get off their high horses. It's not about people thinking deeply about the good they actually do, or questioning whether they truly add value. It's about people resenting those who make them feel uncomfortable about themselves. By criticising them, we allow ourselves to feel better about not doing anything. I think that sometimes EAs could be seen in a similar way. 

 

I upvoted this comment because it is a very valuable contribution to the debate. However, I also gave it an "x" vote (what is that called? disagree?) because I strongly disagree with the conclusion and recommendation. 

Very briefly: everything you write here is, as best I know, factually true. There are serious obstacles to creating and to enforcing strict liability. And doing so would probably be unfair to some AI researchers who do not intend harm. 

However, we need to think in a slightly more utilitarian manner. Maybe being unfair to some AI developers is the lesser of two evils in an imperfect world. 

I come from the world of chemical engineering, and I've worked for some time in pharma. In these areas, there is no "strict liability" as such, in the sense that you typically do not go to jail if you can demonstrate that you have done everything by the book. 

BUT - the "book" for chemical engineering or pharma is a much, much longer book, based on many decades of harsh lessons. Whatever project I might want to do, I have to follow very strict, detailed guidelines every step of the way. If I develop a new drug, it might require more than a decade of testing before I can put it on the market, and if I make one mistake in that decade, I can be held criminally liable. If I build a factory and there is an accident, they can check every detail of every pump and pipe and reactor, they can check every calculation and every assumption I've made, and if just one of them is mistaken, or if just once (even with a very valid reason) I have chosen not to follow the recommended standards, I can be criminally and civilly liable. 

We have far more knowledge about how to create and test drugs than we have on how to create and test AI models. And in our wisdom, we believe it takes a decade to prove that a drug is safe to be released on the market. 

We don't have anything analogous to this for AI. So nobody (credible) is arguing that strict liability is an ideal solution or a fair solution. The argument is that, until we have a much better AI Governance system in place, with standards and protocols and monitoring systems and so on, then strict liability is one of the best ways we can ensure that people act responsibly in developing, testing and releasing models. 

The AI developers like to argue that we're stifling innovation if we don't give them totally free rein to do whatever they find interesting or promising. But this is not how the world works. There are thousands of frustrated pharmacologists who have ideas for drugs that might do wonders for some patients, but which are 3 years into a 10-year testing cycle instead of already saving lives. But they understand that this is necessary to create a world in which patients know that any drug prescribed by their doctor is safe for them (or that its potential risks are understood). 

Strict liability is, in a way, telling AI model developers: "You say that your model is safe. OK, put your money where your mouth is. If you're so sure that it's safe, then you shouldn't have any worries about strict-liability. If you're not sure that it's safe, then you shouldn't be releasing it."

This feels to me like a reasonable starting point. If AI labs have a model which they believe is valuable but flawed (e.g. a risk of bias), they do have the option of releasing it with that warning - for example, refusing to accept liability for certain identified risks. Lawmakers can then decide if that's OK or not. It may take time, but eventually we'll move forward. 

Right now, it's the Wild West. I can understand the frustration of people with brilliant models which could do much good in the world, but we need to apply the same safety standards that we apply to everyone else. 

Strict liability is neither ideal nor fair. It's just, right now, the best option until we find a better one.

This is awesome. If every recruiter gave feedback like that, it would help so much. Thanks for setting such a great example!

This is a great article. It is really unfortunate when a good candidate puts a lot of work into an application and it is rejected for a reason that doesn't reflect their ability to do the job. 

That said, we all need to accept that we live in a bizarre world in which we say we want engaged, motivated, qualified people working on impactful areas, but then, when they choose to do so, it can be extremely difficult for those engaged, motivated people to actually find impactful roles. 

It seems like many EA roles get hundreds of applications (literally). And because hirers are open-minded, they encourage everyone to apply, even if they're not sure they're a good fit. 

One result of this is that a vast amount of the energy and commitment of EAs is invested in searching for work on one side, and in evaluating and selecting applicants on the other. 

It just feels unfortunate: if this energy could be invested in something impactful, it would be better. Ultimately, a great CV and cover letter don't help any humans or animals. 

I don't have a solution. Obviously there are just not so many roles out there, and we can't just create roles without funding and organisations and managers and so on. And we don't want to discourage people from applying for roles they think they could do well. 

This has been a pet peeve of mine since my pre-EA days. I wrote about it from the perspective of a recruiter on Quora, and more than 1000 people upvoted my answer. So it's definitely not an EA-specific problem. 

In fact, I would go further and say that EA organisations do a lot of things far better than most organisations:

  1. They often put a lot of emphasis on work-tests, which are far better than interviews at assessing a person's fit for a role - and which are also a great learning experience even for the people who don't get hired. 
  2. Many recruiters do give feedback. Useful, tangible feedback. Often this only happens after the initial screening. 
  3. Some recruiters even go out of their way to help applicants find an impactful role, because, unlike corporations, we're all rooting for each other to succeed. 

But even so, it would be great if there were a better way to get more people into roles (even if initially low-paid roles, with the potential for upgrading) in which they learn and gain experience they can put on their CVs, rather than leaving them desperately trying to find a role.

I kind of imagine that in some EA-hub locations this is what happens: lots of people know each other and can recommend roles to each other. I see something like this in the Brussels EU bubble, where once you're part of the community, it seems there are always roles opening up for people who need or want to move. So maybe what I'm writing applies more to people living away from EA hubs, who would like to switch to more impactful roles but struggle to find one. Unfortunately, if we don't find a way to include these people, the potential growth of EA will be limited. 

For now, all I can do is strongly encourage any recruiter to provide any critical feedback they can. Maybe not to everyone, but if someone is clearly doing something wrong (several typos on their CV, for example), please tell them. I've reviewed a lot of CVs and job applications, and I can say that I've never had a negative reaction when I sent someone a quick note explaining how they could improve their chances of getting other roles (always phrased this way to avoid suggesting that this was the reason we didn't hire them). 


I am also following, very curiously and closely, the new Moral Circles created by Rutger Bregman in the Netherlands to try to convince highly experienced professionals to move to more impactful roles, to see if they have a good solution to this. There seem to be a lot of people hearing his message; I want to see how they manage the challenge of making sure that all the very capable people who want to do something more impactful actually find a role where they can do so. 

 

Thank you for this comment. 

I really appreciate it when someone gives an explanation for why they downvoted something I wrote :D 

Indeed, I knew that what I wrote would be unpopular when I wrote it. And maybe it just looks like I'm an old cynic polluting the idealism of youth. But I don't agree that it's naive. If anything, the naivete lies on the other side. 

How can an EA not realise that damaging the EA movement is damaging to the world? 

So you need to balance the potential damage to the world through damage to EA against the potential of the investigation to avoid damage to the world. I have not seen any comments mentioning this, so I wrote about it, because it is important. 

I'm not clear in what sense anything the EA movement did with SBF has damaged the world, unless you believe that SBF would have behaved ethically were it not for the EA movement, and that EAs actively egged him on to commit fraud. I presume that when you refer to "naive-consequentialist reasoning", you are referring to what happened within FTX (in addition to my own reasoning, of course!), rather than to something that someone in the EA movement (other than SBF) did? 

I don't know the details, but I would expect that the donations that we received from him were spent very effectively and had a positive impact on many people. (If that is not the case, that should be investigated, I'd agree!). So it is highly likely that the impact of the EA movement was to make the net impact of SBF's fraud significantly less negative overall. 

Of course, I may be wrong - I am interested to hear any specific ways in which people believe that the EA movement might be responsible for the damage SBF caused to investors, or to anyone other than the EA movement itself. 

But my reading of this is that SBF caused damage to EA, and not the other way round. And there was very little that EA could have done to prevent that damage other than somehow realising, unlike plenty of very experienced investors, that he was committing fraud. 

So (and again I may be wrong) I don't see how an EA investigation will prevent harm to the world. 

But I do very clearly see how an investigation could cause damage to the EA movement. The notion that we can do an investigation of what we did wrong in the SBF case and not have it perceived externally as a validation of the negative stereotype that the SBF case has projected on the EA movement is optimistic at best. 

I'm not sure whether this position comes from people who mostly associate with other EAs and are just unaware of the PR problems that SBF has caused the EA movement. 

Remember that there has been a long and very public trial, so all the facts are out there and public. People are already convinced that SBF did bad things. 

The EA movement just needs to keep doing what we can to minimise the public's connection between SBF and EA. 

Again, to finish, I do appreciate that many people disagree with this perspective. It seems like ethically we should investigate, especially if we believe we have nothing to hide. But that's just not how the world works. 

And I really appreciate that you explained your disagreement. 
 

The first consideration here is that EA needs to focus, primarily, on impact. That is the whole point of the movement: to maximise the positive impact we can have. 

So any investigation should focus on how the SBF fiasco impacted EA's ability to do good, and how we might address that - and also on whether we'd want to change something about EA in order to minimise future events that could adversely impact our ability to do good. In other words, actionable recommendations. 

IMHO, looking from the outside, SBF has done a lot of PR damage to EA, and we have not done a good job of responding to that. Maybe this would be a good area on which to focus an investigation. 

One tangible example of each: 

  • I have seen countless references to EA as an excuse to justify being rich and living in luxury by saying you are "earning to give," with SBF cited as an example. This is actively harming the EA movement. We need to get the word out that many more EAs are like EA co-founder Toby Ord, who chooses to donate most of his salary and lives a spartan existence. But how? 
  • Do we want to create some criteria for accepting donations? Honestly, I would be very hesitant to do this, since donations do a massive amount of good, so unless they're coming from really bad people, the balance often favours accepting them. But if we feel that some sources will end up doing more harm to the movement than any tangible good, we could set up clear rules to manage such situations. Or do we want rules that state that, under some conditions, we'd return donations? Again, factoring in the good that each donation can do, it's not easy. 

On a more general note, we need to make it very clear that Effective Altruism is not some kind of closed society where you get accepted or rejected. The EA community is no more to blame for SBF's crimes than the New York Yankees are to blame if one of their fans commits a homicide while on vacation in Japan. 

Ultimately, if we do consider investigating this, we need to be clear that the investigation isn't going to do further harm to the EA movement (and therefore to all the causes that depend on the EA movement). Is there any reason to believe that doing an internal investigation will help? I mean, will anyone outside the movement feel reassured, or will they trust an investigation that shows we did nothing wrong? And if some EAs did do something wrong, or even cannot prove conclusively that they didn't, isn't there a risk that publishing that will massively damage the movement, disproportionately relative to any bad things done? 

I don't want to appear cynical. But right now, SBF has given the EA movement a massive PR problem. Whatever we do needs to factor that into consideration. 

If there were some smoking-gun evidence suggesting that several EAs probably did bad things, then obviously we'd need to investigate that to provide reassurance (which is also important for PR). But I haven't heard anyone accuse anyone of that. So what do we gain? 

Wow, I expected to disagree with a lot of what you wrote, but instead I loved it, and especially I appreciated how you applied the more general concept of making good use of your time to language-learning. 

I really liked your list of reasons to learn a language, and that you didn't limit it to when it is "useful" - so often the flaw I see in articles about languages, which focus on how many more dollars you could earn if you spoke Mandarin or Spanish. 

I fully agree that if you do not get energized by learning languages, if it's a chore that leaves you tired and frustrated, then maybe your energy is better spent on other vital tasks. 

One way to look at this is on a spectrum. On the left are things that are vitally important and that you do even if they are no fun, like taxes, workouts or dental visits. On the right are things that energize or relax you, like watching football or doing Wordle, where you don't look for any "value" in them; you just enjoy them. 

The secret of a happy, successful life is to find as many activities as possible that you could fit at both ends of the spectrum. Like playing soccer, which is both fun and healthy. 

For some of us, learning foreign languages is in this category. I started learning for fun, out of intellectual curiosity, but languages have turned out to help me in many tangible ways that I hadn't expected. 

But for many people, learning languages doesn't fit at either end. You don't enjoy it, and, at least at the level you're reaching, it doesn't add much value to your life. For those, it probably isn't a good use of your time compared to the many opportunities out there. 

It would be great to get more people to read your article and think about it and how it applies to them - maybe even not just related to languages, but to all the things that we're encouraged to do because they are "good" in some abstract sense. 