Before I address the title of the article, I'm going to quickly outline why I think brands and products grow and why behaviours in general become more popular. This post involves some meandering, so reader beware.


I've been doing marketing for about 15 years, and as far as I can tell there are 3 models of communication that change people's behaviour:

Model 1: "SALIENCE"

Communication changes behaviour by creating salient links between an intervention, product, or brand and the memories people access at the point of purchase or engagement (i.e. 'when they are in-market'). For example, when I want to make "pasta bolognaise", the associated memories my brain surfaces could be "Barilla", "Italian" and "Beyond Meat". (For the nerds: this leans on associative network theory, if you're interested in learning more.)

Model 2: "PERSUASION"

Communication changes behaviour by persuading or by telling a story. These lean on System 2 thinking and Narrative Transportation Theory respectively (I recommend checking out Thinking, Fast and Slow and https://en.wikipedia.org/wiki/Transportation_theory_(psychology)).

Model 3: "CULTURAL IMPRINTING"

Communication changes behaviour because all consumption is actually about building and maintaining status within a desired group, and all products are consumed in social settings (for more I recommend "Ads Don't Work That Way" > https://meltingasphalt.com/ads-dont-work-that-way/).

In classic marketing theory these are all bucketed under "Promotion" (i.e. communication). There are 3 other "P's" in marketing (and reasons why products or brands grow):

Product & Price (which IMO only need to be 'good enough' rather than 'the best' - see satisficing)

Physical availability (Is the thing I want to buy easy to find?)

Ok, but what does this have to do with non-human animals?


AFAICT about 30% of EAs are vegan, but my model says that if non-human animals have any hope, this number should be closer to 90%.

Let's review the model:

M1 - "SALIENCE": if you're in EA, you're hearing about animal suffering and vegan alternatives all the time

M2 - "PERSUASION": EAs are especially rational people, and not eating animals is obviously the more rational choice for 90%+ of people reading this

M3 - "CULTURAL IMPRINTING": it is a higher-status move in the EA community to be vegan than not

Product & Price: vegan food tastes fine, and EAs can afford it (i.e. they're relatively rich)

Physical availability: the hardest part of any product adoption is getting people to try it once, and you can't go to an EA event without trying vegan food

So how is any of this useful?


In behaviour change and marketing strategy, a common practice for getting a deeper or different view of people's decision-making is to study a category's extreme users instead of the general population.

Some examples of how this has worked elsewhere:

  1. Trans men and trans women for feminine care innovation
  2. Hikikomori for future social spaces
  3. Amish for clothing sustainability
  4. Arthritis sufferers for kitchen utensils

Anyway, I think looking deeply at why EAs do and do not eat farm-grown meat (at an individual level), and why vegan adoption is so low 'in EA culture', could provide lots of insight.

Comments

EAs are especially rational people, and not eating animals is obviously the more rational choice for 90%+ of people reading this

I'm about 99% bivalve vegan (occasionally I eat fish for cognitive reasons). However, I think it doesn't make sense for strongly longtermist individuals in terms of the direct, straightforward benefits of veganism: the direct animal suffering is negligible relative to the future. I'm strongly longtermist, but I stay vegan for a combination of less direct reasons, like signaling to myself and being generally cooperative (for reasons like acausal decision theory, and being directly cooperative with current people).

Why doesn't it make sense for strongly longtermist individuals? Any longtermists from 200 years ago who were slavery supporters would feel pretty dumb and embarrassed now.

Thanks for your post. One place where you might get pushback is on whether "not eating animals is obviously the more rational choice". I'm vegan, but I think there are somewhat persuasive arguments that promoting veganism may not be net positive: https://reducing-suffering.org/vegetarianism-and-wild-animals/

I expect that if plant-based alternatives ever were to become as available, tasty, and cheap as animal products, a large proportion of people, and likely nearly all EAs, would become vegan. Cultural effects do matter, but in the end I expect them to be mostly downstream of technology in this particular case. Moral appeals have unfortunately had limited success on this issue.

I am one of those meat-eating EAs, so I figured I'd give some reasons why I'm not vegan, to aid this post's goal of finding out about these things.

Price: While I can technically afford it, I still prefer to save money when possible.
Availability: A lot of food out there, especially frozen food (which I buy a lot of since I don't like cooking), involves meat. It's simply easier to decide on meals when meat is an option.
Knowledge: If I were to go vegan, I would be unsure how to go vegan safely for an extended period, and how to make sure I got a decent variety rather than eating the same foods over and over (which comes into taste - I don't mind vegan food but there's much more variety I can find in meat-based dishes)
Convenience: Similarly to above - it takes resources to seek out vegan options, more resources than to just eat normally.

The harms are real, but the harms are far away and abstract. So when I feel vaguely guilty about eating meat, I think about all the hassle and cost it would take to swap diets, and I shy away from it and don't do it. 

I'm not quite sure why those harms are far away and abstract, whereas the harms caused by malaria or AI risk don't invoke the same feelings in me. I think it's because I can use maths to determine the number of humans impacted and then put myself in the place of one of those humans - it's harder to do that with chickens. Also, giving away 10% of my income is actually less of a day-to-day drain on my resources than going vegan would be. I feel aversion to spending money, but I only give away money once a month, and it doesn't cause me financial hardship. By contrast, veganism requires daily effort.

As a micro-example of where these considerations don't apply - there are some plant-based meat strips that I can get at my local supermarket. I find them tastier than actual meat when put into curry, and they're just as cheap when on special. So whenever they're on special, I pick a bunch of them up and they become my default option for a while. I know how to cook them, I know where to get them, they're just as cheap (sometimes) and I enjoy the taste. So I end up avoiding meat by default. I hope plant-based meat will eventually reach that saturation point for all kinds of dishes too.

"Knowledge: If I were to go vegan, I would be unsure how to go vegan safely for an extended period, and how to make sure I got a decent variety rather than eating the same foods over and over (which comes into taste - I don't mind vegan food but there's much more variety I can find in meat-based dishes)"

If knowledge is one of your preventative factors, I have been vegetarian since 2005 and vegan since 2015. I am happy to help. I compete in various sports and am in good health. I am happy to communicate with you directly and provide evidence of such claims if that helps quell your concerns about "[going] vegan safely for an extended period." 

I would ecstatically give you my time in the pursuit of sharing knowledge and helping to reduce barriers to veganism.

Thanks Elle, I appreciate that. I believe your claims - I fully believe it's possible to safely go vegan for an extended period. I'm just not sure how difficult it is (i.e., what's the default outcome if one tries without doing research first) and what ways there are to prevent that outcome if it's not good.

I shall message you, and welcome to the forum!

Although I'm not sure (because I don't have enough knowledge) if your claim that "not eating animals is the more rational choice for 90%+ of people reading this" is correct, I liked this mental model.

As a member of the EA community and an altruist, I eat meat. Not because it's cheaper (it's not, at least in Brazil), convenient (I live in a massive city with a lot of vegan options), or rational (I'm not sure if it is), but because eating is more than putting calories in...

Eating is a social activity. It's about belonging and connections. 

And what is our most important need after physiological and safety needs, according to Maslow's Hierarchy of Needs?

Belonging and Love. 

So I believe most people (EAs or not) don't go vegan because it's hard to adapt to this lifestyle socially, unless you're surrounded by other vegans.

I don't have a definitive solution to this puzzle (to have more vegans, we need more vegans). Still, I would like to share my view on that because I think what most vegan advocates get wrong is framing "eating" as a purely rational and physiological activity. 

It's like soccer. 40,000 people don't go to the stadium because it's rational to watch 22 guys running after a ball; they go because their social lives are embedded in it.

I agree with the thrust of the argument, but I think it's a little too pessimistic. A lot of EAs aren't especially altruistic people. Tons of EAs got involved because of x-risk, and it requires very little altruism to care about whether you and everyone you know will die. You can look at the data on EA donations and notice they aren't that high. EAs don't donate 10% until they have a pre-tax income of around one million dollars per year!

Emm sorry, what? Out of 8,000 GWWC pledgers, who have at least pledged to give 10%, very few earn $1M?

I think your graph actually agrees with what sapphire's comment was arguing? Among the GWWC pledgers, donations don't actually hit the pledged 10% of income until well past an income of $100k/year. It's hard to eyeball the combined pledger/non-pledger average donation percentage from the graph, but it seems fair to say it's under 10% at the vast majority of income levels. 

I mean 'at what income do GWWC pledgers actually start donating 10%+'. Or more precisely: 'consider the set of GWWC pledge takers who make at least X per year; for what value of X is the mean donation at least X/10?' The value of X you get is around one million per year. Donations are of course even lower for people who didn't take the pledge! Giving 10% when you make one million PER YEAR is not a very big ask. You will notice EAs making large but not absurd salaries, like $100-200K, give around 5%. Some EAs are extremely altruistic, but the average EA isn't that altruistic imo.
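(Illustrative only, not the commenter's actual analysis.) One way to make that definition concrete is to read "mean donation at least X/10" as the mean donation share among pledgers earning at least X reaching 10%, which matches how the chart is being read in this thread. A minimal sketch, with (income, donation) rows invented purely for demonstration:

```python
def threshold_income(rows, candidates):
    """Smallest candidate X where the mean donation share among
    pledgers earning >= X is at least 10%."""
    for x in sorted(candidates):
        shares = [donation / income for income, donation in rows if income >= x]
        if shares and sum(shares) / len(shares) >= 0.10:
            return x
    return None

# Made-up pairs: donation shares climb with income, but the
# subgroup mean only clears 10% around $1M.
rows = [(80_000, 3_000), (150_000, 8_000), (300_000, 21_000),
        (600_000, 45_000), (1_000_000, 105_000), (1_500_000, 170_000)]

print(threshold_income(rows, [100_000, 300_000, 600_000, 1_000_000]))  # 1000000
```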

Looking at the chart henrith posted, it looks to me like the GWWC=yes line crosses 10% just below $300k/y, which is still high but well below $1M/y.

Additionally, eyeballing the points on the chart, it looks to me like there's an issue with the way the fit works, where people earning less and donating less makes it look like people who earn more also donate less?

It looks like the chart came from Rethink Priorities EA Survey 2020 Series: Donation Data. Maybe the data is public and I can check this...

https://github.com/rethinkpriorities/ea_data_public has "The actual code and data is in the EA-data private repo. A line in the main_2020.R file there copies the content to this repo in a parallel folder on one's hard drive, to be pushed here. ... No data will be shared here, for now at least."
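If the data ever does become public, a simple way to sidestep that kind of fit artifact would be to skip the fitted curve and compute mean donation shares within income bins directly. A minimal sketch, assuming the rows reduce to (income, donation) pairs; all numbers below are hypothetical:

```python
from collections import defaultdict

# Hypothetical rows standing in for the private survey data.
rows = [(45_000, 1_000), (90_000, 4_000), (160_000, 9_000), (350_000, 30_000)]

# Income bin edges in USD; each bin covers [lo, hi).
edges = [0, 50_000, 100_000, 200_000, 500_000]

binned = defaultdict(list)
for income, donation in rows:
    for lo, hi in zip(edges, edges[1:]):
        if lo <= income < hi:
            binned[(lo, hi)].append(donation / income)
            break

# Mean donation share per bin; with no smoothing, the shape at low
# incomes can't distort the apparent rate at high incomes.
for (lo, hi), shares in sorted(binned.items()):
    print(f"${lo:,}-${hi:,}: {sum(shares) / len(shares):.1%}")
```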
