
Note:

This is my first post to this forum. I am not entirely familiar with the norms and customs or previous topics of discussion. I am also still new to the effective altruism community in general. Please feel free to point me to prior discussions of this topic and give me advice on how to better present my message on this forum.

Problem

Many problems society faces today can be classified as coordination problems (see the 80,000 Hours podcast with Vitalik Buterin). These are a class of issue where the biggest hurdle is neither technical nor even social, but simply a lack of coordination. A prime example of a problem in this class is meat eating. Many people believe that eating meat is wrong, yet continue to eat meat because they believe their individual action would not be sufficient to have any effect on the problem as a whole (one person giving up meat will not hurt the meat industry). However, if all of the people who hold this belief could coordinate and mutually commit to stop eating meat, they likely would, because collectively they would be far more likely to have a non-negligible effect on the meat industry. The problem is not that these people believe eating meat is acceptable; it is simply that they are not coordinated enough to take action. (I don't mean that every meat eater holds this belief, but I do believe many do.)

Proposed Solution

I propose an online service and community which would serve as a hub for coordinating groups of people committed to making small changes to their habits and everyday behaviors in order to help solve coordination problems.

The purpose of this service would be to coordinate groups of people to take small actions in their everyday lives which, in isolation, would have negligible effects on a problem, but which, multiplied across many people, would have non-negligible effects.

Think of it as Kickstarter for individual action. Realizing that their actions have minimal impact on large problems in isolation, many people refuse to take them. However, if these people could be assured that many others were committed to taking the same actions alongside them, they could be convinced to join in.

Basic Functionality

I will describe the basic way I imagine a service like this functioning. I still have many open questions about the optimal details, but right now I am more interested in feedback on the idea in general.

Users or moderators submit campaigns: posts which outline a proposed action people would take. For example, a campaign could ask people to reduce their meat consumption to less than 1 pound per week, or to drive less than x miles per month, or anything along those lines. Each campaign would also include an expiration date and a minimum commitment number: the minimum number of people who must conditionally commit to adopting the campaign for it to go into effect. If the minimum commitment number is not reached by the expiration date, the campaign does not go into effect and no one who conditionally committed is obligated to adopt it. If the minimum commitment number is reached, the campaign begins and conditional committers are obligated to adopt it.
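
To make the mechanics concrete, here is a minimal sketch of how a campaign and its threshold logic might be modeled. This is only an illustration under my own assumptions: the names (Campaign, commit, resolve) are hypothetical, and the real details would depend on feedback and further design work.

```python
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class Campaign:
    """A proposed collective action with a deadline and a commitment threshold (hypothetical sketch)."""
    title: str               # e.g. "Reduce meat consumption to under 1 lb per week"
    min_commitments: int     # the minimum commitment number
    expires_at: datetime     # the expiration date
    committed_users: set = field(default_factory=set)
    active: bool = False

    def commit(self, user_id: str, now: datetime) -> None:
        """Record a conditional commitment while the campaign is still open."""
        if now < self.expires_at and not self.active:
            self.committed_users.add(user_id)

    def resolve(self, now: datetime) -> bool:
        """At the expiration date, the campaign goes into effect only if the
        minimum commitment number was reached; otherwise no one is obligated."""
        if now >= self.expires_at:
            self.active = len(self.committed_users) >= self.min_commitments
        return self.active
```

The point of the sketch is simply that a conditional commitment only becomes a real obligation once the threshold is crossed before the deadline, in the same way a Kickstarter pledge only charges you if the funding goal is met.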

This is a very basic outline of how such a service might function, meant only to demonstrate the main idea. I have more thoughts on how to refine the process, but right now I am primarily interested in feedback on the big idea itself rather than the details.

Primary Issues

1. How do we verify that people who committed to a campaign are following through? The service only works if a commitment can be trusted. If commitments cannot be trusted, then not only will there be no progress on the underlying problem, but trustworthy users will lose faith in the platform and stop using it. I have some ideas for increasing the likelihood that people follow through with their commitments, such as limiting the number of campaigns a person can commit to, mandating forum discussions, and sending frequent reminders (see the sketch after this list), but ultimately this boils down to the oracle problem, for which there is as yet no good solution.

2. Are there even enough people who would make use of such a platform for it to be impactful? In general, I am skeptical of technology's ability to produce behavioral changes in people (other than those resulting from addiction). I wonder whether a platform like this could ever reach the critical mass necessary to have any power.
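
As a purely hypothetical sketch of the first of those mitigations, a commitment cap and a reminder schedule might look something like the following; the cap value, interval, and function names are made up for illustration only.

```python
from datetime import datetime, timedelta

# Hypothetical parameters; the right values would need experimentation.
MAX_ACTIVE_COMMITMENTS = 3
REMINDER_INTERVAL = timedelta(days=7)


def can_commit(active_commitment_count: int) -> bool:
    """Allow a new conditional commitment only if the user is under the cap,
    so that each commitment stays meaningful."""
    return active_commitment_count < MAX_ACTIVE_COMMITMENTS


def next_reminder(last_reminder: datetime) -> datetime:
    """Schedule the next follow-through reminder for an active campaign."""
    return last_reminder + REMINDER_INTERVAL
```

None of this solves the verification problem itself; it only tries to keep commitments scarce and salient enough that people take them seriously.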

Conclusion

I am a programmer and am considering building a proof of concept of this idea, but first I would like to receive feedback from the community. Before discussing the details of the procedure and structure of such a service, I would like feedback on the general idea.

Comments

Congrats on your first post! I think it's well written; I like the structure of "problem, proposed solution, possible issues", you write concisely and clearly, and you stated what kind of input you want from the community.

It was useful for me that you provided the example of meat eating as a coordination problem. I would have found more examples even more useful for thinking about the potential applications where a coordination platform is among the most promising approaches (btw, I think for meat eating it is not among the most promising).

I like your idea, but I’m also worried about your 2nd issue: nobody will use it. It seems to me like people are just not motivated enough by being a part of improving the world. Meat eating seems like a case in point: there is already a veggie community you can be part of (at least in Germany in every bigger city) and the marginal impact you have doesn’t even depend that much on coordination. Still it’s a tiny movement.

I think it’s reasonable that you are trying to think about the landscape and bottlenecks of behavior change and coordination before moving to action. There is probably much more to learn. For example, I’ve read this short report about change platforms in the context of changing organizations, which seems to have some success stories and lessons learned that are also relevant for you. This might be a much more tractable pathway if there are smaller-scale important coordination problems. https://www.mckinsey.com/business-functions/organization/our-insights/build-a-change-platform-not-a-change-program

Regarding more examples, I think that any action about which someone could say "I would do this, but what difference is one person's action going to make?" is a candidate for a good campaign. More examples I can think of mainly relate to conservation (energy, water, etc.). I also think this platform could help power boycotts of companies (though with a risk of becoming dangerous, as pointed out by Ramiro). I actually think that this alone could be a very powerful use for such a platform.

And regarding my second issue, I think for the platform to have a chance, it would have to go viral and somehow maintain a user base after that. I agree that acquiring a user base dedicated to making actual changes in their lives would likely be the hardest part of the process.

Also, thanks for linking to that report. It seems to advocate that a grassroots platform like this could be one of the more effective ways to effect change. I definitely would like to do some more reading on the research in this area, though.

I love this idea! Of course you’d want to test the demand for it cheaply. Maybe there is already a Kickstarter-like platform where you need to meet a minimum number of contributors rather than just a minimum total contribution (or a maximum contribution per contributor). Then you could just use that platform for a test run. If not among Kickstarter-like platforms, then maybe among petition platforms? Or you could repurpose a mere Mailchimp newsletter signup for this! You could style it to look like a solemn signing of a conditional pledge.

If there is such a platform, you could see if you can get a charity on board with the experiment, one that has a substantial online audience. (They’ll also be happy about the newsletter signups, though that should require a separate opt-in.)

Finally, you could run quick anonymous surveys of the participants: What did they do, and what would they have done without the campaign? Perhaps one after a month and one after a year or so. (It would also be interesting to follow up again after several years because of vegan recidivism, which usually sets in after around 7 years, afaik.)

Maybe you can even do all of that without any coding.

I'm not sure how much good outcomes that depend on individual actions could profit from better coordination, or how an app focused on that would improve the situation. But have you considered that, since coordination is not a good in itself, your app could be used for evil, too?

To be honest, I haven't considered that at all. I can definitely imagine that if a platform like this were to become popular, it could start organizing campaigns that don't necessarily conform with my values. However, I think those could be dealt with through minimal moderation, though I would have to walk the line between removing truly bad ideas and removing ideas I just don't completely agree with.

But this is still a problem I think we could only face if such a platform reached a large critical mass of users, which is pretty far down the road.

I tend to agree with you... But then I remember how Facebook can be used to coordinate and broadcast executions in Brazil, or sow hate against Rohingyas. My point is not "you should think about this flaw that your system perhaps-eventually-might have", but "are you sure that we need more effective coordination, instead of tools to ensure that it aims for good?"

Just found this post through your response to Tom VanAntwerp on Twitter. There's a sequence of questions and posts on LessWrong, made by me and a few others, about this topic. I suggest checking it out for ideas, discussion, and references to a few things that already exist (though they're mostly agreed to be insufficient).

Perhaps also consider cross-posting this to LessWrong.

Maybe this app could be used for people to commit to consuming a company’s products if the company makes some changes in its behaviour. The company might then realise that the gain in new customers is worth the requested change.

I recall that something exactly like you mention exists, but I can't find it! I think it's quite recent.

Not what I was talking about, but a specific application of this idea for science - https://www.freeourknowledge.org/
