This is a special post for quick takes by Puggy. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I think it is a cool idea for people to take a giving pledge on the same day. For example, you and your friend both decide to pledge 10% to charity on the same day. It would be even more fun if you did it with strangers. Call it “giving twins” or “giving siblings”.

Imagine that you met a couple strangers and they pledged with you. Imagine that after pledging you all just decided to be friends, or at least a type of support group for one another. Like “Hey, you and I took the further pledge together on New Year’s Day last year. When I’m in your city let’s go have a pint or maybe we can email each other about our career plans to discuss ways we can help people more!”

Which leads me to my final bit here: would anyone be interested in being my giving sibling? Haha I am interested in taking the further pledge in 2022, and it would be fun to have the same ‘giving birthday’ as other people so I could befriend them, meet people in the community, and get a couple lifelong friends.

Giving What We Can could even take this idea and run with it. They could assign you a giving sibling if you entered the sibling program, which could help increase the feeling that we are a community.

Does giving sibling mean giving something to each other?

That could be the case, but I think the emphasis is more on the idea that you have the same “birthdate” to be considered a giving sibling.

Like on February 15 you and a friend took the Giving Pledge together and then that date was the same day you became siblings. Then you celebrate that day every year or form a bond around this shared experience.

Here’s the problem:

Some charities are not just multiple times better than others, some are thousands of times better than others. But as far as I can tell, we haven’t got a good way of signaling to others what this means.

Think about when Ed Sheeran sells an album. It’s “certified platinum,” then “double platinum,” peaking at “certified diamond.” When people hear this it makes them sit back and say “wow, Ed Sheeran is on a different level.”

When a football player is about to announce his college, he says “I’m going D1.” You become a “grandmaster” at chess. Ah, that restaurant must be good: it has won two Michelin stars. That economist writing about the tragedy of the commons is great; she won a Nobel Prize.

We need nomenclature that goes beyond “high-impact charity.” “Cost-effective,” “high-impact,” and “effective” are all good descriptions, but we need to come up with a rating system or some method of giving high status to the best charities (possibly based on how much it costs to save one life).

It’s got to be something that we can bring into the popular consciousness, and it can’t be something we just narrowly assign to all of our own EA meta charities. We need journalists popularizing the term and recognizing the 3–5 super-charities that save lives like no one’s business. We should work with marketing teams and carefully plan what the name would be. But it’s got to confer status to the charity, and people like Jay Z have to be able to gain status by donating to it, just like he gains status by eating at Michelin-starred restaurants.

(excuse me if I’m not the first to outline this idea)

"But it’s got to confer status to the charity and people like Jay Z can gain more status by donating to it" - I think this brushes against a good point which I'd like to see fleshed out more. On some level I'm still a bit skeptical, in part because I think it's more difficult to make these kinds of designations/measurements for charities, whereas things like album statuses are very objective (i.e., a specific number of purchases/downloads) and in some cases easier to measure. Additionally, for some of those cases there is a well-established and influential organization making the determination (e.g., football leagues, FIDE for chess). I definitely think something could be done for traditional charities (e.g., global health and poverty alleviation), but it would very likely be difficult for many other charities, and it still would probably not be as widely recognized as most of the things you mentioned.

Great points, thank you. Perhaps we could use a DALY/QALY measure. A charity could reach the highest status if, after randomized controlled studies, it was determined that $10 donated could give one QALY to a human (I’m making up numbers). Any charity that reached this hard-to-achieve threshold would be given the super-charity designation.

To make it official, imagine that there’s a committee or governing body formed between Charity Navigator and GiveWell. Five board members from each organization would come together, select the charities, and announce the award once a year. The status would only be official for a certain amount of time, or it could be removed if a charity dipped below the threshold.
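A minimal sketch of how such a threshold rule might work, assuming the made-up $10-per-QALY cutoff from the comment above (the function name and numbers are illustrative, not a real GiveWell or Charity Navigator metric):

```python
def is_super_charity(cost_per_qaly_usd: float, threshold_usd: float = 10.0) -> bool:
    """Hypothetical designation rule: a charity qualifies if RCT-backed
    estimates show it can buy one QALY for at most the threshold amount.
    The $10 threshold is a placeholder figure from the discussion above."""
    return cost_per_qaly_usd <= threshold_usd

# A charity delivering a QALY for $8.50 would qualify; one at $120 would not.
print(is_super_charity(8.50))   # True
print(is_super_charity(120.0))  # False
```

The annual-review idea above would just mean re-running this check each year against updated cost-effectiveness estimates and revoking the designation when a charity falls below the line.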

What do you think?

I certainly would be interested in seeing such a system go into place—I think it would probably be beneficial—the main issue is just whether something like that is likely to happen. For example, it might be quite difficult to establish agreement between Charity Navigator and GiveWell when it comes to the benefits of certain charities. Additionally, there may be a bit of survivorship bias when it comes to organizations that have worked, like FIDE, although I still think the main issues are 1) the analysis/measurement of effectiveness is difficult (requiring lots of studies vs. simply measuring album downloads/streams); and 2) the determination of effectiveness may not be widely agreed upon. That’s not to say it shouldn’t be tried, but I think that might limit its effectiveness relative to the examples you cite.

Forecaster’s Bias

This may already be a named bias; I haven’t really researched it. Excuse my ignorance. But perhaps there is a new bias we could identify, called forecaster’s bias.

This bias would be the phenomenon where forecasters place too much weight on the importance or effect of forecastable events versus events that are less forecastable, thereby somewhat (or entirely) neglecting improbable, less forecastable events.

Example 1: There’s a new coronavirus variant called Omicron. It has not yet spread, but it will, and we can track its spread going forward. When forecasting Omicron’s effect, we tend to overemphasize it because the event is forecastable.

Example 2, also coronavirus: Early in the pandemic, individuals tracked the spread of the virus and the rate at which vaccines progressed. They predicted the number of deaths with a good degree of accuracy. They did not predict, however, that the political whims of the populace would lead to an anti-vax movement. The less forecastable event (anti-vax sentiment) was under-predicted.

Example 3: Fictional market researchers notice dropping energy prices. They model the trend and expect it to continue for 18 months. But in this fictional scenario, major earthquakes destroy huge cities, raising energy prices; the researchers systematically failed to consider the prospect of major earthquakes.

Example 4: Energy prices are rising drastically, and researchers expect this to continue for 18 months. Suddenly, commercially viable nuclear fusion becomes available and governments spread it throughout the world. Energy prices drop to “too cheap to meter.” The researchers got this wrong because the progress of nuclear fusion was too hard to forecast.

I don’t know if this idea is any good. Just a thought!

Have you seen Taleb’s Black Swan book? (https://en.m.wikipedia.org/wiki/The_Black_Swan:_The_Impact_of_the_Highly_Improbable) I personally haven’t read it, but based on the description it seems related to what you’re describing. Either way, I think it’s a good point to consider.


Do you prefer libertarian policy ideas, but you aren’t too sold on the deontological or rights-based reasoning that many libertarians use to justify their policy preferences?

Perhaps this new political identifier could work for you. Introducing… the Consequentarian! You’re pretty much a consequentialist through and through: you value good outcomes more than liberty- or rights-based claims to things. However, it just empirically turns out, you think, that all the best policy ideas—the ones which lead to the best outcomes—are libertarian. You recognize that open borders, drug legalization, limited (or no) government, very low regulation, and competitive enterprise produce more human flourishing than all the alternatives. But you don’t find strict rights arguments compelling (for example, if a car is driving at you, you may jump onto your neighbor’s lawn even though it violates his property rights).

Pronounced: Consequen-tarian

Associated schools of thought:

  • Chicago school economics
  • The University of Arizona’s Tucson school of liberalism
  • neoclassical liberalism
  • Michael Huemer and Bryan Caplan’s anarchism

“empirically turns out that all the best policy ideas which lead to the best outcomes are libertarian”

What makes you think this is the case? I agree with your principle that you can make a welfare-maximizing case for libertarianism, but surely a Conservative or Social Democrat could also argue for their preferred policies from a welfare-maximizing perspective.

Calling the set of policies you happen to think are welfare maximizing “Consequentarian” strikes me as very uncharitable to those with views different from your own.

There’s a growing literature pointing to the myriad of government failures, but the highlights are that government failures are in almost every scenario significantly worse than market failures, so let the market decide. Increasing liberty produces great outcomes:

  • Drug use and overdoses go down with liberal drug policy.
  • Increasing immigration increases everyone’s income.
  • Housing prices and homelessness go down when we reduce NIMBY policies and have a free market in housing.
  • The FDA and other bureaucratic agencies overspend (the Mercatus Center estimates it costs $93 million to save a life through regulation, and in the case of the FDA they actively kill 20,000+ people a year).
  • Education and healthcare costs would drop significantly if we had free markets in them (the strongest argument is that prices rise in these sectors because of artificial inflation caused by government intervention).
  • Wars cost enormous sums of money, and their consequences are almost always worse than non-intervention (since 9/11, 200K Iraqi civilians have died, while terrorism has increased 2,000%).
  • There is some historical evidence that free banking systems are less prone to the disastrous effects of the business cycle.

And there’s lots more evidence.

These empirical facts are related to the idea that market based interventions outperform government interventions because the market does not have to act through a centralized hierarchy to make decisions. It’s difficult to make centralized decisions that are attentive to the concerns at the margins of the economy.

You might be interested in the Neoliberal Project: What Neoliberals Believe 🥑

Co-director Jeremiah Johnson did an AMA here the other day.

True, yeah. I have seen the neoliberalism movement. They are more market-friendly than the median voter and motivated by consequentialist reasoning, but I think they advocate more government intervention than is required in some areas. Overall, though, that’s a great movement.

Has anyone ever thought of doing incentive based pledges with their charitable giving?

Incentive pledge: I will live off of $X, but this figure increases by $Y for every $100,000 I donate or pledge to donate.

Example: I will live off of $30,000 (in 2020 dollars) for the rest of my life and donate the rest to charity; however, this amount increases by $1,000 for every $100,000 I donate (or pledge to donate at some future date).

Under this incentive pledge, for every $1 million (in 2020 dollars) that the pledger earns and donates, $10,000 is added to their yearly allowance. Then, if you’re feeling confident, you could cap it at a certain level: for example, the allowance could max out at $100,000 or $70,000 per year, or something like that.
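The arithmetic above can be sketched in a few lines. A minimal sketch, using the made-up figures from the example ($30,000 base, $1,000 raise per $100,000 donated, an optional $100,000 cap); the function name and defaults are mine, purely for illustration:

```python
def yearly_allowance(total_donated: int, base: int = 30_000,
                     per_donation_step: int = 100_000,
                     raise_per_step: int = 1_000,
                     cap: int = 100_000) -> int:
    """Yearly living allowance under the hypothetical incentive pledge:
    a base amount (in 2020 dollars) plus a fixed raise for every full
    donation step, capped at an optional maximum."""
    steps = total_donated // per_donation_step          # completed $100k steps
    return min(base + steps * raise_per_step, cap)

# $1 million donated -> ten $1,000 raises on top of the $30,000 base.
print(yearly_allowance(1_000_000))   # 40000
# Far past the cap, the allowance stops growing.
print(yearly_allowance(10_000_000))  # 100000
```

This matches the worked example: every $1 million donated adds $10,000 to the allowance, until the cap kicks in.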

This is for someone who wants to essentially take the further pledge, but who isn’t entirely comfortable confining themselves to a fixed amount to live on (adjusted for inflation) forever. Or it’s for the person who would be incentivized to give more if they knew their yearly allowance would rise the more they earned.

Is this much better than pledging a certain percentage, e.g. 50% of everything above $30,000? That is also incentive-based, because earning more money means both more for charity and more for you.

That could be a form of an incentive pledge.
