This is a special post for quick takes by Puggy. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I think it is a cool idea for people to take a giving pledge on the same day. For example, you and your friend both decide to pledge 10% to charity on the same day. It would be even more fun if you did it with strangers. Call it “giving twins” or “giving siblings”.

Imagine that you met a couple of strangers and they pledged with you. Imagine that after pledging you all just decided to be friends, or at least a type of support group for one another. Like “Hey, you and I took the Further Pledge together on New Year’s Day last year. When I’m in your city let’s go have a pint, or maybe we can email each other about our career plans to discuss ways we can help people more!”

Which leads me to my final bit here: would anyone be interested in being my giving sibling? Haha I am interested in taking the further pledge in 2022, and it would be fun to have the same ‘giving birthday’ as other people so I could befriend them, meet people in the community, and get a couple lifelong friends.

Giving What We Can could even take this idea and run with it. They could assign you a giving sibling if you entered the sibling program, which could help increase the feeling that we are a community.

Does giving sibling mean giving something to each other?

That could be the case, but I think the emphasis is more on the idea that you have the same “birthdate” to be considered a giving sibling.

Like on February 15 you and a friend took the Giving Pledge together and then that date was the same day you became siblings. Then you celebrate that day every year or form a bond around this shared experience.

Here’s the problem:

Some charities are not just multiple times better than others, some are thousands of times better than others. But as far as I can tell, we haven’t got a good way of signaling to others what this means.

Think about when Ed Sheeran sells an album. It’s “certified platinum”, then “double platinum”, peaking at “certified diamond”. When people hear this it makes them sit back and say “wow, Ed Sheeran is on a different level.”

When a football player is about to announce his college, he says “I’m going D1”. You become a “grandmaster” at chess. Ah, that restaurant must be good it has won two Michelin stars. That economist writing about the tragedy of the commons is great, she won a Nobel prize.

We need nomenclature that goes beyond “high impact” charity. “Cost-effective”, “high impact”, and “effective” are all good descriptions, but we need to come up with a rating system or some method of conferring high status on the best charities (possibly based on how much it costs to save one life).

It’s got to be something that we can bring into the popular consciousness, and it can’t be something we just narrowly assign to all of our own EA meta charities. We need journalists popularizing the term and recognizing the 3–5 super charities that save lives like no one’s business. We should work with marketing teams and carefully plan what the name would be. But it’s got to confer status to the charity and people like Jay Z can gain more status by donating to it, just like he gains status by eating at Michelin-starred restaurants.

(excuse me if I’m not the first to outline this idea)

"But it’s got to confer status to the charity and people like Jay Z can gain more status by donating to it" - I think this brushes up against a good point which I'd like to see fleshed out more. On some level I'm still a bit skeptical, in part because I think it's more difficult to make these kinds of designations/measurements for charities, whereas things like album statuses are very objective (i.e., a specific number of purchases/downloads) and in some cases easier to measure. Additionally, for some of those cases there is a well-established and influential organization making the determination (e.g., football leagues, FIDE for chess). I definitely think something could be done for traditional charities (e.g., global health and poverty alleviation), but it would very likely be difficult for many other charities, and it still would probably not be as widely recognized as most of the things you mentioned.

Great points. Thank you for them. Perhaps we could use a DALY/QALY measure. A charity could reach the highest status if, after randomized controlled studies, it was determined that $10 donated could give one QALY to a human (I’m making up numbers). Any charity that reached this hard-to-achieve threshold would be given the super-charity designation.

To make it official, imagine that there’s a committee or governing body formed between Charity Navigator and GiveWell. Five board members from each organization would come together to select the charities, then announce the award once a year. The status would only be official for a certain amount of time, or it could be removed if a charity dipped below the threshold.

What do you think?

I certainly would be interested in seeing such a system go into place, and I think it would probably be beneficial; the main issue is just whether something like that is likely to happen. For example, it might be quite difficult to establish agreement between Charity Navigator and GiveWell when it comes to the benefits of certain charities. Additionally, there may be a bit of survivorship bias when it comes to organizations that have worked, like FIDE, although I still think the main issues are 1) the analysis/measurement of effectiveness is difficult (requiring lots of studies vs. simply measuring album downloads/streams); and 2) the determination of effectiveness may not be widely agreed upon. That’s not to say it shouldn’t be tried, but I think that might limit its effectiveness relative to the examples you cite.

Forecaster’s Bias

This may already be a named bias; I haven’t really researched it. Excuse my ignorance. But perhaps there is a new bias we could identify, called forecaster’s bias.

This bias would be the phenomenon where forecasters place too much weight on the importance or effect of forecastable events versus events that are less forecastable, thereby somewhat (or entirely) neglecting improbable, less forecastable events.

Example 1: There’s a new coronavirus variant called Omicron. It has not yet spread, but it will. We can track the spread of this virus going forward into the future. When forecasting Omicron’s effect, we have a tendency to overemphasize it because the event is forecastable.

Example 2 (also coronavirus): Early in the pandemic, individuals tracked the spread of the virus and the rate at which vaccines progressed. They predicted the number of deaths with a good degree of accuracy. They did not predict, however, that the political whims of the populace would lead to an anti-vax movement. The less forecastable event (anti-vax sentiment) was under-predicted.

Example 3: Fictional market researchers notice dropping energy prices. They model the trend and expect it to continue for 18 months. But within those fictional 18 months, major earthquakes destroy huge cities, raising energy prices; the researchers systematically failed to consider the prospect of major earthquakes.

Example 4: Energy prices are rising drastically. Researchers expect this to continue for 18 months. Suddenly, commercially viable nuclear fusion becomes available and governments spread it throughout the world. Energy prices drop to “too cheap to meter”; the researchers got this wrong because the progress of nuclear fusion was too hard to forecast.

I don’t know if this idea is any good. Just a thought!

Have you seen Taleb’s Black Swan book? (https://en.m.wikipedia.org/wiki/The_Black_Swan:_The_Impact_of_the_Highly_Improbable) I personally haven’t read it, but based on the description it seems related to what you’re describing. Either way, I think it’s a good point to consider.


Do you prefer libertarian policy ideas, but you aren’t too sold on the deontological or rights-based reasoning which many libertarians use to justify their policy preferences?

Perhaps this new political identifier could work for you: introducing… the Consequentarian! You’re pretty much a consequentialist through and through: you value good outcomes more than liberty- or rights-based claims to things. However, it just empirically turns out that all the best policy ideas which lead to the best outcomes are libertarian. You recognize that open borders, drug legalization, limited (or no) government, very low regulation, and competitive enterprise produce more human flourishing than all the alternatives. But you don’t find strict rights arguments compelling (like: if a car is driving at you, you can jump onto your neighbor’s lawn even if it violates his property rights).

Pronounced: Consequen-tarian

Associated schools of thought:

  • Chicago school economics
  • University of Arizona Tucson School of liberalism
  • neoclassical liberalism
  • Michael Huemer and Bryan Caplan’s anarchism

empirically turns out that all the best policy ideas which lead to the best outcomes are libertarian


What makes you think this is the case? I agree with your principle that you can make a welfare-maximizing case for libertarianism, but surely a Conservative or Social Democrat could also argue for their preferred policy from a welfare-maximizing perspective.

Calling the set of policies you happen to think are welfare-maximizing “Consequentarian” strikes me as very uncharitable to those with different views from your own.

There’s a growing literature pointing to the myriad government failures, but the highlights are: government failures are in almost every scenario significantly worse than market failures, so let the market decide. Increasing liberty produces great outcomes:

  • Drug use and overdoses go down under liberal drug policy.
  • Increasing immigration increases everyone’s income.
  • Housing prices and homelessness go down when we reduce NIMBY policies and have a free market in housing.
  • The FDA and other bureaucratic agencies overspend (the Mercatus Center estimates it costs $93 million to save a life through regulation, and in the case of the FDA they actively kill 20,000+ people a year).
  • Education and healthcare costs would drop significantly if we had a free market in them (the strongest argument for why prices rise in these sectors is artificial inflation caused by government intervention).
  • Wars cost enormous sums of money, and their consequences are almost always worse than non-intervention (since 9/11, 200K Iraqi civilians have died, while terrorism has increased 2,000%).
  • There’s some historical evidence that free banking systems are less prone to the disastrous effects of the business cycle.

And there’s lots more evidence.

These empirical facts are related to the idea that market based interventions outperform government interventions because the market does not have to act through a centralized hierarchy to make decisions. It’s difficult to make centralized decisions that are attentive to the concerns at the margins of the economy.

You might be interested in the Neoliberal Project: What Neoliberals Believe 🥑

Co-director Jeremiah Johnson did an AMA here the other day.

True, yeah. I have seen the neoliberalism movement. They are more market-friendly than the median voter and motivated by consequentialist reasoning, but I think they advocate more government intervention than is required in some areas. Overall, though, it’s a great movement.

Has anyone ever thought of doing incentive based pledges with their charitable giving?

Incentive pledge: I will live off of X amount of money, but this figure increases by Y for every $100,000 I donate or pledge to donate.

Example: I will live off of $30,000 in 2020 dollars for the rest of my life and the rest will be donated to charity; however, this amount increases by $1,000 for every $100,000 I donate (or pledge to donate at some future date).

Under this incentive pledge, for every $1 million in 2020 dollars that the pledger earns (and donates), $10,000 will be added to their yearly allowance. Then, if you’re feeling confident, you could cap it at a certain level; for example, the yearly allowance could max out at $100,000 or $70,000 or something like that.
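To make the arithmetic concrete, here is a minimal sketch in Python (the function name, defaults, and the $70,000 cap are illustrative choices, not part of any formal pledge):

```python
def yearly_allowance(total_donated, base=30_000, bonus=1_000,
                     per_donated=100_000, cap=70_000):
    """Allowance under the incentive pledge: the base living amount,
    plus a bonus for every full `per_donated` dollars donated, capped.
    All figures are in 2020 dollars; inflation adjustment is left out."""
    allowance = base + bonus * (total_donated // per_donated)
    return min(allowance, cap)

# Donating $1 million adds $10,000 to the $30,000 base allowance.
print(yearly_allowance(1_000_000))  # 40000
```

The floor division (`//`) means the bonus only kicks in at each full $100,000 donated, and the cap keeps the allowance from growing without bound.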

This is for someone who wants essentially to take the Further Pledge, but isn’t entirely comfortable confining themselves to a fixed amount to live on (adjusted for inflation) forever. Or it’s for the person who would be incentivized to give more knowing their yearly allowance would rise the more they earned.

Is this much better than pledging a certain percentage, e.g. 50% of everything above $30,000? That seems incentive-based too, because earning more money means both more for charity and more for you.

That could be a form of an incentive pledge.
