This is a special post for quick takes by Charlie_Guthmann. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I stopped being vegetarian almost 2 years ago. One of the biggest reasons I'm not a vegetarian is that I stay up late pretty much every day and don't always feel like cooking or snacking, so I'll go to whatever is open near me. During university, nothing really stayed open after 10 anyway because Evanston is a lame place, so I would often eat at or before 10, and if I was eating out there were still vegetarian options at that hour (stir fry with tofu, Chipotle, etc.).

Now I live in a predominantly Eastern European and Mexican area of Chicago. There isn't much vegetarian food in this neighborhood in general, although there is some. However, the vegetarian restaurants here seem to serve a wealthier demographic than the non-vegetarian food: they close earlier, cost more, etc. The cheap and late-night options are fast food and taquerias, which have essentially no quality vegetarian items. But since this stuff is open, it actually makes me lazier, and I'll often eat at 11:00 PM because I can. Getting into this routine, however, means I eat more meat.

I'm pretty sure that if there were a decent, cheap vegetarian restaurant that stayed open till 2:00 AM, I would eat at least one fewer meat meal a week, probably 2-3.

Why aren't there any vegetarian late-night options near me? Probably the normal reasons: no one around here wants to or can open one, or there isn't enough demand.

In either case, it got me wondering: if there is enough demand to recoup, say, ~95% of the cost of a late-night falafel stand, would it be a cost-effective intervention (compared to whatever else ACE recommends) to fund that last 5%? I might think more about this unless it's super obvious to someone that this is orders of magnitude worse than other options.

A 5% subsidy is roughly fifty cents a meal in Chicago. However, some subsidized diners would have eaten a vegetarian meal with or without the subsidy, so the true cost per meat meal averted would likely be higher, maybe a dollar or so. From that you could predict the cost per farmed animal averted, keeping in mind that the demand elasticities aren't 1:1.
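The arithmetic above can be written out as a tiny BOTEC. All the inputs here are illustrative assumptions (a $10 average meal, half of subsidized meals being truly marginal), not data from the post:

```python
# Rough BOTEC for the late-night falafel subsidy.
# Every input below is an assumption for illustration, not measured data.

meal_price = 10.00      # assumed average cheap-meal price in Chicago (USD)
subsidy_rate = 0.05     # the 5% funding gap the grant would cover
subsidy_per_meal = meal_price * subsidy_rate  # ~= $0.50, as in the text

# Only some subsidized meals actually displace a meat meal; the rest
# would have been vegetarian anyway. Assume half are truly marginal.
marginal_fraction = 0.5

cost_per_meat_meal_averted = subsidy_per_meal / marginal_fraction
print(f"${cost_per_meat_meal_averted:.2f} per meat meal averted")
```

Changing `marginal_fraction` is the quickest way to stress-test the estimate; at 0.25 the cost per meat meal averted doubles to $2.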

It doesn't sound terribly promising on my three-minute BOTEC. Notably, much of the displaced meat would be beef, which means a high cost per animal spared, since one cow supplies many meals.

Grant-making as we currently do it seems pretty analogous to a command economy.  

Not sure that's entirely true (though I think it's very interesting). I feel like the grantmaking process is not top→down, but bottom→up→top→down (someone has an idea, they start working on it in their free time (this part is optional), they apply, the idea gets evaluated, and they get rejected or receive some money). I think in historical command economies the first and second parts were kind of rare.

See also Coase's Theory of the Firm and Evolution as a Backstop to Reinforcement Learning.

Ok this is a solid point and made me slightly reconsider. 

However, I question both how bottom-up grantmaking actually is and, even if it is, how different that makes it from historical command economies.

Depending on the grantmaker, I feel like there is a range between "has already more or less decided which projects they want to fund" and "funding anything that seems promising under X goals", where the former isn't really "taking suggestions".

More importantly, while I agree the bottom-up thing was rare, I don't think the issue with command economies is that they don't gather any information from the crowd; it's that the information they have available is funneled through human bias and is nontransparent. Would a system of town halls, in which command-economy leaders had a chance to listen to the complaints of the citizens, fix the command economy? This just seems like a worse version of markets.

By the same logic, I think impact markets, while they may have a long learning curve, will clearly be superior to what we are currently doing. The main reason I see us not doing this is that you have to actually specify the currency, which would mean specifying a moral worldview, which would shatter the community's nebulous ethical agnosticism.

Re Coase and Gwern's post, I think it gives my whole point even more firepower. I feel like the point is that you only need one reliable feedback mechanism to create evolution; the rest of the system can be nearly random or highly illogical. But here there is no feedback mechanism: we will never be able to tease out the impact of 99% of the things we are currently doing. If grantmakers are totalitarian like the firms, where is the analogous feedback loop to money (without RCTs up the wazoo)?

Theoretical idea that could be implemented into Metaculus

tl;dr: add an option to submit models of how to forecast a question, plus voting on those models.

To be more concrete: when someone submits a question, in addition to forecasting it, you can submit a Squiggle model (or just a plain mathematical model) of your best current guess of how to approach the problem. You define each subcomponent that matters to the final forecast, and also how these subcomponents combine into the final forecast. Each subcomponent automatically becomes another forecasting question on the site that people can treat the same way (if it is not already one).

Then, in addition to a normal forecast as we do right now, people can also forecast the subcomponents of the models, as well as vote on the models. If a model includes previously forecasted questions, their forecasts automatically populate in the model.

The voting system on models could either just draw attention to the best models and encourage forecasting of their subcomponents, or even weight the models' estimates into the overall forecast of the question. No idea if this would improve forecasting, but it might make it more transparent and scalable.
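The mechanism above can be sketched in a few lines. This is a toy of my own construction, not Metaculus's actual data model: a question either collects direct forecasts (a leaf) or carries a submitted model that combines its subquestions' estimates, which auto-populate as the post describes. All class and field names are hypothetical:

```python
# Toy sketch of "questions with submitted models" — hypothetical names,
# not an actual Metaculus feature or API.

class Question:
    def __init__(self, name, combine=None, subquestions=()):
        self.name = name
        self.forecasts = []              # direct community forecasts (leaf case)
        self.combine = combine           # model: fn(list of sub-estimates) -> estimate
        self.subquestions = list(subquestions)

    def estimate(self):
        # If a model is attached, subquestion estimates auto-populate into it;
        # otherwise fall back to the mean of direct forecasts.
        if self.combine is not None:
            return self.combine([q.estimate() for q in self.subquestions])
        return sum(self.forecasts) / len(self.forecasts)

# Leaf questions, forecast the normal way:
p_nominated = Question("X is nominated")
p_nominated.forecasts = [0.25, 0.35]                # community mean: 0.30
p_wins_given_nom = Question("X wins | nominated")
p_wins_given_nom.forecasts = [0.5]

# A submitted model: multiply the two subcomponent forecasts.
p_wins = Question("X wins",
                  combine=lambda subs: subs[0] * subs[1],
                  subquestions=[p_nominated, p_wins_given_nom])

print(p_wins.estimate())  # 0.30 * 0.5 = 0.15
```

Because `estimate()` recurses through subquestions, updating a forecast on any leaf automatically propagates into every model that references it, which is the transparency/scalability win the post is pointing at.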

I wrote a bit more in this google doc if interested. 


edit: I think this might just be Guesstimate with memoization.

Has anyone thought through whether EA should try to start a charter school, or done some sort of impact estimate? Specifically with the purpose of making a really good school, not EA outreach.

I searched Google and the Forum for posts on this and couldn't find anything.

WSJ Op-Ed: ‘Effective Altruism’ Is Neither - I didn't see any discussion of this low-effort but scathing review of EA. I wonder if people feel we should write a response?

The article itself is bad faith, bordering on the stupidest critique of EA I have read thus far, but the WSJ op-ed page is pretty important, and if SBF signed off on a response they would most likely publish it.

Couldn't read the article, so I can't say. EA does have some red flags it needs to deal with (just like literally any movement in existence), so it's easy to pick on. How red flags are handled is what's important, and based on the number of posts I've seen saying that the movement struggles to address legitimate criticisms internally, it needs a shift to sincerely move forward. And I say that because some of the first things mentioned were things I personally had concerns about.
I will say, if it's low-effort then the best response might be no response. It's an op-ed; it's someone's big fat opinion. If EA were somehow perfect beyond the limits of reality, someone would still write a low-effort op-ed.

Re the "EAs should not should" debate about whether we can use the word "should", which pops up occasionally, most recently in the "university groups need fixing" post.

My take is that you can use "should/ought" as long as your target audience has sufficiently grappled with meta-ethics and both parties are clear about what ethical system you are using.

"Should" (to an anti-realist) is shorthand for "the best action under X moral framework". I don't mind it being used in this context (though I agree with Ozzie's previous shortform that it seems unnecessarily binary), but it's problematic to use this word around people you don't know, or around non-philosophy heads. It's completely absurd to tell an 18-year-old, or anyone else who doesn't know what utilitarianism and virtue ethics are, that they "should" do anything, and if they believe you, then you tricked them into that view (unless you are a moral realist, which I also think is absurd).

If your target audience does not know what the is-ought problem is, it's better to stick to output-based cost-benefit analysis and not enter into this "cause agnostic" tier-list type thing, since inter-output rankings rely on arbitrary metaethical functions that aren't well known by most people or standardized for quick and reliable reference.

Among my friends, however, we use "should" all the time, because we know what we generally mean (our relatively shared utilitarian-ish metaethical worldview), and we feel comfortable clarifying this if it seems to be the crux of a debate. But at that point "should" loses all of its emotional oomph, and maybe it's just not worth the hassle to shorthand a seven-word sentence.

It would be interesting to compare my likes on the EA Forum with other people's. I feel like what I up/downvote is way more honest than what I comment. If I could compare with someone the posts/comments where we had opposite reactions (i.e., they upvoted and I downvoted), I feel like it could start some honest and interesting discussions.
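If vote histories were exportable, the comparison itself would be trivial. A minimal sketch, with invented data (post IDs and votes are placeholders, and no such export exists on the Forum as far as I know):

```python
# Toy comparison of two users' vote histories (post id -> +1/-1).
# The data is invented for illustration.

my_votes    = {"post_a": 1, "post_b": -1, "post_c": 1}
their_votes = {"post_a": -1, "post_b": -1, "post_d": 1}

# Posts both users voted on, where the votes point opposite ways.
opposite = [post for post in sorted(my_votes.keys() & their_votes.keys())
            if my_votes[post] != their_votes[post]]

print(opposite)  # ['post_a']
```

The interesting output is exactly `opposite`: the shared posts where the two users disagreed, i.e. the discussion starters.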

Assume there are two societies that passed the great filter and are now grabby. Society EA and society NOEA. 

Society EA, you could say, is quite similar to our own. The majority of the dominant species is not concerned with passing the great filter, and most individuals are inadvertently increasing the chance of the species' extinction. However, a small contingent became utilitarian rationalists and specced heavily into reducing x-risk. Since the group passed the great filter, you can assume this was in large part due to this contingent of EAs/guardian angels.

Now, society NOEA is a species that passed the filter without EA rationalists. The only way they were able to pass was that, as a species, they are overall quite careful and thoughtful. The whole species, rather than a divergent few, has enough of a security mindset that no special group "saved" them.

Which species would we prefer to get more control of resources? 

The punchline is that the very fact that we "need" EA on Earth might be evidence that our values are worse than those of a species that didn't need EA to pass the filter.

I feel like "x-risk" is basically tautologically important and thus ceases to be a useful word in many cases. It's the longtermist equivalent of a neartermist saying "it would be good to solve everything really bad about the current world".
