Larks

15,808 karma · Joined

Comments: 1,495 · Topic contributions: 4
This seems like one of many Manifold markets with terrible resolution criteria. Wikipedia is not an oracle; it is a website run by Trump's political opponents, who are willing to use skullduggery to promote their political agendas. Even a cursory look at the page reveals a bizarre collection of events. It includes things like this:

In 2017, the eligibility of a number of Australian parliamentarians to sit in the Parliament of Australia was called into question because of their actual or possible dual citizenship. The issue arises from section 44 of the Constitution of Australia, which prohibits members of either house of the Parliament from having allegiance to a foreign power. Several MPs resigned in anticipation of being ruled ineligible, and five more were forced to resign after being ruled ineligible by the High Court of Australia, including National Party leader and Deputy Prime Minister Barnaby Joyce. This became an ongoing political event referred to variously as a "constitutional crisis"[34][35] or the "citizenship crisis".

Inclusion of this sort of event suggests a very low bar for what constitutes a crisis. But then many objectively far more significant events are simply omitted!

I can see why the market is trading above 50% - you can just look at the talk page to see people are leaning this way. Arguably this market should have already closed, because the Wikipedia page did list it for a while (there was weasel language, but it clearly was 'listed', which was the resolution criterion), prior to the market's rules being [clarified/changed] to include a vague appeal to 'broader consensus'. But I think this mainly tells us about Wikipedia, rather than about reality.

After all, we don't want to do the most good in cause area X but the most good, period.

Yes, and 80k think that AI safety is the cause area that leads to the most good. 80k never covered all cause areas - they didn't cover the opera or beach cleanup or college scholarships or 99% of all possible cause areas. They have always focused on what they thought were the most important cause areas, and they continue to do so. Cause neutrality doesn't mean 'supporting all possible causes' (which would be absurd); it means 'being willing to support any cause area, if the evidence suggests it is the best'.


Makes sense, seems like a good application of the principle of cause neutrality: being willing to update on information and focus on the most cost-effective cause areas.

Take malaria prevention. It’s a top EA cause because it’s highly cost-effective: $5,000 can save a life through bed nets (GiveWell, 2023). But what happens when government corruption or instability disrupts these programs? The Global Fund scandal in Uganda saw $1.6 million in malaria aid mismanaged (Global Fund Audit Report, 2016). If money isn’t reaching the people it’s meant to help, is it really the best use of resources?

I think there are basically two ways of looking at this question.

One is the typical EA/'consequentialist' approach. Here you accept that some amount of the money will be wasted (fraud/corruption/incompetence), build this explicitly into your cost-effectiveness model, and then see what the bottom line is. If I recall correctly, GiveWell explicitly assumes something like 50% of insecticide-treated bednets are not used properly; their cost-effectiveness estimate would be double if they didn't make this adjustment. $1.6m of mismanagement seems relatively small compared to the total size of anti-malaria programs, so presumably doesn't move the needle much on the overall QALY/$ figure. This sort of approach is also common in areas like for-profit businesses (e.g. half of all advertising spending is wasted; we just don't know which half...) and welfare states (e.g. tolerated disability benefit fraud in the UK). To literally answer your question, that $1.6m is presumably not the best use of resources, but we're willing to tolerate that loss because the rest of the money is used for very good purposes, so overall malaria aid is (plausibly) the best use of resources.
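To make the consequentialist bookkeeping concrete, here is a minimal sketch of how a waste adjustment of this kind flows through a cost-per-life figure. All the numbers (a hypothetical $100m programme, 40,000 lives saved if nothing were wasted, a 50% usage adjustment, the $1.6m loss) are illustrative assumptions, not GiveWell's actual model.

```python
# Toy cost-effectiveness adjustment - illustrative numbers only, not GiveWell's model.

def cost_per_life_saved(total_spend, lives_saved_if_no_waste, waste_rate):
    """Cost per life once a fraction of the spending is assumed to be wasted."""
    effective_lives = lives_saved_if_no_waste * (1 - waste_rate)
    return total_spend / effective_lives

# Hypothetical programme: $100m spent, 40,000 lives saved if nothing were wasted.
baseline = cost_per_life_saved(100e6, 40_000, waste_rate=0.0)   # $2,500 per life
adjusted = cost_per_life_saved(100e6, 40_000, waste_rate=0.5)   # $5,000 per life

# A one-off $1.6m loss on top of the adjusted figure barely moves it:
with_scandal = (100e6 + 1.6e6) / (40_000 * 0.5)                 # ~$5,080 per life

print(baseline, adjusted, round(with_scandal))
```

The point is just that the waste term is an explicit input: a loss that is small relative to the programme changes the bottom line by a couple of percent rather than invalidating it.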

The alternative is a more deontological approach, where basically any fraud or malfeasance is grounds for a radical response. This is especially common in cases where adversarial selection is a big risk, where any tolerated bad actors will rapidly grow to take a large fraction of the total, or where people have particularly strong moral views about the misconduct. Examples include zero-tolerance schemes for harassment in the workplace, DOGE hunting down woke in USAID/NSF, or the Foreign Corrupt Practices Act. In cases like this people are willing to cull the entire flock just to stop a single infected bird—sometimes a drastic measure can be warranted to eliminate a hidden threat.

In the malaria example, if the cost is merely that $1.6m is set on fire, the first approach seems pretty appropriate. The second approach seems more applicable if you thought the $1.6m was having actively negative effects (e.g. supporting organised crime) or was liable to grow dramatically if not checked.

It's not clearly bad. Its badness depends on what the training is like, and what your views are around a complicated background set of topics involving gender and feminism, none of which have clear and obvious answers.

The topic here is whether the administration is good at using AI to identify things it dislikes. Whether or not you personally approve of using scientific grants to fund ideological propaganda is, as the OP notes, beside the point. Their use of AI thus far is, according to Scott's data, a success by their lights, and I don't see much evidence to support huw's claim that they are being 'unthoughtful' or overconfident. They may disagree with huw on goals, but given those goals, they seem to be doing a reasonable job of promoting them.

It seems pretty appropriate and analogous to me - the administration wants to ensure 100% of science grants go to science, not 98%, and similarly they want to ensure that 0% of foreign students support Hamas, not 2%. Scott's data suggests they have done a reasonably good job with the former at identifying 2%-woke grants, and likewise if they identify someone who spends 2% of their time supporting Hamas they would consider this a win.

Thanks for sharing this - very informative and helpful for highlighting a potential leverage point; strong upvoted.

One minor point of disagreement: I think you are being a bit too pessimistic here:

And put in more EA-coded language: the base rate of courts imploding massive businesses (or charities) is not exactly high. One example in which something like this did happen was the breakup of the Bell System in 1982, but it wasn't quick, the evidence of antitrust violations was massive, and there just wasn't any other plausible remedy. Another would be the breakup of Standard Oil in 1911, again a near-monopoly with some massive antitrust problems.

There are few examples of US courts blowing up large US corporations, but that is not exactly the situation here. OpenAI might claim that preventing a for-profit conversion would destroy or fatally damage the company, but they do not have proof. There is a long history of businesses exaggerating the harm from new regulations, claiming they will be ruinous when actually human ingenuity and entrepreneurship render them merely disadvantageous. The fact is that thus far OpenAI has raised huge amounts of money and been at the forefront of scaling with its current hybrid structure, and I think a court could rightfully be skeptical of claims, made without proof, that this cannot continue.

I think a closer example might be when the DC District Court sided with the FTC and blocked the Staples-Office Depot merger on somewhat dubious grounds. The court didn't directly implode a massive retailer... but Staples did enter administration shortly afterwards, and my impression at the time was that the causal link was pretty clear.

at best had a 40% hit rate on ‘woke science’ 

This is... not what the attached source says? Scott estimates 40% woke, 20% borderline, and 40% non-woke. 'at best' means an upper bound, which would be 60% in this case if you accept this methodology.

But even beyond that, I think Scott's grading is very harsh. He says most grants that he considered to be false positives contained stuff like this (in the context of a grant about Energy Harvesting Systems):

The project also aims to integrate research findings into undergraduate teaching and promote equitable outcomes for women in computer science through K-12 outreach program.

But... this is clearly bad! The grant is basically saying it's mainly for engineering research, but they're also going to siphon off some of the money to do some sex-discriminatory ideological propaganda in kindergartens. This is absolutely woke,[1] and it totally makes sense why the administration would want to stop this grant. If the scientists want to just do the actual legitimate scientific research, which seems like most of the application, they should resubmit with just that and take out the last bit.

Some people defend the scientists here by saying that this sort of language was strongly encouraged by previous administrations, which is true and relevant to the degree of culpability you assign to the scientists, but not to whether or not you think the grants have been correctly flagged.

His borderline categorisation seems similarly harsh. In my view, this is a clear example of woke racism:

enhance ongoing education and outreach activities focused on attracting underrepresented minority groups into these areas of research

Scott says these sorts of cases make up 90% of his false positives. So I think we should adjust his numbers to produce a better estimate (worked through below):

  • 40% woke according to Scott
  • +20% borderline woke
  • +90% × 40% = 36% incorrectly labeled as false positives

= 96% hit rate.
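For completeness, the same adjustment written out as arithmetic. It takes Scott's 40/20/40 split at face value and assumes, per the argument above, that 90% of his false positives should really count as hits; both inputs come from this comment, not from Scott.

```python
# Recomputing the hit rate from Scott's categories (fractions of flagged grants).
woke = 0.40             # clearly woke, per Scott
borderline = 0.20       # borderline, per Scott
false_positive = 0.40   # non-woke, per Scott
reclassified = 0.90     # share of Scott's false positives argued above to be woke

hit_rate = woke + borderline + reclassified * false_positive
print(f"{hit_rate:.0%}")  # 96%
```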

  1. ^

    If you doubt this, imagine how a typical leftist grant reviewer would evaluate a grant that said some of the money was going to support computer science for white men.

The vast majority of amicus curiae briefs are filed by allies of litigants and duplicate the arguments made in the litigants’ briefs, in effect merely extending the length of the litigant’s brief. Such amicus briefs should not be allowed. They are an abuse. The term 'amicus curiae' means friend of the court, not friend of a party.

I agree that most such briefs are from close ideological allies, but I'm curious about your suggestion that the court would reject them on this ground. Surely all the organizations filing somewhat duplicative amicus curiae briefs all the time do so because they think it is helpful?

And EA is aimed in many ways at maintaining exclusivity, even while incredible people like Julia make great strides in making it more inclusive. For example, some people in EA think my EA-oriented after-school program is a waste of time because it's not directed at the highest achievers. 

This anecdote seems like very weak evidence for your claim. Claiming EA is 'aimed in many ways' at something implies a concerted effort to achieve it, even at the cost of other goals. In contrast, some people saying a program is a waste of time means just that - it's not producing much value. The whole point of EA is to prioritize - disfavoring donkey sanctuaries doesn't mean EAs hate donkeys, it just means there are other, better things to focus on.

Even 80,000 hours career advice applies not at all to the average person, but is oriented only to those who are already going to spend 6+ years shelling out money for undergrad and grad school, etc (at least last I checked). [emphasis added]

This seems clearly false to me. To test it, I looked at the very first article in their career guide, one of their flagship products. It is about doing engaging work that helps others, doesn't have any major downsides, etc. As far as I can see, almost every part of it applied to average people. The income-satisfaction charts they include have an x-axis running from $10k to $210k, a range that covers the median income. It is not in any way dependent on your having a postgraduate qualification. And I have no idea where you get '6+ years shelling out money' from - surely most of their advice applies also to autodidacts, people who finish more quickly, people in countries with state-funded universities, people who get scholarships, etc.?
