Jason

15237 karma · Joined · Working (15+ years)

Bio

I am an attorney in a public-sector position not associated with EA, although I cannot provide legal advice to anyone. My involvement with EA has so far been mostly limited to writing checks to GiveWell and other effective charities in the Global Health space, as well as some independent reading. I had occasionally read the Forum and was looking for ideas for year-end giving when the whole FTX business exploded . . .

How I can help others

As someone who isn't deep in EA culture (at least at the time of writing), I may be able to offer a perspective on how the broader group of people with sympathies toward EA ideas might react to certain things. I'll probably make some errors that would be obvious to other people, but sometimes a fresh set of eyes can help bring a different perspective.

Posts
2

Comments
1723

Topic contributions
2

It feels like in the past, more considerateness might have led to less hard discussions about AI or even animal welfare.

Could you say more about why you feel that way?

Certainly lots of people would have concluded that WAW and AI as subjects of inquiry and action were weird, pointless, stupid, etc. But that's quite different from the reactions to scientific racism.

Most truths have ~0 effect magnitude concerning any action plausibly within EA's purview. This could be because knowing that X is true, and Y is not true (as opposed to uncertainty or even error regarding X or Y) just doesn't change any important decision. It also can be because the important action that a truth would influence/enable is outside of EA's competency for some reason. E.g., if no one with enough money will throw it at a campaign for Joe Smith, finding out that he would be the candidate for President who would usher in the Age of Aquarius actually isn't valuable.

As relevant to the scientific racism discussion, I don't see the existence or non-existence of the alleged genetic differences in IQ distributions by racial group as relevant to any action that EA might plausibly take. If some being told us the answers to these disputes tomorrow (in a way that no one could plausibly controvert), I don't think the course of EA would be different in any meaningful way.

More broadly, I'd note that we can (ordinarily) find a truth later if we did not expend the resources (time, money, reputation, etc.) to find it today. The benefit of EA devoting resources to finding truth X will generally be that truth X is discovered sooner, and that we get to start using it to improve our decisions sooner. That's not small potatoes, but it generally isn't appropriate to weigh the entire value of the candidate truth for all time when deciding how many resources (if any) to throw at it. Moreover, it will probably be cheaper to produce scientific truth Z twenty years in the future than it is now. In contrast, global-health work is probably most cost-effective in the here and now, because in a wealthier world the low-hanging fruit will be plucked by other actors anyway.

I read your comment as "have people pay for finding out information via subsidies for markets" being your "alternative" model, rather than being the "take a cut of the trading profits/volume/revenue" model. Anyway, I mentioned earlier why I don't think being "controversial" (~ too toxic for the reputational needs of many businesses with serious money and information needs) fits in well with that business model. Few would want to be named in this sentence in the Guardian in 2028: "The always-controversial Manifest conference was put on by Manifold, a prediction market with similarly loose moderation norms whose major customers include . . . ."

In the "take a rake of trading volume" model without any significant exogenous money coming it, there have to be enough losses to (1) fund Manifold, and (2) make the platform sufficiently positive in EV to attract good forecasters and motivate them to deploy time and resources. Otherwise, either the business model won't work, or the claimed social good is seriously compromised. In other words, there need to be enough people who are fairly bad at forecasting, yet pump enough money into the ecosystem for their losses to fund (1) and (2). Loosely: whales. 

If that's right, the business rises or falls predominantly by the amount of unskilled-forecaster money pumped into the system. Good forecasters shouldn't be the limiting factor in the profit equation: if unskilled users are subsidizing the ecosystem enough, skilled users should come. The model should actually work without good forecasters at all; it's just that the aroma of positive EV will attract them.
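To make that flow-of-funds constraint concrete, here is a minimal sketch (in Python, with entirely hypothetical numbers for whale losses, platform costs, and the EV skilled forecasters would need) of the accounting identity I have in mind: in a closed system, unskilled-forecaster losses are the only pot of money available to cover both the platform's cut and the skilled forecasters' winnings.

```python
# Illustrative sketch only: all figures are hypothetical, not Manifold's actual economics.

def ecosystem_viable(whale_losses, platform_costs, skilled_ev_needed):
    """With no exogenous subsidy, unskilled-forecaster ("whale") losses must
    (1) cover the platform's costs via its rake and
    (2) leave enough positive EV to attract skilled forecasters."""
    surplus = whale_losses - platform_costs - skilled_ev_needed
    return surplus >= 0, surplus

# Hypothetical scenario: $500k/yr in whale losses, $300k/yr to run the platform,
# and skilled forecasters who collectively need ~$150k/yr in expected winnings.
viable, surplus = ecosystem_viable(500_000, 300_000, 150_000)
print(viable, surplus)  # True, 50000 -- halve the whale losses and it no longer works.
```

The point of the toy calculation is just that whale losses are the binding variable; the other two terms only get paid out of that pot.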

This would make whales the primary customers, and would motivate Manifold to design the system to attract as much unskilled-forecaster money as possible, which doesn't seem to jibe well with its prosocial objectives. Cf. the conflict in "free-to-play" video game design between design that extracts maximum funds from whales and creating a quality game and experience generally.

While it certainly can be appropriate to criticize religious beliefs, the last sentence feels quite gratuitous and out of left field. [I assume/hope that "Quackerism" is either a typo or a group I've never heard of.]

For me, "zakat being compatible with EA" means "its possible to increase the impact of zakat and allocate it in the most cost-effective way" [ . . . .]

Indeed, effective giving being subject to donor-imposed constraints is the norm, arguably even in EA. Many donors are open only to certain cause areas, or to certain levels of risk tolerance, or to projects with decent optics, etc. Zakat compliance does not seem fundamentally different from donor-imposed constraints that we're used to working within.

Although I have mixed feelings on the proposal, I'm voting insightful because I appreciate that you are looking toward an actual solution that at least most "sides" might be willing to live with. That seems more insightful than the Forum's standard response, which soon devolves into rehashing fairly well-worn talking points every time an issue like this comes up.

My recollection is that the recent major scandals/controversies were kickstarted by outsiders as well: FTX, Bostrom, Time and other news articles, etc. I don't think any of those needed help from the Forum for the relevant associations to form. The impetus for the Nonlinear situation was of internal origin, but (1) I don't think many on the outside cared about it, and (2) the motivation to post seemed to be protecting community members from perceived harm, not reputational injury.

In any event, this option potentially works only for someone's initial decision to post at all. Once something is posted, simply ignoring it looks like tacit consent to what Manifest did. Theoretically, everyone could simply respond with: "This isn't an EA event, and scientific racism is not an EA cause area" and move on. The odds of that happening are . . . ~0. Once people (including any of the organizers) start defending the decision to invite on the Forum, or people start defending scientific racism itself, it is way too late to put the genie back in the bottle. Criticism is the only viable way to mitigate reputational damage at that point.

But insofar as people think that Manifest's actions were ok-ish, it's mostly sad that they are associated with EA and make EA look bad, [ . . . .]

To clarify my own position, one can think Manifest's actions were very much not okay and yet be responding with criticism only because of the negative effects on EA. Also, I would assert that the bad effects here are not limited to "mak[ing] EA look bad."

There's a lot of bad stuff that goes on in the world, and each of us has only a tiny amount of attention and bandwidth in relation to its scope. If there's no relationship to one of my communities, I don't have a principled reason for caring more about what happens at Manifest than I do about what happens in the (random example) Oregon Pokemon Go community. I wouldn't approve if they invited some of these speakers to their Pokemon Go event, but I also wouldn't devote the energy to criticizing.

If you have any good ideas on how to build a reputational firewall, I think most of us would be all ears. I think most of the discussants would be at least content with a world in which organizations/people could platform whoever they wanted but any effects of those decisions would not splash over to everyone else. Unfortunately, I think this is ~impossible given the current structure and organization of EA. There is no authoritative arbiter of what is/isn't EA, or is/isn't consistent with EA. Even if the community recognized such an arbiter, the rest of the world probably wouldn't.

I'm not aware of Manifest (or even Manifold) receiving funding from Open Phil, although Manifold did receive significant funding from an EA-linked funder (FTXFF).
