Jason

13003 karma · Joined Nov 2022 · Working (15+ years)

Bio

I am an attorney in a public-sector position not associated with EA, although I cannot provide legal advice to anyone. My involvement with EA has so far been mostly limited to writing checks to GiveWell and other effective charities in the Global Health space, as well as some independent reading. I had occasionally read the Forum and was looking for ideas for year-end giving when the whole FTX business exploded . . . 

How I can help others

As someone who isn't deep in EA culture (at least at the time of writing), I may be able to offer a perspective on how the broader group of people with sympathies toward EA ideas might react to certain things. I'll probably make some errors that would be obvious to other people, but sometimes a fresh set of eyes can be helpful.

Posts
2

Sorted by New
6 · Jason · 1y ago · 1m read

Comments
1440

Topic contributions
2

However, from review of what I have read, it seems as if he acted from a sincere desire to better the world and did so to the best of his (quite poor) judgment.

Although none of us can peer into SBF's heart directly, I think a conclusion that he acted from mixed motives is better supported by the evidence. It would take a lot to convince me that someone who was throwing money around like SBF on extravagances (or a $16.4MM house for his parents) was not motivated at least in considerable part by non-benevolent desires. 

If one thinks he viewed luxuries bought as part of his fraudulent enterprise as ultimately altruistic -- because what makes SBF productive --> makes FTX richer --> makes the world better through philanthropy -- then we have a framework under which it is impossible to categorize his motives, because any behavior can be recast as ultimately motivated by altruism. The base rate of fraudsters being motivated by personal gain is very high, so -- unless there's a clear way to verify the absence of a personal-gain motivation -- I am doubtful we should assume it was absent. 

I can't speak for the disagrees (of which I was not one), but I was envisioning something like this:

You are one of ten trial judges in country X, which gives a lot of deference to trial judges on sentencing. Your nine colleagues apply a level of punitiveness that you think is excessive; they would hand out 10 years' imprisonment for a crime that you -- if not considering the broader community practices -- would find warrants five years. Although citizens of country X have a range of opinions, the idea of sentencing for 10 years seems not inconsistent with the median voter's views. The other judges are set in their ways and have life tenure, so you are unable to affect the sentences they hand down in any way. Do you:

(a) sentence to five years, because you think it is the appropriate sentence based on your own judgment;

(b) impose a ten-year sentence that you find excessive, because it prevents the injustice of unequal treatment based on the arbitrary spin of the assignment wheel; or

(c) split the difference, imposing somewhere between five and ten years, accepting both that you will find the sentence too high and that there is an unwarranted disparity, but limiting the extent to which either goal is compromised.

I would be somewhere in camp (c), while Ben sounds like he may be closer to camp (a). I imagine many people in camp (b) would disagree-vote Ben's comment, while people in camp (c) might agree, disagree, or not vote.

Totally fair! I think part of my reasoning here relies on the difference between "a sentence I think is longer than necessary for the purposes of sentencing" (which I would not necessarily classify as an "evil" in the common English usage of that term) and an "unhinged" result. I would not support a consistent sentence if it were unhinged (or even a close call), and I would generally split the difference in various proportions if a sentence fell somewhere between those two points.

It's a little hard to define the bounds of "unhinged," but I think it might be vaguely like "no reasonable person could consider this sentence to have not been unjustly harsh." Here, even apart from the frame of reference of US sentencing norms, I cannot say that any reasonable person would find throwing the book at SBF here to have been unjustly harsh in light of the extreme harm and culpability.

I agree that the donors should feel free to disassociate themselves from whatever they want, though in this case how the castle is being handled is a decision by EV, the most central EA organization. 

Alexander Berger wrote yesterday:

Another place where I have changed my mind over time is the grant we gave for the purchase of Wytham Abbey, an event space in Oxford. 

[. . . .]

Because this was a large asset, we agreed with Effective Ventures ahead of time that we would ask them to sell the Abbey if the event space, all things considered, turned out not to be sufficiently cost-effective. We recently made that request; funds from the sale will be distributed to other valuable projects they run. 

Claire had previously noted that "(Proceeds from a sale would be used as general funding within EVF, and that funding would replace some of our and other funders’ future grants to EVF.)" So reallocating the funds locked up in Wytham to replace some of OP's general support to EVF, if OP wanted out, seems to have been the understanding from the get-go.

Based on that, it sounds like the sale was EVF's decision only insofar as it could have refused, damaged its relationship with its predominant funder, and found some other way to plug the resulting massive hole in its budget. In other words: not really. And EVF would still have needed to come up with a non-OP funding source for operating Wytham; surely it was not going to get OP's funds to directly or indirectly do so after refusing a divestment request that it had agreed in advance OP could make.

He got off easy, in my opinion. I wrote earlier here about the need for general deterrence, as well as enhancement for certain aggravating factors like witness tampering and perjury.

For non-U.S. readers, the sentence may seem pretty harsh; the actual time to serve may end up roughly equal to that served by the median UK single murderer who did not bring a weapon to the scene and whose conduct did not involve any other statutory aggravating factor. But, compared to what the US criminal justice system regularly hands out for various levels of moral culpability and harm caused, he got (and many white-collar criminals get) significantly less than his culpability and the harm he caused would predict. I think "rich people crime" getting a significantly less stern response than ordinary crime is damaging to the rule of law and the social contract in this country. 

I also feel little pity for him, at least relative to the median US convicted individual. A large number of people who receive significant prison sentences experienced great childhood trauma, suffer from significant mental illness that contributed to their offenses, and had relatively few good options in life. SBF had great privilege and could choose among an extremely broad range of lawful and attractive paths in life. I don't think anyone has suggested that any mental illness or autism was contributory to his offense.

So in this case, my desire for consistency and inter-defendant fairness trumps the broader concerns I have about the US system being too punitive.

I think in general, our research is pretty unusual in that we are quite willing to publish research that has a fairly limited number of hours put into it. Partly, this is due to our research not being aimed at external actors (e.g., convincing funders, the broader animal movement, other orgs) as much as aimed at people already fairly convinced on founding a charity and aimed at a quite specific question of what would be the best org to found. We do take an approach that is more accepting of errors, particularly ones that do not affect endline decisions connected directly to founding a charity. 

Do you think there are additional steps you could/should take to make this philosophy / these limitations clearer to would-be secondary users who come across your reports?

I strongly support more transparency and more release of materials (including less polished work product), but I think it is essential that the would-be secondary user is well aware of the limitations. This could include (e.g.) noting the amount of time spent on the report, the intended audience and use case for the report, the amount of reliance you intend that audience to place on the report, any additional research you expect that intended audience to undertake before relying on the report, and the presence of any significant issues / weaknesses that may be of particular concern to either the intended audience or anticipated secondary users. If you specifically do not intend to correct any errors discovered after a certain time (e.g., after the idea was used or removed from recommended options), it would probably be good to state that as well.

There is a widely held view in the animal research community that CE's reports on animal welfare consistently contain serious factual errors

To the extent this view is both valid and widely held, and the reports are public, it should be possible to identify at least some specific examples without compromising your anonymity. While I understand various valid reasons why you might not want to do that, I don't think it is appropriate for us to update on a claim like this from a non-established anonymous account without some sort of support.

Fair, but there was (and arguably still is) a disconnect here between the net karma and the number of comments (it was about 0.5 karma-per-comment (kpc) when I posted my comment), as well as between the net karma and the evidence that a number of users actually decided the Wenar article was worth reading (based on their engagement in the comments). I think it's likely there is a decent correlation between "should spend some time on the frontpage" / "should encourage people to linkpost this stuff" on the one hand and "this is worth commenting on" / "I read the linkposted article" on the other.

The post you referenced has 0 active comments (1 was deleted), so the kpc is NaN and there is no evidence either way about users deciding to read the article. Of course, there are a number of posts that I find over-karma'd and under-karma'd, but relatively few have the objective disconnects described above. In addition, there is little reason to think your post received (m)any downvotes at all -- its karma is 16 on 7 votes, as opposed to 33 on 27 for the current post (as of me writing this sentence). So the probability that its karma has been significantly affected by a disagree-ergo-downvote phenomenon seems pretty low. 

attacks by any sufficiently large online mob

To the extent this is an implied characterization of what happened here, I find it unlikely to be an apt one. It is unlikely that, e.g., EVF and/or OP made an optics-based decision on account of random posters on X. I also see no reason to conclude that the decisionmakers were affected by what their friends thought. Rather, I think the decisionmakers concluded that the expected state of the world was better if EV divested Wytham. For the same reason, I think the reference to the "primary pathology of most of the world's charity landscape, where vanity projects and complicated signaling games dominate where donations go" is overdone. Even if we assume that continued operation was more economically advantageous, this is a project on the periphery of what EA is, not an object-level issue. That reputational effects may have overruled a cost-effectiveness analysis that disregarded those effects in this particular case does not update me on the probability that vanity and signaling will "dominate," or even play a major role, in EA funding decisions writ large.

As a practical matter, funders have a huge influence on organizational operations. (This isn't wholly unique to the charitable world: customers and investors have somewhat analogous influences on for-profits.) Giving some weight to the views of future potential funders -- who may be less likely to give to a movement that remains linked to the "castle" -- does not strike me as fundamentally different from "letting" current funders' views and preferences have as much weight as they do. 

To the extent criticism is directed at Dustin, Cari, or Open Phil -- the EA community does not own its donors' money, and I see no basis for demanding that they continue to associate themselves with the "castle" if they do not wish to do so. Unless there's another donor who is willing to incur the capital and operating costs of Wytham, I don't see any potential room for criticizing EVF itself here. Of course, anyone who thinks Wytham should be reopened is welcome to fundraise for purchasing it or a similar building. 

I also submit that wise stewardship and leadership of a social movement includes some consideration of morale amongst the rank and file. I'm guessing that some community builders whose funding has been cut due to financial circumstances may have been salty about "the castle" running while they were being asked to work with fewer resources. They probably were losing some effectiveness -- and morale -- through having to defend Wytham. The whole situation likely contributed to some people disengaging and/or not engaging. If these sorts of effects should not be considered, then I think there is much else in the meta world that could stand a reevaluation.

Finally, I'm more willing to weight optics on meta stuff than on object-level concerns; I think it would be much more epistemically dangerous to (e.g.) refuse to value farmed animals because that isn't seen as legitimate by certain others than it is to sell the "EA castle" because it is getting in the way of maintaining public respect and effectiveness. Moreover, in your GiveWell hypo, the project rejection on PR grounds would be corrosive to GiveWell's function and value proposition (to be an unbiased, objective recommender in the areas it operates in) in a way that is less true in meta (where securing more money, talent, support, and other resources for object-level work is at least a key penultimate objective).

Very meta observation: in the context of a linkpost with low net positive karma, the primary message conveyed by a downvote may be "I don't think posting this added any value to the Forum." The article's author is a Stanford prof, and Wired is not a small-potatoes publication. There seems to be value in people being aware of it and being given the option to read if they see fit. It appears to have enough substance that there's decent engagement in the comments. To the extent that one wishes to convey that the article itself is unconvincing, I would consider the disagree button over the downvote button.

Thanks for sharing this, Arden.
