Graduate student at Johns Hopkins. Looking for part-time work.
I don't have anything to add about the intra-cause effectiveness multiplier debate. But much of the multiplier over the average charity comes simply from very poor cause selection. So while I applaud OP for wanting rigorous empirical evidence, some comparisons don't require peer-reviewed studies. We can still reason well in the absence of easy quantification.
Dogs and cats vs. farmed-animal causes is a great example, but animal shelters vs. GHD is just as tenable a comparison.
This isn't an esoteric point; a substantial share of donations simply go to bad causes: poverty alleviation in rich countries (where not politically or policy directed), most mutual-aid campaigns, feeding or clothing the poor in the rich world, most rich-world DEI-related activism lacking political aims (movement building or policy work is at least more plausible), most ecological efforts, undirected scholarship funds, the arts.
I'm comfortable suggesting that any of these is at least 1,000x less cost-effective.
Hot take, but political violence is bad and will continue to be bad for the foreseeable future. That's all I came here to say, folks; have a great rest of your day.
Sort of. But claiming that you are an EA organization is at least 80% of what makes you one in the eyes of the public, and it accounts for much of employees' self-identification too. For example: there's a big difference between a company that happens to be full of Mormons and a company full of Mormons that calls itself "a Mormon company".
No. Just deflect, which, admittedly, is difficult to do, but CEOs do it all the time. Ideally she would have been clear about her own personal relationship with EA and then moved on. Insofar as she was (or seemed) dishonest here, it didn't help; the Wired article is proof of that.
It's hard to pinpoint a clear line not to cross, but something like "this is an EA company" would be one, as would "we are guided by the values of the EA movement".
No; it's best if individuals are truthful. But presidents of companies aren't just individuals; does that mean they should lie? Still no. It just means they should be selective about whom and what they associate with.

I mentioned an "unnecessary news media firestorm", but the issue is much broader. Anthropic is a private corporation, and its fiduciary duty is to its shareholders. "Public Benefit" corporation aside, it is a far different entity from any EA non-profit. I'm not an expert, but I think history shows that it is almost always a bad idea for private companies to claim allegiance to anything but the most anodyne social goals. It's bad for the company and bad for the espoused social goals or movement.

I'm very much pro-cause-neutrality in EA; the idea that a charity might suddenly realize it's not effective enough, choose to shut down, and divert all its resources elsewhere? Awesome! Private companies can't do this. Even a little bit of this is antithetical to the incentive structure they face.
As for your second response, I agree 100%.
My two cents is that "brand consistency" is interesting, because a club's brand roughly reflects what strain of vegan club it is: whether it's associated with particular activist networks, whether it's more vegetarian than vegan, and so on. The level of inconsistency is also indicative of a lack of coordination across groups.
My experience in university was that the local club was a bit of an awkward blend of a social club and people with a particular activist agenda (very visible demonstrations against animal labs). In a sense, the career-building approach of Alt Protein Projects or the cause agnosticism of EA groups may be better at attracting members. But I'm not sure.
Giving this an "insightful" because I appreciate the documentation of what is indeed a surprisingly close relationship with EA. But also a disagree because it seems reasonable to be skittish around the subject ("AI Safety" broadly defined is the relevant focus, adding more would just set-off an unnecessary news media firestorm).
Plus, I'm not convinced that Anthropic has actually engaged in outright deception or obfuscation. This seems like a single slightly odd sentence by Daniela, nothing else.
I actually agree with a lot of this: we probably won't intend to make them sentient at all, and it seems likely that we'll do so accidentally, or that we simply won't know whether we have.
I'm mildly inclined to think that if ASI knows all, it can tell us when digital minds are or aren't conscious. But it seems very plausible that we either don't create full ASI, or that we do but enter a disempowerment scenario before we can rethink our choices about creating digital minds.
So yes, all of that is reason to be concerned in my view. I just depart slightly from your second-to-last paragraph. To put a number on it, I think that scenario is at least half as likely as minds that are generally happy. Consciousness is a black box to me, but I think we should, as a default, put more weight on a basic mechanistic theory: positive valence encourages us toward positive action, while negative valence drives us away from inaction or apathy. The fact that we don't observe any animals that seem dominated by one or the other suggests there is some sort of optimal equilibrium for goal fulfillment; and the fact that AI goals differ in kind from evolution's reproductive-fitness goals doesn't seem like an obviously meaningful difference to me.
Part of your argument centers on "giving" them the wrong goals. But goals necessarily mean sub-goals; shouldn't we expect the interior life of a digital mind to be in large part about its sub-goals, rather than just its ultimate goals? And if a goal is so intractable that the mind can't even make progress, wouldn't it just stop outputting? Maybe there is suffering in that, but surely not unending suffering?
There's a lot of good, old, semi-formal content on the GiveWell blog (https://blog.givewell.org/). If you do some searches, you may be able to find the subject touched on.
I'm not sure whether they've done any formal review of the subject, however.