
Jason

18102 karma · Joined · Working (15+ years)

Bio

I am an attorney in a public-sector position not associated with EA, although I cannot provide legal advice to anyone. My involvement with EA has so far been mostly limited to writing checks to GiveWell and other effective charities in the Global Health space, as well as some independent reading. I have occasionally read the forum and was looking for ideas for year-end giving when the whole FTX business exploded . . . 

How I can help others

As someone who isn't deep in EA culture (at least at the time of writing), I may be able to offer a perspective on how the broader group of people with sympathies toward EA ideas might react to certain things. I'll probably make some errors that would be obvious to other people, but sometimes a fresh set of eyes can help bring a different perspective.

Comments (2159)

Topic contributions (2)

<<If the cost of labour is low in a poor country, so too is the lower cost of implementing safety.>>

That makes sense for some safety measures, but not all. If the cost is borne in terms of lost worker productivity, then it makes sense -- a safety measure resulting in a 5 percent productivity loss costs 1/10 as much where the cost of labor is 1/10 as expensive. But there are other kinds of safety costs as well -- imagine a requirement that a factory have one automated external defibrillator (AED) on site for every X workers. That isn't going to be meaningfully cheaper for the factory in the developing country.
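The distinction above can be sketched in a few lines of arithmetic. This is a purely illustrative model with made-up numbers (wage bills, AED price, and the one-per-100-workers ratio are all assumptions, not sourced figures):

```python
import math

def productivity_cost(annual_wage_bill: float, productivity_loss: float) -> float:
    """Cost of a safety measure paid for in lost worker output -- scales with wages."""
    return annual_wage_bill * productivity_loss

def equipment_cost(workers: int, workers_per_unit: int, unit_price: float) -> float:
    """Cost of a fixed-equipment rule (one unit per X workers) -- wage-independent."""
    return math.ceil(workers / workers_per_unit) * unit_price

# Same 500-worker factory, 10x difference in wage bills:
rich_country = productivity_cost(5_000_000, 0.05)  # 250,000 -- scales with wages
poor_country = productivity_cost(500_000, 0.05)    # 25,000 -- 1/10 the cost
aed_rule = equipment_cost(500, 100, 1_500)         # 7,500 -- identical in both countries
```

The productivity-based cost drops tenfold with the wage bill, while the AED requirement costs the factory in the developing country exactly as much as its rich-country counterpart.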

Many commitments are not legally binding -- generally, you can say that you're going to do something and then change your mind without any sort of legal penalty. Any other rule would lead to even more litigation than we already have. What would the theory be for Target's promises in 2016 being binding?

The document I saw doesn't look like a contract -- I don't see any evidence of a counterparty who provided consideration in exchange for Target's promises. There are circumstances in which a contract-like claim will lie despite the absence of an actual contract (e.g., various forms of estoppel). But courts are hesitant to apply those circumstances broadly. Among other things, we'd usually be looking for a litigant who reasonably relied on a clear and definite promise to their detriment and suffered a clear-cut injury as a result. The potential injuries here strike me as less than clear-cut, and the promise as less than clear. "Available" is vague to my ears, rather than clear and definite.

One needs to consider the harmful effects on other workers -- allowing goods to come into your market that were produced with abysmal safety standards puts pressure on other employers and their regulators to cut corners on safety. Pro-sweatshop logic is stronger at establishing that the sweatshop is not worse than the alternatives for the workers than at establishing its superiority to those alternatives. So the harms to non-sweatshop workers could outweigh any possible minor gains to the sweatshop workers.

I suspect most people are more sympathetic to developed-country workers who object to being undercut by unsafe labor than to those undercut by merely inexpensive labor. There's an understanding that there are certain things you shouldn't be expected to do to keep your job -- agreeing to work in unsafe conditions, sleeping with the boss, voting a specific way -- and that society is going to enforce those boundaries. The right to safe working conditions may not mean very much in practice if your employer can be run out of business by factories that outcompete it through unsafe labor.

I'm somewhere in the middle. I would probably enforce certain minimum safety standards, but they wouldn't be full-strength developed country standards. 

I can't see a compelling reason to impose the specific safety standards of (e.g.) the United States on factories in developing countries. Regulators consider the value of a statistical life in deciding whether to impose a specific requirement -- and at least one federal agency has that value pegged at over $13MM. The underlying methodology reflects the economic situation in the United States -- e.g., by asking people how much they would be willing to pay to reduce their risk of death (or demand in order to accept an increased risk of death), by looking at how much higher pay is in high-risk occupations like mining, by looking at lost earnings, and so on. There is no reason to believe those views are universally correct or wise in other contexts.
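One common benefit-transfer approach in this literature scales a source-country VSL by the income ratio raised to an assumed income elasticity. A minimal sketch, with the income ratio and elasticity values chosen purely for illustration:

```python
def transfer_vsl(vsl_source: float, income_ratio: float, elasticity: float = 1.0) -> float:
    """Benefit-transfer sketch: scale a source-country VSL by the
    (target income / source income) ratio raised to an assumed elasticity."""
    return vsl_source * income_ratio ** elasticity

# Illustrative only: a $13MM US VSL transferred to a country with 5% of US income.
unit_elasticity = transfer_vsl(13_000_000, 0.05)        # 650,000
higher_elasticity = transfer_vsl(13_000_000, 0.05, 1.5)  # ~145,000
```

The point of the sketch is just that the implied "appropriate" level of safety spending is extremely sensitive to local economic conditions and to the assumed elasticity -- which is the reason to doubt that the US number travels well.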

Employers in societies that do not agree with American sensibilities are not engaging in "bad behaviour" merely because they follow local norms rather than American ones. At least for countries with reasonably functioning democratic systems (or which we otherwise think are doing a respectable job governing in the interests of their citizens), it's not clear to me why we shouldn't usually defer to the level of safety that the society has determined to be generally appropriate. (By generally appropriate, I mean across the society as a whole -- I would not defer to regulations for industries that were more permissive than what was required in other regulatory domains).

This strikes me as having some potentially adverse consequences. Although you suggest a possible extension to positive adjectives in the addendum, I am skeptical that this would be workable as a community norm. So I'll focus on the main proposal here.

I submit that it's generally undesirable in a truthseeking community to make negative evaluations more difficult or costly to express than positive ones. This is likely to tilt the field in favor of the latter. And there are already some nudges to skew positive -- both psychological (e.g., many people would rather avoid conflict) and structural (e.g., being positive is generally a better strategy in life for winning friends and influence). There are, of course, other social circumstances in which slanting the field toward positive feedback is desirable.

As others have implied, a Forum post is often intended to express a view about the nature of reality (~ a judgment) to third parties. To the extent there is "winning," the theory of change of such a post is that third parties update their views in a way that more closely tracks the way things are. That theory of change is harder to accomplish if one refrains from expressing a view about the nature of reality. And I don't think phrasing things as "I feel this model is badly flawed" would help things -- the reader understands that this is equivalent to a claim that the model is badly flawed.

That's not intended as a broader criticism of NVC, a topic on which I have no general opinion. But it does strike me as emphasizing ends like meeting participants' emotional needs and maintaining relationships rather than being focused on community truthseeking. I'm not someone who thinks that community norms should always maximize truthseeking over all other relevant considerations, but it is a rather important consideration (especially in the context of criticism of something with millions of page views, YouTube video views, etc.).

(I remembered that I had drafted this and then forgotten about it; decided it would be better to post late than never)

I wonder how much some of these features are core differences as opposed to adaptations to a specific ecological niche. If that's so, keeping it in mind may be helpful in conversations between the two movements, and in learning from each other.

For example, I suspect most people would not view the relationship with Open Phil as a core defining feature of EA. But I suspect that the dominance of a few highly-aligned funders helps explain things that could be seen as core differences. 

Without suggesting that this is the major explanation, a more inclusive and supportive approach is relatively better adapted to some funding environments than others. If someone has an idea and gets funding from the Usual EA Sources, other EAs understandably see the opportunity cost as pretty high. If someone has an idea and gets funded by a Standard American Foundation, both SMAs and EAs probably would assess the opportunity cost as much lower. (While I don't know where SMA folks anticipate getting funding, I get the sense that a larger proportion will come from less closely aligned funders than is the case in EA.) It's easier to be supportive when the opportunity costs are lower.

Likewise, EA experienced atypical conditions at a particular point in its development -- having high levels of funding relative to its number of adherents -- and I suspect that shaped things that come across as more "core" nowadays. SMA is younger, and will likely be shaped in important ways by internal and external events during critical developmental phases.

"we need to find ways to shut down animal agriculture for good"

There have been EA-aligned alternative protein efforts, so I don't think it would be correct to say that this impulse is absent from EA animal advocacy. That being said, I wouldn't be surprised if it ends up somewhat more prominent in SMA, although it's a young movement and so much is unknown.

To the extent that SMA ends up considerably less focused on measurables, it will be interesting to see how it deals with some of the potential biases that a focus on measurement helps mitigate (e.g., the risk of focusing on what feels good subjectively / or on what is higher status socially / or what suits one's pre-existing ideological sympathies). My own take on EA -- which is probably not the orthodox view -- is that running all the world's charitable activity through an EA lens would not be a good idea, but that some of that activity should be, and that amount that should be is higher than the amount that currently is.

I asked ChatGPT what the average marketing spend of auto manufacturers is (it said 7-8%) and the average fundraising spend of the largest US charities (it said ~10%, which is consistent with my intuition). While I'm not endorsing these percentages as optimal for auto manufacturers or non-EA charities -- much less advocating that they should be applied to EA charities -- they could provide some sort of ballpark starting point.

Automotive marketing, as I understand it, is considerably about creating vague positive brand associations that will pay off when the consumer is ready to make a purchase decision. That's a viable strategy in part because there aren't too many differences between (e.g.) a Ford and a GM truck. It's not obvious to me that would-be EA donors would respond well to that kind of campaign, and this may limit the extent to which their marketing budgets and strategies serve as a useful guide here.

This would benefit from stating a bottom line up front: e.g., Using Shapira's Doom Train analytic framework, I estimate a 31% p(doom). However, after adjustments -- especially for the views of superforecasters and AI insiders -- my adjusted p(doom) is 2.76%.

More substantively, I suggest your outcome is largely driven by the Bayes factors -- I think the possible range of outcomes is 0% to 9% on the stated factors. And my guess is that you might have chosen greater or lesser factors depending on where your own analysis ended up -- so the range of plausible outcomes is even less as a practical matter.

 That's one reason I recommend the BLUF here -- someone who doesn't take the 24 minutes to read the whole thing needs to understand how much of a role the Bayes factors are playing in the titular p(doom) estimate vs. the Doom Train methodology.
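To make concrete how much leverage multiplicative Bayes factors have over the headline number, here is the standard odds-form update. The 31% prior comes from the post; the two factor values below are my own illustrative assumptions, not figures from the post:

```python
from math import prod

def apply_bayes_factors(prior_prob: float, factors: list[float]) -> float:
    """Convert a probability to odds, multiply by each Bayes factor,
    and convert back to a probability."""
    odds = prior_prob / (1 - prior_prob)
    odds *= prod(factors)
    return odds / (1 + odds)

# Illustrative only: a 31% inside-view estimate pulled down by two assumed
# downward adjustments (deference to superforecasters and to AI insiders).
adjusted = apply_bayes_factors(0.31, [0.25, 0.25])  # roughly 0.027
```

Two modest downward factors move the estimate by an order of magnitude, which is why a reader needs to know up front how much of the final number the factors are doing.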

I think this critique is stronger as applied to other posts in which Vasco's comment runs a more significant risk of derailing the original poster's topic and intended discussion. Here, I think Vasco's point can be understood as somewhat complementary to the original idea. If dairy is not that bad, then the possibility that anti-dairy advocacy could have undesirable downstream effects on other animals may be an additional reason for deprioritizing such advocacy. In contrast, I think posting a comment like this in (e.g.) a global-health thread runs an elevated risk of the "discussion . . . descending into a discussion about moral weights, or the effect of every single intervention on nematodes."
 
