
Noah Scales

500 karma

Bio

All opinions are my own unless otherwise stated. Geophysics and math graduate with some web media and IT skills.

How others can help me

I am interested in programming, research, data analysis, or other part-time work useful to an organization with a charitable mission. If you have part-time or contract work available, let me know.

How I can help others

I am open to your communications and research questions. If you think I could help you find an answer on some topic, send me a message.

Comments (303)

Yes, I took a look at your discussion with MichaelStJules. There is a difference in reliability between:

  • probability that you assign to the Mugger's threat
  • probability that the Mugger or a third party assigns to the Mugger's threat

I'm not a fan of subjective probabilities, though that could be because I don't make a lot of wagers.

There are other ways to qualify or quantify differences in expectation of perceived outcomes before they happen. One way is by degree or quality of match of a prototypical situation to the current context. A prototypical situation has one outcome. The current context could allow multiple outcomes, each matching a different prototypical situation. How do I decide which situation is the "best" match?

  • fuzzy matching: a percentage quantity showing the degree of match between the prototype and the actual situation. This seems the least intuitive to me. The conflation of multiple types and strengths of evidence (of match) into a single numeric scale (for example, this bit of evidence is worth 5%, that bit is worth 10%) is hard to justify.
  • a Hamming distance: each binary digit is a yes/no answer to a question about the situation. The questions could be partitioned, the partitions ranked, and a Hamming distance calculated for each ranked partition between the answers describing the actual situation and the answers that identify a prototypical situation.
  • a decision tree: the actual context could be checked for specific attribute values, with different paths through the tree ending in "matches prototypical situation X" or "doesn't match prototypical situation X". The decision tree is the most intuitive option to me, and it does not involve any sums.

In this case, the context is one where you decide whether to give any money to the mugger, and the prototypical context is a payment for services or a bribe. If it were me, the fact that the mugger is a mugger on the street yields the belief "don't give" because, even if I gave them the money, they wouldn't do whatever it is that they promise anyway. That information would appear in a decision tree, somewhere near the top, as "person asking for money is a criminal? (Y/N)".
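
To make the decision-tree idea concrete, here is a minimal sketch in Python. It is my own illustration, not something from the discussion above: the attribute names, the prototypes, and the single "is the asker a criminal?" check near the top are hypothetical, and a real tree would branch on many more attributes.

```python
# Minimal sketch: matching an actual situation to a prototypical one.
# Attribute names and prototypes are hypothetical illustrations.

def classify(situation: dict) -> str:
    """Walk a tiny decision tree and return the best-matching prototype."""
    if situation.get("asker_is_criminal"):       # near the top of the tree
        return "mugging (don't give)"
    if situation.get("service_promised"):
        return "payment for services"
    if situation.get("favor_from_official"):
        return "bribe"
    return "no prototype matched"

def hamming_distance(answers: list[bool], prototype: list[bool]) -> int:
    """Alternative approach: count yes/no answers that differ from a prototype."""
    return sum(a != p for a, p in zip(answers, prototype))

if __name__ == "__main__":
    mugger = {"asker_is_criminal": True, "service_promised": True}
    print(classify(mugger))  # -> mugging (don't give)
    print(hamming_distance([True, True, False], [False, True, False]))  # -> 1
```

The tree settles the match along a single path of attribute checks, with no sums, which is what makes it the most intuitive of the three options to me.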

Simple and useful, thanks.

In my understanding, Pascal's Mugger offers a set of rewards with risks that I estimate myself. Meanwhile, I need a certain amount of money to give to charity in order to accomplish something. Let's assume that I don't have sufficient money for that donation, and have no other way to get that money. Ever. I don't care to spend the money I do have on anything else. Then, thinking altruistically, I'll keep negotiating with Pascal's Mugger until we agree on an amount that the mugger will return which, if I earn it, is sufficient to make that charitable donation. All I've done is establish what amount to get in return from the Mugger before I give the mugger my wallet cash. Whether the mugger is my only source of extra money, whether there is any other risk in losing the money I do have, and whether I already have enough money to make some difference if I donate, are not in question. Notice that some people might object that my choice is irrational. However, by assumption, the mugger is my only source of money, I don't have enough money otherwise to do anything that I care about for others, and I'm not considering the consequences to me of losing the money.

In Yudkowsky's formulation, the Mugger is threatening to harm a bunch of people, but with very low probability. OK. I'm supposed to arrive at an amount that I would give to help those people threatened with that improbable risk, right? In the thought experiment, I am altruistic. I decide what the probability of the Mugger's threat is, though. The mugger is not god, I will assume. So I can choose a probability of truth p < 1/(number of people threatened by the mugger), because no matter how many people the mugger threatens, the mugger doesn't have the means to do it, and the probability p declines as the number of people the mugger threatens increases, or so I believe. In that case, aren't people better off if I give that money to charity after all?
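
As a small worked illustration of that arithmetic (the numbers are mine, chosen only for the example, and not part of the thought experiment): if the mugger threatens N people and my credence in the threat is p < 1/N, then the expected number of people harmed, p × N, is less than one, so an ordinary donation that reliably helps even a couple of people has the higher expected impact.

```python
# Toy expected-value comparison: the mugger's improbable threat vs. an
# ordinary donation. All numbers are hypothetical illustrations.

def expected_people_helped(probability: float, people_affected: float) -> float:
    return probability * people_affected

n_threatened = 10**12                # people the mugger claims to threaten
p_threat = 1 / (10 * n_threatened)   # my credence, chosen so that p < 1/N
p_donation = 0.95                    # credence that an ordinary donation helps
n_helped_by_donation = 2             # people the ordinary donation helps

mugger_ev = expected_people_helped(p_threat, n_threatened)              # 0.1
donation_ev = expected_people_helped(p_donation, n_helped_by_donation)  # 1.9

print(f"Mugger EV:   {mugger_ev:.2f} people helped in expectation")
print(f"Donation EV: {donation_ev:.2f} people helped in expectation")
```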

You wrote,

"I can see it might make sense to set yourself a threshold of how much risk you are willing to take to help others. And if that threshold is so low that you wouldn't even give all the cash currently in your wallet to help any number of others in need, then you could refuse the Pascal mugger."

The threshold of risk you refer to there is the additional selfish one that I referred to in my last comment, where loss of the money in an altruistic effort deprives me of some personal need that the money could have served, an opportunity cost of wagering for more money with the mugger. That risk could be a high threshold of risk even if the monetary amount is low. Let's say I owe a bookie 5 dollars and, if I don't repay, they'll break my legs. Even though I could give the mugger 5 dollars and, in my estimation, save some lives, I won't, because the 5 dollars is all I have and I need it to repay the bookie. That personal need to protect myself from the bookie defines that threshold of risk. Or, more likely, it's my rent money, and without it I'm turned out onto predatory streets. Or it's my food money for the week, or my retirement money, or something else that pays for something integral to my well-being. That's when that personal threshold is meaningful.

Many situations could come along offering astronomical altruistic returns, but if taking risks for those returns will incur high personal costs, then I'm not interested in those returns. This is why someone with a limited income or savings typically shouldn't make bets. It's also why Effective Altruism's betting focus makes no sense for bets whose sizes impact a person's well-being when the bets are lost. I think it's also why, in the end, EA's don't put their money where their mouths are.

EA's don't make large bets, or they don't make bets that risk their well-being. Their "big risks" are not that big, to them. Or they truly have a betting problem, I suppose. It's just that EA's claim that betting money clarifies odds because EA's start worrying about opportunity costs, but does it? I think the amounts involved don't clarify anything; they're not important amounts to the people placing bets. What you end up with is a betting culture, where unimportant bets lead to limited impact on Bayesian thinking at best, and to compulsive betting and major personal losses at worst. By the way, Singer's utilitarian ideal was never to bankrupt people. Actually, it was to accomplish charity cost-effectively, implicitly including personal costs in that calculus (for example, by scaling the percentage of income that you give to charitable causes according to your income size). Just an aside.

Hmm. Interesting, but I don't understand the locality problem. I suspect that you think of consequences not as local but as far-flung, thus involving you in weighing interests of greater significance than you would prefer for your decisions. Is that the locality problem to you?

What an interesting and fun post! Your analysis goes many directions and I appreciate your investigation of normative, descriptive, and prescriptive ethics.

The repugnant conclusion worries me. As a thought experiment, it seems to contain an uncharitable interpretation of principles of utilitarianism.

  1. You increase total and average utility to measure increases in individual utility across an existing and constant population. However, those measures, total and average, are not adequate to handle the intuition people associate with them. Therefore, they should not be used for deciding changes in utility across a population of changing size or one containing drastic differences in individual utility. For example, there's no value in adding additional people merely to increase total utility, yet doing so will drive total utility up even if individual utility is low (see the numeric sketch after this list).

  2. You pursue egalitarianism to raise everyone's utility up to the same level. Egalitarianism is not an aspiration to lower some people's well-being while raising others'. Likewise, egalitarianism is not the pursuit of equality of utility at any utility level. Therefore, egalitarianism does not imply an overriding interest in equalizing everyone's utility. For example, there's no value in lowering others' utility to match those with less.

  3. You measure utility accumulated by existent people in the present or the future to know the utility of all individuals in a population, and that utility is only relevant to the time period during which those people exist. Those individuals have to exist in order for the measures to apply. Therefore, utilitarianism can be practiced in contexts of arbitrary changes in population, with a caveat: the consequences for others of specific changes to the population, such as someone's birth or death, are relevant to utilitarian calculations. TIP: the repugnant conclusion thought experiment only allows one kind of population change: increase. You could ask yourself whether the thought experiment says anything about the real world or the requirements of living in it.

  4. Utility is defined with respect to purposes (needs, reasons, wants) that establish a reference point: the accumulation of utility suitable for some purpose. That reference point is always at a finite level of accumulation. Therefore, to assume that utility should be maximized to an unbounded extent is an error, and speaks to a problem with some arguments for transitivity. NOTE: by definition, if there is no finite amount of accumulated utility past which you have an unnecessary amount for your purposes, then it is not utility for you.
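
To illustrate the arithmetic behind point 1 with made-up numbers: adding people whose individual utility is barely positive raises total utility while dragging average utility down, which is the mechanical move behind the repugnant conclusion. A quick sketch:

```python
# Toy figures showing total utility rising while average utility falls as
# barely-happy people are added. All numbers are hypothetical.

def total_and_average(population: list[float]) -> tuple[float, float]:
    total = sum(population)
    return total, total / len(population)

world_a = [10.0] * 1_000                        # 1,000 people with high utility
world_z = [10.0] * 1_000 + [0.1] * 1_000_000    # same, plus a million barely-happy people

for name, world in [("A", world_a), ("Z", world_z)]:
    total, average = total_and_average(world)
    print(f"World {name}: total = {total:,.0f}, average = {average:.3f}")
# World Z has the higher total but a far lower average.
```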

The repugnant conclusion does not condemn utilitarianism to disuse, but points 1-4 seem to me to be the principles that must be treated charitably in any attempt to show that utilitarianism leads to inconsistency. I don't believe that current formulations of the repugnant conclusion are charitable to those principles and the intuitions behind them.

About steel-manning vs charitably interpreting

The ConcernedEAs state:

"People with heterodox/'heretical' views should be actively selected for when hiring to ensure that teams include people able to play 'devil’s advocate' authentically, reducing the need to rely on highly orthodox people accurately steel-manning alternative points of view"

I disagree. The ability to accurately evaluate the views of the heterodox minority depends on developing a charitable interpretation (not necessarily a steel-manning) of those views. Furthermore, if the majority cannot or will not develop such a charitable interpretation, then the heretic must put their argument in a form that the majority will accept (for example, using jargon and selectively adopting non-conflicting elements of the majority ideology). This unduly increases the burden on the person with heterodox views.

The difference between a charitably interpreted view and a steel-manned view is that the steel-manned view is strengthened to seem like a stronger argument to the opposing side. Unfortunately, if there are differences in evaluating the strength of evidence or the relevance of lines of argument (for example, due to differing experiences between the sides), then steel-manning will actually distort the argument. A charitable interpretation only requires that you accurately determine what the person holding the view intends to mean when they communicate it, not that you make the argument seem correct or persuasive to you.

Sometimes I think EA's mean "charitable interpretation" when they write "steel-manning". Other times I think that they don't. So I make the distinction here.

It's up to the opposing side to charitably interpret any devil's advocate position or heretical view. While you could benefit from including diverse viewpoints, the burden is on you to interpret them correctly, to gain any value available from them.

Developing charitable interpretation skills

To charitably interpret another's viewpoint takes Scout Mindset, first of all. With the wrong attitude, you'll produce the wrong interpretation no matter how well you understand the opposing side. It also takes some pre-existing knowledge of the opposing side's worldview, typical experiences, and typical communication patterns. That comes from research and communication skills training. Trial-and-error also plays a role: this is about understanding another's culture, like an anthropologist would. Immersion in another person's culture can help.

However, I suspect that the demands on EA's to charitably interpret other people's arguments are not that extreme. Charitable interpretations are not that hard in the typical domains where you require them. To succeed with including heterodox positions, though, the demands on EA's empathy, imagination, and communication skills do go up.

About imagination, communication skills, and empathy for charitably interpreting

EA's have plenty of imagination; that is, they can easily consider all kinds of strange views. It's a notable strength of the movement, at least in some domains. However, EA's need training or practice in advanced communication skills and argumentation. They can't benefit from heterodox views without them. Their idiosyncratic takes on argumentation (adjusting Bayesian probabilities) and communication patterns (Schelling points) fit some narrative about their rationalism or intelligence, I suppose, but they could benefit from long-standing work in communication, critical thinking, and informal logic. As practitioners of rationalism to the degree that mathematics is integral to it, I would think that EA's would have first committed their thinking to consistent analysis with easier tools, such as inference structures, setting aside word-smithing for argument analysis. Instead, IBT gives EA's an excuse not to grapple with the more difficult skills of analyzing argument structures, detailing inference types, and developing critical questions about the information gaps present in an argument. EDIT: that's a generalization, but it is how I see the impact of IBT in practical use among EA's.

The movement has not developed in any strong way around communication skills specifically, aside from a commitment to truth-seeking and open-mindedness, neither of which is required in order to understand others' views, though both are still valuable to empathy.

There's a generalization that "lack of communication skills" is some kind of remedial problem. There are communication skills that fit that category, but those skills are not what I mean.

After several communication studies courses, I learned that communication skills are difficult to develop, that they require setting aside personal opinions and feelings in favor of empathy, and that specific communication techniques require practice. A similar situation exists with interpreting arguments correctly: it takes training in informal logic and plenty of practice. Scout mindset is essential to all this, but not enough on its own.

Actually, Galef's podcast Rationally Speaking includes plenty of examples of charitable interpretation, accomplished through careful questions and sensitivity to nuance, so there's some educational material there.

Typically the skills that require practice are the ones that you (and I) intentionally set aside at the precise time that they are essential: when our emotions run high or the situation seems like the wrong context (for example, during a pleasant conversation or when receiving a criticism). Maybe experience helps with that problem, maybe not. It's a problem that you could address with cognitive aids, when feasible.

Is moral uncertainty important to collective morality?

Ahh, am I right that you see the value of moral uncertainty models as lying in their use for establishing a collective morality, given differences in the moralities held by individuals?

Yeah. I'll add:

  • Single-sourcing: Building Modular Documentation by Kurt Ament
  • Dictionary of Concise Writing by Robert Hartwell Fiske
  • The Elements of Style by William Strunk Jr.
  • A Rulebook for Arguments by Anthony Weston

There are more but I'm not finished reading them. I can't say that I've learned what I should from all those books, but I got the right idea, more than once, from them.

effectivealtruism.org suggests that EA values include:

  1. proper prioritization: appreciating scale of impact, and trying for larger scale impact (for example, helping more people)
  2. impartial altruism: giving everyone's interests equal weight
  3. open truth-seeking: including willingness to make radical changes based on new evidence
  4. collaborative spirit: involving honesty, integrity, and compassion, and paying attention to means, not just ends.

Cargill Corporation lists its values as:

  1. Do the Right Thing
  2. Put People First
  3. Reach Higher

Lockheed-Martin Corporation lists its values as:

  1. Do What’s Right
  2. Respect Others
  3. Perform with Excellence

Shell Global Corporation lists its values as:

  1. Integrity
  2. Honesty
  3. Respect

Short lists seem to be a trend, but longer lists with a different label than "values" appear from other corporations (for example, from Google or General Motors). They all share the quality of being aspirational, but there's a difference with the longer lists: they seem more closely suited to the specifics of what the corporations do.

Consider Google's values:

  • Focus on the user and all else will follow.
  • It's best to do one thing really, really well.
  • Fast is better than slow.
  • Democracy on the web works.
  • You don't need to be at your desk to need an answer.
  • You can make money without doing evil.
  • There's always more information out there.
  • The need for information crosses all borders.
  • You can be serious without a suit.
  • Great just isn't good enough.

Google's values are specific. Their values do more than build their brand.

I would like to suggest that EA values should be lengthy, and specific enough to:

  • identify your unique attributes.
  • focus your behavior.
  • reveal your preferred limitations[1].

Having explicit values of that sort will:

  • limit your appeal.
  • support your integrity.
  • encourage your honesty.

These values focus and narrow your efforts in addition to building your brand. Shell Global, Lockheed-Martin, and Cargill are just building their brands. The Google Philosophy says more and speaks to their core business model.

All the values listed as part of Effective Altruism appear to overlap with the concerns that you raise. Obviously, you get into specifics.

You offer specific reforms in some areas. For example:

  • "A certain proportion EA of funds should be allocated by lottery after a longlisting process to filter out the worst/bad-faith proposals*"
  • "More people working within EA should be employees, with the associated legal rights and stability of work, rather than e.g. grant-dependent 'independent researchers'."

These do not appear obviously appropriate to me. I would want to find out what a longlisting process is, and why employees are a better approach than grant-dependent researchers. A little explanation would be helpful.

However, other reforms do read more like statements of value or truisms to me. For example:

  • "Work should be judged on its quality..."[rather than its source].
  • "EAs should be wary of the potential for highly quantitative forms of reasoning to (comparatively easily) justify anything"

It's a truism that statistics can justify anything as in the Mark Twain saying, "There are three kinds of lies: lies, damned lies, and statistics".

These reforms might inspire values like:

  • judge work on its quality alone, not its source
  • Use quantitative reasoning only when appropriate

You folks put a lot of work into writing this up for EA's. You're smart, well-informed, and I think you're right where you make specific claims or assert specific values. All I am thinking about here is how to clarify the idea of aligning with values, the values you have, and how to pursue them.

You wrote that you started with a list of core principles before writing up your original long post? I would like to see that list, if it's not too late and you still have it. If you don't want to offer the list now, maybe later? As a refinement of what you offered here?

Something like the Google Philosophy, short and to the point, will make it clear that you're being more than reactive to problems, but instead actually have either:

  • differences in values from orthodox EA's
  • differences in what you perceive as achievement of EA values by orthodox EA's

Here are a few prompts to help define your version of EA values:

  1. EA's emphasize quantitative approaches to charity, as part of maximizing their impact cost-effectively. Quantitative approaches have pros and cons, so how to contextualize them? They don't work in all cases, but that's not a bad thing. Maybe EA should only pay attention to contexts where quantitative approaches do work well. Maybe that limits EA flexibility and scope of operations, but also keeps EA integrity, accords with EA beliefs, and focuses EA efforts. You have specific suggestions about IBT and what makes a claim of probabilistic knowledge feasible. Those can be incorporated into a value statement. Will you help EA focus and limit its scope or are you aiming to improve EA flexibility because that's necessary in every context where EA operates?

  2. EA's emphasize existential risk causes. The ConcernedEAs offer specific suggestions to improve EA research into existential risk. How would you inform EA values about research in general to include what you understand should be the EA approach to existential risk research? You heed concerns about the evaluation of cascading and systemic risks. How would those specific concerns inform your values?

  3. You have specific concerns about funding arrangements, nepotism, and revolving doors between organizations. How would those concerns inform your values about research quality or charity impact?

  4. You have concerns about lack of diversity and its impact on group epistemics. What should be values there?

You can see the difference between brand-building:

  • ethicality
  • impactfulness
  • truth-seeking

and getting specific

  • research quality
  • existential, cascading, and systemic risks
  • scalable and impactful charity
  • quantitative and qualitative reasoning
  • multi-dimensional diversity
  • epistemic capability
  • democratized decision-making

That second list is more specific, plausibly hits the wrong notes for some people, and definitely demonstrates particular preferences and beliefs. As it should! Whatever your list looks like, would alignment with its values imply the ideal EA community for you? That's something you could take another look at, articulating the values behind specific reforms if those are not yet stated or incorporating specific reforms into the details of a value, like:

  • democratized decision-making: incorporating decision-making at multiple levels within the EA community, through employee polling, yearly community meetings, and engaging charity recipients.

I don't know whether you like the specific value descriptors I chose there. Perhaps I misinterpreted your values somewhat. You can make your own list. Making decisions in alignment with values is the point of having values. If you don't like the decisions or the values, or if the decisions don't reflect the values, the right course is to suggest alterations somewhere; but in the end, you still have a list of values, principles, or a philosophy that you want EA to follow.


[1] As I wrote in a few places in this post, and taking a cue from Google and the Linux philosophy, sometimes doing one thing and doing it well is preferable to offering loads of flexibility. If EA is supposed to be the Swiss Army knife of making change in the world, there are still plenty of organizations better suited to some purposes than others; as any user of a Swiss Army knife will attest, it is not ideal for all tasks. Also, your beliefs will inform you about what you do well. Does charity without quantitative metrics inevitably result in waste and corruption? Does the use of quantitative metrics limit the applicability of EA efforts to specific types of charity work (for example, outreach campaigns)? Do EA quantitative tools limit the value of its work in existential risk? Can they be expanded with better quantitative tools (or qualitative ones)? Maybe EA is self-limiting because of its preferred worldview, beliefs, and tools. Therefore, it has preferred limitations. Which is OK, even good.

Hm, OK. Couldn't Pascal's mugger claim to actually be God (with some small probability, or very weakly plausibly) and upset the discussion? Consider basing dogmatic rejection on something other than the potential quality of claims from the person whose claims you reject. For example, try a heuristic or psychological analysis. You could dogmatically believe that claims of godliness and accurate probabilism are typical expressions of delusions of grandeur.

My pursuit of giving to charity is not unbounded, because I don't perceive an unbounded need. If the charity were meant to drive unbounded increase in the numbers of those receiving charity, that would be a special case, and not one that I would sign up for. But putting aside truly infinite growth of the perceived need for the value returned by the wager: in all wagers of this sort that anyone could undertake, they establish a needed level of utility, and they compare the risks (to whichever stakeholders) of taking the wager at that utility level against the risks of doing nothing or of wagering for less than the required level.

In the case of ethics, you could add an additional bound on the personal risk that you would endure despite the full need of those who could receive your charity. In other words, there's only so much risk you would take on behalf of others. How you decide that should be up to you. You could want to help a certain number of people, or reach a specific milestone towards a larger goal, or meet a specific need for everyone, or spend a specific amount of money, or what have you, and recognize that level of charity as worth the risks involved to you of acquiring the corresponding utility. You just have to figure it out beforehand.
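
A minimal sketch of that decision rule, with hypothetical names, numbers, and thresholds of my own choosing: the wager is only attractive if it can return the level of utility I actually need and the personal risk stays under my own pre-set bound.

```python
# Sketch of a bounded altruistic-wager rule. Names, numbers, and thresholds
# are hypothetical illustrations, not a prescription.

def accept_wager(required_return: float, offered_return: float,
                 personal_risk: float, personal_risk_bound: float) -> bool:
    """Take the wager only if it meets the needed utility level and the
    personal risk stays within the bound I set beforehand."""
    meets_need = offered_return >= required_return
    risk_acceptable = personal_risk <= personal_risk_bound
    return meets_need and risk_acceptable

# Example: I need $500 for the donation to matter and the mugger offers $10,000,
# but losing my last $5 would cost me my safety (risk 0.9 > bound 0.2): decline.
print(accept_wager(required_return=500, offered_return=10_000,
                   personal_risk=0.9, personal_risk_bound=0.2))  # -> False
```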

If, by living 100 years, I could accomplish something significant on behalf of others, though not everything that I wanted, but I would not personally enjoy that time, then that subjective judgment makes living an extra 100 years unattractive, if I'm deciding solely based on my charitable intent. I would not, in fact, live an extra 100 years for such a purpose without additional criteria being met, but I offered it for example's sake.
