socialism fundamentally confuses efficiency and equity, the two sides of the economic policy coin.
utility is roughly log(wealth), so total utility depends both on the size of the pie and on how evenly it's sliced. but some redistributive mechanisms shrink the pie: they carry what economists call "deadweight loss". e.g. if i have an apple and you have an orange, but i think an orange is worth two apples and you think an apple is worth two oranges, then we double our total utility by trading. that's a pareto improvement. but if a tax makes the trade no longer pencil out, we're stuck with the suboptimal allocation and derive less utility from the same goods.
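the arithmetic can be sketched in python; the valuations and the 1.5-unit tax are hypothetical numbers chosen to match the example:

```python
# hypothetical valuations matching the example: each of us values
# the other's fruit at twice our own.
my_value = {"apple": 1.0, "orange": 2.0}
your_value = {"apple": 2.0, "orange": 1.0}

before = my_value["apple"] + your_value["orange"]  # i hold the apple, you the orange
after = my_value["orange"] + your_value["apple"]   # we swap
print(before, after)  # 2.0 4.0: trading doubles total utility

# a hypothetical per-trade tax bigger than my gain from trading kills the trade
tax = 1.5
my_gain = my_value["orange"] - my_value["apple"]   # 1.0
trade_happens = my_gain > tax                      # False: no longer pencils out
deadweight_loss = 0.0 if trade_happens else after - before
print(deadweight_loss)  # 2.0 units of utility forgone
```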
we want to avoid these inefficient taxes in favor of taxes with zero (neutral) deadweight loss, or even negative deadweight loss where possible. chiefly these are land value taxes (zero deadweight loss, because the supply of land is fixed) and pigouvian taxes on negative externalities (negative deadweight loss, because they correct a market failure).
we want to assiduously avoid socialist prescriptions like paying people in the form of company equity (which doesn't change the equilibrium price of their labor anyway). we want to avoid price controls, e.g. rent control and the minimum wage. we want to avoid means testing and progressive marginal tax rates. just use efficient taxes and give people money, making allowance of course for non-excludable goods (public and common goods).
socialism essentially gets everything wrong on these basic econ 101 concepts.
this doesn't appear to mention what is far and away the strongest EA argument for electoral reform: the superior human welfare impact as gauged rather objectively by voter satisfaction efficiency (aka bayesian regret).
ScoreVoting.net/LivesSaved
to me, electoral reform should be like 99% of what EA focuses on. nothing else can hold a candle to it.
i prioritized the center for election science because reforming our social choice mechanisms offers the highest expected utility of any available intervention. specifically, moving to methods that maximize voter satisfaction efficiency (vse)—such as approval or score voting—yields massive downstream benefits by improving government decision-making quality. as the analysis at ScoreVoting.net/LivesSaved explains, the economic and humanitarian impact of even a slight improvement in the quality of elected officials dwarfs the impact of direct aid like malaria nets. optimizing the decision-making stack is the necessary precursor to solving other global challenges effectively.
This is clearly not true. The example I gave was foreign aid, which benefits foreigners at the expense of citizens. Since only one of these groups can vote, there is little reason to think that the preferences of this subgroup will align with overall human welfare.
that's incorrect. a rational entity's goal is to maximize the net utility of the smallest group that includes itself. genes are just trying to maximize their expected number of copies made. the appearance of "altruism" is an illusion caused by genes helping copies of themselves that happen to reside in other bodies (kin selection).
it's logically and empirically proven that you cannot actually aim for maximizing the welfare of the "universe". if you try to maximize the sum of utility, that would justify making as many new people as possible, so as not to "pre-murder" them; it would mean people should decrease their personal utility as much as possible, so long as it increases net utility; and it would even endorse killing one person if that helps cause two people to be born. whereas if you try to maximize average utility, then you want to kill people who are less happy than average. both of these are obviously untenable and don't remotely fit observed human behavior. this is arguably the most elementary fact in the whole of ethical theory.
https://plato.stanford.edu/entries/repugnant-conclusion/
i discuss all of this in my "ethics 101" primer here.
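the two failure modes above can be shown with toy numbers (the populations and utility levels here are hypothetical):

```python
# maximizing the SUM of utility prefers a huge barely-happy population
# over a small flourishing one (the repugnant conclusion)
flourishing = [8.0] * 10   # 10 people with excellent lives
sprawling = [1.0] * 100    # 100 people with lives barely worth living
print(sum(flourishing), sum(sprawling))  # 80.0 100.0: sum says add people

# maximizing the AVERAGE instead rewards removing below-average people
def average(pop):
    return sum(pop) / len(pop)

mixed = [8.0, 8.0, 8.0, 2.0]
culled = [u for u in mixed if u > average(mixed)]
print(average(mixed), average(culled))  # 6.5 8.0: average says cull
```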
This is true for most EA cause areas. Existential risk work is about protecting the interests of future generations; animal welfare work is about protecting the interests of animals - neither of which groups can vote.
the point is that if you want to altruistically help future generations, or animals for that matter, it makes sense to do so in the most efficient way possible. but the fundamental desire to be truly altruistic in the first place is irrational. "altruism" as we normally use the term is just the selfish behavior of genes trying to help copies of themselves that happen to be in other bodies. again, this is clearly explained in this veritasium video and is just trivial biology 101.
No methodology or source is given for why we should expect a 5% decline in the risk of 2 billion deaths.
it's absolutely given, right there in plain english. the BR figures are cited, and there are multiple plausible independent lines of reasoning from which to derive comparable figures. i don't know why you're just ignoring that as if it's not right there written plain as day.
"best for human welfare" just means the sum of all individual (self interested) utilities. so voter preferences cannot be opposite of what is best for human welfare, by definition.
caveat: there's a disparity between intrinsic and instrumental preferences, in other words voters don't actually know what they want. but to solve that you need an entirely different paradigm, namely election by jury.
better voting methods give you the best you can get from the mediocre human brains you have to work with.
> These numbers just seem totally made up. Why should we believe that approval voting has anything like such a large impact?
the page directly addresses that question quite incisively, citing the bayesian regret figures. the upgrade from plurality voting to score voting is roughly double the effect of having democracy in the first place. and approval voting is just the binary (slightly less optimal but dead simple and politically practical) version of score voting.
i think the evidence is pretty straightforward. e.g. bayesian regret figures by princeton math phd warren smith show that approval voting roughly doubles the human welfare impact of democracy.
https://www.rangevoting.org/BayRegsFig
doing some ballpark math to see how many lives that would save:
https://www.rangevoting.org/LivesSaved
i've never seen any EA cause that could even remotely compete with such a massive improvement, especially given the cost is essentially zero once you've paid the relatively minor one-time cost of running a ballot measure, as opposed to e.g. malaria nets, where you pay an ongoing production cost. fixing the voting method is the policy that lubricates the gears for all other policies.
the idea that there's some viable alternative to expected utility maximization is just thoroughly refuted by everything we know about decision making.
http://www.rangevoting.org/UtilFoundns
http://www.rangevoting.org/Mill
http://www.rangevoting.org/OmoUtil.html
- Maximizing expected value can have counterintuitive implications like prioritizing insects over humans or pursuing astronomical payoffs with tiny probabilities.
no. that's an argument about which entities you choose to consider. rational expected value calculus is to care about the smallest set of people that includes yourself. or, more specifically, for a gene to care about itself only.
Alternatives like contractualism and various forms of risk aversion may better align with moral intuitions.
"risk aversion" is just decreasing marginal utility, e.g. taking a guaranteed million dollars over a 50% shot at 3 million. with u = log2(wealth), the expected utility calculation is:
100% of 1M: log2(1,000,000) = 19.93
vs.
50% of 3M: log2(3,000,000)/2 = 21.517/2 = 10.7585
thus the guarantee of 1M obviously makes sense unless you're already quite wealthy.
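here's the same calculation as a python sketch; `prefers_gamble` is a hypothetical helper, and the losing branch is treated as baseline wealth of 1 (so u = 0), matching the simplification above:

```python
import math

u = math.log2  # log utility in base 2, as in the calculation above

guarantee = u(1_000_000)       # ~19.93
gamble = 0.5 * u(3_000_000)    # ~10.76 (losing branch counted as u(1) = 0)

# with existing wealth w, the gamble is worth 0.5*u(w + 3M) + 0.5*u(w);
# for log utility it overtakes the sure million exactly when w > 1M,
# since (w + 3M)*w > (w + 1M)^2 reduces to w > 1M.
def prefers_gamble(wealth):
    return 0.5 * u(wealth + 3_000_000) + 0.5 * u(wealth) > u(wealth + 1_000_000)

print(prefers_gamble(10_000))       # False: take the sure million
print(prefers_gamble(100_000_000))  # True: already quite wealthy
```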
Practical decision-making requires wrestling with moral and empirical uncertainties.
what is "moral" uncertainty? morality is just "genes maximizing their expected number of copies made".
here's some of the virtual juror commentary.