I'm concerned about the new terms of service for Giving What We Can, which will go into effect after August 31, 2024:
6.3 Feedback. If you provide us with any feedback or suggestions about the GWWC Sites or GWWC’s business (the “Feedback”), GWWC may use the Feedback without obligation to you, and you irrevocably assign to GWWC all right, title, and interest in and to the Feedback. (emphasis added)
This is a significant departure from Effective Ventures' TOS (GWWC is spinning out of EV), which has users grant EV an unlimited but non-exclusive license to use any feedback or suggestions they send - users retain the right to do anything with their ideas themselves. I've previously talked to GWWC staff about my ideas to help people give effectively, like a donation decision worksheet that I made. If this provision goes into effect, it would deter me from sharing my suggestions with GWWC in the future, because I would risk losing the right to disseminate or continue developing those ideas or materials myself.
Thank you for raising this!
After your email last week, we agreed to edit that section and copy EV's terms on Feedback. I've just changed the text on the website.
We only removed the part about "all Feedback we request from you will be collected on an anonymous basis", as we might want to collect non-anonymous feedback in the future.
If anyone else has any feedback, make sure to also send us an email (like Eevee did) as we might miss things on the EA Forum.
Disclaimer: I'm a former PayPal employee. The following statements are my opinion alone and do not reflect PayPal's views. Also, this information is accurate as of 2024-10-14 and may become outdated in the future.
More donors should consider using PayPal Giving Fund to donate to charities. To do so, go to this page, search for the charity you want, and donate through the charity's page with your PayPal account. (For example, this is GiveDirectly's page.)
PayPal covers all processing fees on charitable donations made through their giving website, so you don't have to worry about the charity losing money to credit card fees. If you use a credit card that gives you 1.5 or 2% cash back (or 1.5-2x points) on all purchases, your donation works out to ~102% of what it costs you out of pocket. I don't know of any credit cards that offer elevated rewards for charitable donations as a category (like many do for restaurants, groceries, etc.), so you most likely can't do better than a 2% card for donations (unless you donate stocks).
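To make the arithmetic concrete, here's a minimal sketch (the function is mine, just for illustration):

```python
# With cash-back rate r, a donation D costs you D * (1 - r) out of pocket,
# so the donation is 1 / (1 - r) times what you actually pay.
def donation_multiplier(cashback_rate: float) -> float:
    return 1 / (1 - cashback_rate)

print(donation_multiplier(0.02))   # ~1.0204 -- your donation is ~102% of net cost
print(donation_multiplier(0.015))  # ~1.0152 for a 1.5% card
```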
For political donations, platforms like ActBlue and Anedot charge the same processing fees to organizations regardless of what payment method…
Epistemic status: preliminary take, likely not considering many factors.
I'm starting to think that economic development and animal welfare go hand in hand. Since the end of the COVID pandemic, the plant-based meat industry has declined in large part because consumers' disposable incomes declined (at least in developed countries). It's good that GFI and others are trying to achieve price parity with conventional meat. However, finding ways to increase disposable incomes (or equivalently, reduce the cost of living) will likely accelerate the adoption of meat substitutes, even if price parity isn't reached.
the plant-based meat industry has declined in large part because consumers' disposable incomes declined (at least in developed countries)
Do you have a source for this? Median real disposable income is growing in the US, as is meat consumption: https://www.vox.com/future-perfect/386374/grocery-store-meat-purchasing. People are buying more and more meat as they get richer, even in developed countries.
The current board is:
The only people here who even have rumours of being safety-conscious (AFAIK) are Adam D'Angelo, who allegedly played a role in kickstarting last year's board incident, and Sam, who has contradicted a great deal of his rhetoric with his actions. God knows why Larry Summers is there (to give it an air of professionalism?); the rest seem to me like your typical professional board members (i.e. unlikely to understand OpenAI's unique charter & structure). In my opinion, any hope of restraint from this board or OpenAI's current leadership is misplaced.
Not sure who to alert to this, but: when filling out the EA Organization Survey, I noticed that one of the fields asks for a date in DD/MM/YYYY format. As an American this tripped me up and I accidentally tried to enter a date in MM/DD/YYYY format because I am more used to seeing it.
I suggest using the ISO 8601 (YYYY-MM-DD) format on forms that are used internationally to prevent confusion, or spelling out the month (e.g. "1 December 2023" or "December 1, 2023").
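For form builders, a quick sketch of emitting and strictly parsing the unambiguous format (standard-library Python only):

```python
from datetime import date, datetime

d = date(2023, 12, 1)
print(d.isoformat())           # '2023-12-01' -- ISO 8601, unambiguous
print(d.strftime("%d %B %Y"))  # '01 December 2023' -- month spelled out

# Parsing strictly as ISO 8601 rejects ambiguous input like '12/01/2023':
parsed = datetime.strptime("2023-12-01", "%Y-%m-%d").date()
print(parsed == d)             # True
```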
Okay, so one thing I don't get about "common sense ethics" discourse in EA is, which common sense ethical norms prevail? Different people even in the same society have different attitudes about what's common sense.
For example, pretty much everyone agrees that theft and fraud in the service of a good cause - as in the FTX case - is immoral. But what about cases where the governing norms are ambiguous or changing? Take tipping in the United States: it's considered customary to tip at restaurants and for deliveries, but there isn't much consensus on when and how much to tip, especially with digital point-of-sale systems encouraging people to tip in more situations. (Just as an example of how conceptions of "common sense ethics" can differ: I just learned that apparently, you're supposed to tip the courier before you get a delivery now, otherwise they might refuse to take your order at all. I've grown up believing that you're supposed to tip after you get service, but many drivers expect you to tip beforehand.) You're never required to tip as a condition of service, so what if you just never tipped and always donated the equivalent amount to highly effective charities instead? That sou…
Crazy idea: When charities apply for funding from foundations, they should be required to list 3-5 other charities they think should receive funding. Then, the grantmaker can run a statistical analysis to find orgs that are mentioned a lot and haven't applied before, reach out to those charities, and encourage them to apply. This way, the foundation can get a more diverse pool of applicants by learning about charities outside their network.
Content warning: Israel/Palestine
Has there been research on what interventions are effective at facilitating dialogue between social groups in conflict?
I remember an article about how during the last Israel-Gaza flare-up, Israelis and Palestinians were using the audio chatroom app Clubhouse to share their experiences and perspectives. This was portrayed as a phenomenon that increased dialogue and empathy between the two groups. But how effective was it? Could it generalize to other ethnic/religious conflicts around the world?
Although focused on civil conflicts, Lauren Gilbert's shallow investigation explores some possible interventions in this space, including:
"Quality-adjusted civilization years"
We should be able to compare global catastrophic risks in terms of the amount of time they make global civilization significantly worse and how much worse it gets. We might call this measure "quality-adjusted civilization years" (QACYs), or the quality-adjusted amount of civilization time that is lost.
For example, let's say that the COVID-19 pandemic reduces the quality of civilization by 50% for 2 years. Then the QACY burden of COVID-19 is 0.5 × 2 = 1 QACY.
Another example: suppose climate change will reduce the quality of civilization by 80% for 200 years, and then things will return to normal. Then the total QACY burden of climate change over the long term will be 0.8 × 200 = 160 QACYs.
In the limit, an existential catastrophe would have a near-infinite QACY burden.
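A minimal sketch of the measure as defined above (the function name is mine):

```python
# QACY burden = (fractional reduction in civilization quality) * (years it lasts).
def qacy_burden(quality_reduction: float, years: float) -> float:
    return quality_reduction * years

print(qacy_burden(0.5, 2))    # COVID-19 example: 1.0 QACY
print(qacy_burden(0.8, 200))  # climate change example: 160.0 QACYs
```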
Maybe EA philanthropists should invest more conservatively, actually
The pros and cons of unusually high risk tolerance in EA philanthropy have been discussed a lot, e.g. here. One factor that may weigh in favor of higher risk aversion is that nonprofits benefit from a stable stream of donations, rather than one that goes up and down a lot with the general economy. This is for a few reasons:
I think we separate causes and interventions into "neartermist" and "longtermist" causes too much.
Just as some members of the EA community have complained that AI safety is pigeonholed as a "long-term" risk when it's actually imminent within our lifetimes[1], I think we've been too quick to dismiss conventionally "neartermist" EA causes and interventions as not valuable from a longtermist perspective. This is the opposite failure mode of surprising and suspicious convergence - instead of assuming (or rationalizing) that the spaces of interventions that are…
Utility of money is not always logarithmic
EA discussions often assume that the utility of money is logarithmic, but while this is a convenient simplification, it's not always the case. Logarithmic utility is a special case of isoelastic utility, a.k.a. power utility, where the elasticity of marginal utility is η=1. But η can be higher or lower. The most general form of isoelastic utility is the following:
u(c) = (c^(1−η) − 1) / (1 − η) if η ≥ 0 and η ≠ 1; u(c) = ln(c) if η = 1
Some special cases:
η tells us how sharply marginal utility drops off with increasing consumption: if a person already has k times as much money as the baseline, then giving them an extra dollar is worth (1/k)^η times as much. Empirical studies have found that η for most people is between 1 and 2. So if the average GiveDirectly…
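A small sketch of the isoelastic utility function above (the example values of k and η are mine):

```python
import math

def isoelastic_utility(c: float, eta: float) -> float:
    """Isoelastic (power) utility; the eta == 1 case reduces to ln(c)."""
    if eta == 1:
        return math.log(c)
    return (c ** (1 - eta) - 1) / (1 - eta)

# Marginal utility falls off as (1/k)**eta for someone with k times
# baseline income, e.g. k = 4 and eta = 1.5:
print((1 / 4) ** 1.5)  # 0.125 -- an extra dollar is worth 1/8 as much
```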
Nonprofit idea: YIMBY for energy
YIMBY groups in the United States (like YIMBY Action) systematically advocate for housing developments as well as rezonings and other policies to create more housing in cities. YIMBYism is an explicit counter-strategy to the NIMBY groups that oppose housing development; however, NIMBYism affects energy developments as well - everything from solar farms to nuclear power plants to power lines - and is thus an obstacle to the clean energy transition.
There should be groups that systematically advocate for energy projects (which are mostly in rural areas), borrowing the tactics of the YIMBY movement. Currently, when developers propose an energy project, they do an advertising campaign to persuade local residents of the benefits of the development, but there is often opposition as well.
I thought YIMBYs were generally pretty in favor of this already? (Though not generally as high a priority for them as housing.) My guess is it would be easier to push the already existing YIMBY movement to focus on energy more, as opposed to creating a new movement from scratch.
An idea I liked from Owen Cotton-Barratt's new interview on the 80K podcast: Defense in depth
If S, M, and L are the events of a small, medium, and large catastrophe occurring, and X is human extinction, then the probability of human extinction is
Pr(X) = Pr(S) Pr(M∣S) Pr(L∣S,M) Pr(X∣S,M,L).
So halving any one of these factors - the probability of small disasters, the chance that a small disaster escalates into a medium one, and so on - would halve the probability of human extinction.
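A toy numeric check (these layer probabilities are invented for illustration, not estimates):

```python
p_s = 0.1            # Pr(S): small catastrophe occurs
p_m_given_s = 0.2    # Pr(M | S)
p_l_given_sm = 0.3   # Pr(L | S, M)
p_x_given_all = 0.4  # Pr(X | S, M, L)

p_x = p_s * p_m_given_s * p_l_given_sm * p_x_given_all
print(p_x)  # ~0.0024

# Halving any single factor halves the whole product:
print((p_s / 2) * p_m_given_s * p_l_given_sm * p_x_given_all)  # ~0.0012
```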
One thing the EA community should try doing is multinational op-ed writing contests. The focus would be op-eds advocating for actions or policies that are important, neglected, and tractable (although the op-eds themselves don't have to mention EA); and by design, op-eds could be submitted from anywhere in the world. To make judging easier, op-eds could be required to be in a single language, but op-ed contests in multiple languages could be run in parallel (such as English, Spanish, French, and Arabic, each of which is an official language in at least 20 countries).
This would have two benefits for the EA community:
On the difference between x-risks and x-risk factors
I suspect there isn't much of a meaningful difference between "x-risks" and "x-risk factors," for two reasons:
I think the difference is that x-risks are events that directly cause an existential catastrophe, such as extinction…
I think your comment (and particularly the first point) has much more to do with the difficulty of defining causality than with x-risks.
It seems natural to talk about force causing the mass to accelerate: when I push a sofa, I cause it to start moving. But Newtonian mechanics can't capture causality, basically because the equality sign in F⃗ = ma⃗ lacks direction. Similarly, it's hard to capture causality in probability spaces.
Following Pearl, I have come to think that causality arises from the manipulator/manipulated distinction.
So I think it's fair to speak about factors only in relation to some framing:
Usually, there are multiple external factors in your x-risk modeling. The most salient and undesirable ones are important enough to care about (and give a name).
Calling bio-risks an x-risk factor makes sense formally, but doesn't make sense pragmatically, because bio-risks are very salient (in our community) on their own, as they are a canonical…
Possible outline for a 2-3 part documentary adaptation of The Precipice:
Part 1: Introduction & Natural Risks
Part 2: Human-Made Risks
Part 3: What We Can Do
What this leaves out:
Another way to do it would be to do an episode on each type of risk and what can be done about it, for…
Tentative thoughts on "problem stickiness"
When it comes to comparing non-longtermist problems from a longtermist perspective, I find it useful to evaluate them based on their "stickiness": the rate at which they will grow or shrink over time.
A problem's stickiness is its annual growth rate. So a problem has positive stickiness if it is growing, and negative stickiness if it is shrinking. For long-term planning, we care about a problem's expected stickiness: the annual rate at which we think it will grow or shrink. Over the long term - i.e. time frames of 50 years or more - we want to focus on problems that we expect to grow over time without our intervention, instead of problems that will go away on their own.
For example, global poverty has negative stickiness because the poverty rate has declined over the last 200 years. I believe its stickiness will continue to be negative, barring a global catastrophe like climate change or World War III.
On the other hand, farm animal suffering has not gone away over time; in fact, it has gotten worse, as a growing number of people around the world are eating meat and dairy. This trend will continue at least until alternative proteins become com…
UK prime minister Rishi Sunak got some blowback for meeting with Elon Musk on Sky News to talk about existential AI safety, and that clip made it into this BritMonkey video criticizing the state of British politics, starting at 1:10:57:
...the Prime Minister of the United Kingdom interviewing the richest man in the world, talking about AI in the context of the James Cameron Terminator films. I can barely believe I'm saying all of this.
Big O as a cause prioritization heuristic
When estimating the amount of good that can be done by working on a given cause, a good first approximation might be the asymptotic behavior of the amount of good done at each point in time (the trajectory change).
Other important factors are the magnitude of the trajectory change (how much good is done at each point in time) and its duration (how long the trajectory change lasts).
For example, changing the rate of economic growth (population growth × GDP/capita growth) has an O(t²) trajectory change i…
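A rough sketch of the asymptotics (the functional forms and coefficients here are illustrative assumptions, not from the post):

```python
# A one-time level boost adds good at a constant rate (O(1) flow, O(t)
# cumulative); a growth-rate change adds good at a growing rate (O(t) flow,
# O(t^2) cumulative), so it eventually dominates despite a tiny coefficient.
def cumulative(flow, years):
    return sum(flow(t) for t in range(years))

level_boost = lambda t: 1.0         # O(1) flow
growth_change = lambda t: 0.01 * t  # O(t) flow

for years in (10, 100, 1000):
    print(years, cumulative(level_boost, years), cumulative(growth_change, years))
# 10 -> 10.0 vs 0.45; 100 -> 100.0 vs 49.5; 1000 -> 1000.0 vs 4995.0
```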
Disclaimer: This shortform contains advice about navigating unemployment benefits. I am not a lawyer or a social worker, and you should use caution when applying this advice to your specific unemployment insurance situation.
Tip for US residents: Depending on which state you live in, taking a work test can affect your eligibility for unemployment insurance.
Unemployment benefits are typically reduced based on the number of hours you've worked in a given week. For example, in New York, you are eligible for the full benefit rate if you worked 10 hours or less…
I just listened to Andrew Critch's interview about "AI Research Considerations for Human Existential Safety" (ARCHES). I took some notes on the podcast episode, which I'll share here. I won't attempt to summarize the entire episode; instead, please see this summary of the ARCHES paper in the Alignment Newsletter.
It seems like decibels (dB) are a natural unit for perceived pleasure and pain, since they account for the fact that humans and other beings mostly perceive sensations in proportion to the logarithm of their actual strength. (This is discussed at length in "Logarithmic Scales of Pleasure and Pain".)
Decibels are a relative quantity: they express the intensity of a signal relative to another. A 10x difference is 10 dB, a 100x difference is 20 dB, and so on. The "just noticeable difference" in amplitude of sound is ~1 dB, or a ~25% increase. But decibels can…
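The ratio-to-decibel conversion above, as a quick sketch:

```python
import math

def ratio_to_db(intensity_ratio: float) -> float:
    """Decibels relative to a reference intensity: dB = 10 * log10(ratio)."""
    return 10 * math.log10(intensity_ratio)

print(ratio_to_db(10))    # 10.0 dB
print(ratio_to_db(100))   # 20.0 dB
print(ratio_to_db(1.25))  # ~0.97 dB -- near the 'just noticeable difference'
```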
I'm excited about Open Phil's new cause area, global aid advocacy. Development aid from rich countries could be used to serve several goals that many EAs care about:
Also, development aid can fund a combination of randomista-style and systemic interventions (such as building infrastructure to promote growth).
The United States has two agencies that provide development aid: USAID, which provid…
NYC is adopting ranked-choice voting for the 2021 City Council election. One challenge will be explaining the new voting system, though.
Making specialty meats like foie gras using cellular agriculture could be especially promising. Foie gras traditionally involves fattening ducks or geese by force-feeding them, which is especially ethically problematic (although alternative production methods exist). It could probably be produced by growing liver and fat cells in a medium without much of a scaffold, which would make it easier to develop.
Status: Fresh argument I just came up with. I welcome any feedback!
Allowing the U.S. Social Security Trust Fund to invest in stocks like any other national pension fund would enable the U.S. public to capture some of the profits from AGI-driven economic growth.
Currently, and uniquely among national pension funds, Social Security is only allowed to invest its reserves in non-marketable Treasury securities, which are very low-risk but also provide a low return on investment relative to the stock market. By contrast, the Government Pension Fund of Norway (als…
Back-of-the-envelope calculations for improving efficiency of public transit spending
The cost of building and maintaining public transportation varies widely across municipalities due to inefficiencies - for example, the NYC Second Avenue Subway has cost $2.14 billion per kilometer to build, whereas it costs an average of $80.22 million to build a kilometer of tunnel in Spain (Transit Costs Project). While many transit advocacy groups advocate for improving quality of public transit service (e.g. Straphangers Campaign in NYC), few advocate for reducing was…
AOC's Among Us stream on Twitch nets $200K for coronavirus relief
"We did it! $200k raised in one livestream (on a whim!) for eviction defense, food pantries, and more. This is going to make such a difference for those who need it most right now." — AOC's Tweet
Video game streaming is a popular way to raise money for causes. We should use this strategy to fundraise for EA organizations.
Episodes 5 and 6 of Netflix's 3 Body Problem seem to have longtermist and utilitarian themes (content warning: spoiler alert)
We're probably surveilling poor and vulnerable people in developing and developed countries too much in the name of aiding them, and we should give stronger consideration to the privacy rights of aid recipients. Personal data about these people collected for benign purposes can be weaponized against them by malicious actors, and surveillance itself can deter people from accessing vital services.
"Stop Surveillance Humanitarianism" by Mark Latonero
Automating Inequality by Virginia Eubanks makes a similar argument regarding aid recipients in developed countries.
I think some of us really need to create op-eds, videos, etc. for a mainstream audience defending longtermism. The Phil Torres pieces have spread a lot (people outside the EA community have shared them in a Discord server I moderate, and Timnit Gebru has picked them up) and thus far I haven't seen an adequate response.
I recently saw a presentation with a diagram showing how committed EA funding dropped by almost half with the collapse of FTX, based on these data compiled by 80k in 2022. Open Phil at the time had a $22.5 billion endowment and FTX's founders were collectively worth $16.5 billion.
I think that this narrative gives off the impression that EA causes (especially global health and development) are more funding-constrained than they really are. 80k's data excludes philanthropists that often make donations…
A rebuttal of the paperclip maximizer argument
I was talking to someone (whom I'm leaving anonymous) about AI safety, and they said that the AI alignment problem is a joke (to put it mildly). They said that it won't actually be that hard to teach AI systems the subtleties of human norms because language models contain normative knowledge. I don't know if I endorse this claim but I found it quite convincing, so I'd like to share it here.
In the classic naive paperclip maximizer scenario, we assume there's a goal-directed AI system, and its human boss tells it…
Do emergency universal pass/fail policies improve or worsen student well-being and future career prospects?
I think a natural experiment is in order. Many colleges are adopting universal pass/fail grading for this semester in response to the COVID-19 pandemic, while others aren't. Someone should study the impact this will have on students to inform future university pandemic response policy.
Content warning: the current situation in Afghanistan (2021)
Is there anything people outside Afghanistan can do to address the worsening situation in Afghanistan? GiveDirectly-style aid to Afghans seems like not-an-option because the previous Taliban regime "prevented international aid from entering the country for starving civilians." (Wikipedia)
The best thing we can do is probably to help resettle Afghan refugees, whether by providing resources to NGOs that help them directly, or by petitioning governments to admit more of them. Some charities that do th…
I think improving bus systems in the United States (and probably other countries) could be a plausible Cause X.
Importance: Improving bus service would:
Neglectedness: City buses probably don't get much attention because most people don't think very highly of them, and focus much more on novel transportation technologies like electric vehicles.
Tractability: According to Higashide, …
New Economist special report on dementia - As humanity ages the numbers of people with dementia will surge: The world is ill-prepared for the frightening human, economic and social implications
Possible research/forecasting questions to understand the economic value of AGI research
A common narrative about AI research is that we are on a path to AGI, in that society will be motivated to try to create increasingly general AI systems, culminating in AGI. Since this is a core assumption of the AGI risk hypothesis, I think it's very important to understand whether this is actually the case.
Some people have predicted that AI research funding will dry up someday as the costs start to outweigh the benefits, resulting in an "AI winter." Jeff Bigham wrote…
How pressing is countering anti-science?
Intuitively, anti-science attitudes seem like a major barrier to solving many of the world's most pressing problems: for example, climate change denial has greatly derailed the American response to climate change, and distrust of public health authorities may be stymying the COVID-19 response. (For instance, a candidate running in my district for State Senate is campaigning on opposition to contact tracing as well as vaccines.) I'm particularly concerned about anti-economics attitudes because they lead to bad economi…
Is there any appetite for a project to make high-risk, high-yield donation recommendations within global health and development? The idea would be to identify donation opportunities that could outperform the GiveWell top charities, especially ones that make long-lasting and systemic changes.
Matt Yglesias gets EA wrong :(
What EAs think is that people should make decisions guided by a rigorous empirical evaluation based on consequentialist criteria.
Ummm, no. Not all EAs are consequentialists (although a large fraction of them are), and most EAs these days understand that "rigorous empirical evaluation" isn't the only way to reason about interventions.
It just gets worse from there:
In other words, effective altruists don’t think you should make charitable contributions to your church (again, relative to the mass public this is the most cont…
I've realized that I feel less constrained when writing poetry than when writing essays/blog posts. Essays are more time-consuming for me - I spend a lot of time adding links, fact-checking my points, and organizing them in a coherent way, and I feel like I have to stake out a clear position when writing in prose. Whereas in poetry, the rules have more to do with making the form and content work well together, and evoking an emotional response in the reader.
I also think poetry is a good medium for expressing ambiguity. I've written a few draft poems in my…
Improving checks and balances on U.S. presidential power seems like an important, neglected, and tractable cause area.
John, Katherine, Sarah, and Hank Green are making a $6.5M donation to Partners in Health to address the maternal mortality crisis in Sierra Leone, and are trying to raise $25M in total. PIH has been working with the Sierra Leone Ministry of Health to improve the quality of maternal care through facility upgrades, supplies, and training.
Epistemic status: Although I'm vaguely aware of the evidence on gender equality and peace, I'm not an expert on international relations. I'm somewhat confident in my main claim here.
Gender equality - in societies at large, in government, and in peace negotiations - may be an existential security factor insofar as it promotes societal stability and decreases international and intra-state conflict.
According to the Council on Foreign Relations, women's participation in peacemaking and government at large improves the durability of peace agreements and social…
Epistemic status: Tentative thoughts.
I think that medical AI could be a nice way to get into the AI field for a few reasons:
Stuart Russell: Being human and navigating interpersonal relationships will be humans' comparative advantage when artificial general intelligence is realized, since humans will be better at simulating other humans' minds than AIs will. (Human Compatible, chapter 4)
Also Stuart Russell: Automated tutoring!! (Human Compatible, chapter 3)
I think freedom is very important as both an end and a means to the pursuit of happiness.
Economic theory posits a deep connection between freedom (both positive and negative) and well-being. When sufficiently rational people are free to make choices from a broader choice set, they can achieve greater well-being than they could with a smaller choice set. Raising people's incomes expands their choice sets, and consequently, their happiness - this is how GiveDirectly works.
I wonder what a form of effective altruism that focused on…
I think it would be helpful to get more non-utilitarian perspectives on longtermism (or ones that don't primarily emphasize utilitarianism).
Some questions that would be valuable to address:
Some reasons I think this would be valuable:
I've been reading Adam Gopnik's book A Thousand Small Sanities: The Moral Adventure of Liberalism, which is about the meaning and history of liberalism as a political movement. I think many of the ideas that Gopnik discusses are relevant to the EA movement as well:
I've been thinking about AI safety again, and this is what I'm thinking:
The main argument of Stuart Russell's book focuses on reward modeling as a way to align AI systems with human preferences. But reward modeling seems more like an AI capabilities technology than an AI safety one. If it's really difficult to write a reward function for a given task Y, then it seems unlikely that AI developers would deploy a system that does it in an unaligned way according to a misspecified reward function. Instead, reward modeling makes it feasible to design an AI system…
Content warning: missing persons, violence against women, racism.
Amid the media coverage of the Gabby Petito case in the United States, there's been some discussion of how missing persons cases for women and girls of color are more neglected than those for missing White women. Some statistics:
Black girls and women go missing at high rates, but that isn't reflected in news coverage of missing persons cases. In 2020, of the 268,884 girls and women who were reported missing, 90,333, or nearly 34% of them, were Black, according to the National Crime Information Center…
I don't know how neglected it is compared to EA's standard portfolio of issues (U.S. issues tend to get disproportionate attention from Americans), but I think it's an interesting example of how people outside EA have applied importance and neglectedness to call attention to neglected issues.
Wild idea: Install a small modular reactor in a charter city and make energy its biggest export!
Charter cities' advantage is their lax regulatory environment relative to their host countries. Such permissiveness could be a good environment for deploying nuclear reactors, which are controversial and regulated to death in many countries. Charter cities are good environments for experimenting with governance structures; they can also be good for experimenting with controversial technologies.
Effective giving, deontologist edition
I've got an idea for how to communicate the idea of effective giving to people even if they don't subscribe to consequentialist ethics.
I'm gonna assume that when deontologists and virtue ethicists donate, they still care about outcomes, but not for the same reasons as consequentialists. For example, a deontologist might support anti-bullying charities to reduce bullying because bullying is wrong behavior, not just because bullying has bad consequences. This person should still seek out the charities that most cost-effectively reduce bullying.
Epistemic status: Raw thoughts that I've just started to think about. I'm highly uncertain about a lot of this.
Some works that have inspired my thinking recently:
Reading/listening to these works has caused me to reevaluate the risks posed by advanced artificial intelligence. While AI risk is currently the top cause in x-risk reduction…
Some links about the alleged human male fertility crisis - it's been suggested that this may lead to population decline, but a 2021 study has pointed out flaws in the research claiming a decline in sperm count:
I didn't find this response very convincing. Apart from attempting to smear the researchers as racist, it seems their key argument is that while sperm counts appear to have fallen from towards the top to the bottom of the 'normal' range, they're still within the range. But this 'normal' range is fairly arbitrary, and if the decline continues presumably we will go below the normal range in the future.
Joan Gass (2019) recommends four areas of international development to focus on:
Improving state capabilities, or governments' ability to render public services, seems especially promising for public-interest technologists interested in development (ICT4D). For example, the Zenysis platform helps developing-world governments make d…
My shortforms on public transportation as an EA cause area:
In theory, any ethical system that provides a total order over actions - basically, a relation that says "action A is better than action B" - is compatible with the "effective" part of effective altruism. The essence of effective altruism, then, is following a decision rule that says to choose the best action A available to you in any given situation.
As for the "altruism" part of EA, an ethical system would have to place value on "what's good/right for others," broadly defined. Usually that's the well-being of other individuals (as…
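A minimal sketch of that decision rule (the scoring function is a made-up stand-in for whatever ethical ranking one uses):

```python
# Given any total order over actions -- here represented by a score --
# the "effective" decision rule just picks the best available action.
def choose_action(actions, goodness):
    return max(actions, key=goodness)

# Hypothetical scores, e.g. expected good done per dollar:
scores = {"charity_a": 0.002, "charity_b": 0.0005, "charity_c": 0.01}
print(choose_action(scores, scores.get))  # 'charity_c'
```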
Practical/economic reasons why companies might not want to build AGI systems
(Originally posted on the EA Corner Discord server.)
First, most companies that are using ML or data science are not using SOTA neural network models with a billion parameters, at least not directly; they're using simple models, because no competent data scientist would use a sophisticated model where a simpler one would do. Only a small number of tech companies have the resources or motivation to build large, sophisticated models (here I'm assuming, like OpenAI does, that model size…
Vaccine hesitancy might be a cause X (no, really!)
One thing that stuck out to me in the interview between Rob Wiblin and Ezra Klein is how much of a risk vaccine hesitancy poses to the US government's public health response to COVID:
But there are other things where the conservatism is coming from the simple fact, to put this bluntly, they deal with the consequences of a failure in a way you and I don’t. You and I are sitting here, like, “Go faster. The trade-offs are obvious here.” They are saying, “Actually, no. The trade-offs are not obvious. If this goes…
I think there should be an EA Fund analog for criminal justice reform. This could especially attract non-EA dollars.
A social constructivist perspective on long-term AI policy
I think the case for addressing the long-term consequences of AI systems holds even if AGI is unlikely to arise.
The future of AI development will be shaped by social, economic and political factors, and I'm not convinced that AGI will be desirable in the future or that AI is necessarily progressing toward AGI. However, (1) AI already has large positive and negative effects on society, and (2) I think it's very likely that society's AI capabilities will improve over time, amplifying these effects and creating new benefits and risks in the future.
A series of polls by the Chicago Council on Global Affairs shows that Americans increasingly support free trade and believe that free trade is good for the U.S. economy (87%, up from 59% in 2016). This is probably a reaction to the negative effects and press coverage of President Trump's trade wars - anecdotally, I have seen a lot of progressives who would otherwise not care about or support free trade criticize policies such as Trump's steel tariffs as reckless.
I believe this presents a unique window of opportunity to educate the American public…
An EA Meta reading list:
I have a social constructivist view of technology - that is, I strongly believe that technology is a part of society, not an external force that acts on it. Ultimately, a technology's effects on a society depend on the interactions between that technology and other values, institutions, and technologies within that society. So for example, although genetic engineering may enable human gene editing, the specific ways in which humans use gene editing would depend on cultural attitudes and institutions regarding the technology.
How…
If you're looking at where to direct funding for U.S. criminal justice reform:
List of U.S. states and territories by incarceration and correctional supervision rate
On this page, you can sort states (and U.S. territories) by total prison/jail population, incarceration rate per 100,000 adults, or incarceration rate per 100,000 people of all ages - all statistics as of year-end 2016.
As of 2016, the 10 states with the highest incarceration rates per 100,000 people were:
I'm playing Universal Paperclips right now, and I just had an insight about AI safety: Just programming the AI to maximize profits instead of paperclips wouldn't solve the control problem.
You'd think that the AI can't destroy the humans because it needs human customers to make money, but that's not true. Instead, the AI could sell all of its paperclips to another AI that continually melts them down and turns them back into wire, and they would repeatedly sell paperclips and wire back and forth to each other, both powered by free sunlight. Bonus points if the AIs take over the central bank.
Can someone please email me a copy of this article?
I'm planning to update the Wikipedia article on Social discount rate, but I need to know what the article says.
I think we need to be careful when we talk about AI and automation not to commit the lump of labor fallacy. When we say that a certain fraction of economically valuable work will be automated at any given time, or that this fraction will increase, we shouldn't implicitly assume that the total amount of work being done in the economy is constant. Historically, automation has increased the size of the economy, thereby creating more work to be done, whether by humans or by machines; we should expect the same to happen in the future. (Note that this doesn't exclude the possibility of increasingly general AI systems performing almost all economically valuable work. This could very well happen even as the total amount of work available skyrockets.)
Also see a recent paper finding no evidence for the automation hypothesis:
http://www.overcomingbias.com/2019/12/automation-so-far-business-as-usual.html