Welcome to the EA Forum bot site. If you are trying to access the Forum programmatically (either by scraping or via the API), please use this site rather than forum.effectivealtruism.org.

This site has the same content as the main site, but is run in a separate environment to avoid bots overloading the main site and affecting performance for human users.

Quick takes

I'm writing a newsletter on current events, long-term trends, and topical debates roughly every other day. Recent posts include:

* A summary of a debate on AI progress, featuring Ajeya Cotra, Peter Wildeford, Eli Lifland, Matthew Barnett, and others.
* The three types of problems with population decline.
* We underestimate the pace of progress because much of it isn't salient.
* The reason we don't see more automation is simply that AI isn't good enough.
* Economists are unusually good at taking human agency into account.
Linch · 1h
"There are more things in heaven and earth, Horatio, / Than are dreamt of in your philosophy"

One thing I've been floating for a while, and haven't really seen anybody else deeply explore[1], is what I call "further moral goods": further axes of moral value as yet inaccessible to us, qualitatively and not just quantitatively different from anything we've observed to date.

For background, I think normal, secular humans live in 3 conceptually distinct but overlapping worlds:

1. The physical world: matter, energy, atoms, stars, cells. A detached external observer might think that's all there is to our universe.
2. The mathematical world: mathematics, logic, abstract structure, rationality, "natural laws." Even many otherwise-strict "materialists" can see how the mathematical world is conceptually distinct from the physical one: mathematical truths seem conceptually different from, and perhaps deeper than, mere physical facts. If you're a robot or a present-day LLM, you might live only in the first two worlds[2]. Some Kantians try to ground morality entirely within this world, in the logic of cooperation and strategic interaction.
3. The world of consciousness: the experiential realm. Qualia, subjective experience, "what it's like to be me." Most secular moral philosophers treat this as where the real moral action is. A pure hedonic utilitarian might think conscious experience is the only thing that matters, but even other moral philosophies consider conscious experience extremely important (usually the most important thing).

For the purposes of this post, I'm not that interested in delineating whether these worlds are truly different or just conceptually interesting ways to talk about things (i.e. I'm not taking a strong position on mathematical platonism or consciousness dualism). What's interesting to me is how these different worlds ground morality/value, what some philosophers would call "axiology." When people try to solely ground morality
If you work in an office with other EAs / interesting and interested people, consider putting the debate slider from our upcoming debate on a big whiteboard. It can lead to some interesting conversations and, even better, some counterfactual forum posts.

PS: I'm aware this looks a bit like 'people selling mirrors'.
quinn · 1d
I'm confused about tithing. I yearn for the diamond emoji from GWWC, but I'm not comfortable enough to commit, since I took roughly a 50% pay cut to do AI safety nonprofit work. It seems weird to make such a financial commitment, which also implicates my future wife, whom I presumably haven't met yet, especially while I'm scraping by with little left over from each paycheck.

Is there a sense in which I'm already diamond-emoji eligible, because I'm "donating 50% of my income" in the sense of opportunity cost? 50 is, famously, greater than 10.
In my experience, orgs work much harder to get donations from a "grantmaker" than from an individual.

I made my first big donation in 2015, donating $20K to REG. I talked to a bunch of orgs in the process of trying to decide where to donate. Some of them didn't respond at all, and many of their responses were shallow.

A few months later, I took a philanthropy class at Stanford where we split into groups, and each group was responsible for figuring out where to direct a $20K grant. The level of communication I got from nonprofits was dramatically different: orgs bent over backwards to be as communicative and helpful as possible. They didn't put much priority on a $20K grant from me as an individual, but they jumped at the possibility of a $20K grant from a Stanford grantmaker.

For my future donations, I'm considering whether I should rebrand my emails: I could tell nonprofits something like "I'm reaching out as a representative of the Greatest Happiness Fund, a grantmaker that focuses on supporting effective charities" (Greatest Happiness Fund is the name of my DAF). Maybe I'd get better responses that way. It feels a little manipulative, though.