Welcome to the EA Forum bot site. If you are trying to access the Forum programmatically (either by scraping or via the API), please use this site rather than forum.effectivealtruism.org.

This site has the same content as the main site, but is run in a separate environment to avoid bots overloading the main site and affecting performance for human users.
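For illustration, here is a minimal sketch of what API access might look like in Python, assuming this site exposes the same GraphQL endpoint (`/graphql`) and `posts` query as the main Forum; the placeholder hostname and User-Agent string below are assumptions for the example, not part of this notice.

```python
# Minimal sketch: fetch recent post titles via the Forum's GraphQL API.
# Assumes this bot site serves the same /graphql endpoint as the main Forum;
# replace the placeholder hostname with this site's actual URL.
import requests

BOT_SITE = "https://example-bot-site"  # placeholder hostname

QUERY = """
{
  posts(input: {terms: {view: "new", limit: 5}}) {
    results {
      title
      pageUrl
    }
  }
}
"""

resp = requests.post(
    f"{BOT_SITE}/graphql",
    json={"query": QUERY},
    # Identifying your bot and a contact address is good scraping etiquette
    # (this User-Agent is illustrative).
    headers={"User-Agent": "example-research-bot/0.1 (you@example.com)"},
    timeout=30,
)
resp.raise_for_status()

for post in resp.json()["data"]["posts"]["results"]:
    print(post["title"], "-", post["pageUrl"])
```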

Quick takes

Linch · 21h · 22 karma · 0 comments
"There are more things in heaven and earth, Horatio, Than are dreamt of in your philosophy" One thing I've been floating about for a while, and haven't really seen anybody else deeply explore[1], is what I call "further moral goods": further axes of moral value as yet inaccessible to us, that is qualitatively not just quantitatively different from anything we've observed to date. For background, I think normal, secular, humans live in 3 conceptually distinct but overlapping worlds: 1. The physical world: matter, energy, atoms, stars, cells. An detached external observer might think that's all there is to our universe. 2. The mathematical world. Mathematics, logic, abstract structure, rationality, "natural laws." Even many otherwise-strict "materialists" can see how the mathematical world is conceptually distinct from the physical one: mathematical truths seem conceptually different and perhaps deeper than mere physical facts. And if you're a robot/present-day LLM, you might just live in the first two worlds[2]. Some Kantians try to ground morality entirely within this world, in the logic of cooperation and strategic interaction. 3. The world of consciousness. The experiential realm. Qualia, subjective experience, "what it's like to be me." Most secular moral philosophers treat this as where the real moral action is. A pure hedonic utilitarian might think conscious experience is the only thing that matters, but even other moral philosophies would consider conscious experience extremely important (usually the most important). For the purposes of this post, I'm not that interested in the delineating between whether these worlds are truly different or just conceptually interesting ways to talk about things (ie I'm not positing a strong position on mathematical platonism or consciousness dualism) But what's interesting to me is how these different worlds ground morality/value, what some philosophers would call "axiology." When people try to solely ground morality
I'm writing a newsletter on current events, long-term trends, and topical debates roughly every other day. Recent posts include:

* A summary of a debate on AI progress, featuring Ajeya Cotra, Peter Wildeford, Eli Lifland, Matthew Barnett, and others.
* The three types of problems with population decline.
* We underestimate the pace of progress because much of it isn't salient.
* The reason we don't see more automation is simply that AI isn't good enough.
* Economists are unusually good at taking human agency into account.
Linch · 17h · 7 karma · 0 comments
I think a common mistake for researchers/analysts outside of academia[1] is that they don't focus enough on trying to make their research popular. E.g., they don't do enough to actively promote their research, or don't write it in a way that makes it easy for it to become popular.

I talked to someone (a fairly senior researcher) about this, and he said he doesn't care about mass outreach given that he only cares about his research being built upon by ~5 people. I asked him if he knows who those 5 people are and could email them; he said no.

I think this is a systematic mistake most of the time. It's true that your impact often routes through a small number of people. However, only some of the time would you know who the decisionmakers are ahead of time (e.g., X philanthropic fund should fund Y project, B regulator should loosen regulations in C domain) and have a plan for directly reaching them. In the other cases, you probably need to reach at minimum thousands of vaguely-related/vaguely-interested people before the ~5 people most relevant to your research come across it.

Furthermore, popularity has other advantages:

* If many people read your writing, it's more likely someone else will discover empirical mistakes, logical errors, or (on the upside) unexpected connections. If 100 randos read your article, it's unlikely any of them will discover a critical mistake. This becomes much more likely at 10,000+ randos.
* Writing for a semi-popular audience forces some degree of simplicity and a different type of rigor. If you write for "informed people" or "vaguely related experts" as opposed to people in your subsubfield, you have fewer shared assumptions, and are forced to use less jargon and be more precise about your claims.
* Recruitment and talent attraction. If your research agenda is good, you want other people to work on it. Popular writing is one of the best ways to get other smart people (with or without directly relevant expertise) to notice a problem…
If you work in an office with other EAs / interesting and interested people, consider putting the debate slider from our upcoming debate on a big whiteboard. It can lead to some interesting conversations and, even better, some counterfactual forum posts.

PS: I'm aware this looks a bit like 'people selling mirrors'.
quinn · 2d · 12 karma · 16 comments
I'm confused about tithing. I yearn for the diamond emoji from GWWC, but I'm not comfortable enough to commit, since I took something like a 50% pay cut to do AI safety nonprofit work. It seems weird to make such a financial commitment, which implicates my future wife, whom I have presumably not met yet, especially when I'm scraping by without much in savings per paycheck.

Is there a sense in which I'm already diamond-emoji eligible, because I'm "donating 50% of my income" in the sense of opportunity cost? 50 is, famously, greater than 10.