This is a special post for quick takes by Elizabeth. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
EA organizations frequently ask for people to run criticism by them ahead of time. I’ve been wary of the push for this norm. My big concerns were that orgs wouldn’t comment until a post was nearly done, and that it would take a lot of time. My recent post mentioned a lot of people and organizations, so it seemed like useful data.
I reached out to 12 email addresses, plus one person in FB DMs and one open call for information on a particular topic. This doesn’t quite match what you see in the post because some people/orgs were used more than once, and other mentions were cut. The post was in a fairly crude state when I sent it out.
Of those 14, 10 had replied by the start of the next day. More than half of those replied within a few hours. I expect this was faster than usual because no one had more than a few paragraphs relevant to them or their org, but it's still impressive.
It’s hard to say how sending an early draft changed things. One person got some extra anxiety because their paragraph was full of TODOs (because it was positive and I hadn’t worked as hard fleshing out the positive mentions ahead of time). I could maybe have saved myself one stressful interaction if I’d rea... (read more)
Nice, thanks for keeping track of this and reporting on the data!! <3
No pressure to respond, but I'm curious how long it took you to find the relevant email addresses, send the messages, then reply to all the people etc.? I imagine for me, the main costs would probably be in the added overhead (time + psychological) of having to keep track of so many conversations.
2
Elizabeth
Off the top of my head: in maybe half the cases I already had the contact info. In one or two cases one of my beta readers passed on the info. For the remainder it was maybe <2m per org, and it turns out they all use info@domain.org, so it would be faster next time.
Most approaches to increasing agency and ambition focus on telling people to dream big and not be intimidated by large projects. I'm sure that works for some people, but it feels really flat for me, and I consider myself one of the lucky ones. The worst case scenario is big inspiring speeches get you really pumped up to Solve Big Problems but you lack the tools to meaningfully follow up.
Faced with big dreams but unclear ability to enact them, people have a few options.
* try anyway and fail badly, probably too badly for it to even be an educational failure.
* fake it, probably without knowing they're doing so.
* learned helplessness, possibly systemic depression.
* be heading towards failure, but too many people are counting on you, so someone steps in and rescues you. They consider this net negative and prefer the world where you'd never started to the one where they had to rescue you.
* discover more skills than they knew they had; feel great, accomplish great things, learn a lot.
The first three are all very costly, especially if you repeat the cycle a few times.
My preferred version is ambition snowball or "get ambitious slowly". Pick something b... (read more)
This post is very popular on Twitter https://x.com/eaheadlines/status/1690624321117388800?s=46&t=7jI2LUFFCdoHtZr1AtWyCA
3
Jonas V
Hmm, I personally think "discover more skills than they knew. feel great, accomplish great things, learn a lot" applies a fair amount to my past experiences, and I think aiming too low was one of the biggest issues in my past, and I think EA culture is also messing up by discouraging aiming high, or something.
I think the main thing to avoid is something like "blind ambition", where your plan involves multiple miracles and the details are all unclear. This seems also a fairly frequent phenomenon.
2
Elizabeth
Accepting your self-report as a given, I have a bunch of questions.
I want to say that I'm not against ambition. From my perspective I'm encouraging more ambition, by focusing on things that might actually happen instead of daydreams.
Does the failure mode I'm describing (people spinning their wheels on fake ambition) make sense to you? Have you seen it?
I'm really surprised to hear you describe EA as discouraging aiming high. Everything I see encourages aiming high, and I see a bunch of side effects of aiming too high littered around me. Can you give some examples of what you're worried about?
What do you think would have encouraged more of the right kind of ambition for you? Did it need to be "you can solve global warming?", or would "could you aim 10x higher?" be enough?
1
Jonas V
Feeling a bit tired to type a more detailed response, but I think I mostly agree with what you say here.
2
Joseph Lemien
I think that you in particular might be quite non-representative of EAs in general, in terms of "success" in the EA context. If I imagine a distribution of "EA success," you are probably very far to the right.
2
emre kaplan🔸
I also liked this quote from Obama on a similar theme. The advice is pretty common for very good reasons, but hearing it from a former POTUS had more emotional impact on me:
"how do we sustain our own sense of hope, drive, vision, and motivation? And how do we dream big? For me, at least, it was not a straight line. It wasn't a steady progression. It was an evolution that took place over time as I tried to align what I believed most deeply with what I saw around me and with my own actions.
(...)
The first stage is just figuring out what you really believe. What's really important to you, not what you pretend is important to you. And what are you willing to risk or sacrifice for it? The next phase is then you test that against the world, and the world kicks you in the teeth. It says, "You may think that this is important, but we've got other ideas. And who are you? You can't change anything."
Then you go through a phase of trying to develop skills, courage, and resilience. You try to fit your actions to the scale of whatever influence you have. I came to Chicago and I'm working on the South Side, trying to get a park cleaned up or trying to get a school improved. Sometimes I'm succeeding, a lot of times I'm failing. But over time, you start getting a little bit of confidence with some small victories. That then gives you the power to analyze and say, "Here's what worked, here's what didn't. Here's what I need more of in order to achieve the vision or the goals that I have." Now, let me try to take it to the next level, which means then some more failure and some more frustration because you're trying to expand the orbit of your impact.
I think it's that iterative process. It's not that you come up with a grand theory of "here's how I'm going to change the world" and then suddenly it all just goes according to clockwork. At least not for me. For me, it was much more about trying to be the person I wanted to believe I was. And at each phase, challenging myself and t
8
Elizabeth
"we don't have time" is only an argument for big gambles if they work. If ambition snowballs work better, then a lack of time is all the more reason not to waste time with vanity projects whose failures won't even be educational.
I could steel man this as something of a lottery, where n% of people with way-too-big goals succeed and those successes are more valuable than the combined cost of the failures. I don't think we're in that world, because I think goals in the category I describe aren't actually goals, they're dreams, and by and large can't succeed.
You could argue that's defining myself into correctness and some big goals are genuinely goals even if they pattern match my criteria like "failure is uninformative" and "contemplating a smaller project is scary or their mind glances off the option (as opposed to being rejected for being too small)". I think that's very unlikely to be true for my exact criteria, but agree that in general overly broad definitions of fake ambition could do a lot of damage. I think creating a better definition people can use to evaluate their own goals/dreams is useful for that exact reason.
I also think that even if there are a few winning tickets in that lottery- people pushed into way-too-big projects that succeed- there aren't enough of them to make a complete problem-solving ecosystem. The winning tickets still need staff officers to do the work they don't have time for, or require skills inimical to swinging for the fences.
I should note that my target audience here is primarily "people attempting to engender ambition in others", followed by "the people who are subject to those attempts". I think engendering fake ambition is actively harmful, and the counterfactual isn't "30 years in a suit", it's engendering ambition snowballs that lead to more real projects. I don't think discouraging people who are naturally driven to do much-too-big projects is helpful.
I'd also speculate that if you tell a natural fence-swinger to s
2
NickLaing
This is fantastic, and mirrors the method that has helped things work well in my own life.
2
lincolnq
I'm a bit confused about this because "getting ambitious slowly" seems like one of those things where you might not be able to successfully fool yourself: once you can conceive that your true goal is to cure cancer, you are already "ambitious"; unless you're really good at fooling yourself, you will immediately view smaller goals as instrumental to the big one. It doesn't work to say I'm going to get ambitious slowly.
What does work is focusing on achievable goals though! Like, I can say I want to cure cancer but then decide to focus on understanding metabolic pathways of the cell, or whatever. I think if you are saying that you need to focus on smaller stuff, then I am 100% in agreement.
3
Elizabeth
Does what I said here and here answer this? The goal isn't "put the brakes on internally motivated ambition", it's "if you want to get unambitious people to do bigger projects, you will achieve your goal faster if you start them with a snowball rather than try to skip them straight to Very Big Plans".
I separately think we should be clearer on the distinction between goals (things you are actively working on, have a plan with concrete next steps and feedback loops, and could learn from failure) and dreams (things you vaguely aspire to and maybe are working in the vicinity of, but have no concrete plans for). Dreams are good, but the proper handling of them is pretty different from that of goals.
-1
Rainbow Affect
Agreed.
I think that people should break down their goals, no matter how easy they seem, into easier and smaller steps, especially if they feel lazy. Laziness appears when we feel like we need to do tasks that seem unnecessary to us, even when we know that they're necessary. One reason why they appear unnecessary is their difficulty of achievement. Why exercise for 30 minutes per day if things are "fine" without that? As such, one way to deal with that is by taking whatever goal you have and breaking it down into a lot of easy steps. As an example, imagine that you want to write the theoretical part of your thesis. You could start by writing what the topic is, what questions you might want to research, and what key uncertainties you have about those questions; then you search for papers in order to clarify those uncertainties, and so on, immediate step by step, until you finish your thesis. If a step seems difficult, break it down even more. That's why I think that breaking down your goals into smaller and easier steps might help when you feel lazy.
Anyways, thanks for your quick take!
A friend asked me which projects in EA I thought deserved more money, especially ones that seemed to be held back by insufficient charisma of the founders. After a few names he encouraged me to write it up. This list is very off the cuff and tentative: in most cases I have pretty minimal information on the project, and they’re projects I incidentally encountered on EAF. If you have additions I encourage you to comment with them.
The main list
The bar here is “the theory of change seems valuable, and worse projects are regularly funded”.
Faunalytics is a data analysis firm focused on metrics related to animal suffering. I searched high and low for health data on vegans that included ex-vegans, and they were the only place I found anything that had any information from ex-vegans. They shared their data freely and offered some help with formatting, although in the end it was too much work to do my own analysis.
I do think their description minimized the problems they found. But they shared enough information that I could figure that out rather than relying on their interpretation, and that’s good enough.
Very grateful for the kind words, Elizabeth! Manifund is facing a funding shortfall at the moment, and will be looking for donors soon (once we get the ACX Grants Impact Market out the door), so I really appreciate the endorsement here.
(Fun fact: Manifund has never actually raised donations for our own core operations/salary; we've been paid ~$75k in commission to run the regrantor program, and otherwise have been just moving money on behalf of others.)
7
Elizabeth
what would fundraising mean here? is it for staffing, or donations to programs, or to your grantmakers to distribute as they see fit?
8
Saul Munn
i've been working at manifund for the last couple months, figured i'd respond where austin hasn't (yet)
here's a grant application for the meta charity funders circle that we submitted a few weeks ago, which i think is broadly representative of who we are & what we're raising for.
tldr of that application:
* core ops
* staff salaries
* misc things (software, etc)
* programs like regranting, impact certificates, etc, for us to run how we think is best[1]
additionally, if a funder was particularly interested in a specific funding program, we're also happy to provide them with infrastructure. e.g. we're currently facilitating the ACX grants, we're probably (70%) going to run a prize round for dwarkesh patel, and we'd be excited about building/hosting the infrastructure for similar funding/prize/impact cert/etc programs. this wouldn't really look like [funding manifund core ops, where the money goes to manifund], but rather [running a funding round on manifund, where the funding mostly[2] goes to object-level projects that aren't manifund].
i'll also add that we're less funding-crunched than when austin first commented; we'll be running another regranting round, for which we'll be paid another $75k in commission. this was new info between his comment and this comment. (details of this are very rough/subject to change/not firm.)
1. ^
i'm keeping this section intentionally vague. what we want is [sufficient funding to be able to run the programs we think are best, iterate & adjust quickly, etc] not [this specific particular program in this specific particular way that we're tying ourselves down to]. we have experimentation built into our bones, and having strings attached breaks our ability to experiment fast.
2. ^
we often charge a fee of 5% of the total funding; we've been paid $75k in commission to run the $1.5mm regranting round last year.
6
Jason
I probably would have had ALLFED and CE on a list like this had I written it (don't know as much about most of the other selections). It seems to me that both organizations get, on a relative basis, a whole lot more public praise than they get funding. Does anyone have a good explanation for the praise-funding mismatch?
TL;DR: I think the main reason is the same reason we aren't donating to them: we think there are even more promising projects in terms of the effectiveness of a marginal $, and we are extremely funding constrained. I strongly agree with Elizabeth that all these projects (and many others) deserve more money.
Keeping in mind that I haven't researched any of the projects, and I'm definitely not an expert in grantmaking; I personally think that “the theory of change seems valuable, and worse projects are regularly funded” is not the right bar to estimate the relative value of a marginal dollar, as it doesn't take into account funding-gaps, costs, and actual results achieved.
As a data point on the perspective of a mostly uninformed effectiveness-oriented small donor, here's why I personally haven't donated to these projects in 2023, starting from the 2 you mention.
I'm not writing this because I think they are good reasons to fund other projects, but as a potentially interesting data-point in the psychology of an uninformed giver.
ALLFED:
Their theory of change seems really cool, but research organizations seem very hard to evaluate as a non-expert. I think 3 things all need to ... (read more)
Quick notes on your QURI section:
"after four years they don't seem to have a lot of users" -> I think it's more fair to say this has been about 2 years. If you look at the commit history you can see that there was very little development for the first two years of that time.
https://github.com/quantified-uncertainty/squiggle/graphs/contributors
We've spent a lot of time on blog posts / research, and other projects, as well as Squiggle Hub. (Though in the last year especially, we've focused on Squiggle.)
Regarding users, I'd agree it's not as many as I would have liked, but I think we do have some. If you look through the Squiggle Tag, you'll see several EA groups that have used Squiggle.
We've been working with a few EA organizations on Squiggle setups that are mostly private.
I think for-profits have their space, but I also think that nonprofits and open-source/open organizations have a lot of benefits.
4
Lorenzo Buonanno🔸
Thank you for the context! Useful example of why it's not trivial to evaluate projects without looking into the details
4
Ozzie Gooen
Of course! In general I'm happy for people to make quick best-guess evaluations openly - in part, that helps others here correct things when there might be some obvious mistakes. :)
4
Jason
My thoughts were:
* For many CE-incubated charities, the obvious counterfactual donation would be to GiveWell top charities, and that's a really high bar.
* I consider the possibility that a lot of ALLFED's potential value proposition comes from a low probability of saving hundreds of millions to billions of lives in scenarios that would counterfactually neither lead to extinction nor produce major continuing effects thousands of years down the road.
* If that is so, it is plausible that this kind of value proposition may not be particularly well suited to many neartermist donors (for whom the chain of contingencies leading to impact may be too speculative for their comfort level) or to many strong longtermist donors (for whom the effects thousands to millions of years down the road may be weaker than for other options seen as mitigating extinction risk more).
* If you had a moral parliament of 50 neartermists & 50 longtermists that could fund only one organization (and by a 2/3 majority vote), one with this kind of potential impact model might do very well!
8
Elizabeth
I think this is right and important. Possible additional layer: some donors are more comfortable with experimental or hits-based giving than others. Those people disproportionately go into x-risk. The donors remaining in global poverty/health are both more averse to uncertainty and have options to avoid it (both objectively, and vibe-wise).
4
Lorenzo Buonanno🔸
I really agree with the first point, and the really high bar is the main reason all of these projects have room for more funding.
I somewhat disagree with the second point: my impression is that many donors are interested in mitigating non-existential global catastrophic risks (e.g. natural pandemics, climate change), but I don't have much data to support this.
2
Jason
I don't think "many donors are interested in mitigating non-existential global catastrophic risks" is necessarily inconsistent with the potential explanation for why organizations like ALLFED may get substantially more public praise than funding. It's plausible to me that an org in that position might be unusually good at rating highly on many donors' charts, without being unusually good at rating at the very top of the donors' lists:
* There's no real limit on how many orgs one can praise, and preventing non-existential GCRs may win enough points on donors' scoresheets to receive praise from the two groups I described above (focused neartermists and focused longtermists) in addition to its actual donors.
* However, many small/mid-size donors may fund only their very top donation opportunities (e.g., top two, top five, etc.)
7
Vasco Grilo🔸
Hi Jason,
Here is why I do not recommend donating to ALLFED, for which I work as a contractor. If one wants to:
* Minimise existential risk, one had better donate to the best AI safety interventions, namely the Long-Term Future Fund (LTFF).
* Maximise nearterm welfare, one had better donate to the best animal welfare interventions.
* I estimate corporate campaigns for chicken welfare, like the ones promoted by The Humane League, are 1.37 k times as cost-effective as GiveWell's top charities.
* Maximise nearterm human welfare in a robust way, one had better donate to GiveWell's funds.
* I guess the cost-effectiveness of ALLFED is of the same order of magnitude of that of GiveWell's funds (relatedly), but it is way less robust (in the sense my best guess will change more upon further investigation).
* CEARCH estimated "the cost-effectiveness of conducting a pilot study of a resilient food source to be 10,000 DALYs per USD 100,000, which is around 14× as cost-effective as giving to a GiveWell top charity". "The result is highly uncertain. Our probabilistic model suggests a 53% chance that the intervention is less cost-effective than giving to a GiveWell top charity, and an 18% chance that it is at least 10× more cost-effective. The estimated cost-effectiveness is likely to fall if the intervention is subjected to further research, due to optimizer’s curse". I guess CEARCH is overestimating cost-effectiveness (see my comments).
* Maximise nearterm human welfare supporting interventions related to nuclear risk, one had better donate to Longview’s Nuclear Weapons Policy Fund.
* My impression is that efforts to decrease the number of nuclear detonations are more cost-effective than ones to decrease famine deaths caused by nuclear winter. This is partly informed by CEARCH estimating that lobbying for arsenal limitation is 5 k times as cost-effective as GiveWell's top charities, although I guess the actual cost-effectivess is more like 0.5 to 50 times that
7
Elizabeth
Some hypotheses:
1. I'm wrong, and they are adequately funded
2. I'm wrong and they're not outstanding orgs, but discovering that takes work the praisers haven't done.
3. The praise is a way to virtue signal, but people don't actually put their money behind it.
4. The praise is truly meant and people put their money behind it, but none of the praise is from the people with real money.
5. I believe CE has received OpenPhil money and ALLFED CEA and SFF money, just not as much as they wanted. Maybe the difference is not in # of grants approved, but in how much room for funding big funders believe they have or want to fill.
1. I'm not sure of CE's funding situation, it was the incubated orgs that they pitched as high-need.
6. Maybe the OpenPhil AI and meta teams are more comfortable fully funding something than other teams.
7. ALLFED also gets academic grants; maybe funders fear their money will replace those rather than stack on top of them.
8. OpenPhil has a particular grant cycle, maybe it doesn't work for some orgs (at least not as their sole support).
5
ag4000
I found this list very helpful, thank you!
On exotic tofu: I am not yet convinced that Stiffman doesn't have the requisite charisma. Is your concern that he's vegan (hence less relatable to non-vegans), his messaging in Broken Cuisine specifically, or something else? I am sympathetic to the first concern, but not as convinced by the second. In particular, from what little else I've read from Stiffman, his messaging is more like his original post on this Forum: positive and minimally doom-y. See, for example, his article in Asterisk, this podcast episode (on what appears to be a decently popular podcast?), and his newsletter.
Have you reached out to him directly about your concerns about his messaging? Your comments seem very plausible to me and reaching out seems to have a high upside.
5
Elizabeth
I sent a message to George Stiffman through a mutual friend and never heard back, so I gave up after 2 pings (to the friend).
Thanks for mentioning places Stiffman comes across better. I've read the Asterisk article and found it irrelevant to his consumer-aimed work. Maybe the Bittman podcast is consumer-targeted and an improvement, I dunno. For now I can't get over that book title and blurb.
3
anormative
Can you elaborate on what you mean by “the EA-offered money comes with strings?”
2
Elizabeth
Not well. I only have snippets of information, and it's private (Habryka did sign off on that description).
I don't know if this specifically has come up in regards to Lightcone or Lighthaven, but I know Habryka has been steadfastly opposed to the kind of slow, cautious, legally-defensive actions coming out of EVF. I expect he would reject funding that demanded that approach (and if he accepted it, I'd be disappointed in him, given his public statements).
2
Dawn Drescher
Thanks for putting the Exotic Tofu Project on my screen! I also like all the others.
We (me and my cofounder) run yet another “impact certificates” project. We started out with straightforward impact certificates, but the legal hurdles for us and for the certificate issuers turned out too high and possibly (for us) insurmountable, at least in the US.
We instead turned to the system that works for carbon credits. These are not so much traded on the level of the certificate or impact claim but instead there are validators that confirm that the impact has happened according to certain standards and then pay out the impact credits (or carbon credits) associated with that standard.
That system seems more promising to us as it has all the advantages of impact certificate markets, but also the advantage that one party (e.g., us) can fight the legal battle in the US once for this impact credit (and can even rely on the precedent of carbon credits), and thereby pave the ground for all the other market participants that come after and don't have to worry about the legalities anymore. There are already a number of non-EA organizations that are working toward a similar vision.
Even outside such restrictive jurisdictions as the US, this system has the advantage that it allows for deeper liquidity on the impact credit markets (compared to the auctions for individual impact certificates). But the US is an important market for EA and AI safety, so we couldn't just ignore it even if it hadn't been for this added benefit.
We've started bootstrapping this system with GiveWiki in January of last year. But over the course of the year we've found it very hard to find anyone who wanted to use the system as a donor/grantmaker. Most of the grantmakers we were in touch with had lost their funding in Nov. 2022; others wanted to wait until the system is mature; and many smaller donors had no trouble finding great funding gaps without our help.
We will keep the platform running, but we'll prob
2
Elizabeth
GiveWiki just looks like a list of charities to me; what's the additional thing you are doing?
4
Dawn Drescher
Frankie made a nice explainer video for that!
What a market does, idealizing egregiously, is let people with special knowledge or insight invest in things early. Less informed people (some of whom have more capital) can then watch the valuations and invest in projects with high and increasing valuations, or some other valuation-based marker of quality: a process of price discovery.
AngelList, for example, facilitates that. They have a no-action letter from the SEC (and the startups on AngelList have at least a Regulation D filing, I imagine), so they didn't have to register as a broker-dealer to be allowed to match startups to investors. I think they have some funds that are led by seasoned investors, and then the newbie investors can follow the seasoned ones by investing in their funds. Or some mechanism of that sort.
We're probably not getting a no-action letter, and we don't have the money yet to start the legal process to get our impact credits registered with the CFTC. So instead we recognized that in the above example investors are treating valuations basically like scores. So we're just using scores for now. (Some rich people say money is just for keeping score. We're not rich, so we use scores directly.)
The big advantage of actual scores (rather than using monetary valuations like scores) is that it's legally easy. The disadvantage is that we can't pitch GiveWiki to profit-oriented investors.
So unlike AngelList, we're not giving profit-oriented investors the ability to follow more knowledgeable profit-oriented investors, but we're allowing donors/grantmakers to follow more knowledgeable donors/grantmakers. (One day, with the blessing of the CFTC, we can hopefully lift that limitation.)
We usually frame this as a process of three phases:
1. Implement the equivalent of price discovery with a score. (The current state of GiveWiki.)
2. Pay out a play money currency according to the score.
3. Turn the play money currency into a real impact credit t
There's a thing in EA where encouraging someone to apply for a job or grant gets coded as "supportive", maybe even a very tiny gift. But that's only true when [chance of getting job/grant] x [value of job/grant over next best alternative] > [cost of applying].
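That inequality can be sketched as a simple expected-value check. A minimal illustration in Python, with entirely made-up numbers (the probabilities, dollar values, and costs below are hypothetical, not from the post):

```python
# Hypothetical sketch of the application expected-value inequality:
# encouragement is only a "gift" if p(success) * marginal value > cost of applying.

def worth_applying(p_success: float, value_over_alternative: float,
                   cost_of_applying: float) -> bool:
    """Return True if the expected benefit of applying exceeds its cost."""
    return p_success * value_over_alternative > cost_of_applying

# A grant with a 3% chance of success, worth $20k over the next-best option,
# but costing ~$1k of time to apply for: expected value 600 < 1000.
print(worth_applying(0.03, 20_000, 1_000))  # False

# The same grant with a 10% chance clears the bar: 2000 > 1000.
print(worth_applying(0.10, 20_000, 1_000))  # True
```

Note that encouraging more people to apply can itself lower each applicant's p(success), which is exactly the dynamic in the anecdote below.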
One really clear case was when I was encouraged to apply for a grant my project wasn't a natural fit for, because "it's quick and there are few applicants". This seemed safe, since the deadline was in a few hours. But in those few hours the number of applications skyrocketed (I want to say 5x, but my memory is shaky), presumably because I wasn't the only person the grantmaker encouraged. I ended up wasting several hours of my and my co-founder's time before dropping out, because the project really was not a good fit for the grant.
[if the grantmaker is reading this and recognizes themselves: I'm not mad at you personally].
I've been guilty of this too, defaulting to encouraging people to try for something without considering the costs of making the attempt, or the chance of success. It feels so much nicer than telling someone "yeah you're probably not good enough".
A lot of EA job postings encourage people t... (read more)
I think this falls into a broader class of behaviors I'd call aspirational inclusiveness.
I do think shifting the relative weight from welcoming to clear is good. But I'd frame it as a "yes and" kind of shift. The encouragement message should be followed up with a dose of hard numbers.
Something I've appreciated from a few applications is the hiring manager's initial guess for how the process will turn out. Something like "Stage 1 has X people and our very tentative guess is future stages will go like this".
Scenarios can also substitute in areas where numbers may be misleading or hard to obtain. I've gotten this from mentors before, like here's what could happen if your new job goes great. Here's what could happen if your new job goes badly. Here's the stuff you can control and here's the stuff you can't control.
Something I've tried to practice in my advice is giving some ballpark number and reference class. I tell someone they should consider skilling up in a hard area or pursuing a competitive field, then I tell them I expect <5% of the people I give the advice to will succeed, and then say they may still want to do it for certain reasons.
Yes, it's all very noisy. But numbers seem far, far better than expecting applicants to read between the lines of a heartwarming message, especially early-career folks who would understandably take it to imply a high probability of success.
Yeah this sounds right.
One thing is just that discouragement is culturally quite hard and there are strong disincentives against it; e.g. I definitely get more flak for telling people they shouldn't do X than for telling them they should (including a recent incident which was rather personally costly). And I think I'm much more capable of diplomatic language than the median person in such situations; some of my critical or discouraging comments on this forum are popular.
I also know at least 2 different people who were told (probably wrongly) many years ago that they can't be good researchers, and they still bring it up as recently as this year. Presumably people falsely told they can be good researchers (or correctly told that they cannot) are less likely to e.g. show up at EA Global. So it's easier for people in positions of relative power or prestige to see the positive consequences of encouragement, and the negative consequences of discouragement, than the reverse.
Sometimes when people ask me about their chances, I try to give them off-the-cuff numerical probabilities. Usually the people I'm talking to appreciate it but sometimes people around them (or around me) get mad at me.
(Tbf, I have never tried scoring these fast guesses, so I have no idea how accurate they are).
1
Evan_Gaensbauer
How my perspective has changed on this during the last few years is to advise others not to give much weight to a single point of feedback. Especially for those who've told me only one or two people have discouraged them from be(com)ing a researcher, I tell them not to stop trying in spite of that. That's even when the person giving the discouraging feedback is in a position of relative power or prestige.
The last year seems to have proven that the power or prestige someone has gained in EA is a poor proxy for how much weight their judgment should be given on any single EA-related topic. If Will MacAskill and many of his closest peers are doubting how they've conceived of EA for years in the wake of the FTX collapse, I expect most individual effective altruists confident enough to judge another's entire career trajectory are themselves likely overconfident.
Another example is AI safety. I've talked to dozens of aspiring AI safety researchers who've felt very discouraged by an illusory consensus thrust upon them that their work was essentially worthless because it didn't superficially resemble the work being done by the Machine Intelligence Research Institute, or whatever other approach was in vogue at the time. For years, I suspected that was bullshit.
Some of the brightest effective altruists I've met were being inundated by personal criticism harsher than any even Eliezer Yudkowsky would give. I told those depressed, novice AIS researchers to ignore those dozens of jerks who concluded the way to give constructive criticism, like they presumed Eliezer would, was to emulate a sociopath. These people were just playing a game of 'follow the leader' that not even the "leaders" would condone. I distrusted their hot takes, based on clout and vibes, about who was competent and who wasn't.
Meanwhile, increasingly over the last year or two, more and more of the AIS field, including some of its most reputed luminaries, have come out of the woodwork to say, essentially
3
Julia Michaels 🔸
Thanks for this post, as I've been trying to find a high-impact job that's a good personal fit for 9 months now. I have noticed that EA organizations use what appears to be a cookie-cutter recruitment process with remarkable similarities across organizations and cause areas. This process is also radically different from what non-EA nonprofit organizations use for recruitment. Presumably EA organizations adopted this process because there's evidence behind its effectiveness but I'd love to see what that evidence actually is. I suspect it privileges younger, (childless?) applicants with time to burn, but I don't have data to back up this suspicion other than viewing the staff pages of EA orgs.
3
Elizabeth
Can you say more about cookie-cutter recruitment? I don't have a good sense of what you mean here.
I think solving this is tricky. I want hiring to be efficient, but most ways hiring orgs can get information take time, and that's always going to be easier for people with more free time. I think EA has an admirable norm of paying for trials and deserves a lot of credit for that.
2
Austin
One possible solution is to have applicants create a prediction market on their chance of getting a job/grant, before applying -- this helps grant applicants get a sense of how good their prospects are. (example 1, 2) Of course, there's a cost to setting up a market and making the relevant info legible to traders, but it should be a lot less than the cost of writing the actual application.
Another solution I've been entertaining is to have grantmakers/companies screen applications in rounds, or collaboratively, such that the first phase of application is very very quick (eg "drop in your Linkedin profile and 2 sentences about why you're a good fit").
4
Joseph Lemien
I'd be interested in seeing some organizations try out the very very quick method. Heck, I'd be willing to help set it up and trial run it. My rough/vague perception is that a lot of the information in a job application is superfluous.
I also remember Ben West posting some data about how a variety of "how EA is this person" metrics held very little predictive value in his own hiring rounds.
EA hiring gets a lot of criticism. But I think there are aspects at which it does unusually well.
One thing I like is that hiring and holding jobs feels way more collaborative between boss and employee. I'm much more likely to feel like a hiring manager wants to give me honest information and make the best decision, whether or not that's with them. Relative to the rest of the world, they're much less likely to take investigating other options personally.
Work trials and even trial tasks have a high time cost, and are disruptive to people with normal amounts of free time and work constraints (e.g. not having a boss who wants you to trial with other orgs because they personally care about you doing the best thing, whether or not it's with them). But trials are so much more informative than interviews, I can't imagine hiring for or accepting a long-term job without one.
Trials are most useful when you have the least information about someone, so I expect removing them to lead to more inner-ring dynamics and less hiring of unconnected people.
EA also has an admirable norm of paying for trials, which no one does for interviews.
The impression I get from the interview paradigm vs work trial paradigm is: so much of today's civilization is less than 100 years old, and really big transformations happen during each decade. The introduction of work trials is one of those things.
People talk about running critical posts by the criticized person or org ahead of time, and there are a lot of advantages to that. But the plans I've seen are all fairly one sided: all upside goes to the criticized, all the extra work goes to the critic.
What I'd like to see is some reciprocal obligation from recipients of criticism, especially formal organizations with many employees. Things like answering questions from potential critics very early in the process, with a certain level of speed and reliability. Right now orgs respond quickly to public criticism, and maybe even to polished posts sent to them before publication, but they are not fast or reliable at answering questions with implicit potential criticism behind them. Which is a pretty shitty deal for the critic, who I'm sure would love to find out their concern was unmerited before spending dozens of hours writing a polished post.
This might be unfair. I'm quite sure it used to be true, but a lot of the orgs have professionalized over the years. In which case I'd like to ask they make their commitments around this public and explicit, and share them in the same breath that they ask for heads up on criticism.
But the plans I've seen are all fairly one sided: all upside goes to the criticized, all the extra work goes to the critic.
I see a pretty important benefit to the critic, because you're ensuring that there isn't some obvious response to your criticisms that you are missing.
I once posted something that revised/criticized an Open Philanthropy model, without running it by anyone there, and it turned out that my conclusions were shifted dramatically by a coding error that was detected immediately in the comments.
That's a particularly dramatic example that I don't expect to generalize, but often if a criticism goes "X organization does something bad" the natural question is, why do they do that? Is there a reason that's obvious in hindsight that they've thought about a lot, but I haven't? Maybe there isn't, but I would want to run a criticism by them just to see if that's the case.
I don't think people are obligated to build in the feedback they get extensively if they don't think it's valid/their point still stands.
This seems like a good argument against not asking, but a bad argument against getting people information as early as possible.
4
Karthik Tadepalli
I don't have any disagreement with getting people information early, I just think characterizing the current system as one where only the criticizee benefits is wrong.
A few benefits I see to the critic even in the status quo:
The post generally ends up stronger, because it's more accurate. Even if you only got something minor wrong, readers will (reasonably!) assume that if you're not getting your details right then they should pay less attention to your post.
To the extent that the critic wants the public view to end up balanced and isn't just trying to damage the criticizee, having the org's response go live at the same time as the criticism helps.
If the critic does get some things wrong despite giving the criticizee the opportunity to review and bring up additional information, either because the criticizee didn't mention these issues or refused to engage, the community would generally see it as unacceptable for the criticizee to sue the critic for defamation. Whereas if a critic posts damaging false claims without that (and without a good reason for skipping review, like "they abused me and I can't sanely interact with them") then I think the law is still on the table.
A norm where orgs need to answer critical questions promptly seems good on its face, but I'm less sure in practice. Many questions take far more effort to answer... (read more)
You're not wrong, but I feel like your response doesn't make sense in context.
Handled vastly better by being able to reliably get answers about concerns earlier.
Assumes things are on a roughly balanced footing and unanswered criticism pushes it out of balance. If criticism is undersupplied for large orgs, making it harder makes things less balanced (but rushed or bad criticism doesn't actually fix this, now you just have two bad things happening)
I'm asking the potential criticizee to provide that information earlier in the process.
Two popular responses to FTX are "this is why we need to care more about honesty" and "this is why we need to not do weird/sketchy shit". I pretty strongly believe the former. I can see why people would believe the latter, but I worry that the value lost is too high.
But I think both sides can agree that representing your weird/sketchy thing as mundane is highly risky. If you're going to disregard a bunch of the normal safeguards of operating in the world, you need to replace them with something, and most of those somethings are facilitated by honesty.
Complaints about lack of feedback for rejected grants are fairly frequent, but it seems relevant that I can't get feedback for my accepted grants or in-progress work. The most I have ever gotten was a 👍 react when I texted them "In response to my results I will be doing X instead of the original plan on the application". In fact I think I've gotten more feedback on rejections than acceptances (or in one case, I received feedback on an accepted grant, from a committee member who'd voted to reject). Sometimes they give me more money, so it's not that the work is so bad it's not worth commenting on. Admittedly my grants are quite small, but I'm not sure how much feedback medium or even large projects get.
Acceptance feedback should be almost strictly easier to give, and higher impact. You presumably already know positives about the grant, the impact of marginal improvements is higher in most cases, people rarely get mad about positive feedback, and even if you share negatives the impact is cushioned by the fact that you're still approving their application. So without saying where I think the line should be, I do think feedback for acceptances is higher priority than for rejections.
A relevant question here is "what would I give up to get that feedback?". This is very sensitive to the quality of feedback and I don't know exactly what's on offer, but... I think I'd give up at least 5% of my grants in exchange for a Triplebyte-style short email outlining why the grant was accepted, what their hopes are, and potential concerns.
2
Benevolent_Rain
I have had that experience too. It seems grant work is pretty independent. I think it is worth emphasizing that even though you might not get much except a thumbs up, it is important to inform the grantmakers about changes in plans. Moreover, I think your way of doing it as a statement instead of as a question is a good strategy. I have also included something along the lines of "if you have concerns, questions or objections about my proposed change of plan, please contact me asap", so that you firmly place the ball in the grantmakers' court and it seems fair to interpret a lack of response as an endorsement of your proposed changes.
Good posts generate a lot of positive externalities, which means they're undersupplied, especially by people who are busy and don't get many direct rewards from posting. How do we fix that? What are rewards relevant authors would find meaningful?
Here are some possibilities off the top of my head, with some commentary. My likes are not universal and I hope the comments include people with different utility functions.
Money. Always a classic, rarely disliked although not always prioritized. I'm pretty sure this is why LTFF and EAIF are writing more now.
Appreciation (broad). Some people love these. I definitely prefer getting them over not getting them, but they're not that motivating for me. Their biggest impact on motivation is probably cushioning the blow of negative comments.
Appreciation (specific). Things like "this led to me getting my iron tested" or "I changed my mind based on X". I love these, they're far more impactful than generic appreciation.
High quality criticism that changes my mind.
Arguing with bad commenters.
One of the hardest parts of writing for me is getting a shitty, hostile comment, and feeling like my choices are "let it stand" or "get suc
I definitely agree that funding is a significant factor for some institutional actors.
For example, RP's Surveys and Data Analysis team has a significant amount of research that we would like to publish if we had capacity / could afford to do so. Our capacity is entirely bottlenecked on funding, and as we are ~entirely reliant on paid commissions (we don't receive any grants for general support), time spent publishing reports is basically just pro bono, adding to our funding deficit.
Examples of this sort of unpublished research include:
The two reports mentioned by CEA here about attitudes towards EA post-FTX among the general public, elites, and students on elite university campuses.
Followup posts about the survey reported here about how many people have heard of EA, to further discuss people's attitudes towards EA, and where members of the general public hear about EA (this differs systematically)
Updated numbers on the growth of the EA community (2020-2022) extending this method and also looking at numbers of highly engaged longtermists specifically
Several studies we ran to develop reliable measure of how positively inclined towards longtermism people are, looking at different predic
"We want to publish but can't because the time isn't paid for" seems like a big loss[1], and a potentially fixable one. Can I ask what you guys have considered for fixing it? This seems to me like an unusually attractive opportunity for crowdfunding or medium donors, because it's a crisply defined chunk of work with clear outcomes. But I imagine you guys have already put some thought into how to get this paid for.
1. ^
To be totally honest, I have qualms about the specific projects you mention, they seem centered on social reality not objective reality. But I value a lot of RP's other work, think social reality investigations can be helpful in moderation, and my qualms about these questions aren't enough to override the general principle.
2
David_Moss
Thanks! I'm planning to post something about our funding situation before the end of the year, but a couple of quick observations about the specific points you raise:
* I think funding projects from multiple smaller donors is just generally more difficult to coordinate than funding from a single source
* A lot of people seem to assume that our projects already are fully funded or that they should be centrally funded because they seem very much like core community infrastructure, which reduces inclination to donate
I'd be curious to understand this line of thinking better if you have time to elaborate. "Social" vs "objective" doesn't seem like a natural and action-guiding distinction to me. For example:
* Does everyone we want to influence hate EA post-FTX?
* Is outreach based on "longtermism", "existential risk", principles-based effective altruism, or specific concrete causes more effective?
* Do people who first engage with EA when they are younger end up less engaged with EA than those who first engage when they are older?
* How fast is EA growing?
all strike me as objective social questions of clear importance. Also, it seems like the key questions around movement building will often be (characterisable as) "social" questions. I could understand concerns about too much meta but too much "social" seems harder to understand.[1]
1. ^
A possible interpretation I would have some sympathy for is distinguishing between concern with what is persuasive vs what is correct. But I don't think this raises concerns about these kinds of projects, because:
- A number of these projects are not about increasing persuasiveness at all (e.g. how fast is EA growing? Where are people encountering EA ideas?). Even findings like "does everyone on elite campuses hate EA?" are relevant for reasons other than simply increasing persuasiveness, e.g. decisions about whether we should increase or decrease spending on outreach at
2
Elizabeth
Yeah, "objective" wasn't a great word choice there. I went back and forth between "objective", "object", and "object-level", and probably made the wrong call. I agree there is an objective answer to "what percentage of people think positively of malaria nets?" but view it as importantly different than "what is the impact of nets on the spread of malaria?"
I agree the right amount of social meta-investigation is >0. I'm currently uncomfortable with the amount EA thinks about itself and its presentation; but even if that's true, professionalizing the investigation may be an improvement. My qualms here don't rise to the level where I would voice them in the normal course of events, but they seemed important to state when I was otherwise pretty explicitly endorsing the potential posts.
I can say a little more on what in particular made me uncomfortable. I wouldn't be writing these if you hadn't asked and if I hadn't just called for money for the project of writing them up, and if I was I'd be aiming for a much higher quality bar. I view saying these at this quality level as a little risky, but worth it because this conversation feels really productive and I do think these concerns about EA overall are important, even though I don't think they're your fault in particular:
* several of these questions feel like they don't cut reality at the joints, and would render important facets invisible. These were quick summaries so it's not fair to judge them, but I feel this way about a lot of EA survey work where I do have details.
* several of your questions revolve around growth; I think EA's emphasis on growth has been toxic and needs a complete overhaul before EA is allowed to gather data again.
* I especially think CEA's emphasis on Highly Engaged people is a warped frame that causes a lot of invisible damage. My reasoning is pretty similar to Theo's here.
* I don't believe EA knows what to do with the people it recruits, and should stop worrying about recruiti
I think we need to be a bit careful with this, as I saw many highly upvoted posts that in my opinion have been actively harmful. Some very clear examples:
Theses on Sleep, claiming that sleep is not that important. I know at least one person that tried to sleep 6 hours/day for a few weeks after reading this, with predictable results
In general, I think we should promote more posts like "Veg*ns should take B12 supplements, according to nearly-unanimous expert consensus" while not promoting posts like "Veg*nism entails health tradeoffs", when there is no scientific evidence of this and expert consensus to the contrary. (I understand that your intention was not to claim that a vegan diet was worse than an average non-vegan diet, but that's how most readers I've spoken to updated in response to your posts.)
I think you are incorrectly conflating being mistaken and being "actively harmful" (what does actively mean here?) I think most things that are well-written and contain interesting true information or perspectives are helpful, your examples included.
Truth-seeking is a long game that is mostly about people exploring ideas, not about people trying to minimize false beliefs at each individual moment.
2
AnonymousTurtle
That's a fair point, I listed posts that were clearly not only mistaken but also harmful, to highlight that the cost-benefit analysis of "good posts" as a category is very non-obvious.
I shouldn't have used the term "actively", I edited the comment.
I fear that there's a very real risk of building castles in the sky, where interesting true information gets mixed with interesting not-so-true information and woven into a misleading narrative that causes bad consequences, that this happens often, and that we should be mindful of that.
I should have explicitly mentioned it, but I mostly agree with Elizabeth's quick take. I just want to highlight that while some "good posts" "generate a lot of positive externalities", many other "good posts" are wrong and harmful (and many many more get forgotten after a few days). I'm also probably more skeptical of hard-to-measure diffuse benefits without a clear theory of change or observable measures and feedback loops.
As of October 2022, I don't think I could have known FTX was defrauding customers.
If I'd thought about it I could probably have figured out that FTX was at best a casino, and I should probably think seriously before taking their money or encouraging other people to do so. I think I failed in an important way here, but I also don't think my failure really hurt anyone, because I am such a small fish.
But I think in a better world I should have had the information that would lead me to conclude that Sam Bankman-Fried was an asshole who didn't keep his promises, and that this made it risky to make plans that depended on him keeping even explicit promises, much less vague implicit commitments. I have enough friends of friends that have spoken out since the implosion that I'm quite sure that in a more open, information-sharing environment I would have gotten that information. And if I'd gotten that information, I could have shared it with other small fish who were considering uprooting their lives based on implicit commitments from SBF. Instead, I participated in the irrational exuberance that probably made people take more risks on the margin, and left them more vulnerable to... (read more)
I think the encouragement I gave people represents a moral failure on my part. I should have realized I didn't have enough information to justify it, even if I never heard about specific bad behavior.
I don't know the specific circumstances of your or anyone else's encouragement, so I want to be careful not to opine on any specific circumstances. But as a general matter, I'd encourage self-compassion for "small fish" [1] about getting caught up in "irrational exuberance." Acting in the presence of suboptimal levels of information is unavoidable, and declining to act until things are clearer carries moral weight as well.
In retrospect, we know that the EA whisper network isn't that reliable, that prominence in EA shouldn't be seen as a strong indicator of reliability, that the media was asleep at the wheel, and that crypto investors exercise very minimal due diligence. But I don't think we should expect "small fish" to have known those things in 2021 and 2022.
Hell even if SBF wasn't an unreliable asshole, Future Fund could have turned off the fire hose for lots of reasons. IIRC they weren't even planning on continuing the regrantor project.
I think expecting myself to figure out the fraud would be unreasonable. As you say, investors giving him billions of dollars didn't notice, why should I, who received a few tens of thousands, be expected to do better due diligence? But I think a culture where this kind of information could have bubbled up gradually is an attainable and worthwhile goal.
E.g. I think my local community handled covid really well. That didn't happen because someone wrote a big scary announcement. It was an accumulation of little things, like "this is probably nothing but always good to keep a stock of toilet paper" and "if this is airborne masks are probably useful". And that could happen because those small statements were allowed. And I think it would have been good if people could similarly share small warnings about SBF as casually as they shared good things, and an increasingly accurate picture would emerge over time.
Am I understanding right that the main win you see here would have been protecting people from risks they took on the basis that Sam was reasonably trustworthy?
I also feel pretty unsure but curious about whether a vibe of "don't trust Sam / don't trust the money coming through him" would have helped discover or prevent the fraud - if you have a story for how it could have happened (e.g. via as you say people feeling more empowered to say no to him - maybe it would have via been his staff making fewer crazy moves on his behalf / standing up to him more?), I'd be interested.
"protect people from dependencies on SBF" is the thing for which I see a clear causal chain and am confident in what could have fixed it.
I do have a more speculative hope that an environment where things like "this billionaire firehosing money is an unreliable asshole" are easy to say would have gotten better outcomes for the more serious issues, on the margin. Maybe the FTX fraud was overdetermined, even if it wasn't and I definitely don't have enough insight to be confident in picking a correction. But using an abstract version of this case as an example for how I think a more open environment could have led to better outcomes:
My sense is SBF just kept taking stupid unethical bets and having them work out for him financially and socially. Maybe small consequences early on would have reduced the reward to stupid unethical bets.
Before the implosion, SBF('s public persona) was an EA success story that young EAs aspired to copy. Less of that on the margin would probably lead to less fraud 5 years from now, especially in the world where the FTX fraud took longer to discover.
I think aping SBF's persona was bad for other reasons, but they're harder to justify.
None of my principled arguments against "only care about big projects" have convinced anyone, but in practice Google reorganized around that exact policy ("don't start a project unless it could conceivably have 1b+ users, kill if it's ever not on track to reach that") and they haven't grown an interesting thing since.
My guess is the benefits of immediately aiming high are overwhelmed by the costs of less contact with reality.
The policy was commonly announced when I worked at Google (2014); I'm sure anyone else who was there at the time would confirm its existence. In terms of "haven't grown anything since", I haven't kept close track, but I can't name one and frequently hear people say the same.
2
Linch
I like the Google Pixels. Well specifically I liked 2 and 3a but my current one (6a) is a bit of a disappointment. My house also uses Google Nest and Chromecast regularly. Tensorflow is okay. But yeah, overall certainly nothing as big as Gmail or Google Maps, never mind their core product.
4
Elizabeth
Google was producing the Android OS and its own flagship phones well before the Pixel, so I consider it to predate my knowledge of the policy (although maybe the policy started before I got there, which I've now dated to 4/1/2013)
0
Evan_Gaensbauer
Please send me links to posts with those arguments you've made, as I've not read them, though my guess would be that you haven't convinced anyone because some of the greatest successes in EA started out so small. I remember the same kind of skepticism being widely expressed about some projects like that.
Rethink Priorities comes to mind as one major example. The best example is Charity Entrepreneurship. It was not only one of those projects whose potential scalability was doubted; it keeps incubating successful non-profit EA startups across almost every EA-affiliated cause. CE's cumulative track record might be the best empirical argument against the broad applicability of your position to the EA movement.
4
Elizabeth
Your comment makes the most sense to me if you misread my post and are responding to exactly the opposite of my position, but maybe I'm the one misreading you.
2
Evan_Gaensbauer
Upvoted. Thanks for clarifying. The conclusion to your above post was ambiguous to me, though I now understand.
That palette is not just great in the abstract, it's great as a representation of LW. I did some very interesting anthropology with some non-rationalist friends explaining the meaning and significance of the weirder reacts.
A lot of what I explained was how specific reacts relate to one of the biggest pain points on LW (and EAF): shitty comments. The reacts are weirdly powerful, in part because it's not the comments' existence that’s so bad, it’s knowing that other people might read them and not understand they are shitty. I could explain why in a comment of my own, but that invites more shitty comments and draws attention to the original one. It’s only worth it if many people are seeing and believing the comment.
Emojis neatly resolve this. If several people mark a comment as soldier mindset, I feel off the hook for arguing with it. And if several people (especially people I respect) mark a comment as insightful or changing their mind, that suggests that at a minimum it’s worth the time to engage with the comment, and quite possibly I am in the wrong.
You might say I should develop a thicker skin so shitty comments bug me less, and t... (read more)
How common do you think "shitty comments" are? And how well/poorly do you think the existing karma system provides an observer with knowledge that the user base "understand[s] they are shitty"? (To be sure, it doesn't tell you if the voting users understand exactly why the comment is shitty.)
I'm not sure how many people would post attributed-to-them emojis who weren't already anonymously downvoting a comment for being shitty. So if a comment isn't already getting significant downvotes, I don't know how many negative emojis it would get here.
They're especially useful for comments of mixed quality, e.g. someone is right and making an important point, but too aggressively. Or a comment is effortful, well-written, and correct within its frame, but fundamentally misunderstood your position. Or, god forbid, someone makes a good point and a terrible point in the same comment. I was originally skeptical of line-level reacts but ended up really valuing them for exactly these cases.
There's also reacts like "elaborate", "taboo this word" and "example" that invite a commenter to correct problems, at which point the comment may become really valuable. Unfortunately there are no notifications for reacts so this can easily go unnoticed, but it at least raises the option.
If I rephrase your question as "how often do I see comments for which reacts convey something important I couldn't say with karma?": most of my posts since reacts came out have been controversial, so I'm using many comment reacts per post (not always dismissively).
I also find positive emojis much more rewarding than karma, especially Changed My Mind.
I like the LW emoji palette, but it is too much. Reading forum posts and parsing through comments can be mentally taxing. I don't want to spend additional effort going through a list of forty-something emojis and buttons to react to something, especially comments. I am often pressed for time, so almost always I would avoid the LW emoji palette entirely. Maybe a few other important reactions can be added instead of all of them? Or maybe there could be a setting which allows people to choose if they want to see a "condensed" or "extended" emoji palette? Either way, just my two cents.
I agree EAF shouldn't have a LW-sized palette, much less LW's specific palette. I want EAF to have a palette that reflects its culture as well as LW's palette reflects its culture. And I think that's going to take more than 4 reacts (note that my original comment mortifyingly used a special palette made for a single post, the new version has the normal EAF reacts of helpful, insightful, changed my mind, and heart), but way less than is in the LW palette.
I do think part of LessWrong's culture is preferring to have too many options rather than making do with the wrong one. I know the team has worked really hard to keep reacts to a manageable number while making most of them very precise and covering a wide swath of how people want to react. I think they've done an admirable job (full disclosure: I'm technically on the mod team and give opinions in slack, but that's basically the limit of my power). This is something I really appreciate about LW, but I know it shrinks its audience.
I'm not on LW very often, how frequently do you see these emojis being used?
From a UX perspective, I agree with Akash - it seems like there are way too many options and my prior is that people wouldn't use >80% of them.
9
Lizka
Hi! I think we might have a bug — I'm not sure where you're seeing those emojis on the Forum. For me, here are the emojis that show up:
@Agnes Stenlund might be able to say more about how we chose those,[1] but I do think we went for this set as a way to create a low-friction way of sharing non-anonymous positive feedback (which authors and commenters have told us they lack, and some have told us that they feel awkward just commenting with something non-substantive but positive like "thanks!") while also keeping the UX understandable and easy to use. I think it's quite possible that it would be better to also add some negative/critical emojis, but right now I'm not very convinced, and not convinced that it's super promising relative to the other stuff we're working on or that it's something we should dive deeper into. It won't be my call in the end, regardless, but I'm definitely up for hearing arguments about why this is wrong!
1. ^
I don't view this as a finalized set — I think there's a >50% chance (75%?) that we've changed at least something about it in the next ~6 months.
8
Ollie Etherington
Not a bug - it's from Where are you donating this year, and why? which is grandfathered into an old experimental voting system (and it's the only post with this voting system - there are a couple of others with different experimental systems).
4
Elizabeth
I'm so sorry- I should have been more surprised when I went to get a screenshot and it wasn't the palette I expected. I have comments set to notify me only once per day, so I didn't get alerted to the issue until now.
I wrote this with the standard palette so I still think there is a problem, but I feel terrible for exaggerating it with a palette that was perfectly appropriate for its thread.
4
Pablo
Semi-tangential question: what's the rationale for making the reactions public but the voting (including the agree/disagree voting) anonymous?
8
Rebecca
Where are you seeing that emoji palette on here?
2
Elizabeth
See sister thread- this was for a specific positivity-focused thread I picked completely at random 😱.
4
Sarah Cheng
As Ollie mentioned, I made the set you referenced for just this one thread. As far as I remember it was meant to support positive vibes in that thread and was done very quickly, so I would not say a lot of thought went into that palette.
2
Elizabeth
@Lizka and co: could I ask for some commentary on this?
A repost from the discussion on NDAs and Wave (a software company). Wave was recently publicly revealed to have made severance dependent on non-disparagement agreements, cloaked by non-disclosure agreements. I had previously worked at Wave, but negotiated away the non-disclosure agreement (but not the non-disparagement agreement).
But my guess is that most of the people you sent to Wave were capable of understanding what they were signing and thinking through the implications of what they were agreeing to, even if they didn't actually have the conscientiousness / wisdom / quick-thinking to do so. (Except, apparently, Elizabeth. Bravo, @Elizabeth!)
I appreciate the kudos here, but feel like I should give more context.
I think some of what led me to renegotiate was a stubborn streak and righteousness about truth. I mostly hear when those traits annoy people, so it’s really nice to have them recognized in a good light here. But that righteous streak was greatly enabled by the fact that my mom is a lawyer who modeled reading legal documents before signing (even when it's embarrassing your kids who just want to join their friends at the rock-climbing birthday party), and that I cou... (read more)
I feel like a lot of castle discourse missed the point.
By default, OpenPhil/Dustin/Owen/EV don't need anyone's permission for how they spend their money.
And it is their money, AFAICT open phil doesn't take small donations. I assume Dustin can advocate for himself here.
One might argue that the castle has such high negative externalities it can be criticized on that front. I haven't seen anything to convince me of that, but it's a possibility and "right to spend one's own money" doesn't override that.
You could argue OpenPhil etc made some sort of promise they are violating by buying the castle. I don't think that's true- but I also think the castle-complainers have a legitimate grievance.
I do think the word "open" conveys something of a promise, and I will up my sympathy for open phil if they change their name. But my understanding is they are more open than most foundations.
My guess is that lots of people entered EA with inaccurate expectations, and the volume at which this happens indicates a systemic problem, probably with recruiting. They felt ~promised that EA wasn't the kind of place where people bought fancy castles, or would at least publicly announce they'd... (read more)
I think the first point here -- that the buyers "don't need anyone's permission" to purchase a "castle" -- isn't contested here. Other than maybe the ConcernedEA crowd, is anyone claiming that they were somehow required to (e.g.) put this to a vote?
I think the "right to spend one's own money" in no way undermines other people's "right to speak one's own speech" by lambasting that expenditure. In the same way, my right to free speech doesn't prevent other people from criticizing me for it, or even deciding not to fund/hire me if I were to apply for funding or a job. There are circumstances in which we have -- or should have -- special norms against negative reactions by third parties; for instance, no one should be retailiated against for reporting fraud, waste, abuse, harassment, etc. But the default rule is that what the critics have said here is fair game.
A feeling of EA having breached a "~promise[]" isn't the only basis for standing here. Suppose a non-EA megadonor had given a $15MM presumably tax-deductible donation to a non-EA charity for buying a "castle." Certainly both EAs and non-EAs would have the right to criticize that decision, especially because the tax-favored... (read more)
I 100% agree with you that people should be and are free to give their opinions, full stop.
Many specific things people said only make sense to me if they have some internal sense that they are owed a justification and input (example, example, example, example).
I almost-but-don't-totally reject PR arguments. EA was founded on "do the thing that works not the thing that looks good". EAs encourage many other things people find equally distasteful or even abhorrent, because they believe it does the most good. So "the castle is bad PR" is not a good enough argument, you need to make a case for "the castle is bad PR and meaningfully worse than these other things that are bad PR but still good". I believe things in that category exist, and people are welcome to make arguments that the castle is one of them, but you do have to make the full argument.
I think you're slightly missing the point of the 'castle' critics here.
By default, OpenPhil/Dustin/Owen/EV don't need anyone's permission for how they spend their money. And it is their money, AFAICT open phil doesn't take small donations. I assume Dustin can advocate for himself here.
One might argue that the castle has such high negative externalities it can be criticized on that front. I haven't seen anything to convince me of that, but it's a possibility and "right to spend one's own money" doesn't override that.
Technically this is obviously true. And it was the main point behind one of the most popular responses to FTX and all the following drama. But I think that point and the post miss people's concerns completely and come off as quite tone-deaf.
To pick an (absolutely contrived) example, let's say OpenPhil suddenly says it now believes that vegan diets are more moral and healthier than all other diets, and that B12 supplementation increases x-risk, and they're going to funnel billions of dollars into this venture to persuade people to go Vegan and to drone-strike any factories producing B12. You'd probably be shocked and think that this was a terrible decision and that it ... (read more)
Bombing B12 factories has negative externalities and is well covered by that clause. You could make it something less inflammatory, like funding anti-B12 pamphlets, and there would still be an obvious argument that this was harmful. Open Phil might disagree, and I wouldn't have any way to compel them, but I would view the criticism as having standing due to the negative externalities. I welcome arguments the retreat center has negative externalities, but haven't seen any that I've found convincing.
My understanding is:
* Open Phil deliberately doesn't fill the full funding gap of poverty and health-focused charities.
* While they have set a burn rate and are currently constrained by it, that burn rate was chosen to preserve money for future opportunities they think will be more valuable. If they really wanted to do both AMF and the castle, they absolutely could.
Given that, I think the castle is a red herring. If people want to be angry about open phil not filling the full funding gaps when it is able I think you can make a case for that, but the castle is irrelevant in the face of its many-billion dollar endowment.
https://www.openphilanthropy.org/research/update-on-how-were-thinking-about-openness-and-information-sharing/
6
Jason
Even assuming OP was already at its self-imposed cap for AMF and HKI, it could have asked GiveWell for a one-off recommendation. The practice of not wanting to fill 100% of a funding gap doesn't mean the money couldn't have been used profitably elsewhere in a similar organization.
4
Elizabeth
are you sure GW has charities that meet their bar that they aren't funding as much as they want to? I'm pretty sure that used to not be the case, although maybe it has changed. There's also value to GW behaving predictably, and not wildly varying how much money it gives to particular orgs from year to year.
This might be begging the question, if the bar is raised due to anticipated under funding. But I'm pretty sure at one point they just didn't have anywhere they wanted to give more money to, and I don't know if that has changed.
2023: "We expect to find more outstanding giving opportunities than we can fully fund unless our community of supporters substantially increases its giving."
Giving Season 2022: "We've set a goal of raising $600 million in 2022, but our research team has identified $900 million in highly cost-effective funding gaps. That leaves $300 million in funding gaps unfilled."
July 2022: "we don’t expect to have enough funding to support all the cost-effective opportunities we find." Reports rolling over some money from 2021, but much less than originally believed.
Giving Season 2021: GiveWell expects to roll over $110MM, but also believes it will find very-high-impact opportunities for those funds in the next year or two.
Giving Season 2020: No suggestion that GW will run out of good opportunities -- "If other donors fully meet the highest-priority needs we see today before Open Philanthropy makes its January grants, we’ll ask Open Philanthropy to donate to priorities further down our list. It won’t give less funding overall—it’ll just fund the next-highest-priority needs."
Thanks for the response Elizabeth, and the link as well, I appreciate it.
On the B12 bombing example, it was deliberately provocative to show that, in extremis, there are limits to how convincing one would find the justification "the community doesn't own its donor's money" as a defence for a donation/grant.
On the negative externality point, maybe I didn't make my point that clear. I think a lot of critics are not just concerned about the externalities, but about the actual donation itself, especially the opportunity cost of the purchase. I think perhaps you simply disagree with castle critics on the object level of 'was it a good donation or not'.
I take the point about Open Phil's funding gap perhaps being the more fundamental/important issue. This might be another case of decontextualising vs contextualising norms leading to difficult community discussions. It's a good point and I might spend some time investigating that more.
I still think, in terms of expectations, the new EA joiners have a point. There's a big prima facie tension between the drowning child thought experiment and the Wytham Abbey purchase. I'd be interested to hear what you think a more realistic 'recruiting pitch' to EA would look like, but don't feel the need to spell that out if you don't want.
5
Elizabeth
I think a retreat center is a justifiable idea, I don't have enough information to know if Wytham in particular was any good, and... I was going to say "I trust open phil" here, but that's not quite right, I think open phil makes many bad calls. I think a world where open phil gets to trust its own judgement on decisions with this level of negative externality is better than one where it doesn't.
I understand other people are concerned about the donation itself, not just the externalities. I am arguing that they are not entitled to have open phil make decisions they like, and the way some of them talk about Wytham only makes sense to me if they feel entitlement around this. They're of course free to voice their disagreement, but I wish we had clarity on what they were entitled to.
This is the million dollar question. I don't feel like I have an answer, but I can at least give some thoughts.
* I think the drowning child analogy is deceitful, manipulative, and anti-epistemic, so it's no hardship for me to say we should remove that from recruiting.
* Back in 2015 three different EA books came out- Singer's The Most Good You Can Do, MacAskill's Doing Good Better, and Nick Cooney's How To Be Great At Doing Good. My recollection is that Cooney was the only one who really attempted to transmit epistemic taste and a drive to think things through. MacAskill's book felt like he had all the answers and was giving the reader instructions, and Singer's had the same issues. I wish EA recruiting looked more like Cooney's book and less like MacAskill's.
* That's a weird sentence because Nick Cooney has a high volume of vague negative statements about him. No one is very specific, but he shows up on a lot of animal activism #metoo type articles. So I want to be really clear this preference is for that book alone, and it's been 8 years since I read it.
* I think the emphasis on doing The Most Possible Good (* and nothing else counts) makes people miserable and less effective.
Addendum: I just checked out Wytham's website, and discovered they list six staff. Even if those people aren't all full-time, several of them supervise teams of contractors. This greatly ups the amount of value the castle would need to provide to be worth the cost. AFAIK they're not overstaffed relative to other venues, but you need higher utilization to break even.
Additionally, the founder (Owen Cotton-Barratt) has stepped back for reasons that seem merited (a history of sexual harassment), but a nice aspect of having someone important and busy in charge was that he had a lot less to lose if it was shut down. The castle seems more likely to be self-perpetuating when the decisions are made by people with fewer outside options.
I still view this as fundamentally open phil's problem to deal with, but it seemed good to give an update.
"I think the drowning child analogy is deceitful, manipulative, and anti-epistemic, so it's no hardship for me to say we should remove that from recruiting. " - I'm interested in why you think this?
* It puts you in a state of high sympathetic nervous system (SNS) activation, which is inimical to the kind of nuanced math good EA requires.
* As Minh says, it's based in avoidance of shame and guilt, which also make people worse at nuanced math.
* The full parable is "drowning child in a shallow pond", and the shallow pond smuggles in a bunch of assumptions that aren't true for global health and poverty. Such as:
* "we know what to do", "we know how to implement it", and "the downside is known and finite", which just don't hold for global health and poverty work. Even if you believe sure-fire interventions exist and somehow haven't been fully funded, the average person's ability to recognize them is dismal, and many options make things actively worse. The urgency of drowningchildgottasavethemnow makes people worse at distinguishing good charities from bad. The more accurate analogy would be "drowning child in a fast-moving river when you don't know how to swim".
* I think Peter Singer believes sure-fire interventions exist, so he's not being inconsistent; I just think he's wrong.
* "you can fix this with a single action, after which you are done." Solving poverty for even a single child is a marathon.
I think this might be a good top level post - I'd be keen for more people to see and discuss this point.
2
Elizabeth
Do people still care about drowning child analogy? Is it still used in recruiting? I'd feel kind of dumb railing against a point no one actually believed in.
4
Vaidehi Agarwalla 🔸
I'm not sure (my active intro CB days were ~2019), but I think it is possibly still in the intro syllabus? You could add a disclaimer at the top.
7
Minh Nguyen
I will say I also never use the Drowning Child argument. For several reasons:
* I generally don't think negative emotions like shame and guilt are a good first impression/initial reason to join EA. People tend to distance themselves from sources of guilt. It's fine to mention the drowning child argument maybe 10-20 minutes in, but I prefer to lead with positive associations.
* I prefer to minimise use of thought experiments/hypotheticals in intros, and prefer to use examples relatable to the other person. IMO, thought experiments make the ethical stakes seem too trivial and distant.
What I often do is to figure out what cause areas the other person might relate to based on what they already care about, describe EA as fundamentally "doing good, better" in the sense of getting people to engage more thoughtfully with values they already hold.
2
NickLaing
Thanks that's helpful!
3[anonymous]
Just a quick comment that I strong upvoted this post because of the point about violated expectations in EA recruitment, and disagree voted because it's missing some important points of why EAs should be concerned about how OP and other EA orgs spend their EA money.
2
Elizabeth
if you have the energy, I'd love to hear your disagreement on open phil or ownership of money.
[anonymous]13
I feel similarly to Jason and JWS. I don't disagree with any of the literal statements you made but I think the frame is really off. Perhaps OP benefits from this frame, but I probably disagree with that too.
Another frame: OP has huge amounts of soft and hard power over the EA community. In some ways, it is the de facto head of the EA community. Is this justified? How effective is it? How do they react to requests for information about questionable grants that have predictably negative impacts on the wider EA community? What steps do they take to guard against motivated reasoning when doing things that look like stereotypical examples of motivated reasoning? There are many people who have a stake in these questions.
Thanks, that is interesting and feels like it has conversational hooks I haven't heard before.
What would it mean to say Open Phil was justified or not justified in being the de facto head of the community? I assume you mean morally justified, since it seems pretty logical on a practical level.
Supposing a large enough contingent of EA decided it was not justified; what then? I don't think anyone is turning down funding for the hell of it, so giving up open phil money would require a major restructuring. What does that look like? Who drives it? What constitutes large enough?
5[anonymous]
Briefly in terms of soft and hard power:
Soft power
* Deferring to OP
* Example comment about how much some EAs defer to OP even when they know it’s bad reasoning.
* OP’s epistemics are seen as the best in EA and jobs there are the most desirable.
* The recent thread about OP allocating most of its neartermist budget to FAW and especially its comments shows much reduced deference (or at least more openly taking such positions) among some EAs.
* As more critical attention is turned towards OP among EAs, I expect deference will reduce further. E.g. some of David Thorstad's critical writings have been cited on this forum.
* I expect this will continue happening organically, particularly in response to failures and scandals, and the castle played a role in reduced deference.
Hard power
* I agree no one is turning down money willy-nilly, but if we ignore labels, how much OP money and effort actually goes into governance and health for the EA community, rather than recruitment for longtermist jobs?
* In other words, I’m not convinced it would require restructuring or just structuring.
* A couple of EAs I spoke to about reforms both talked about how huge sums of money are needed to restructure the community and it’s effectively impossible without a megadonor. I didn’t understand where they were coming from. Building and managing a community doesn’t take big sums of money and EA is much richer than most movements and groups.
* Why can’t EAs set up a fee-paying society? People could pay annual membership fees and in exchange be part of a body that provided advice for donations, news about popular cause areas and the EA community, a forum, annual meetings, etc. Leadership positions could be decided by elections. I’m just spitballing here.
* Of course this depends on what one’s vision for the EA community is.
What do you think?
Why can’t EAs set up a fee-paying society? People could pay annual membership fees and in exchange be part of a body that provided advice for donations, news about popular cause areas and the EA community, a forum, annual meetings, etc. Leadership positions could be decided by elections. I’m just spitballing here.
The math suggests that the meta would look much different in this world. CEA's proposed budget for 2024 is $31.4MM by itself, about half for events (mostly EAG), about a quarter for groups. There are of course other parts of the meta. There were 3567 respondents to the EA Survey 2022, which could be an overcount or undercount of the number of people who might join a fee-paying society. Only about 60% were full-time employed or self-employed; most of the remainder were students.
Maybe a leaner, more democratic meta would be a good thing -- I don't have a firm opinion on that.
To make sure I understand; this is an answer to "what should EA do if it decides OpenPhil's power isn't justified?" And the answer is "defer less, and build a grassroots community structure?"
I'm not sure what distinction you're pointing at with structure vs. restructure. They both take money that would have to come from somewhere (although we can debate how much money). Maybe you mean OP wouldn't actively oppose this effort?
6[anonymous]
To the first: Yup, it’s one answer. I’m interested to hear other ideas too.
Structure vs restructuring: My point was that a lot of the existing community infrastructure OP funds is mislabelled and is closer to a deep recruitment funnel for longtermist jobs rather than infrastructure for the EA community in general. So for the EA community to move away from OP infrastructure wouldn’t require relinquishing as much infrastructure as the labels might suggest.
For example, and this speaks to @Jason's comment, the Center for Effective Altruism is primarily funded by the OP longtermist team to (as far as I can tell) expand and protect the longtermist ecosystem. It acts and prioritizes accordingly. It is closer to a longtermist talent recruitment agency than a center for effective altruism. EA Globals (impact often measured in connections) are closer to longtermist job career fairs than a global meeting of effective altruists. CEA groups prioritize recruiting people who might apply for and get OP longtermist funding (“highly engaged EAs”).
4
Elizabeth
I think we have a lot of agreement in what we want. I want more community infrastructure to exist, recruiting to be labeled as recruiting, and more people figuring out what they think is right rather than deferring to authorities.
I don't think any of these need to wait on proving open phil's power is unjustified. People can just want to do them, and then do them. The cloud of deference might make that harder[1], but I don't think arguing about the castle from a position of entitlement makes things better. I think it's more likely to make things worse.
Acting as if every EA has standing to direct open phil's money reifies two things I'd rather see weakened. First it reinforces open phil's power, and promotes deference to it (because arguing with someone implies their approval is necessary). But worse, it reinforces the idea that the deciding body is the EA cloud, and not particular people making their own decisions to do particular things[2]. If open phil doesn't get to make its own choices without community ratification, who does?
1. ^
I remember reading a post about a graveyard of projects CEA had sniped from other people and then abandoned. I can't find that post and it's a serious accusation so I don't want to make it without evidence, but if it is true, I consider it an extremely serious problem and betrayal of trust.
2. ^
yes, everyone has standing to object to negative externalities
3. ^
narrow is meant to be neutral to positive here. No event can be everything to all people, I think it's great they made an explicit decision on trade-offs. They maybe could have marketed it more accurately. They're moving that way now and I wish it had gone farther earlier. But I think even perfectly accurate marketing would have left a lot of people unhappy.
2[anonymous]
Maybe some people argued from a position of entitlement. I skimmed the comments you linked above and I did not see any entitlement. Perhaps you could point out more specifically what you felt was entitled, although a few comments arguing from entitlement would only move me a little so this may not be worth pursuing.
The bigger disagreement I suspect is between what we think the point of EA and the EA community is. You wrote that you want it to be a weird do-ocracy. Would you like to expand on that?
2
JWS 🔸
Maybe you two might consider having this discussion using the new Dialogue feature? I've really appreciated both of your perspectives and insights on this discussion, and I think the collaborative back-and-forth you're having seems a very good fit for how Dialogues work.
2
Jason
That's helpful.
So in this hypothetical, certain functions transfer to the fee-paying society, and certain functions remain funded by OP. That makes sense, although I think the range of what the fee-paying society can do on fees alone may be relatively small. If we estimate 2,140 full fee-payers at $200 each and 1,428 students at $50 each, that's south of $500K. You'd need a diverse group of EtGers willing to put up $5K-$25K each for this to work, I suspect. I'm not opposed; in fact, my first main post on the Forum was in part about the need for the community to secure independent funding for certain epistemically critical functions. I just want to see people who advocate for a fee-paying society bite the bullet of how much revenue fees could generate and what functions could be sustained on that revenue. It sounds like you are willing to do so.
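For what it's worth, the back-of-envelope revenue figure checks out. The member counts and fee levels below are the illustrative assumptions from the comment (roughly 60% of the 3,567 EA Survey respondents as full-fee payers, the rest as students), not real membership data:

```python
# Rough revenue estimate for the hypothetical fee-paying society.
# All inputs are illustrative assumptions, not real data.
full_payers = 2_140   # ~60% of 3,567 survey respondents, at $200/year
students = 1_428      # remainder, at a discounted $50/year

revenue = full_payers * 200 + students * 50
print(f"${revenue:,}")  # → $499,400, i.e. "south of $500K"
```

Even doubling the fee levels would leave the society an order of magnitude short of CEA's proposed $31.4MM budget, which is why the EtG top-ups matter.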
But looping back to your main point about "huge amounts of soft and hard power over the EA community" held by OP, how much would change in this hypothetical? OP still funds the bulk of EA, still pays for the "recruitment funnel," pays the community builders, and sponsors the conferences. I don't think characterizing the bulk of what CEA et al. do as a "recruitment funnel" for the longtermist ecosystem renders those functions less important as sources of hard and soft power. OP would still be spending ~ $20-$30MM on meta versus perhaps ~ $1-2MM for the fee-paying society.
6[anonymous]
OP and most current EA community work takes a “Narrow EA” approach. The theory of change is that OP and EA leaders have neglected ideas and need to recruit elites to enact these ideas. Buying castles and funding expensive recruitment funnels is consistent with this strategy.
I am talking about something closer to a big tent EA approach. One vision could be to help small and medium donors in rich countries spend more money more effectively on philanthropy, with a distinctive emphasis on cause neutrality and cause prioritization. This can and probably should be started in a grassroots fashion with little money. Spending millions on fancy conferences and paying undergraduate community builders might be counter to the spirit and goals of this approach.
A fee-paying society is a natural fit for big tent EA and not for narrow EA.
I didn’t know that the huge amounts of power held by OP were my main point! I was trying to use that to explain why EA community members were so invested in the castle. I’m not sure I succeeded, especially since I agree with @Elizabeth's points that no one needs to wait for permission from OP or anyone else to pursue what they think is right, and that the EA community cannot direct OP's donations.
4
Jason
I personally would love to see a big-tent organization like the one you describe! I think it less-than-likely that the existence of such an organization would have made most of the people who were "so invested in the castle" significantly less so. But there's no way to test that. I agree that a big-tent organization would bring in other people -- not currently involved in EA -- who would be unlikely to care much about the castle.
-3
AltForHonesty
"Castles", plural. The purchase of Wytham Abbey gets all the attention, but everyone ignores that during that same time there was also the purchase of a chateau in Hostačov using FTX funding.
I sometimes argue against certain EA payment norms because they feel extractive, or cause recipients to incur untracked costs. E.g. "it's not fair to have a system that requires unpaid work, or going months between work in ways that can't be planned around and aren't paid for". This was the basis for some of what I said here. But I'm not sure this is always bad, or that the alternatives are better. Some considerations:
If it's okay for people to donate money, I can't think of a principled reason it's not okay for them to donate time -> unpaid work is not a priori bad.
If it would be okay for people to solve the problem of gaps in grants by funding bridge grants, it can't be categorically disallowed to self-fund the time between grants.
If partial self-funding is required to do independent, grant-funded work, then only people who can afford that will do such work. To the extent the people who can't would have done irreplaceably good work, that's a loss, and it should be measured. And to the extent some people would personally enjoy doing such work but can't, that's sad for them. But the former is an empirical question weighed against the benefits of underpaying, and the latter i
I think an underappreciated part of castlegate is that it fairly easily puts people in an impossible bind.
EA is a complicated morass, but there are a few tenets that are prominent, especially early on. These may be further simplified, especially in people using EA as treatment for their scrupulosity issues. For most of this post I'm going to take that simplified point of view (I'll mark when we return to my own beliefs).
Two major, major tenets brought up very early in EA are:
You should donate your money to the most impactful possible cause
Some people will additionally internalize "The most impactful in expectation"
GiveWell and OpenPhil have very good judgment.
The natural conclusion of which is that donating to GiveWell- or OpenPhil-certified causes is a safe and easy way to fulfill your moral duty.
If you're operating under those assumptions and OpenPhil funds something without making their reasoning legible, there are two possibilities:
The opportunity is bad, which at best means OpenPhil is bad, and at worst means the EA ecosystem is trying to fleece you.
The opportunity is good but you're not allowed to donate to it, which leaves you in violation of tenet #1.
Where does the “you're not allowed to donate to it” part of #2 come from?
2
Elizabeth
because it's not legible, and willingness to donate to illegible things opens you up to scams.
OpenPhil also discourages small donations, I believe specifically because they don't want to have to justify their decisions to the public, but I think they will accept them.
-1
Rebecca
Saying you’re not allowed to donate to the projects is much stronger than either of these things though. E.g. re your 2nd point, nothing is stopping someone from giving top up funding to projects/people that have received OpenPhil funding, and I’m not sure anyone feels like they’re being told they shouldn’t? E.g. the Nonlinear Fund was doing exactly this kind of marginal funding.
7
Elizabeth
I agree they're allowed to seek out frontier donations, or for that matter give to Open Phil. I believe that this doesn't feel available/acceptable, on an emotional level, to a meaningful portion of the EA population, who have a strong need for both impact and certainty.
Original source is here.
A good summary of pop-Bayesianism failure modes. Garbage in is still garbage out, even if you put the garbage through Bayes' theorem.
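As a toy illustration of the point (all numbers made up): Bayes' theorem is just mechanical arithmetic, so a misjudged likelihood produces a confidently wrong posterior from the exact same machinery.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H|E) via Bayes' theorem: P(E|H)P(H) / P(E)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Same prior, same formula; only the likelihood estimate changes.
careful = posterior(0.3, 0.1, 0.5)  # ~0.08: evidence counts against H
garbage = posterior(0.3, 0.9, 0.5)  # ~0.44: same "update", garbage input
```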
Salaries at direct work orgs are a frequent topic of discussion, but I’ve never seen those conversations make much progress. People tend to talk past each other- they’re reading words differently (“reasonable”), or have different implicit assumptions that change the interpretation. I think the questions below could resolve a lot of the confusion (although not all of it, and not the underlying question. Highlighting different assumptions doesn’t tell you who’s right, it just lets you focus discussions on the actual disagreements).
Here’s my guess for the important questions. Some of them are contingent- e.g. you might think new grad generalists and experienced domain experts should be paid very differently. Feel free to give as many sets of answers as you want, just be clear which answers lump together, so no one misreads your expert salary as if it was for interns.
What kind of position are you thinking about?
Experienced vs. new grad
Domain expertise vs generalist?
Many outside options vs. few?
Founder vs employee?
What salary are you thinking about?
What living conditions do you expect this salary to buy?
People often cite EA salaries as higher than other non-profits, but my understanding is that most non-profits pay pretty badly. Not "badly" as in "low", but "badly" as in "they expect credentials, hours, and class signals that are literally unaffordable on the salary they pay. The only good employees who stick around for >5 years have their bills paid by a rich spouse or parent."
So I don't think that argument in particular holds much water.
8
Cullen 🔸
Do you have any citations for this claim?
3
Elizabeth
Implict and explicit from https://askamanager.com/ and https://nonprofitaf.com/ (which was much epistemically stronger in its early years)
4
yanni
n = 1 but my wife has worked in non-EA non-profits her whole career, and this is pretty much true. It's mostly women earning poorly at the non-profit while their husbands make bank at the big corporates.
3
NickLaing
Where does this idea come from, Elizabeth? From my experience (n=10) this argument is incorrect. I know a bunch of people who work in these "badly" paying jobs you talk of who defy your criteria: they don't have their bills paid by a rich parent - instead they are content with their work and accept a form of "salary sacrifice" mindset, even if they wouldn't phrase it in those EA terms.
EA doesn't have a monopoly on altruism; there are plenty of folks out there, outside of conventional market forces, living simply and working for altruistic causes they believe in even though it doesn't pay well and they could be earning way more elsewhere.
2
Elizabeth
The sense I get reading this is that you feel I've insulted your friends, who have made a big sacrifice to do impactful work. That wasn't my intention and I'm sorry it came across that way. From my perspective, I am respecting the work people do by suggesting they be paid decently.
First, let me take my own advice and specify what I mean by decently: I think people should be able to have kids, have a sub-30 minute commute, live in conditions they don't find painful (people only live with housemates if they like it, housing isn't physically dangerous, outdoor space if they need that to feel good; any of these may come at a trade-off with the others, and probably no one gets all of them, but you shouldn't be starting out from a position where it's impossible to get reasonable needs met), save for retirement, have cheap vacations, have reasonably priced hobbies, pay their student loans, and maintain their health (meaning both things like healthcare, and things like good food and exercise). If they want to own their home, they shouldn't be too many years behind their peers in being able to do so.
I think it is both disrespectful to the workers and harmful to the work to say that people don't deserve these things, or should be willing to sacrifice it for the greater good. Why on earth put the pressure on them to accept less[1], and not on high-earners to give more? This goes double for orgs that require elite degrees or designer clothes: if you want those class signals, pay for them.
1. ^
There's an argument here that low payment screens for mission alignment. I think this effect is real, but is insignificant at the level I've laid out.
0
NickLaing
Hey Elizabeth - just to clarify, I don't think you've insulted my friends at all, don't worry about that. I just disagreed, from my experience at least, that this was the situation with most NGO workers as you claimed. I get that you are trying to respect people by pushing for them to be paid more - it's all good.
As a small note, I don't think they have made a "big sacrifice" at all, most wouldn't say they have made any sacrifice at all. They have traded earning money (which might mean less to them than for other people anyway) for a satisfying job while living a (relatively) simple lifestyle which they believe is healthy for themselves and the planet. Personally I don't consider this a sacrifice either, just living your best life!
I'm going to leave it here for now (not in a bad way at all) because I suspect our underlying worldviews differ to such a degree that it may be hard to debate these surface salary and lifestyle issues without first probing at deeper underlying assumptions about happiness, equality, "deserving" etc., which would take a deeper and longer discussion that might be tricky in a forum back-and-forth.
Not saying I'm not up for discussing these things in general though!
4
Elizabeth
I tested a version of these here, and it worked well. A low-salary advocate revealed a crux they hadn't before (there is little gap between EA orgs' first and later choice candidates), and people with relevant data shared it (the gap may be a 50% drop in quality, or not filling the position at all).
2
Jason
This is an interesting model -- but what level of analysis do you think is best for answering question 7? One could imagine answering this question on:
* the vacancy level at the time of hire decision (I think Bob would be 80% as impactful as the frontrunner, Alice)
* the vacancy level at the time of posting (I predict that on average the runner-up candidate will be 80% as the best candidate would be at this org at this point in time)
* the position level (similar, but based on all postings for similar positions, not just this particular vacancy at this point in time)
* the occupational field level (e.g., programmer positions in general)
* the organizational level (based on all positions at ABC Org; this seems to be implied when an org sets salaries mainly by org-wide algorithm)
* the movement-wide level (all EA positions)
* the sector-wide level (which could be "all nonprofits," "all tech-related firms," etc.)
* the economy-wide level.
I can see upsides and downsides to using most of these to set salary. One potential downside is, I think, common to analyses conducted at a less-than-organizational level.
Let's assume for illustrative purposes that 50% of people should reach the state specified in question 4 with $100K, and that the amount needed is normally distributed with a standard deviation of $20K due to factors described in step five and other factors that make candidates need less money. (The amount needed likely isn't normally distributed, but one must make sacrifices for a toy model.) Suppose that candidates who cannot reach the question-4 state on the offered salary will decline the position, while candidates who can will accept. (Again, a questionable but simplifying assumption.)
One can calculate, in this simplified model, the percentage of employees who could achieve the state at a specific salary. One can also compute the amount of expected "excess" salary paid (i.e., the amounts that were more than necessary for employees to achieve the
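A minimal sketch of the toy model above (the $100K mean, $20K standard deviation, and accept-iff-affordable rule are the illustrative assumptions stated in the comment, not real figures):

```python
import math

def norm_cdf(z):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def norm_pdf(z):
    """Standard normal PDF."""
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def toy_salary_model(salary, mu=100_000, sigma=20_000):
    """Fraction of candidates who can reach the question-4 state at this
    salary, and expected 'excess' pay per accepting employee: salary minus
    their actual need, averaged over those who accept (i.e. the mean of a
    normal distribution truncated above at `salary`)."""
    z = (salary - mu) / sigma
    accept_fraction = norm_cdf(z)
    mean_need_if_accept = mu - sigma * norm_pdf(z) / accept_fraction
    return accept_fraction, salary - mean_need_if_accept

frac, excess = toy_salary_model(100_000)
# At a $100K salary: 50% of candidates can accept, and the average
# accepter needed ~$84K, so ~$16K per hire is "excess" in this model.
```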
Ambition snowballs/Get ambitious slowly works very well for me, but some people seem to hate it. My first reaction is that these people need to learn to trust themselves more, but today I noticed a reason I might be unusually suited for this method.
Two things that keep me from aiming at bigger goals are laziness and fear. Primarily fear of failure, but also of doing uncomfortable things. I can overcome this on the margin by pushing myself (or someone else pushing me), but that takes energy, and the amount of energy never goes down the whole time I'm working... (read more)
I'm pretty sure you can't have consequentialist arguments for deceptions of allies or self, because consequentialism relies on accurate data. If you've blinded yourself then you can have the best utility function in the world and it will do you no good because you're applying it to gibberish.
I'll be at EAGxVirtual this weekend. My primary goal is to talk about my work on epistemics and truthseeking within EA, and especially get the kind of feedback that doesn't happen in public. If you're interested, you can find me on the usual channels.
EA organizations frequently ask for people to run criticism by them ahead of time. I’ve been wary of the push for this norm. My big concerns were that orgs wouldn’t comment until a post was nearly done, and that it would take a lot of time. My recent post mentioned a lot of people and organizations, so it seemed like useful data.
I reached out to 12 email addresses, plus one person in FB DMs and one open call for information on a particular topic. This doesn’t quite match what you see in the post because some people/orgs were used more than once, and other mentions were cut. The post was in a fairly crude state when I sent it out.
Of those 14: 10 had replied by the start of the next day. More than half of those replied within a few hours. I expect this was faster than usual because no one had more than a few paragraphs relevant to them or their org, but it's still impressive.
It’s hard to say how sending an early draft changed things. One person got some extra anxiety because their paragraph was full of TODOs (because it was positive and I hadn’t worked as hard fleshing out the positive mentions ahead of time). I could maybe have saved myself one stressful interaction if I’d rea... (read more)
GET AMBITIOUS SLOWLY
Most approaches to increasing agency and ambition focus on telling people to dream big and not be intimidated by large projects. I'm sure that works for some people, but it feels really flat for me, and I consider myself one of the lucky ones. The worst case scenario is big inspiring speeches get you really pumped up to Solve Big Problems but you lack the tools to meaningfully follow up.
Faced with big dreams but unclear ability to enact them, people have a few options.
The first three are all very costly, especially if you repeat the cycle a few times.
My preferred version is ambition snowball or "get ambitious slowly". Pick something b... (read more)
A friend asked me which projects in EA I thought deserved more money, especially ones that seemed to be held back by insufficient charisma of the founders. After a few names he encouraged me to write it up. This list is very off the cuff and tentative: in most cases I have pretty minimal information on the project, and they’re projects I incidentally encountered on EAF. If you have additions I encourage you to comment with them.
The main list
The bar here is “the theory of change seems valuable, and worse projects are regularly funded”.
Faunalytics
Faunalytics is a data analysis firm focused on metrics related to animal suffering. I searched high and low for health data on vegans that included ex-vegans, and they were the only place I found anything that had any information from ex-vegans. They shared their data freely and offered some help with formatting, although in the end it was too much work to do my own analysis.
I do think their description minimized the problems they found. But they shared enough information that I could figure that out rather than relying on their interpretation, and that’s good enough.
ALLFED
EA is trend-fol... (read more)
TL;DR: I think the main reason is the same reason we aren't donating to them: we think there are even more promising projects in terms of the effectiveness of a marginal $, and we are extremely funding constrained. I strongly agree with Elizabeth that all these projects (and many others) deserve more money.
Keeping in mind that I haven't researched any of the projects, and I'm definitely not an expert in grantmaking; I personally think that “the theory of change seems valuable, and worse projects are regularly funded” is not the right bar to estimate the relative value of a marginal dollar, as it doesn't take into account funding-gaps, costs, and actual results achieved.
As a data point on the perspective of a mostly uninformed effectiveness-oriented small donor, here's why I personally haven't donated to these projects in 2023, starting from the 2 you mention.
I'm not writing this because I think they are good reasons to fund other projects, but as a potentially interesting data-point in the psychology of an uninformed giver.
ALLFED:
Their theory of change seems really cool, but research organizations seem very hard to evaluate as a non-expert. I think 3 things all need to ... (read more)
There's a thing in EA where encouraging someone to apply for a job or grant gets coded as "supportive", maybe even a very tiny gift. But that's only true when [chance of getting job/grant] x [value of job/grant over next best alternative] > [cost of applying].
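That inequality can be sketched in a couple of lines (all numbers invented for illustration):

```python
def encouragement_is_a_gift(p_success, value_over_alternative, cost_of_applying):
    """The framing above: a nudge to apply is only a 'gift' when the
    expected benefit exceeds the cost of applying."""
    return p_success * value_over_alternative > cost_of_applying

# A 5% shot at a grant worth $2,000 over the next-best option,
# against ~$300 worth of application time: a net-negative nudge.
encouragement_is_a_gift(0.05, 2_000, 300)  # False
```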
One really clear case was when I was encouraged to apply for a grant my project wasn't a natural fit for, because "it's quick and there are few applicants". This seemed safe, since the deadline was in a few hours. But in those few hours the number of applications skyrocketed- I want to say 5x but my memory is shaky- presumably because I wasn't the only person the grantmaker encouraged. I ended up wasting several hours of my and my co-founders' time before dropping out, because the project really was not a good fit for the grant.
[if the grantmaker is reading this and recognizes themselves: I'm not mad at you personally].
I've been guilty of this too, defaulting to encouraging people to try for something without considering the costs of making the attempt, or the chance of success. It feels so much nicer than telling someone "yeah you're probably not good enough".
A lot of EA job postings encourage people t... (read more)
I think this falls into a broader class of behaviors I'd call aspirational inclusiveness.
I do think shifting the relative weight from welcoming to clear is good. But I'd frame it as a "yes and" kind of shift. The encouragement message should be followed up with a dose of hard numbers.
Something I've appreciated from a few applications is the hiring manager's initial guess for how the process will turn out. Something like "Stage 1 has X people and our very tentative guess is future stages will go like this".
Scenarios can also substitute in areas where numbers may be misleading or hard to obtain. I've gotten this from mentors before, like here's what could happen if your new job goes great. Here's what could happen if your new job goes badly. Here's the stuff you can control and here's the stuff you can't control.
Something I've tried to practice in my advice is giving some ballpark number and reference class. I tell someone they should consider skilling up in a hard area or pursuing a competitive field, then I tell them I expect success for <5% of the people I give this advice to, and then say they may still want to do it for certain reasons.
Yes, it's all very noisy. But numbers seem far far better than expecting applicants to read between the lines of what a heartwarming message is supposed to mean, especially early-career folks who would understandably read a high probability of success into it.
EA hiring gets a lot of criticism. But I think there are aspects at which it does unusually well.
One thing I like is that hiring and holding jobs feels way more collaborative between boss and employee. I'm much more likely to feel like a hiring manager wants to give me honest information and make the best decision, whether or not that's with them. Relative to the rest of the world they're much less likely to take investigating other options personally.
Work trials and even trial tasks have a high time cost, and are disruptive to people with normal amounts of free time and work constraints (e.g. not having a boss who wants you to trial with other orgs because they personally care about you doing the best thing, whether or not it's with them). But trials are so much more informative than interviews, I can't imagine hiring for or accepting a long-term job without one.
Trials are most useful when you have the least information about someone, so I expect removing them to lead to more inner-ring dynamics and less hiring of unconnected people.
EA also has an admirable norm of paying for trials, which no one does for interviews.
People talk about running critical posts by the criticized person or org ahead of time, and there are a lot of advantages to that. But the plans I've seen are all fairly one sided: all upside goes to the criticized, all the extra work goes to the critic.
What I'd like to see is some reciprocal obligation from recipients of criticism, especially formal organizations with many employees. Things like answering questions from potential critics very early in the process, with a certain level of speed and reliability. Right now orgs respond quickly to polished, public posts, and maybe even to polished drafts sent before publication, but you can't count on them to answer early questions with implicit potential criticism behind them. Which is a pretty shitty deal for the critic, who I'm sure would love to find out their concern was unmerited before spending dozens of hours writing a polished post.
This might be unfair. I'm quite sure it used to be true, but a lot of the orgs have professionalized over the years. In which case I'd like to ask they make their commitments around this public and explicit, and share them in the same breath that they ask for heads up on criticism.
I see a pretty important benefit to the critic, because you're ensuring that there isn't some obvious response to your criticisms that you are missing.
I once posted something that revised/criticized an Open Philanthropy model, without running it by anyone there, and it turned out that my conclusions were shifted dramatically by a coding error that was detected immediately in the comments.
That's a particularly dramatic example that I don't expect to generalize, but often if a criticism goes "X organization does something bad" the natural question is, why do they do that? Is there a reason that's obvious in hindsight that they've thought about a lot, but I haven't? Maybe there isn't, but I would want to run a criticism by them just to see if that's the case.
I don't think people are obligated to build in the feedback they get extensively if they don't think it's valid/their point still stands.
A few benefits I see to the critic even in the status quo:
The post generally ends up stronger, because it's more accurate. Even if you only got something minor wrong, readers will (reasonably!) assume that if you're not getting your details right then they should pay less attention to your post.
To the extent that the critic wants the public view to end up balanced and isn't just trying to damage the criticizee, having the org's response go live at the same time as the criticism helps.
If the critic does get some things wrong despite giving the criticizee the opportunity to review and bring up additional information, either because the criticizee didn't mention these issues or refused to engage, the community would generally see it as unacceptable for the criticizee to sue the critic for defamation. Whereas if a critic posts damaging false claims without that (and without a good reason for skipping review, like "they abused me and I can't sanely interact with them") then I think the law is still on the table.
A norm where orgs need to answer critical questions promptly seems good on its face, but I'm less sure in practice. Many questions take far more effort to answer... (read more)
Two popular responses to FTX are "this is why we need to care more about honesty" and "this is why we need to not do weird/sketchy shit". I pretty strongly believe the former. I can see why people would believe the latter, but I worry that the value lost is too high.
But I think both sides can agree that representing your weird/sketchy thing as mundane is highly risky. If you're going to disregard a bunch of the normal safeguards of operating in the world, you need to replace them with something, and most of those somethings are facilitated by honesty.
Complaints about lack of feedback for rejected grants are fairly frequent, but it seems relevant that I can't get feedback for my accepted grants or in-progress work. The most I have ever gotten was a 👍 react when I texted them "In response to my results I will be doing X instead of the original plan on the application". In fact I think I've gotten more feedback on rejections than acceptances (or in one case, I received feedback on an accepted grant, from a committee member who'd voted to reject). Sometimes they give me more money, so it's not that the work is so bad it's not worth commenting on. Admittedly my grants are quite small, but I'm not sure how much feedback medium or even large projects get.
Acceptance feedback should be almost strictly easier to give, and higher impact. You presumably already know positives about the grant, the impact of marginal improvements is higher in most cases, people rarely get mad about positive feedback, and even if you share negatives the impact is cushioned by the fact that you're still approving their application. So without saying where I think the line should be, I do think feedback for acceptances is higher priority than for rejections.
Good posts generate a lot of positive externalities, which means they're undersupplied, especially by people who are busy and don't get many direct rewards from posting. How do we fix that? What are rewards relevant authors would find meaningful?
Here are some possibilities off the top of my head, with some commentary. My likes are not universal and I hope the comments include people with different utility functions.
- Money. Always a classic, rarely disliked although not always prioritized. I'm pretty sure this is why LTFF and EAIF are writing more now.
- Appreciation (broad). Some people love these. I definitely prefer getting them over not getting them, but they're not that motivating for me. Their biggest impact on motivation is probably cushioning the blow of negative comments.
- Appreciation (specific). Things like "this led to me getting my iron tested" or "I changed my mind based on X". I love these, they're far more impactful than generic appreciation.
- High quality criticism that changes my mind.
- Arguing with bad commenters.
- One of the hardest parts of writing for me is getting a shitty, hostile comment, and feeling like my choices are "let it stand" or "get suc
... (read more)
I definitely agree that funding is a significant factor for some institutional actors.
For example, RP's Surveys and Data Analysis team has a significant amount of research that we would like to publish if we had capacity / could afford to do so: our capacity is entirely bottlenecked on funding and as we are ~ entirely reliant on paid commissions (we don't receive any grants for general support) time spent publishing reports is basically just pro bono, adding to our funding deficit.
Example of this sort of unpublished research include:
- The two reports mentioned by CEA here about attitudes towards EA post-FTX among the general public, elites, and students on elite university campuses.
- Followup posts about the survey reported here about how many people have heard of EA, to further discuss people's attitudes towards EA, and where members of the general public hear about EA (this differs systematically)
- Updated numbers on the growth of the EA community (2020-2022) extending this method and also looking at numbers of highly engaged longtermists specifically
- Several studies we ran to develop reliable measure of how positively inclined towards longtermism people are, looking at different predic
... (read more)
I think we need to be a bit careful with this, as I saw many highly upvoted posts that in my opinion have been actively harmful. Some very clear examples: […]
In general, I think we should promote more posts like "Veg*ns should take B12 supplements, according to nearly-unanimous expert consensus" while not promoting posts like "Veg*nism entails health tradeoffs", when there is no scientific evidence of this and there is expert consensus to the contrary. (I understand that your intention was not to claim that a vegan diet was worse than an average non-vegan diet, but that's how most readers I've spoken to updated in response to your posts.)
I would be very excited about ... (read more)
As of October 2022, I don't think I could have known FTX was defrauding customers.
If I'd thought about it I could probably have figured out that FTX was at best a casino, and I should probably think seriously before taking their money or encouraging other people to do so. I think I failed in an important way here, but I also don't think my failure really hurt anyone, because I am such a small fish.
But I think in a better world I should have had the information that would lead me to conclude that Sam Bankman-Fried was an asshole who didn't keep his promises, and that this made it risky to make plans that depended on him keeping even explicit promises, much less vague implicit commitments. I have enough friends of friends that have spoken out since the implosion that I'm quite sure that in a more open, information-sharing environment I would have gotten that information. And if I'd gotten that information, I could have shared it with other small fish who were considering uprooting their lives based on implicit commitments from SBF. Instead, I participated in the irrational exuberance that probably made people take more risks on the margin, and left them more vulnerable to... (read more)
I don't know the specific circumstances of your or anyone else's encouragement, so I want to be careful not to opine on any specific circumstances. But as a general matter, I'd encourage self-compassion for "small fish" [1] about getting caught up in "irrational exuberance." Acting in the presence of suboptimal levels of information is unavoidable, and declining to act until things are clearer carries moral weight as well.
In retrospect, we know that the EA whispernet isn't that reliable, that prominence in EA shouldn't be seen as a strong indicator of reliability, that the media was asleep at the wheel, and that crypto investors exercise very minimal due diligence. But I don't think we should expect "small fish" to have known those things in 2021 and 2022.
As far as other pote... (read more)
Am I understanding right that the main win you see here would have been protecting people from risks they took on the basis that Sam was reasonably trustworthy?
I also feel pretty unsure, but curious, whether a vibe of "don't trust Sam / don't trust the money coming through him" would have helped discover or prevent the fraud. If you have a story for how that could have happened (e.g., as you say, via people feeling more empowered to say no to him; maybe via his staff making fewer crazy moves on his behalf or standing up to him more), I'd be interested.
"protect people from dependencies on SBF" is the thing for which I see a clear causal chain and am confident in what could have fixed it.
I do have a more speculative hope that an environment where things like "this billionaire firehosing money is an unreliable asshole" are easy to say would have gotten better outcomes for the more serious issues, on the margin. Maybe the FTX fraud was overdetermined; even if it wasn't, I definitely don't have enough insight to be confident in picking a correction. But using an abstract version of this case as an example of how I think a more open environment could have led to better outcomes:
- My sense is SBF just kept taking stupid unethical bets and having them work out for him financially and socially. Maybe small consequences early on would have reduced the reward to stupid unethical bets.
- Before the implosion, SBF('s public persona) was an EA success story that young EAs aspired to copy. Less of that on the margin would probably lead to less fraud 5 years from now, especially in the world where the FTX fraud took longer to discover.
- I think aping SBF's persona was bad for other reasons, but they're harder to justify.
- SBF
... (read more)

None of my principled arguments against "only care about big projects" have convinced anyone, but in practice Google reorganized around that exact policy ("don't start a project unless it could conceivably have 1b+ users, kill if it's ever not on track to reach that") and they haven't grown an interesting thing since.
My guess is the benefits of immediately aiming high are overwhelmed by the costs of less contact with reality.
LessWrong’s emoji palette is great
That palette is not just great in the abstract, it's great as a representation of LW. I did some very interesting anthropology with some non-rationalist friends explaining the meaning and significance of the weirder reacts.
A lot of what I explained was how specific reacts relate to one of the biggest pain points on LW (and EAF): shitty comments. The reacts are weirdly powerful, in part because it's not the comments' existence that’s so bad, it’s knowing that other people might read them and not understand they are shitty. I could explain why in a comment of my own, but that invites more shitty comments and draws attention to the original one. It’s only worth it if many people are seeing and believing the comment.
Emojis neatly resolve this. If several people mark a comment as soldier mindset, I feel off the hook for arguing with it. And if several people (especially people I respect) mark a comment as insightful or changing their mind, that suggests that at a minimum it’s worth the time to engage with the comment, and quite possibly I am in the wrong.
You might say I should develop a thicker skin so shitty comments bug me less, and t... (read more)
How common do you think "shitty comments" are? And how well/poorly do you think the existing karma system provides an observer with knowledge that the user base "understand[s] they are shitty"? (To be sure, it doesn't tell you if the voting users understand exactly why the comment is shitty.)
I'm not sure how many people would post attributed-to-them emojis if they weren't already anonymously downvoting a comment for being shitty. So if they aren't already getting significant downvotes, I don't know how many negative emojis they would get here.
I like the LW emoji palette, but it is too much. Reading forum posts and parsing through comments can be mentally taxing. I don't want to spend additional effort going through a list of forty-something emojis and buttons to react to something, especially comments. I am often pressed for time, so almost always I would avoid the LW emoji palette entirely. Maybe a few other important reactions can be added instead of all of them? Or maybe there could be a setting which allows people to choose if they want to see a "condensed" or "extended" emoji palette? Either way, just my two cents.
I agree EAF shouldn't have a LW-sized palette, much less LW's specific palette. I want EAF to have a palette that reflects its culture as well as LW's palette reflects its culture. And I think that's going to take more than 4 reacts (note that my original comment mortifyingly used a special palette made for a single post; the new version has the normal EAF reacts of helpful, insightful, changed my mind, and heart), but way less than is in the LW palette.
I do think part of LessWrong's culture is preferring to have too many options rather than making do with the wrong one. I know the team has worked really hard to keep reacts to a manageable level, while making most of them very precise, while covering a wide swath of how people want to react. I think they've done an admirable job (full disclosure: I'm technically on the mod team and give opinions in slack, but that's basically the limit of my power). This is something I really appreciate about LW, but I know shrinks its audience.
A repost from the discussion on NDAs and Wave (a software company). Wave was recently publicly revealed to have made severance dependent on non-disparagement agreements, cloaked by non-disclosure agreements. I had previously worked at Wave, but negotiated away the non-disclosure agreement (but not the non-disparagement agreement).
I appreciate the kudos here, but feel like I should give more context.
I think some of what led me to renegotiate was a stubborn streak and righteousness about truth. I mostly hear when those traits annoy people, so it’s really nice to have them recognized in a good light here. But that righteous streak was greatly enabled by the fact that my mom is a lawyer who modeled reading legal documents before signing (even when it's embarrassing your kids, who just want to join their friends at the rock-climbing birthday party), and that I cou... (read more)
I feel like a lot of castle discourse missed the point.
My guess is that lots of people entered EA with inaccurate expectations, and the volume at which this happens indicates a systemic problem, probably with recruiting. They felt ~promised that EA wasn't the kind of place where people bought fancy castles, or would at least publicly announce they'd... (read more)
I think the first point here -- that the buyers "don't need anyone's permission" to purchase a "castle" -- isn't contested here. Other than maybe the ConcernedEA crowd, is anyone claiming that they were somehow required to (e.g.) put this to a vote?
I think the "right to spend one's own money" in no way undermines other people's "right to speak one's own speech" by lambasting that expenditure. In the same way, my right to free speech doesn't prevent other people from criticizing me for it, or even deciding not to fund/hire me if I were to apply for funding or a job. There are circumstances in which we have -- or should have -- special norms against negative reactions by third parties; for instance, no one should be retaliated against for reporting fraud, waste, abuse, harassment, etc. But the default rule is that what the critics have said here is fair game.
A feeling of EA having breached a "~promise[]" isn't the only basis for standing here. Suppose a non-EA megadonor had given a $15MM presumably tax-deductible donation to a non-EA charity for buying a "castle." Certainly both EAs and non-EAs would have the right to criticize that decision, especially because the tax-favored... (read more)
I think you're slightly missing the point of the 'castle' critics here.
Technically this is obviously true. And it was the main point behind one of the most popular responses to FTX and all the following drama. But I think that point, and the post, miss people's concerns completely and come off as quite tone-deaf.
To pick an (absolutely contrived) example, let's say OpenPhil suddenly says it now believes that vegan diets are more moral and healthier than all other diets, and that B12 supplementation increases x-risk, and they're going to funnel billions of dollars into this venture to persuade people to go vegan and to drone-strike any factories producing B12. You'd probably be shocked and think that this was a terrible decision and that it ... (read more)
2023: "We expect to find more outstanding giving opportunities than we can fully fund unless our community of supporters substantially increases its giving."
Giving Season 2022: "We've set a goal of raising $600 million in 2022, but our research team has identified $900 million in highly cost-effective funding gaps. That leaves $300 million in funding gaps unfilled."
July 2022: "we don’t expect to have enough funding to support all the cost-effective opportunities we find." Reports rolling over some money from 2021, but much less than originally believed.
Giving Season 2021: GiveWell expects to roll over $110MM, but also believes it will find very-high-impact opportunities for those funds in the next year or two.
Giving Season 2020: No suggestion that GW will run out of good opportunities -- "If other donors fully meet the highest-priority needs we see today before Open Philanthropy makes its January grants, we’ll ask Open Philanthropy to donate to priorities further down our list. It won’t give less funding overall—it’ll just fund the next-highest-priority needs."
Addendum: I just checked out Wytham's website, and discovered they list six staff. Even if those people aren't all full-time, several of them supervise teams of contractors. This greatly ups the amount of value the castle would need to provide to be worth the cost. AFAIK they're not overstaffed relative to other venues, but you need higher utilization to break even.
Additionally, the founder (Owen Cotton-Barratt) has stepped back for reasons that seem merited (a history of sexual harassment), but a nice aspect of having someone important and busy in charge was that he had a lot less to lose if it was shut down. The castle seems more likely to be self-perpetuating when the decisions are made by people with fewer outside options.
I still view this as fundamentally open phil's problem to deal with, but it seemed good to give an update.
- It puts you in a high SNS activation state, which is inimical to the kind of nuanced math good EA requires
- As Minh says, it's based in avoidance of shame and guilt, which also make people worse at nuanced math.
- The full parable is "drowning child in a shallow pond", and the shallow pond smuggles in a bunch of assumptions that aren't true for global health and poverty. Such as
- "we know what to do", "we know how to implement it", and "the downside is known and finite", which just don't hold for global health and poverty work. Even if you believe surefire interventions exist and somehow haven't been fully funded, the average person's ability to recognize them is dismal, and many options make things actively worse. The urgency of drowningchildgottasavethemnow makes people worse at distinguishing good charities from bad. The more accurate analogy would be "drowning child in a fast moving river when you don't know how to swim".
- I think Peter Singer believes this so he's not being inconsistent, I just think he's wrong.
- "you can fix this with a single action, after which you are done." Solving poverty for even a single child is a marathon.
- "you are the only person who ca
... (read more)

I feel similarly to Jason and JWS. I don't disagree with any of the literal statements you made, but I think the frame is really off. Perhaps OP benefits from this frame, but I probably disagree with that too.
Another frame: OP has huge amounts of soft and hard power over the EA community. In some ways, it is the de facto head of the EA community. Is this justified? How effective is it? How do they react to requests for information about questionable grants that have predictably negative impacts on the wider EA community? What steps do they take to guard against motivated reasoning when doing things that look like stereotypical examples of motivated reasoning? There are many people who have a stake in these questions.
The math suggests that the meta would look much different in this world. CEA's proposed budget for 2024 is $31.4MM by itself, about half for events (mostly EAG), about a quarter for groups. There are of course other parts of the meta. There were 3567 respondents to the EA Survey 2022, which could be an overcount or undercount of the number of people who might join a fee-paying society. Only about 60% were full-time employed or self-employed; most of the remainder were students.
Maybe a leaner, more democratic meta would be a good thing -- I don't have a firm opinion on that.
I sometimes argue against certain EA payment norms because they feel extractive, or cause recipients to incur untracked costs. E.g. "it's not fair to have a system that requires unpaid work, or going months between work in ways that can't be planned around and aren't paid for". This was the basis for some of what I said here. But I'm not sure this is always bad, or that the alternatives are better. Some considerations:
- if it's okay for people to donate money I can't think of a principled reason it's not okay for them to donate time -> unpaid work is not a priori bad.
- If it would be okay for people to solve the problem of gaps in grants by funding bridge grants, it can't be categorically disallowed to self-fund the time between grants.
- If partial self-funding is required to do independent, grant-funded work, then only people who can afford that will do such work. To the extent the people who can't would have done irreplaceably good work, that's a loss, and it should be measured. And to the extent some people would personally enjoy doing such work but can't, that's sad for them. But the former is an empirical question weighed against the benefits of underpaying, and the latter i
... (read more)

I think an underappreciated part of castlegate is that it fairly easily puts people in an impossible bind.
EA is a complicated morass, but there are a few tenets that are prominent, especially early on. These may be further simplified, especially in people using EA as treatment for their scrupulosity issues. For most of this post I'm going to take that simplified point of view (I'll mark when we return to my own beliefs).
Two major, major tenets brought up very early in EA are:
The natural conclusion of which is that donating to GiveWell- or OpenPhil-certified causes is a safe and easy way to fulfill your moral duty.
If you're operating under those assumptions and OpenPhil funds something without making their reasoning legible, there are two possibilities:
Bot... (read more)
Utilitarianism without strong object-level truthseeking be like
(credit: I found on twitter, uncredited)
Salaries at direct work orgs are a frequent topic of discussion, but I’ve never seen those conversations make much progress. People tend to talk past each other- they’re reading words differently (“reasonable”), or have different implicit assumptions that change the interpretation. I think the questions below could resolve a lot of the confusion (although not all of it, and not the underlying question. Highlighting different assumptions doesn’t tell you who’s right, it just lets you focus discussions on the actual disagreements).
Here’s my guess for the important questions. Some of them are contingent- e.g. you might think new grad generalists and experienced domain experts should be paid very differently. Feel free to give as many sets of answers as you want, just be clear which answers lump together, so no one misreads your expert salary as if it was for interns.
- What kind of position are you thinking about?
- Experienced vs. new grad
- Domain expertise vs generalist?
- Many outside options vs. few?
- Founder vs employee?
- What salary are you thinking about?
- What living conditions do you expect this salary to buy?
- Housing?
- Location?
- Kids?
- Food?
- Savings rate
- What is your ba
... (read more)

Ambition snowballs/Get ambitious slowly works very well for me, but some people seem to hate it. My first reaction is that these people need to learn to trust themselves more, but today I noticed a reason I might be unusually suited for this method.
Two things that keep me from aiming at bigger goals are laziness and fear. Primarily fear of failure, but also of doing uncomfortable things. I can overcome this on the margin by pushing myself (or someone else pushing me), but that takes energy, and the amount of energy never goes down the whole time I'm working... (read more)
I'm pretty sure you can't have consequentialist arguments for deceptions of allies or self, because consequentialism relies on accurate data. If you've blinded yourself then you can have the best utility function in the world and it will do you no good because you're applying it to gibberish.
I'll be at EAGxVirtual this weekend. My primary goal is to talk about my work on epistemics and truthseeking within EA, and especially get the kind of feedback that doesn't happen in public. If you're interested, you can find me on the usual channels.