
Amber Dawn

4315 karma

Bio

I'm a freelance writer and editor for the EA community. I can help you edit drafts and write up your unwritten ideas. If you'd like to work with me, book a short calendly meeting or email me at ambace@gmail.com. Website with more info: https://amber-dawn-ace.com/

Comments: 221

Topic contributions: 70

Yeah, that's what I hoped. I couldn't honestly say that I would care about these labels (cos I don't eat animal products anyway), but I said stuff like 'consumers would like to know this', which I think is true.

That's interesting! 

As a follow-up: in consultations you've been involved with, did they put weight on the thoughts of random members of the public, assuming the thoughts were sensible ofc?

I have a few thoughts on this.

First, it's definitely worth considering whether you're contributing to conversations, but as others have said, I don't think the bar has to be "your post is as well-thought-out and detailed as a Scott Alexander post on the same topic". I basically trust the Forum's karma system + people's own judgment of what's valuable to them to effectively filter for what's worth reading, so I don't think writers have to do that filtering themselves. If your post isn't valuable to individual readers, they won't read it or upvote it.

A way you can see this is: if you write the thing, people can choose not to read it, but if you don't write it, they can't choose to read it. This feels similar to how some EAs say 'oh, I won't apply to that job because I don't want to waste the org's time, and surely I'm not a good candidate'. Well, that's true for some jobs, but most orgs want people to apply, even if they are uncertain, and they'll do the filtering themselves!

Second, maybe if you're worried about diverting traffic from posts you see as better, you could incorporate those posts into your own and link them/give them a shout-out.

E.g.: [at the end of the post] "if you're interested in this topic, I found this post by [NAME] super helpful in clarifying my thoughts."
E.g.: [at the start of the post] "I really enjoyed this post by [NAME] on [TOPIC], and it inspired me to write up some more arguments about [TOPIC] that [NAME] didn't go into."

i.e. frame your post as a "yes and" or as a contribution to an ongoing conversation, rather than something designed to compete with, or be as good as, other posts. 

NON-example: "If you care about this topic you should probably read this post which is waaaay better than mine I'm sure" self-flagellate, self-flagellate

Third, would it help to frame your writing (to yourself, or explicitly in the post) as a way for you to clarify your own thinking, rather than as something that has to make an original argument? For example, Holden Karnofsky has talked about 'learning by writing': maybe you are doing a version of that, rather than being at the absolute cutting edge of research. You might say 'well, in that case, I don't need to publish it', and it's true you don't have to publish anything, but some reasons to publish this sort of writing might be:

-it might be helpful, not for experts, but for others with similar expertise to you (or less) who are trying to clarify their own thinking on the matter
-you can get feedback from commenters that might help you learn
-the fact of having Published a Thing might motivate you to do more of this

FWIW I'm happy this question was asked publicly: I had no idea about this ruling (which is just extremely cruel and unhelpful) and this is a serious inclusion issue. 

Yeah, this is a good point: you can go a long way with just commitment/agency/creativity/confidence, etc.

I mean, maybe people who are strong in those traits aren't really "mediocre"?

But yeah, this is a good reminder that excellence isn't just one axis.

Answer by Amber Dawn

I’ve been thinking about this quite a bit recently. It’s not that I see myself as a “mediocre” EA: in fact, I work with EAs, so I am engaging with the community through my work. But I feel like a lot of the attitudes around career planning in EA sort of assume that you are formidable within a particular, rather narrow mould. You talk about mediocre EAs, but I’d also extend this to people who have strong skills and expertise that’s not obviously convertible into ‘working in the main EA cause areas’.

And the thing is, this kind of makes sense: like, if you’re a hardcore EA, it makes sense to give lots of attention and resources to people who can be super successful in the main EA cause areas, and comparatively neglect people who can’t. Inasmuch as the community’s primary aim is to do more good according to a specific set of assumptions and values, and not to be a fuzzy warm inclusive space, it makes sense that there aren’t a lot of resources for people who are less able to have an impact. But it's kind of annoying if you're one of those people! 

Or like: most EA jobs are crazy competitive nowadays. And from the point of view of "EA" (as an ideology), that's fine; impactful jobs should have large hiring pools of talented committed people. But from the point of view of people in the hiring pool, who are constantly applying to and getting rejected from EA jobs - or competitive non-EA jobs - because they've been persuaded these are the only jobs worth having, it kinda sucks.

There’s this well-known post ‘don’t be bycatch’; I currently suspect that EA structurally generates bycatch. By ‘structurally’ I mean ‘the behaviour of powerful actors in EA is kinda reasonable, but also it predictably creates situations where lots of altruistic, committed people get drawn into the community but can’t succeed within the paradigms the community has defined as success’. 

Thanks for writing this! I’ve long been suspicious of this idea but haven’t got round to fully investigating either the claim itself or my skepticism of it, so I super appreciate you kicking off this discussion.

I also identify with ‘do I disagree with this empirically, or am I just uneasy with the vibes/frame, and how do I tease those apart?'

For people who broadly agree with the idea that Sarah is critiquing: what do you think is the best defence of it, arguing from first principles and data as much as possible?

I have a couple of other queries/scepticisms about the power-law argument. I haven’t read all the other comments, so sorry if I repeat stuff said elsewhere.

1. Does it empirically hold up even assuming you can attribute stuff to individuals?
You focus a lot on critiquing the conceptual idea of the individual impact of one person (since most actions happen in the context of other actions and actors). I think I also have empirical disagreements with the claim even if we can tease out what impact comes from which person.

It feels to me like EAs sometimes over-generalize that finding from global health interventions — where I don’t doubt that it holds up — to other domains, where it hasn’t been established (e.g., orgs working in longtermist causes, or people compared to their peers, or actions one takes in one’s career). It’s possible that there *is* more discussion and substantiation of this idea out there, but I just haven’t seen it.

Like, even if we accept that (per your example) the President does have much more impact than the average person, or (per Jeff’s example above) a larger donor has more impact than a smaller donor to the same charity, can I generalize that to the actions available to me personally, or to questions of how impactful ‘overall’ I can be compared to my peers? What’s the empirical justification for such generalizations?

2. Is the bar low? Does this depend on how you define the space?

Benjamin Todd, in the article you linked, claims that the power-law pattern has been found in many areas of social impact. I’m sure this is true, but I want to point out that this is kind of contingent, not a law of nature. E.g., I’d guess this is due to some combination of ‘there’s not a culture of measuring outcomes and prioritization in general philanthropy’ (that’s kind of the whole point of EA) and/or ‘the world is very complicated and it’s hard to know ex ante (and sometimes even ex post) what will work/what did work’. 

Like, if there were a culture shift in philanthropy across the board meaning that interventions would only be funded or carried out if they met some effectiveness bar, would we still expect interventions to be power-law distributed? Surely less so?

To frame this another way, imagine I said to you ‘the nutritional value of foods follows a power-law distribution’, and you were like ‘hmm’, but then it turned out that among ‘foods’ I was counting inedible objects like chairs and rocks and grass. So yes, only a minority of objects have most of the nutritional value, but anything we’d call food is in the heavy tail, and this is a kind of silly frame.
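
To make that intuition a bit more concrete, here's a minimal sketch in Python. The lognormal distribution, its parameters, and the placement of the bar are all made-up assumptions for illustration, not anyone's actual data; it just shows how imposing an effectiveness bar shrinks the gap between the best and the typical option:

```python
import numpy as np

# Toy model: intervention 'impact' drawn from a heavy-tailed lognormal.
# All numbers here are hypothetical, chosen only to illustrate the point.
rng = np.random.default_rng(seed=0)
impact = rng.lognormal(mean=0.0, sigma=2.0, size=100_000)

def best_vs_typical(values):
    # How many times better is a top (99th-percentile) option than a median one?
    return np.percentile(values, 99) / np.median(values)

print(f"whole pool:    {best_vs_typical(impact):.0f}x")

# Impose an 'effectiveness bar': only options above the 90th percentile
# count at all (analogous to excluding chairs and rocks from 'foods').
bar = np.percentile(impact, 90)
print(f"screened pool: {best_vs_typical(impact[impact >= bar]):.0f}x")
```

The tail doesn't vanish, but the best-to-typical ratio drops a lot once the floor rises, which is the sense in which 'how power-law-y impact looks' depends on what you let into the pool in the first place.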

This point isn’t fully worked out but yeah, I wonder if ‘what counts as the distribution’ is kind of socially constructed in a way that’s not always helpful.  

I guess I weakly disagree: I think that motivation and already having roots in an issue really are a big part of personal fit - especially now that lots of "classic EA jobs" seem highly oversubscribed, even if the cause areas are more neglected than they should be. 

Like, to make this more concrete: suppose your climate-change-motivated young EA thinks 'well, now that I've learnt about AI risk, I guess I should pursue that career?', but they don't feel excited about it. Even if they have the innate ability to excel in AI safety, they will still have to outcompete people who have already built up expertise there, many of whom will find it easier to motivate themselves to work hard because they are interested in AI.

(On the object level, I assume that many roles in climate change and gender equality stuff are in fact more impactful than many roles in more canonical EA cause areas). 

Thanks for writing this! As others have said, thank you for trying to do this valuable work even if it didn't work out. 

I haven't read everything, so sorry if you mention this elsewhere, but I'm confused about:

-'Of the three studies we found that measure the effects of facility-based postpartum family planning programming on pregnancy rates, two found no effect (Rohr et al. 2024; Coulibaly et al. 2021), and one found only a 0.7% decrease in short-spaced pregnancies (Guo et al. 2022).
This suggests that facility-based programs may have limited to no effect on reducing unintended pregnancies despite increasing contraceptive uptake.'

Why might programs increase contraceptive uptake but not reduce unintended pregnancies? Is it mainly because many who take the contraceptives are in the postpartum insusceptibility period anyway? 

I think I've never gotten real feedback! It's possible I'm not promoting it often enough/not making specific requests of people, so people don't know it's an option.
