bruce

3029 karma

Bio

Doctor from NZ, independent researcher (grand futures / macrostrategy) collaborating with FHI / Anders Sandberg. Previously: Global Health & Development research @ Rethink Priorities.

Feel free to reach out if you think there's anything I can do to help you or your work, or if you have any Qs about Rethink Priorities! If you're a medical student / junior doctor reconsidering your clinical future, or if you're quite new to EA / feel uncertain about how you fit in the EA space, have an especially low bar for reaching out.

Outside of EA, I do a bit of end of life care research and climate change advocacy, and outside of work I enjoy some casual basketball, board games and good indie films. (Very) washed up classical violinist and Oly-lifter.

All comments in personal capacity unless otherwise stated.

Comments

bruce

Thanks for writing this post!

I feel a little bad linking to a comment I wrote, but the thread is relevant to this post, so I'm sharing in case it's useful for other readers, though there's definitely a decent amount of overlap here.

TL;DR

I personally default to being highly skeptical of any mental health intervention that claims a ~95% success rate and a PHQ-9 reduction of 12 points over 12 weeks, as this is a clear outlier among treatments for depression. The effectiveness figures from StrongMinds are also based on studies that are non-randomised and poorly controlled. There are other questionable methodological issues, e.g. around adjusting for social desirability bias. The topline cost-effectiveness figure of $170 per head is also possibly an underestimate: while ~48% of clients were treated through SM partners in 2021, and Q2 results (pg 2) suggest StrongMinds is on track for ~79% of clients treated through partners in 2022, the expenses and operating costs of the partners responsible for these clients were not included in the methodology.

(This mainly came from a cursory review of StrongMinds documents, and not from examining HLI analyses, though I do think "we’re now in a position to confidently recommend StrongMinds as the most effective way we know of to help other people with your money" seems a little overconfident. This is also not a comment on the appropriateness of recommendations by GWWC / FP.)

 

(commenting in personal capacity etc)

 

Edit:
Links to existing discussion on SM. Much of this ends up touching on discussions of HLI's methodology / analyses rather than the strength of evidence in support of StrongMinds, but I'm including it as it's ultimately relevant to the topline conclusion about StrongMinds (inclusion =/= endorsement etc):

bruce

While I agree that both sides are valuable, I agree with the anon here - I don't think these tradeoffs are particularly relevant to a community health team investigating interpersonal harm cases with the goal of "reduc[ing] risk of harm to members of the community while being fair to people who are accused of wrongdoing".

One downside of having the bad-ness of, say, sexual violence[1] be mitigated by the accused person's perceived impact (how is the community health team actually measuring this? How good someone's forum posts are? Whether they work at an EA org? Whether they are "EA leadership"?) when considering what the appropriate action should be (if this is happening) is that it plausibly leads to different standards for bad behaviour. By the community health team's own standards, taking someone's potential impact into account as a mitigating factor seems like it could increase the risk of harm to members of the community (by not taking sufficient action, with the justification of perceived impact), while being more unfair to people who are accused of wrongdoing. To be clear, I'm basing this off the forum post, not any non-public information.

Additionally, a common theme about basically every sexual violence scandal that I've read about is that there were (often multiple) warnings beforehand that were not taken seriously.

If there is a major sexual violence scandal in EA in the future, it will be pretty damning if the warnings and concerns were clearly raised, but the community health team chose not to act because they decided it wasn't worth the tradeoff against the person/people's impact.

Another point is that people who are considered impactful are likely to be somewhat correlated with people who have gained respect and power in the EA space, have seniority or leadership roles etc. Given the role that abuse of power plays in sexual violence, we should be especially cautious of considerations that might indirectly favour those who have power.

More weakly, even if you hold the view that it is in fact the community health team's role to "take the talent bottleneck seriously; don’t hamper hiring / projects too much" when responding to, say, a sexual violence allegation, it seems like it would be easy to overvalue the cost of taking immediate action against the person (their lost impact), and undervalue the bad-ness of many more people opting not to get involved, or distancing themselves from the EA movement, because they perceive it to be an unsafe place for women, with unreliable ways of holding perpetrators accountable.

That being said, I think the community health team has an incredibly difficult job, and while they play an important role in mediating community norms and dynamics (and thus carry a corresponding amount of responsibility), it's always easier to make comments of a critical nature than to make the difficult decisions they have to make. I'm grateful they exist, and don't want my comment to come across as an attack on the community health team or its individuals!

(commenting in personal capacity etc)

  1. ^

    used as an umbrella term to include things like verbal harassment. See definition here.

bruce

If this comment is more about "how could this have been foreseen", then this comment thread may be relevant. I should note that hindsight bias means it's much easier to look back and assess problems as obvious and predictable ex post, when powerful investment firms, and individuals who had skin in the game, also missed this.

TL;DR: 
1) There were entries that were relevant (this one also touches on it briefly)
2) They were specifically mentioned
3) There were comments relevant to this. (notably one of these was apparently deleted because it received a lot of downvotes when initially posted)
4) There have been at least two other posts on the forum prior to the contest that engaged with this specifically

My tentative take is that these issues were in fact identified by various members of the community, but there isn't a good way of turning identified issues into constructive actions. The status quo is that we just have to trust that organisations have good systems in place for this, and that EA leaders are sufficiently careful and willing to make changes or consider them seriously, such that all the community needs to do is "raise the issue". I think looking at the systems within the relevant EA orgs or leadership is what investigations or accountability questions going forward should focus on. All individuals are fallible; we should be looking at how to build systems such that the community doesn't have to just trust that the people who have power and are steering the EA movement will get it right, and such that there are ways for the community to hold them accountable to their ideals or stated goals if things appear not to be playing out in practice, or risk not doing so.

i.e. if there are good processes and systems in place, and documentation of these processes and decisions, it's more acceptable (because other organisations that probably have very good due diligence processes also missed it). But if there weren't good processes, or if these decisions weren't careful and intentional, then that's comparatively more concerning, especially in the context of specific criticisms that have been raised[1] or previous precedent. For example, I'd be especially curious about the events surrounding Ben Delo,[2] and the processes that were implemented in response. I'd be curious whether there are people in EA orgs involved in steering who keep track of potential risks and early warning signs to the EA movement, in the same way the EA community advocates for in the case of pandemics, AI, or even general ways of finding opportunities for impact. For example, SBF, who is listed as an EtG success story on 80,000 Hours, has publicly stated he's willing to go 5x over the Kelly bet, and described yield farming in a way that Matt Levine interpreted as a Ponzi. Again, I'm personally less interested in the object-level decision (e.g. whether we consider SBF's Kelly bet comments serious, or Levine's interpretation appropriate), and more in what the process was and how this was considered at the time with the information they had. I'd also be curious about the documentation of any SBF-related concerns that were raised by the community, if any, and how these concerns were managed and considered (as opposed to critiquing the final outcome).

Outside of due diligence and ways to facilitate whistleblowers, decision-making processes around the steering of the EA movement are crucial as well. When decisions made by orgs bring clear benefits to one part of the EA community while creating clear risks that are shared across wider parts of the EA community,[3] it would probably be of value to look at how these decisions were made and what tradeoffs were considered at the time. Going forward, it would be worth thinking about how to either diversify those risks, or make decision-making more inclusive of a wider range of stakeholders,[4] keeping in mind the best interests of the EA movement as a whole.

(this is something I'm considering working on in a personal capacity along with the OP of this post, as well as some others - details to come, but feel free to DM me if you have any thoughts on this. It appears that CEA is also already considering this)

If this comment is about "are these red-teaming contests in fact valuable for the money and time put into them, if they miss problems like this"

I think my view here (speaking only for the red-teaming contest) is that even if this specific contest was framed in a way that it missed these classes of issues, the value of the very top submissions[5] may still have made the efforts worthwhile. The potential value of a different framing was mentioned by another panelist. If it's the case that red-teaming contests are systematically missing this class of issues regardless of framing, then I agree that would be pretty useful to know, but I don't have a good sense of how we would try to investigate this.

  

  1. ^

    This tweet seems to have aged particularly well. Despite supportive comments from high-profile EAs on the original forum post, the author seemed disappointed that nothing came of it in that direction. Again, without getting into the object-level discussion of the claims of the original paper, it's still worth asking questions about the processes. If there were actions planned, what did these look like? If not, was that because of a disagreement over the suggested changes, or over the extent to which it was an issue at all? How were these decisions made, and what was considered?

  2. ^

    Apparently a previous EA-aligned billionaire (?donor) who got rich by starting a crypto trading firm, and who pleaded guilty to violating the Bank Secrecy Act

  3. ^

    Even before this, I had heard from a primary source in a major mainstream global health organisation that there were staff who wanted to distance themselves from EA because of misunderstandings around longtermism.

  4. ^

    This doesn't have to be a lengthy deliberative consensus-building project, but it should at least include internal comms across different EA stakeholders to allow discussions of risks and potential mitigation strategies.


As requested, here are some submissions that I think are worth highlighting, or that I considered awarding but that ultimately did not make the final cut. (This list is non-exhaustive, and should be taken more lightly than the Honorable mentions, because by definition these posts are less strongly endorsed by those who judged them. Also commenting in personal capacity, not on behalf of other panelists, etc.)

Bad Omens in Current Community Building
I think this was a good-faith description of some potential / existing issues that are important for community builders and the EA community, written by someone who "did not become an EA" but chose to go to the effort of providing feedback with the intention of benefitting the EA community. While these problems are difficult to quantify, they seem important if true, and pretty plausible based on my personal priors/limited experience. At the very least, this starts important conversations about how to approach community building that I hope will lead to positive changes, and a community that continues to strongly value truth-seeking and epistemic humility, which is personally one of the benefits I've valued most from engaging in the EA community.

Seven Questions for Existential Risk Studies
It's possible that the length and academic tone of this piece detracts from the reach it could have, and it (perhaps aptly) leaves me with more questions than answers, but I think the questions are important to reckon with, and this piece covers a lot of (important) ground. To quote a fellow (more eloquent) panelist, whose views I endorse: "Clearly written in good faith, and consistently even-handed and fair - almost to a fault. Very good analysis of epistemic dynamics in EA." On the other hand, this is likely less useful to those who are already very familiar with the ERS space.

Most problems fall within a 100x tractability range (under certain assumptions)
I was skeptical when I read this headline, and while I'm not yet convinced that 100x tractability range should be used as a general heuristic when thinking about tractability, I certainly updated in this direction, and I think this is a valuable post that may help guide cause prioritisation efforts.

The Effective Altruism movement is not above conflicts of interest
I was unsure about including this post, but I think this post highlights an important risk of the EA community receiving a significant share of its funding from a few sources, both for internal community epistemics/culture considerations as well as for external-facing and movement-building considerations. I don't agree with all of the object-level claims, but I think these issues are important to highlight and plausibly relevant outside of the specific case of SBF / crypto. That it wasn't already on the forum (afaict) also contributed to its inclusion here.


I'll also highlight one post that was awarded a prize, but I thought was particularly valuable:

Red Teaming CEA’s Community Building Work
I think this is particularly valuable because of the unique and difficult-to-replace position that CEA holds in the EA community, and as Max acknowledges, it benefits the EA community for important public organisations to be held accountable (and to a standard that is appropriate for their role and potential influence). Thus, even if listed problems aren't all fully on the mark, or are less relevant today than when the mistakes happened, a thorough analysis of these mistakes and an attempt at providing reasonable suggestions at least provides a baseline to which CEA can be held accountable for similar future mistakes, or help with assessing trends and patterns over time. I would personally be happy to see something like this on at least a semi-regular basis (though am unsure about exactly what time-frame would be most appropriate). On the other hand, it's important to acknowledge that this analysis is possible in large part because of CEA's commitment to transparency.

The CEO confirms Riley was raising behavioral complaints about you.

Where in Zach's comment did he confirm this? He said:
"In the fall of 2024, Riley went to HR with the document Frances references to share complaints about a colleague’s behavior. Those concerns were the focus of Riley’s writing, and they drove how our team engaged with and shared (or didn’t share) it." This doesn't confirm that the complaints were about Frances?

then the inclusion is at least explicable as context 

Can you clarify exactly what you're claiming is explicable to be included as context? Zach's comment said, "Sharing HR concerns does not require disclosing a colleague’s sexual assault". Frances said, "But further, it was more than that. He didn't neutrally “disclose” it in a single, non-specific sentence. He wrote a description of me being raped. He describes it. He muses and speculates about my subsequent mental health crisis."

Your post implies that CEA leadership is cowardly, indifferent, and complicit. But an organization that waived confidentiality, paid for your lawyer, never attempted to silence you, and whose CEO gave you what you yourself describe as a genuine apology is not staffed by monsters. They got things wrong. That is meaningfully different from the picture this post paints, and I think the people involved deserve to have that said.

Cowardice was largely a description of other people deferring to leadership, but minor quibble aside, these two claims are not mutually exclusive! The folks involved can be reasonably and fairly perceived to be cowardly, indifferent, and complicit to harms, while also getting things wrong, and also not staffed by 'monsters'.

We're asked to believe that HR, legal, the CEO, the COO, and multiple managers all independently failed a basic moral test. More likely, in full context, it read like a messy workplace complaint with too much personal detail.

And we haven't seen it. Everything we know comes from fragments read aloud from memory by a colleague, relayed months later in a post written as advocacy. We don't have the information this thread thinks it has.

I would be much more sympathetic to this if it weren't for the fact that two independent investigations subsequently flagged this as harassment/sexual harassment, including one by CEA's own legal team, and the apparently massive shift in behaviour after Frances opted for public accountability.

If Riley was complaining about behavioral issues connected to Frances's trauma, and she herself acknowledges worsening PTSD and difficulty functioning, then providing that context in a complaint isn't sexualization. It's explanation. Clumsy, probably too detailed, but meaningfully different from the post's framing.

It's entirely reasonable for someone to submit an HR complaint, and it's entirely reasonable for you to guess that Riley was well-intentioned and clumsy rather than malevolent. But it's not clear to me what your interpretation of the post's framing is. From my perspective, Frances hasn't made any claims about Riley's intent, just about the impact that the circulation of this document had.

Nine months went by and I heard absolutely nothing. No safeguarding steps were taken. The document remained in circulation.

For a while, I tried to block the harassment out entirely. I was completely overwhelmed and simply did not have the capacity to process it. I was already managing a PTSD diagnosis as a result of the rape and an ongoing criminal investigation with the UK police. I no longer trusted CEA’s HR. Not to mention, I didn’t have access to the document myself.

As the months went on, I became increasingly anxious and embarrassed around leadership and those who had read the document. Increasingly dissociated. I began having nightmares about new documents being circulated. My therapist noted my PTSD symptoms were continuing to worsen. When I ran into Riley at the office, I would often freeze. I started eating lunch in my team’s room to avoid the cafeteria, or skipping lunch altogether.


To be clear, I don't think that the focus on Riley's intention meaningfully changes the mistakes at CEA here! I'd type more on this, but here's a good passage on that point.

I received this in my DMs and am sharing anonymously on their behalf:

Zach says: "Failing to do so placed an unfair burden on Frances to self-advocate ...", but this seems to be obscuring the fact that she wouldn't even have had a chance to self-advocate if it hadn't been for some member of staff (presumably against CEA policy) sharing the existence of the doc with her. I wonder why this wasn't addressed in the reflections.

Indeed, why is it that when someone did have concerns the thing they did was to partially disclose things to Frances rather than raise it within CEA? This does seem to suggest that people reading the document could feel worried about it, and also might be suggestive of issues with internal culture. I feel a bit worried that this isn't a part of what CEA appear to be taking responsibility for addressing.

Can you check your Claude link? This is what it links to for me:

https://forum.effectivealtruism.org/posts/XxXnPoGQ2eKsQx3FE/data%20concerning%20a%20natural%20person%E2%80%99s%20sex%20life

Hey Zach, thanks for the response.
I know you are unlikely to be able to reply to this with anything meaningfully helpful, and this might be frustrating for you, but I just wanted to flag some things that from the outside seem at minimum incongruous. 

I've typed this quickly and without visibility into all of the considerations and information you have, so apologies in advance if this is more uncharitable than you'd like. (emphasis in quotes added)
 

Those concerns were the focus of Riley’s writing, and they drove how our team engaged with and shared (or didn’t share) it. We have an obligation as an employer to treat such complaints confidentially, evaluate them seriously, and avoid retaliatory action against the person raising the concerns. These obligations exist in part to avoid creating a chilling effect where employees feel uncomfortable raising HR concerns for fear of negative consequences for themselves.

Sorry but presumably:

  • CEA's obligation as an employer to evaluate complaints seriously also applies to Frances?
  • CEA's obligations around confidentiality would also apply to the sharing of Frances' experiences in the doc?
  • an employee raising concerns about something doesn't shield them from all misconduct or harassment during the process of raising the concern? 

What about the chilling effect of staff not feeling comfortable raising HR concerns, or even working at your organisation, because empirically CEA don't seem to take harassment / sexual harassment sufficiently seriously? Does the idea that multiple managers, HR, the CEO, and the COO of an organisation can allow a sexualised description of an employee's rape (etc.) to be spread in the organisation without her consent, disregard an offer from the community health team to step in, and take ~no action for 9 months not seem like it might have some kind of a chilling effect (or more)? I recognise that it's important for HR concerns to be evaluated seriously, but it feels like this standard wasn't applied in any meaningful way to Frances?

It is now clear the ways in which our approach was too limited, too focused on following a standard HR process, and insufficiently proactive in recognizing the harmful nature of the contents included with the complaints.

I hope you appreciate that it's difficult to take this statement seriously; it really doesn't seem like the issue here is that CEA was following a standard HR process, either during the 9 months or after the complaint.

What standard HR processes include “designated walking paths” and “assigned meal times” as appropriate responses? Alternatively, it seems like CEA's 'standard HR process' does not capture the fact that the kind of content circulated might very obviously be considered a separate HR issue? (I recognise you explicitly name some of these failings afterwards and I don't want to discount that; I just separately don't find it very convincing that 'HR was too focussed on following a standard process' is a good excuse or representation of what happened, and wanted to call that out.[1])

CEA should have proactively initiated this investigation sooner, without requiring Frances to act first. Failing to do so placed an unfair burden on Frances to self-advocate during what was an already difficult time...

Fair enough! Sorry if I'm reading too much into what might just be ~boilerplate. But acknowledging just the start time of the investigation makes it sound like you agree that this investigation should have been done, and that once Frances advocated for it you took it seriously[2] (perhaps bar some 'communication issues' that you acknowledge).

But if:

  1. CEA's own legal team decided this was harassment
  2. you later acknowledge[3] that creating a culture that prevents/addresses sexual harassment included staffing changes such as removing Riley, etc;

Then why did CEA propose things like "designated walking paths" and "assigned meal times" the first time round, instead of just taking action at that stage? This doesn't seem like it's just an issue of "CEA wasn't proactive about initiating this investigation", but also one where it didn't take the investigation or HR processes for Frances appropriately seriously! Also, this does not appear to be just an issue of Riley, or of HR. This document allegedly crossed the desks of multiple managers, as well as yours and the COO's! Should readers conclude that somehow none of the people involved considered taking further action? Or that they did take more actions and it didn't go anywhere? Or something else?

To be more explicit, right now it doesn't seem like there's any public information I can draw on to rule out something like "CEA took actions that appear consistent with them being motivated more by protecting themselves from legal and reputational risk, rather than because they are primarily interested in the wellbeing of their employees".[4]

Given the seriousness of the situation, I hope you understand me holding you to the public standard rather than basing this on any positive personal interactions I may have had with you and other CEA staff!
 

I also recognize laying groundwork means we are far from the desired end state, and that we will need to work hard to improve instead of offering quick fix solutions. 

Part of the issue here is that even the groundwork that has already been laid did not help in this case, right? What's the reason the EA community, or prospective employees, should trust that things are different this time around?


(written in personal capacity)

 

  1. ^

    Perhaps I'd be more convinced by something like "existing processes were grossly inadequate +/- not applied consistently", for example

  2. ^

    "We have an obligation as an employer to treat such complaints confidentially, evaluate them seriously..."

  3. ^

    "In particular, we need to create a culture where there is more organizational ownership and proactivity to prevent and address sexual harassment. We’re laying the groundwork for some of those changes via new staffing (Riley no longer works at CEA, we have a new HR manager, and multiple additional hires are on the way)."

  4. ^

    See also the extent to which the effort CEA put into it changed once Frances informed the board that she was considering going public


To be clear, that Claude conversation was not a conversation from a CEA staff member! I was just very surprised by what seems to have happened here. I had a conversation with Claude to show that even if you knew nothing about HR or workplace practices, you'd get to a better set of recommendations than what happened in practice just by asking an LLM.
