This is a special post for quick takes by Sanjay. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I'm used to seeing many expert opinions on psychotherapy converge on the view that the type of therapy doesn't make much difference (at least as far as the evidence can tell us). I.e. it doesn't seem to matter much whether you choose CBT or IPT or whatever. The therapeutic alliance, on the other hand, does matter. Therapeutic alliance means something like "How well you get on with your therapist" (plus some related things).

I had a fleeting thought that perhaps the therapeutic alliance might be neglected. E.g. maybe there's a novel intervention which involves training therapists really heavily on how to improve the therapeutic alliance, and skimping on (or entirely ignoring?) traditional therapeutic methods.

I suspect that this fleeting thought is probably not correct.

Firstly, a quick look at Google Scholar found loads of meta-analyses. One of them started with "The alliance continues to be one of the most investigated variables related to success in psychotherapy".

Secondly, the intervention that I imagined (training therapists to focus on the alliance) arguably already exists. Rogerian therapy (also known as person-centred therapy) is one of the main forms of therapy, and it arguably focuses heavily on the alliance (I'm glossing over a bunch of nuances about Rogerian therapy).

Thirdly, the effect size seems to be small, as far as I can tell (having not looked into this carefully). Arnow & Steidman 2014 stated that "Overall, meta-analytic findings reveal that the magnitude of the alliance-outcome relationship is modest, accounting for 5-8% of the variance in outcome."
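
For a rough sense of what "5-8% of the variance" means (this is my own back-of-the-envelope arithmetic, not a figure from the paper): converting variance explained into a correlation coefficient gives r of roughly 0.22-0.28, i.e. a modest association.

```python
import math

# My own illustrative arithmetic, not a figure from Arnow & Steidman 2014:
# if the alliance explains R^2 of the variance in outcomes, the implied
# correlation is r = sqrt(R^2).
for variance_explained in (0.05, 0.08):
    r = math.sqrt(variance_explained)
    print(f"{variance_explained:.0%} of variance -> r of about {r:.2f}")
# 5% of variance -> r of about 0.22
# 8% of variance -> r of about 0.28
```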

So I suspect the alliance is actually getting loads of attention relative to its importance. If I looked into this more carefully, it's possible I could change my mind again, but I suspect I won't look into it more carefully.

I always read the therapeutic alliance as advice for the patient: try many therapists until you find one that fits. I imagine therapists are already putting a lot of effort into the alliance.

Perhaps an intervention could be an information campaign to tell patients more about this? I feel it's not well known or obvious that you can (1) tell your therapist their approach isn't working and (2) switch around a ton before potentially finding a fit.

I haven’t looked much into it though

My intuition says that people are probably already following the heuristic "if you don't like your therapist, try to get another one". I also haven't given much thought to the patient's/client's perspective on the therapeutic alliance.

How likely is this to be a real effect vs a confound? I imagine if I feel like therapy is working, I'm much more likely to like my therapist (similarly I'm more likely to like a physical trainer if I'm getting healthier, I'm more likely to like my teacher if I feel like I'm learning more, etc)

Good question. 

It's also helpful because the wording of my post was meant to convey that experts tend to believe the therapeutic alliance matters (and not necessarily that I'm confident that that's the case).

One of the papers that I referenced did flag that most of the studies are observational rather than experimental, which does validate your concern. (I think it was Arnow & Steidman 2014 which said this; I don't know if a more recent paper sheds more light on this).

I'm not planning to look into this topic in any depth, but perhaps someone more knowledgeable can give a more definitive answer.

Someone pinged me a message on here asking about how to donate to tackle child sexual abuse. I'm copying my thoughts here.

I haven't done a careful review on this, but here's a few quick comments:

  • Overall, I don't know of any charity which does interventions tackling child sexual abuse, and which I know to have a robust evidence-and-impact mindset.
  • Overall, I have the impression that people who have suffered from child sexual abuse (hereafter CSA) can suffer greatly, and that alleviating this suffering after the fact is intractable. My confidence on this is medium -- I've spoken with enough people to be confident that it's true at least some of the time, but I'm not clear on the academic evidence.
  • This seems to point in the direction of prevention instead.
  • There are interventions which aim to support children to avoid being abused. I haven't seen the evidence on this (and suspect that high quality evidence doesn't exist). If I were to guess, I would guess that the best interventions probably do have some impact, but that impact is limited.
    • To expand on this: my intuition says that the less able the child is to protect themselves, the more damage the CSA does. I.e. we could probably help a confident 15-year-old avoid being abused; however, that child might suffer different -- and, I suspect, on average less bad -- consequences than a 5-year-old, whereas helping the 5-year-old might be very intractable.
  • This suggests that work to support the abuser may be more effective. 
    • It's likely also more neglected, since donors are typically more attracted to helping a victim than a perpetrator.
    • For at least some paedophiles, although they have sexual urges toward children, they also have a strong desire to avoid acting on them, so operating cooperatively with them could be somewhat more tractable.
  • Unfortunately, I don't know of any org which does work in this area, and which has a strong evidence culture. Here are some examples:
    • I considered volunteering with Circles many years ago. They offer circles of accountability and support to perpetrators of CSA who were recently released from prison -- essentially a handful of volunteers who befriend a CSA perpetrator. I opted not to volunteer because they didn't have a strong enough evidence culture.
    • The Dunkelfeld project (aka Troubled Desire) seems to be doing work which has promise, but again, the evidence appears to be lacking.
  • I'm unclear on the geographic dimension to this. I.e. maybe the fact that costs are lower in the developing world could make it more effective there, but I'm unclear on this. I doubt that tractability would be much better there, which is normally what drives most of the benefit of operating in the developing world.
  • I'm not very optimistic about this achieving the cost-effectiveness of GiveWell Top Charity / SoGive Gold Standard status, unless the moral weights component of the model weighted the extent of suffering associated with CSA as very high on average (which might be valid).

If anyone is interested in this topic and wants to put aside a substantial sum (high five figures or six figures), then the next steps would involve a number of conversations to gather more evidence and check whether existing interventions are as lacking in evidence as I suspect. If so, the next step would be to work on creating a new charity. It's possible that Charity Entrepreneurship might be interested in this, but I haven't spoken with them about it and I don't know their appetite. I'd be happy to support you on this, mostly because I know that CSA can be utterly horrific (at least some of the time).

You may know this already, but No Means No Worldwide works with children and adolescents. E.g., the mean age of girls in this study is 12.3 years. Founders Pledge evaluated them (see here for a summary and here for a full report) and provisionally recommended them. I don't know if the person is particularly looking into tackling sexual abuse of younger children, but this charity seems worth mentioning as an option.

Thanks very much Saulius. 

In SoGive's 2023 plans document, we said 

"An investigation of No Means No Worldwide was suggested to us by a member of the EA London community, who was excited to have an EA-aligned recommendation for a charity which prevents sexual violence. We have mostly completed a review of this charity, and were asked not to publish it yet because it used a study which is not yet in the public domain."

That said, part of the reason I didn't allude to NMNW is that my vague memory was that the average age was older (presumably my vague memory was wrong).

Will everyone code 2-3x more quickly because of AI?

To get a sense of the impact of AI on coding, I conducted a survey of around 100 coders and people working in IT.

Respondents estimated, on average, that code could be developed in around 50% as much time (i.e. twice as quickly) in light of the fact that AI tools like GPT-4 exist. This was just assuming that AI stays as good as it is now. If they incorporated the fact that AI might get better, the estimate moved to almost 3x as quickly.
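
To spell out the arithmetic (the specific fractions below are illustrative assumptions, not the exact survey averages -- those are in the linked spreadsheet): a task completed in some fraction of the original time corresponds to a speedup of one divided by that fraction.

```python
# Illustrative arithmetic only; the underlying survey responses are in the spreadsheet linked below.
def speedup(fraction_of_original_time: float) -> float:
    """Convert 'fraction of the original development time' into a speedup multiple."""
    return 1.0 / fraction_of_original_time

print(speedup(0.50))  # 2.0x  -- "50% as much time", assuming AI stays as good as it is now
print(speedup(0.35))  # ~2.9x -- hypothetical fraction, shown only to illustrate "almost 3x"
```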

Of respondents who said that they were coders, 56% said they had already started using Large Language Models (LLMs) like GPT-4.

There are several reasons to think the 2-3x forecast might not be correct:

  • Just because these people are actual software engineers, doesn't mean they are good forecasters
  • They might not even have a good handle on the time benefits that they have already experienced (e.g. they might not track their time carefully, or know about the counterfactual)
  • The sample size was not that high -- c100 people
  • I might not have worded the question clearly enough (click on the link for more details)
  • One questionable aspect of the way I worded the question is that I didn't explicitly ask for or encourage the answer that AI tools might slow down people's ability to code.

I don't know about the extent to which current forecasts of AI timelines are accounting for this effect.

You can scrutinise my work here: https://docs.google.com/spreadsheets/d/1I3_0kiwCJKzpuRlc66ytQVSDAXLd8wj2NmYhpZ28m94/edit#gid=990899939

This is a really good piece of input for predictions of how the supply-demand curve for coding will change in the future. 

A 50% reduction in development time effectively reduces the cost of coding by 50%. Depending on the shape of the supply-demand curve for coding, this could lead to high unemployment, or a boom for coders that leads to even higher demand.
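
To make that concrete with a toy model (all numbers here are mine and purely hypothetical): whether total spending on coders falls or rises after a 50% cost reduction depends on the price elasticity of demand for code.

```python
# Toy constant-elasticity demand model, purely illustrative:
# quantity demanded scales as price ** (-elasticity),
# so total spend on coding = price_ratio * quantity_ratio.
def relative_spend_on_coding(price_ratio: float, elasticity: float) -> float:
    quantity_ratio = price_ratio ** (-elasticity)
    return price_ratio * quantity_ratio

for elasticity in (0.5, 1.0, 2.0):
    spend = relative_spend_on_coding(0.5, elasticity)  # cost of coding halves
    print(f"elasticity {elasticity}: spend on coders becomes {spend:.2f}x")
# elasticity 0.5 -> 0.71x (fewer coder-hours paid for; pressure on employment)
# elasticity 1.0 -> 1.00x (total spend unchanged)
# elasticity 2.0 -> 2.00x (a boom for coders)
```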

Note:  coding productivity tools developed over the past 40 years have led to ever-increasing demand since so much value is generated :) 
 
