This is an episode of Spencer Greenberg's Clearer Thinking podcast. The episode was cross-posted to the feed of The 80,000 Hours Podcast.

What are the best strategies for improving ourselves? How are line managers useful? Why does Rob prefer long-form content for the 80,000 Hours podcast? What are the sorts of things humans value and why? In what ways do research ethics considerations fail to achieve their stated objectives? Why are prediction markets useful?

I found this interview enjoyable and useful, and would recommend it. See the comments section of this post for some points from the interview which I found particularly interesting. (I'm putting those points in comments because I think that might be better for encouraging discussion and keeping it organised.)

Currently, I think it'd be a good idea for all future episodes that appear in the 80k podcast feed to be linkposted to the Forum, and I might make such linkposts in future myself unless someone else starts doing so or suggests reasons why it shouldn't be done. If you have thoughts on that, please comment to say so here.

Comments (7)



People would often benefit from more "line management", and could often get it just by setting up weekly meetings with someone else who's in a similar boat

A chunk of the interview is devoted to these points. From memory, some points made were:

  • It can be weirdly useful to have ~weekly meetings to just discuss what one did last week, what went well and poorly, what one's goals are, and what one plans to do next week
    • The reason the usefulness is weird/surprising is that a lot of the benefit seems to come from just having these meetings at all and being asked obvious questions like "Would that task really be the best thing to do to achieve your goals?"
      • And in theory, one could just simulate these conversations by oneself
      • But at least for many people, it seems to be more effective to have an actual conversation with another person
    • This can help with both productivity (including getting the right tasks done) and mood (e.g., reducing self-doubt or a sense of listlessness)
  • Some people don't have someone to provide that "line management" role
    • E.g., people in PhD programs might not get frequent enough meetings with their advisor, or it might be clear that their advisor doesn't care about or is terrible at management
  • Those people might benefit from just arranging to have weekly meetings with a friend, colleague, fellow student, or similar who's in a similar boat
    • Having weekly meetings with the same person allows them to have more context on one's full situation, goals, skills, etc., which seems helpful for this
    • PhD students could arrange this themselves, and it might help combat PhD programs often seeming to be unusually crushing experiences (due to a lack of guidance, feedback, etc.)
    • (Something I'm not sure they explicitly said, but which seems true: This could probably be useful even for people who do have meetings with a line manager, if the person would benefit from meeting more, or if the manager sucks.)

That first point resonated with me very much. I received basically no line management at the school I taught at, and that sucked. I only realised how much it sucked once I moved into roles where I did have weekly line-management-style meetings and discovered how helpful they were (both for my productivity and for my mood).

And that third point seemed like an obvious but great idea. I intend to apply it myself if I find myself in a future situation where it's relevant. And I intend to keep it in mind as something to maybe suggest to people, when it seems relevant. 

People who try this approach out might also find it useful to read this post: Group debugging guidelines & thoughts. (It basically offers tips for coaching, which I'd guess also apply to this sort of line management to some extent.)

Some reasons why Rob thinks long (e.g., 2-6 hour) interviews are an unusually good medium for sharing ideas

(This is from memory and there's no transcript, so I'm probably missing some things and distorting other things. Also, I think Rob has written about this elsewhere, but I can't remember where.)

  • It's usually easier to pay attention to a spoken discussion than to something written (even if the writing is later read aloud, as in an audiobook)
    • The social aspect of discussions keeps them engaging
  • Interviewees will essentially always have to cover the basics of an idea if they want to talk about it. If they have enough time, they'll also cover key counterpoints. But usually they don't have enough time to go beyond that and discuss the counterpoints to those, and to those, and so on. 
    • So three 1-hour interviews will probably cover similar ground each time, and never get to a lot of what the interviewee finds most interesting about the idea. 
    • In contrast, one 3-hour interview can cover the basic stuff just once, and then spend the rest of the time on interesting further points.
  • It takes much longer to write ideas up well than to say them in conversation
    • Expressing ideas through speech comes more naturally to people
    • Speech allows for conveying things like uncertainty through tone, pauses, etc.
  • (Related to the above) Usually, there are a bunch of ideas that are well-known among specialists in a given field, but which haven't yet been written up and which are thus known to very few people outside of that field. Historically, the main way to learn these ideas has been to have 1-1 discussions with a specialist. Obviously, most people don't get a chance to do this. But now a specialist can have such a 1-1 discussion and then huge numbers of other people can listen to that.
    • So that means more people can get a sense of the latest thinking in a field than was previously possible

I agree with all of these points. 

And I found the final point particularly interesting. In my limited experience as a researcher so far, I have indeed been struck by how many interesting and potentially important ideas, framings, elaborations, counterpoints, etc. seem to be floating around in discussions, work-in-progress talks, internal drafts, etc., but haven't been fleshed out fully in any widely accessible content. Often there'll be some mention of these things in widely accessible content, but with lots of details missing, or framed in a way the researchers feel isn't quite right. And I think this is unfortunately a totally understandable and hard-to-fix problem, and it applies to some of my own work and ideas as well.

I think more long interviews with specialists in various areas - not just researchers but also practitioners, grantmakers, etc. - seem like a great, low-cost way to address that issue. And I'm glad various people are already creating such interviews.

Currently, I think it'd be a good idea for all future episodes that appear in the 80k podcast feed to be linkposted to the Forum

I'm currently in discussion with 80K about how and whether they want to crosspost their podcast episodes. I'd prefer to crosspost the full text, so that terms from the episodes show up when you use the Forum's search function. But summaries might be more likely to invite comments? I'm unsure.

Feature idea sparked by that: It could be cool if the EA Forum allowed for expandable boxes of text, in the way that e.g. Gwern's site does. 
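For concreteness, here's a minimal sketch of how such an expandable box could work, written as a TypeScript/React component (I'm assuming a React-style frontend; the component name and props are just illustrative, not anything the Forum actually provides):

```tsx
import React, { useState } from "react";

// Hypothetical sketch of an expandable text box, in the spirit of the
// collapsible sections on Gwern's site. Names and props are illustrative.
const ExpandableBox: React.FC<{
  summary: string;
  children: React.ReactNode;
}> = ({ summary, children }) => {
  const [open, setOpen] = useState(false);
  return (
    <section>
      <button onClick={() => setOpen(!open)}>
        {open ? "Hide" : "Show"}: {summary}
      </button>
      {/* The content is hidden rather than removed from the page, so it
          could still be picked up by whatever indexes posts for search. */}
      <div style={{ display: open ? "block" : "none" }}>{children}</div>
    </section>
  );
};

export default ExpandableBox;
```

If the collapsed text stays in the page like this (just hidden), it could still show up in the Forum's search, unlike content that's moved out to an external doc.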

Things authors can already do that are along those lines:

  • Have a section that explicitly says at the top "I think this section will be of interest to far fewer people than the rest of this post, so feel free to skip it."
  • Move a section to the end and call it an appendix
  • Just link to a Google Doc that sort of serves as the expandable box/appendix

But the first two of those options seem to less clearly signal "We really think fewer people should read this than the rest of this post." And the third option might sometimes signal that too strongly, and it also doesn't allow the content to show up when you use the Forum's search function.

If this feature was added, then the full text from 80k podcast episodes could be included in the expandable boxes of text.

I talked about this with JP (the Forum's lead developer). We feel as though the alternatives authors have, combined with the table of contents offering an easy way to skip sections, make this feature relatively low-value compared to other things we can work on. 

But in the abstract, it's a good idea, and I like what it does for searchability if it lets people avoid linking to external docs. We'll keep an eye on the idea for later.

As a data point, I found this super useful and would love to see these happen for each episode. Two particular ways I'd benefit: (i) typically each episode has a few bits I find particularly novel/helpful, and reading over a post later that re-states those helps them sink in more; (ii) sometimes I skip an episode based on the title, but I'd read over something like this to glean any quick useful things, and then maybe listen to the whole episode if it looked particularly useful.

I haven't ever (and doubt I will) read over a full transcript, so posting those wouldn't do the same thing. Also, putting the particularly interesting insights as comments allows upvoting to triage the insights that are most useful for the community.
