All posts

Today, 24 July 2025

Frontpage Posts

Personal Blogposts

Wednesday, 23 July 2025

Tuesday, 22 July 2025

Monday, 21 July 2025

Sunday, 20 July 2025

Saturday, 19 July 2025

Friday, 18 July 2025

Wednesday, 16 July 2025

Quick takes

Probably(?) big news on PEPFAR (title: White House agrees to exempt PEPFAR from cuts): https://thehill.com/homenews/senate/5402273-white-house-accepts-pepfar-exemption/. (Credit to Marginal Revolution for bringing this to my attention) 

Mini EA Forum Update

We've added two new kinds of notifications that have been requested multiple times before:

  1. Notifications when someone links to your post, comment, or quick take
    1. These are turned on by default — you can edit your notification settings via the Account Settings page.
  2. Keyword alerts
    1. You can manage your keyword alerts here, which you can get to via your Account Settings or by clicking the notification bell and then the three dots icon.
    2. You can quickly add an alert by clicking "Get notified" on the search page. (Note that the alerts only use the keyword, not any search filters.)
    3. You get alerted when the keyword appears in a newly published post, comment, or quick take (so this doesn't include, for example, new topics).
    4. You can also edit the frequency of both the on-site and email versions of these alerts independently via the Account Settings page (at the bottom of the Notifications list).
    5. See more details in the PR

I hope you find these useful! 😊 Feel free to reply if you have any feedback or questions.

I've updated the public doc that summarizes the CEA Online Team's OKRs to add Q3.1 (sorry this is a bit late, I just forgot! 😅).

Hello everyone,

I recently came across a book titled “Technical Control Problem and Potential Capabilities of Artificial Intelligence” by Dr. Hüseyin Gürkan Abalı. It claims to offer a technical and philosophical framework regarding the control problem in advanced AI systems, and discusses their potential future capabilities.

As someone interested in AI safety and ethics, I’m curious if anyone here has read the book or has any thoughts on its relevance or quality.

I would appreciate any reviews, critiques, or academic impressions.

Thanks in advance!

Tuesday, 15 July 2025

Frontpage Posts

Quick takes

I am too young and stupid to be giving career advice, but in the spirit of career conversations week, I figured I'd pass on advice I've received which I ignored at the time, and now think was good advice: you might be underrating the value of good management!

I think lots of young EAish people underrate the importance of good management/learning opportunities, and overrate direct impact. In fact, I claim that if you're looking for your first/second job, you should consider optimising for having a great manager, rather than for direct impact.

Why?

  • Having a great manager dramatically increases your rate of learning, assuming you're in a job with scope for taking on new responsibilities or picking up new skills (which covers most jobs).
  • It also makes working much more fun!
  • Mostly, you just don't know what you don't know. It's been very revealing to me how much I've learnt in the last year; I think it's increased my expected impact, and I wouldn't have predicted this beforehand.
    • In particular, if you're just leaving university, you probably haven't really had a manager-type person before, and you've only experienced a narrow slice of all possible work tasks. So you're probably underrating both how useful a very good manager can be, and how much you could learn.

How can you tell if someone will be a great manager?

  • This part seems harder. I've thought about it a bit, but hopefully other people have better ideas.
  • Ask the org who would manage you and request a conversation with them. Ask about their management style: how do they approach management? How often will you meet, and for how long? Do they plan to give minimal oversight and just check you're on track, or will they be more actively involved? (For new grads, active management is usually better.) You might also want to ask for examples of people they've managed and how those people grew.
  • Once you're partway through the application process or have an offer, reach out to current employees for casual conversations about their experiences with management at the org.
  • You could ask how the organization handles performance reviews and promotions. This is probably an okay-not-great proxy (smaller, fast-growing orgs might have informal processes but still excellent management), but I think it would give you some signal on how much they think about management and personal development.
  • (This maybe only really works if you are socially very confident or know lots of EA-ish people, sorry about that) You could consider asking a bunch of your friends and acquaintances about managers they've had that they thought were very good, and then trying to work with those people.
  • Some random heuristics: All else equal, high turnover rate without seemingly big jumps in career progression seems bad. Orgs that regularly hire and retain/promote early career people are probably pretty good at management; same for orgs whose alumni go on to do cool stuff. 

(My manager did not make me post this)

This passage from David Roodman's essay Appeal to Me: First Trial of a “Replication Opinion” resonated:

When we draw on research, we vet it in rare depth (as does GiveWell, from which we spun off). I have sometimes spent months replicating and reanalyzing a key study—checking for bugs in the computer code, thinking about how I would run the numbers differently and how I would interpret the results. This interface between research and practice might seem like a picture of harmony, since researchers want their work to guide decision-making for the public good and decision-makers like Open Philanthropy want to receive such guidance.

Yet I have come to see how cultural misunderstandings prevail at this interface. From my side, what the academy does and what I and most of the public think it does are not the same. There are two problems. First, about half the time I reanalyze a study, I find that there are important bugs in the code, or that adding more data makes the mathematical finding go away, or that there’s a compelling alternative explanation for the results. (Caveat: most of my experience is with non-randomized studies.) Second, when I send my critical findings to the journal that peer-reviewed and published the original research, the editors usually don’t seem interested (recent exception). Seeing the ivory tower as a bastion of truth-seeking, I used to be surprised. I understand now that, because of how the academy works, in particular, because of how the individuals within academia respond to incentives beyond their control, we consumers of research are sometimes more truth-seeking than the producers.

I had a similar realisation towards the end of my studies which was a key factor in persuading me to not pursue academia. Also I've mentioned this before, but it surprised me how much more these kinds of details mattered in my experience in industry.

Skipping over to his recap of the specific case he looked into:

To recap:

  • Two economists performed a quantitative analysis of a clever, novel question.
  • It underwent peer review.
  • It was published in one of the top journals in economics. Its data and computer code were posted online, per the journal’s policy.
  • Another researcher promptly responded that the analysis contains errors (such as computing average daytime temperature with respect to Greenwich time rather than local time), and that it could have been done on a much larger data set (for 1990 to ~2019 instead of 2000–04). These changes make the headline findings go away.
  • After behind-the-scenes back and forth among the disputants and editors, the journal published the comment and rejoinder.
  • These new articles confused even an expert.
  • An outsider (me) delved into the debate and found that it’s actually a pretty easy call.

If you score the journal on whether it successfully illuminated its readership as to the truth, then I think it is kind of 0 for 2. ...

That said, AEJ Applied did support dialogue between economists that eventually brought the truth out. In particular, by requiring public posting of data and code (an area where this journal and its siblings have been pioneers), it facilitated rapid scrutiny.

Still, it bears emphasizing: For quality assurance, the data sharing was much more valuable than the peer review. And, whether for lack of time or reluctance to take sides, the journal’s handling of the dispute obscured the truth.

My purpose in examining this example is not to call down a thunderbolt on anyone, from the Olympian heights of a funding body. It is rather to use a concrete story to illustrate the larger patterns I mentioned earlier. Despite having undergone peer review, many published studies in the social sciences and epidemiology do not withstand close scrutiny. When they are challenged, journal editors have a hard time managing the debate in a way that produces more light than heat.

I have critiqued papers about the impact of foreign aid, microcredit, foreign aid, deworming, malaria eradication, foreign aid, geomagnetic storm risk, incarceration, schooling, more schooling, broadband, foreign aid, malnutrition, …. Many of those critiques I have submitted to journals, usually only to receive polite rejections. I obviously lack objectivity. But it has struck me as strange that, in these instances, we on the outside of academia seem more concerned about getting to the truth than those on the inside. 
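The Greenwich-vs-local-time error mentioned in the recap above is easy to reproduce. Here is a toy Python sketch — invented numbers and a hypothetical station at UTC+8, not the paper's actual data or code — showing how classifying "daytime" hours by UTC rather than local time selects the wrong hours:

```python
# Toy sketch of the Greenwich-vs-local-time bug described above.
# All numbers are invented; this is not the paper's data or code.
from datetime import datetime, timedelta, timezone

# Hypothetical station at UTC+8, with hourly temperatures (keyed by
# local hour) that peak mid-afternoon and flatten out at night.
local_tz = timezone(timedelta(hours=8))
temps = {h: 15 + 10 * max(0, 1 - abs(h - 14) / 8) for h in range(24)}

def daytime_mean(use_local_time):
    """Mean temperature over 'daytime' (06:00-18:00) hours."""
    selected = []
    for local_hour, temp in temps.items():
        ts = datetime(2004, 7, 1, local_hour, tzinfo=local_tz)
        hour = ts.hour if use_local_time else ts.astimezone(timezone.utc).hour
        if 6 <= hour < 18:
            selected.append(temp)
    return sum(selected) / len(selected)

correct = daytime_mean(use_local_time=True)   # warm local afternoon hours
buggy = daytime_mean(use_local_time=False)    # mostly cool local night hours
```

For a station eight hours ahead of Greenwich, the UTC-based "daytime" window mostly covers the local night, so the buggy average understates the true daytime mean; across many stations at different longitudes, an error like this can shift estimated temperature exposure enough to make a headline result appear or vanish.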
