This is a special post for quick takes by Daniel Samuel Polak.

Tax incentives for AI safety - rough thoughts

A number of policy tools aimed at tackling AI risks - such as regulations, liability regimes, or export controls - have already been explored, and most appear promising and worth further iteration.

But AFAIK no one has so far come up with a concrete proposal to use tax policy tools to internalize AI risks. I wonder why, considering that policies such as tobacco taxes, R&D tax credits, and 401(k) incentives have been mostly effective. Tax policy also seems underutilized and neglected, given that we already possess sophisticated institutions like tax agencies and tax policy research networks.

AI companies' spending on safety measures seems to be relatively low, and we can expect that if competition intensifies, these expenses will fall even lower.

So I've started to consider more seriously the idea of tax incentives: we could provide a tax credit or deduction for expenditures on AI safety measures - alignment research, cybersecurity, oversight mechanisms, etc. - which would effectively lower their cost. To illustrate: an AI company incurs a safety researcher's salary as a cost, and then 50% of that cost can additionally be deducted from the tax base.
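A minimal sketch of the arithmetic, assuming a hypothetical 21% corporate tax rate and the 50% additional deduction described above (all figures are illustrative, not part of any concrete proposal):

```python
# Sketch of how a 50% additional ("super") deduction lowers the effective
# cost of a safety expense. All numbers are hypothetical.

CORPORATE_TAX_RATE = 0.21   # assumed flat corporate rate
EXTRA_DEDUCTION = 0.50      # the 50% additional deduction from the post


def after_tax_cost(expense: float) -> float:
    """After-tax cost of a safety expense with the extra deduction applied.

    The expense is already deductible as an ordinary cost; the incentive
    lets the company deduct an extra 50% of it from the tax base.
    """
    ordinary_saving = expense * CORPORATE_TAX_RATE
    extra_saving = expense * EXTRA_DEDUCTION * CORPORATE_TAX_RATE
    return expense - ordinary_saving - extra_saving


# A hypothetical $300k safety-researcher salary: the ordinary deduction
# saves $63k and the extra deduction another $31.5k, so the after-tax
# cost falls from $237k to $205.5k.
print(after_tax_cost(300_000))  # -> 205500.0
```

In other words, under these assumptions the incentive shaves roughly 10% off the after-tax cost of the salary - a real but modest benefit, which is relevant to problem 1 below.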

My guess was that such a tool could influence the ratio of safety-to-capability spending. If implemented properly, it could help mitigate the competitive pressures affecting frontier AI labs by incentivising them to increase spending on AI safety measures.

Like any market intervention, such incentives can be justified if they correct market inefficiencies or generate positive externalities. In this case, lowering the cost of safety measures helps internalize risk.

However, there are many problems on the path to designing such a tool effectively:

  1. The crucial problem is that the financial benefit from a tax credit can't match the expected value of increasing capabilities. The underlying incentives for capability breakthroughs are potentially orders of magnitude larger. So AI labs simply wouldn't bother: they would keep safety spending at the same level while collecting extra money from the incentive, which is an obvious backfire.
    1. However, if an AI company already plans to increase safety expenses due to genuine concerns about risks or external pressures (boards, the public, etc.), perhaps the incentive would make them more willing to do so.
    2. Also, the risk of labs keeping safety spending at the same level could be overcome by requiring a specific threshold of expenditure to benefit from the incentive.
  2. The focus here is on inputs (spending) instead of outcomes (actual safety).
  3. Implementing it would be a pain in the ass, requiring the creation of specialised departments within the IRS or delegating most of the work to NIST.
  4. Defining the scope of qualified expenditures - it could be hard to separate safety research costs from capabilities research costs, and policing that boundary later could carry considerable administrative cost.
  5. The expected expenses could be incurred regardless of the public funding if we simply imposed a strict spending requirement instead.
  6. There could be a problem of safety washing - AI labs creating the impression, and signalling, that appropriate safety measures are in place and benefiting from the incentives while not effectively reducing the risk.
  7. I don't know much about the US tax system, but I guess this could overlap with existing R&D tax incentives. However, existing incentives are unlikely to reduce the risk: if they apply to both safety and capabilities research, they don't change the relative incentive to spend on safety.
  8. Currently most AI labs are in a loss position, so they can't effectively benefit from such incentives unless some special feature is put in place, like refundable tax credits or the option to claim the relief/credit as soon as they make a taxable profit (see the sketch after this list).
  9. Perhaps direct government financing would be more effective. Or the existing ideas (such as those mentioned earlier) might be more effective, leaving little room for weaker solutions.
  10. Maybe money isn't the problem here, as AI labs are more talent-constrained. If the main bottleneck for effective safety work is talented researchers, then making safety spending cheaper via tax credits might not significantly increase the amount of high-quality safety work done.
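On point 8, a minimal sketch (with hypothetical figures) of why refundability matters for a loss-making lab:

```python
# Sketch: a non-refundable credit is worth little to a lab with no taxable
# profit this year, while a refundable credit still pays out in cash.

CORPORATE_TAX_RATE = 0.21  # assumed flat corporate rate


def credit_value(taxable_income: float, credit: float, refundable: bool) -> float:
    """Cash value of a tax credit in the current year."""
    tax_due = max(taxable_income, 0.0) * CORPORATE_TAX_RATE
    if refundable:
        return credit              # paid out even if no tax is owed
    return min(credit, tax_due)    # capped at the tax actually owed


safety_credit = 10_000_000  # hypothetical credit earned on safety spending

# A lab running a $500M loss gets nothing from a non-refundable credit this
# year, but the full amount if the credit is refundable.
print(credit_value(-500_000_000, safety_credit, refundable=False))  # -> 0.0
print(credit_value(-500_000_000, safety_credit, refundable=True))   # -> 10000000
```

A carry-forward option would sit in between: the credit eventually pays out, but only once the lab turns a taxable profit, which weakens the incentive today.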

Is there something crucial that I am missing? Is it worth investigating further? So far it seems to have more problems than potential benefits, so I don't think it's promising, but I'd love to hear your thoughts on it.

11. It would probably cost a good bit of political capital to get this through, which may have an opportunity cost. You may not even get public support from the AI companies because the proposal contains an implicit critique that they haven't been doing enough on safety.

12. By the time the legislation got out of committee and through both houses, the scope of incentivized activity would probably be significantly broader than what x-risk people have in mind (e.g., reducing racial bias). Whether companies would prefer to invest more in x-risk safety vs. other incentivized topics is unclear to me.

What is your greatest achievement? 

Many job applications, competitions, and other application processes require you to state your greatest achievement.

I always have a problem with this one because I'm not goal-oriented. Besides, I don't see any of my results as achievements.

What are some examples of achievements (or even categories of achievements) for an undergraduate or a person starting a career?  

I struggled with a similar question back when I was a student. What I've found out is that people asking this usually want to know how the applicant describes their work and approach, and how confident or passionate a person is about the things they do.

One option could be to talk about the most exciting university project/assignment that you've worked on. You could describe something that made it interesting, what you learnt from it, and explain how you handled teamwork or prioritization during it. Interesting results are a plus, but learning experiences also make for a good story.

Other options include some kind of competitive performance, or a hobby project you felt passionate about and dedicated time and energy to. Personally I would even be happy to hear about something nice you did that helped somebody else. Feel free to be open and explain what made the experience special to you.

People asking this question usually understand that new graduates' achievements don't necessarily involve work projects. So my advice would be to not worry about the context too much.
