
Tax incentives for AI safety - rough thoughts

A number of policy tools aimed at tackling AI risks, such as regulations, liability regimes, and export controls, have already been explored, and most appear promising and worth further iteration.

But AFAIK no one has so far come up with a concrete proposal to use tax policy tools to internalize AI risks. I wonder why, considering that tax-based policies such as tobacco taxes, R&D tax credits, and 401(k) incentives have mostly been effective. Tax policy also seems underutilized and neglected here, given that we already have sophisticated institutions like tax agencies and tax policy research networks.

AI companies' spending on safety measures seems relatively low, and we can expect that if competition intensifies, these expenses will fall even further.

So I've started to consider more seriously the idea of tax incentives: basically, we could provide a tax credit or deduction for expenditures on AI safety measures like alignment research, cybersecurity, or oversight mechanisms, which would effectively lower their cost. To illustrate: an AI company deducts a safety researcher's salary as an ordinary business cost, and then 50% of that cost can additionally be deducted from the tax base.
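A minimal sketch of that arithmetic, assuming a hypothetical 21% corporate tax rate and a 50% super-deduction (both rates and the salary figure are illustrative assumptions, not part of any actual proposal):

```python
# Illustrative arithmetic for a hypothetical 50% "super-deduction" on
# qualified AI safety spending. The 21% tax rate and all amounts are
# made-up assumptions for illustration only.

TAX_RATE = 0.21          # assumed corporate income tax rate
SUPER_DEDUCTION = 0.50   # assumed extra deduction on safety spending

def extra_tax_saved(safety_spend: float) -> float:
    """Tax saved on top of the ordinary cost deduction."""
    return safety_spend * SUPER_DEDUCTION * TAX_RATE

salary = 300_000  # hypothetical safety researcher salary
print(f"Extra deduction: {salary * SUPER_DEDUCTION:,.0f}")  # 150,000
print(f"Extra tax saved: {extra_tax_saved(salary):,.0f}")   # 31,500
```

Under these assumptions, each safety dollar effectively gets cheaper by the super-deduction rate times the tax rate, i.e. about 10.5 cents on the dollar.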

My guess was that such a tool could influence the ratio of safety-to-capability spending. If implemented properly, it could help mitigate the competitive pressures facing frontier AI labs by incentivizing them to increase spending on AI safety measures.

Like any market intervention, such incentives can be justified if they correct market inefficiencies or generate positive externalities. In this case, lowering the cost of safety measures helps internalize risk.

However, there are many problems on the path to designing such a tool effectively:

  1. The crucial problem is that the financial benefit from a tax credit can't match the expected value of increasing capabilities. The underlying incentives for capability breakthroughs are potentially orders of magnitude larger. So AI labs might simply not bother, keeping safety spending at the same level while pocketing the extra money from the incentive, which is an obvious failure mode.
    1. However, if an AI company already plans to increase safety expenses due to genuine concern about risks or external pressures (boards, the public, etc.), the incentive might make it more willing to follow through.
    2. The risk of labs keeping safety expenses at the same level could also be mitigated by requiring a minimum expenditure threshold to qualify for the incentive (see the sketch after this list).
  2. The focus here is on inputs (spending) instead of outcomes (actual safety).
  3. Implementing it would be a pain in the ass, requiring new specialized departments within the IRS or delegating most of the work to NIST.
  4. Defining the scope of qualified expenditures is hard: it could be difficult to separate safety costs from capabilities research costs, and policing that boundary afterwards would carry considerable administrative cost.
  5. The expected expenses might be achievable without giving up public funds: simply imposing a strict spending requirement could produce the same spending without the incentive.
  6. There could be a problem of safety-washing: AI labs creating the impression, and signalling, that appropriate safety measures are in place, and benefiting from the incentive, while not effectively reducing risk.
  7. I don't know much about the US tax system, but I guess it could overlap with existing R&D tax incentives. However, existing incentives are unlikely to reduce risk: if they apply to both safety and capabilities research, then they subsidize capabilities work just as much.
  8. Currently, most AI labs are in a loss position, so they can't effectively benefit from such incentives unless a special feature is put in place, like refundable tax credits or the option to carry the relief/credit forward and claim it once they make a taxable profit (see the sketch after this list).
  9. Perhaps direct government financing would be more effective. Or the existing ideas mentioned earlier might be more effective, leaving little room for weaker solutions.
  10. Maybe money isn't the problem here, as AI labs are more talent-constrained. If the main bottleneck for effective safety work is talented researchers, then making safety spending cheaper via tax credits might not significantly increase the amount of high-quality safety work done.
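To make the threshold (point 1.2) and refundability/carry-forward (point 8) mechanics concrete, here is a minimal sketch; the qualifying rule, the credit rate, and all amounts are illustrative assumptions, not a concrete design:

```python
# Hypothetical credit design combining a qualifying threshold (point 1.2)
# with refundability / carry-forward for loss-making labs (point 8).
# All rules, rates, and amounts below are illustrative assumptions.

CREDIT_RATE = 0.20       # assumed credit: 20% of qualified safety spend
THRESHOLD_SHARE = 0.05   # assumed: safety spend must be >= 5% of R&D budget

def credit_earned(safety_spend: float, total_rnd: float) -> float:
    """Credit earned; zero if the spending threshold isn't met."""
    if safety_spend < THRESHOLD_SHARE * total_rnd:
        return 0.0
    return CREDIT_RATE * safety_spend

def apply_credit(tax_due: float, earned: float, refundable: bool):
    """Returns (tax after credit, cash refund now, credit carried forward)."""
    used = min(tax_due, earned)
    leftover = earned - used
    if refundable:
        return tax_due - used, leftover, 0.0  # pay out unused credit now
    return tax_due - used, 0.0, leftover      # claim against future profit

# A loss-making lab: no tax due, so only a refundable credit pays out today.
earned = credit_earned(safety_spend=50e6, total_rnd=600e6)
print(apply_credit(tax_due=0.0, earned=earned, refundable=True))
# -> (0.0, 10000000.0, 0.0): a $10M cash refund despite zero taxable profit
```

The design choice matters: with a non-refundable credit and no carry-forward, the same loss-making lab would receive nothing, and the incentive would only reach already-profitable companies.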

Is there something crucial that I'm missing? Is it worth investigating further? So far the problems seem to outweigh the potential benefits, so I don't find it promising, but I'd love to hear your thoughts on it.

11. It would probably cost a good bit of political capital to get this through, which may have an opportunity cost. You may not even get public support from the AI companies because the proposal contains an implicit critique that they haven't been doing enough on safety.

12. By the time the legislation got out of committee and through both houses, the scope of incentivized activity would probably be significantly broader than what x-risk people have in mind (e.g., reducing racial bias). Whether companies would prefer to invest more in x-risk safety vs. other incentivized topics is unclear to me.

What is your greatest achievement? 

Many job offers, competitions and other application processes require you to state your greatest achievement. 

I always struggle with this one because I'm not goal-oriented. Besides, I don't see any of my results as achievements.

What are some examples of achievements (or even categories of achievements) for an undergraduate or a person starting a career?  

I struggled with a similar question back when I was a student. What I've found is that people asking this usually want to know how the applicant describes their work and approach, and how confident or passionate the person is about the things they do.

One option could be to talk about the most exciting university project/assignment that you've worked on. You could describe something that made it interesting, what you learnt from it, and explain how you handled teamwork or prioritization during it. Interesting results are a plus, but learning experiences also make for a good story.

Other options include some kind of competitive performance, or a hobby project you felt passionate about and dedicated time and energy into. Personally I would even be happy to hear about something nice you did that helped somebody else. Feel free to be open and explain what made the experience special to you.

People asking this question usually understand that new graduates' achievements don't necessarily involve work projects. So my advice would be to not worry about the context too much.
