This is a special post for quick takes by aviv. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

It may be helpful to grow a cross-national/ethnic overarching identity around "wisdom and doing good". EA does this a bit, but is heavily constrained to the technocratic. While that is a useful subcomponent of the broader identity, it can push away people who share, or aspire to, the underlying ideals of (1) "Doing good as a core goal of existence" and (2) "Being wise about how one chooses to do good", but who don't share the disposition or culture of most EAs. Even the name itself can be a turnoff: it sounds intellectual and elitist.

Having a named identity which is broader than EA, but which contains it, could be very helpful for connecting across neurodiverse divides in daily work, and could be valuable as a cross-cutting cleavage across national, ethnic, and similar divides in conflict environments, if it can come to encompass a broad enough population over time.

I'm not sure what that name might be in English, or whether it makes more sense to simply expand the meaning of EA, but it may be worth thinking about this, and consciously growing a movement around it, alongside aligned movements that perhaps get at other "lenses of wisdom" focused on best utilizing and growing resources for broad positive impact.

Assuming misaligned AI is a risk, is technical AI alignment enough, or do you need joint AI/societal alignment?

My work has involved trying to support risk awareness and coordination similar to what has been suggested for AI alignment: for example, mitigating harms around synthetic media / "deepfakes" (since rebranded as generative AI). That effort worked for a few years, with all the major orgs and most relevant research groups on board.

But then new orgs jumped in to fill the capability gap (e.g. EleutherAI, Stability AI, etc.)! They did so due to demand, and for potentially good reasons: the same capabilities that can harm people can also help people. The ultimate result has been the proliferation/access/democratization of AI capabilities in the face of the risks.
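
To make that dynamic concrete, here is a minimal toy sketch (my own illustrative framing, with made-up numbers, not anyone's published model): treat each actor as independently deciding whether to release a capability in response to demand. Even if the handful of major orgs coordinate and each is individually very unlikely to release, the chance that someone releases climbs quickly as the pool of capable actors grows.

```python
# Toy model (illustrative assumption only): each of n actors independently
# releases a capability with probability p, driven by the same demand that
# the coordinating incumbents face.

def p_someone_releases(n_actors: int, p_release_each: float) -> float:
    """Probability that at least one of n independent actors releases the capability."""
    return 1 - (1 - p_release_each) ** n_actors

# Even with a low per-actor release probability (i.e. incumbents mostly holding back),
# widening the pool of capable actors pushes the overall probability toward 1.
for n in (3, 10, 50, 200):
    print(f"{n:>3} actors -> {p_someone_releases(n, 0.05):.3f}")
# roughly 0.143, 0.401, 0.923, and ~1.000
```

The numbers are made up; the point is just that agreements among a few major players don't bind the growing set of actors who can rebuild the capability.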

Question 1) What would stop the same thing from happening for technical AI alignment?[1]

I'm currently skeptical that this sort of coordination is possible without addressing deeper societal incentives (AKA reward functions, e.g. around profit/power/attention maximization, self-dealing, etc.) and related multi-principal-agent challenges. This joint AI/societal alignment, or holistic alignment, would seem to be a prerequisite for the actual implementation of technical alignment.[2]

Question 2) Am I missing something here? If one assumes that misaligned AI is a threat worth resourcing, what is the likelihood of succeeding at AI alignment long-term without also succeeding at 'societal alignment'?

  1. ^

    This assumes you can even get the major players on board, which isn't true for, e.g., the misaligned recommender systems that I've also worked on (on the societal side).

  2. ^

    This would also be generally good for the world! E.g. to address externalities, political dysfunction, corruption, etc.
