Welcome to the EA Forum bot site. If you are trying to access the Forum programmatically (either by scraping or via the API), please use this site rather than forum.effectivealtruism.org.

This site has the same content as the main site, but is run in a separate environment to avoid bots overloading the main site and affecting performance for human users.

Quick takes

“Chief of Staff” models from a long-time Chief of Staff

I have served in Chief of Staff or CoS-like roles to three leaders of CEA (Zach, Ben and Max), and before joining CEA I was CoS to a member of the UK House of Lords. I wrote up some quick notes on how I think about such roles for some colleagues, and one of them suggested they might be useful to other Forum readers. So here you go:

Chief of Staff means many things to different people in different contexts, but the core of it in my mind is that many executive roles are too big to be done by one person (even allowing for a wider Executive or Leadership team, delegation to department leads, etc.). Having (some parts of) the role split/shared between the principal and at least one other person increases the capacity and continuity of the exec function.

Broadly, I think of there being two ways to divide up these responsibilities (using CEO and CoS as stand-ins, but the same applies to other principal/deputy duos regardless of titles):

1. Split the CEO's role into component parts and assign responsibility for each part to the CEO or the CoS
   1. Example: CEO does fundraising; CoS does budgets
   2. Advantages: focus, accountability
2. Share the CEO's role, with both CEO and CoS actively involved in each component part
   1. Example: CEO speaks to funders based on materials prepared by the CoS; CEO assigns team budget allocations which are implemented by the CoS
   2. Advantages: flex capacity, gatekeeping

Some things to note about these approaches:

* In practice, it’s inevitably some combination of the two, but I think it’s really important to be intentional and explicit about what’s being split and what’s being shared
* Failure to do this causes confusion, dropped balls, and duplication of effort
* Sharing is especially valuable during the early phases of your collaboration because it facilitates context-swapping and model-building
* I don’t think you’d ever want to get all the way or too far towards split, bec
Per Bloomberg, the Trump administration is considering restricting the equivalency determination for 501(c)(3)s as early as Tuesday. The equivalency determination allows 501(c)(3)s to regrant money to foreign, non-tax-exempt organisations while maintaining their tax-exempt status, so long as an attorney or tax practitioner attests that the recipient organisation is equivalent to a local tax-exempt one. I’m not an expert on this, but it sounds really bad. I guess it remains to be seen if they go through with it.

Regardless, the administration is allegedly also preparing to directly strip environmental and political non-profits (i.e. groups he doesn’t like, not necessarily just any policy org) of their tax-exempt status. In the past week, he’s also floated trying to rescind the tax-exempt status of Harvard. From what I understand, such an Executive Order is illegal under U.S. law (to whatever extent that matters anymore), unless Trump instructs the State Department to designate them foreign terrorist organisations, at which point all their funds are frozen too.

These are dark times. Stay safe 🖤
I'm not sure how to word this properly, and I'm uncertain about the best approach to this issue, but I feel it's important to get this take out there.

Yesterday, Mechanize was announced, a startup focused on developing virtual work environments, benchmarks, and training data to fully automate the economy. The founders include Matthew Barnett, Tamay Besiroglu, and Ege Erdil, who are leaving (or have left) Epoch AI to start this company. I'm very concerned we might be witnessing another situation like Anthropic, where people with EA connections start a company that ultimately increases AI capabilities rather than safeguarding humanity's future. But this time, we have a real opportunity for impact before it's too late. I believe this project could potentially accelerate capabilities, increasing the odds of an existential catastrophe.

I've already reached out to the founders on X, but perhaps there are people more qualified than me who could speak with them about these concerns. In my tweets to them, I expressed worry about how this project could speed up AI development timelines, asked for a detailed write-up explaining why they believe this approach is net positive and low risk, and suggested an open debate on the EA Forum. While their vision of abundance sounds appealing, rushing toward it might increase the chance we never reach it due to misaligned systems.

I personally don't have a lot of energy or capacity to work on this right now, nor do I think I have the required expertise, so I hope that others will pick up the slack. It's important we approach this constructively and avoid attacking the three founders personally. The goal should be productive dialogue, not confrontation. Does anyone have thoughts on how to productively engage with the Mechanize team? Or am I overreacting to what might actually be a beneficial project?
I've spent some time in the last few months outlining a few epistemics/AI/EA projects I think could be useful. Link here. I'm not sure how to best write about these on the EA Forum / LessWrong. They feel too technical and speculative to gain much visibility. But I'm happy for people interested in the area to see them. Like with all things, I'm eager for feedback. Here's a brief summary of them, written by Claude.

---

1. AI-Assisted Auditing
A system where AI agents audit humans or AI systems, particularly for organizations involved in AI development. This could provide transparency about data usage, ensure legal compliance, flag dangerous procedures, and detect corruption while maintaining necessary privacy.

2. Consistency Evaluations for Estimation AI Agents
A testing framework that evaluates AI forecasting systems by measuring several types of consistency rather than just accuracy, enabling better comparison and improvement of prediction models. It's suggested to start with simple test sets and progress to adversarial testing methods that can identify subtle inconsistencies across domains (see the sketch after this list).

3. AI for Epistemic Impact Estimation
An AI tool that quantifies the value of information based on how it improves beliefs for specific AIs. It's suggested to begin with narrow domains and metrics, then expand to comprehensive tools that can guide research prioritization, value information contributions, and optimize information-seeking strategies.

4. Multi-AI-Critic Document Comments & Analysis
A system similar to "Google Docs comments" but with specialized AI agents that analyze documents for logical errors, provide enrichment, and offer suggestions. This could feature a repository of different optional open-source agents for specific tasks like spot-checking arguments, flagging logical errors, and providing information enrichment.

5. Rapid Prediction Games for RL
Specialized environments where AI agents trade or compete on predictions through market me
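To make project 2 a little more concrete, here is a minimal sketch of one kind of consistency check: verifying that a forecaster's probabilities for a question and its negation sum to roughly one. This is not taken from the linked write-up; the `forecast` callable and the stub forecaster are hypothetical stand-ins for whatever model or API you would actually evaluate.

```python
# Minimal sketch of one consistency evaluation for a forecasting agent
# (project 2 above). `forecast` is a hypothetical interface: it takes a
# question string and returns a probability in [0, 1].
from typing import Callable


def negation_consistency(
    forecast: Callable[[str], float],
    question: str,
    negated_question: str,
    tolerance: float = 0.05,
) -> dict:
    """Check that P(A) and P(not A) sum to roughly 1."""
    p = forecast(question)
    p_neg = forecast(negated_question)
    gap = abs((p + p_neg) - 1.0)
    return {"p": p, "p_negation": p_neg, "gap": gap, "consistent": gap <= tolerance}


if __name__ == "__main__":
    # Stub forecaster used only for illustration (an assumption, not a real model).
    stub = lambda q: 0.35 if "not" in q.lower() else 0.70
    print(
        negation_consistency(
            stub,
            "Will X happen by 2030?",
            "Will X not happen by 2030?",
        )
    )
    # gap = |0.70 + 0.35 - 1| = 0.05, so this passes at the default tolerance.
```

A fuller evaluation suite in this vein could add other checks (e.g. coherence across logically related questions, or monotonicity over nested time horizons), but the same pattern of paired queries compared against a tolerance would apply.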
I'd be excited to see 1-2 opportunistic EA-rationalist types looking into where marginal deregulation is a bottleneck to progress on x-risk/GHW, circulating 1-pagers among experts in these areas, and then pushing the ideas to DOGE/Mercatus/Executive Branch. I'm thinking of things like clinical trial requirements for vaccines, UV light, antitrust issues facing companies collaborating on safety and security, and maybe housing (though I'm not sure which of those are bottlenecked by federal action). For most of these there's downside risk if the message is low fidelity, the issue becomes polarized, or priorities are poorly set, hence the suggestion to collaborate with experts. I doubt there's that much useful stuff to be done here, but marginal deregulation looks very easy right now, and it seems good to strike while the iron is hot.