I’ve seen this posted to some EA Facebook groups, but not here on the forum. Yesterday Dominic Cummings, Chief Special Adviser to the UK Prime Minister, released a blog post in which he discussed restructuring the British civil service and invited applications for various potentially impactful policy roles.

At the top of the blog post he included a quote by ‘Eliezer Yudkowsky, AI Expert, Less Wrong etc’. Cummings has posted on Less Wrong in the past, is plausibly aware of EA, and is likely to be receptive to at least some EA ideas, such as AI safety and prediction markets.

If you’re based in the UK, are interested in policy careers and/or are gifted in data science/maths/economics/project management etc. or are a ‘super talented weirdo’ (his words not mine), and wouldn’t mind spending a couple of years working alongside Dominic Cummings, this could be a great opportunity to influence some big policy changes in the UK.


Regarding AI alignment and existential risk in general, Cummings already has a blog post where he mentions these: https://dominiccummings.com/2019/03/01/on-the-referendum-31-project-maven-procurement-lollapalooza-results-nuclear-agi-safety/

So he is clearly aware of and responsive to these ideas; it would be great to have an EA-minded person on his new team to emphasise them.


Exactly, he has written posts about those topics, and about 'effective action', predictions and so on. And there is this article from 2016 which claims 'he is an advocate of effective altruism', although it then says 'his argument is mothball the department (DFID)', which I'm fairly sure most EAs would disagree with.

But as he's also written about a huge number of other things, as day-to-day distractions are apparently the rule rather than the exception in policy roles, and as value drift is always possible, it would be good to have someone on his team, or with good communication channels to them, who can re-emphasise these issues (without publicly associating EA with Cummings or any other political figure or party).

Although the blog post is seeking applications for various roles, the email address to send applications to is ‘ideas for number 10 at gmail dot com’.

If someone/some people took that address literally and sent an email outlining some relatively non-controversial EA-aligned ideas (e.g. collaboration with other governments on near-term AI-induced cyber security threats, marginal reduction of risks from AI arms races, pandemics and nuclear weapons, enhanced post-Brexit animal welfare laws, maintenance of the UK’s foreign aid commitment and/or increased effectiveness of foreign aid spending), would the expected value of that email be positive (higher chance of the above policies being adopted), negative (lower chance of the above policies being adopted) or basically neutral (highly likely to be ignored or unread, or irrelevant even if policies are adopted due to uncertainty over long-term impact)?

I’m inclined to have a go unless the consensus is that it would be negative in expectation.

I don't think cold emailing is usually a good idea. I've sent you a private message with some more thoughts.

Thanks Khorton for the feedback and additional thoughts.

I think the impact of cold emails is normally neutral; it would have to be a really poorly written or antagonising email to make the reader actively go and do the opposite of what it suggests! I guess neutral also qualifies as 'not good'.

But it seems like people with better avenues of contact to DC have been considering contacting him anyway, through cold means or otherwise, so that’s great.
