I’ve seen this posted to some EA Facebook groups, but not here on the forum. Yesterday Dominic Cummings, Chief Special Adviser to the UK Prime Minister, released a blog post where he talked about restructuring the British civil service, and invited applicants for various potentially impactful policy roles.

At the top of the blog post he included a quote by ‘Eliezer Yudkowsky, AI Expert, Less Wrong etc’. Cummings has posted on Less Wrong in the past, is plausibly aware of EA, and is likely to be receptive to at least some EA ideas, such as AI safety and prediction markets.

If you’re based in the UK, are interested in policy careers and/or are gifted in data science/maths/economics/project management etc. or are a ‘super talented weirdo’ (his words not mine), and wouldn’t mind spending a couple of years working alongside Dominic Cummings, this could be a great opportunity to influence some big policy changes in the UK.


Regarding AI alignment and existential risk in general, Cummings already has a blog post where he mentions these: https://dominiccummings.com/2019/03/01/on-the-referendum-31-project-maven-procurement-lollapalooza-results-nuclear-agi-safety/

So he is clearly aware of and receptive to these ideas; it would be great to have an EA-minded person on his new team to emphasise them.


Exactly, he has written posts about those topics, and about 'effective action', predictions and so on. And there is this article from 2016 which claims 'he is an advocate of effective altruism', although it then says 'his argument is mothball the department (DFID)', which I'm fairly sure most EAs would disagree with.

But as he's also written about a huge number of other things, day-to-day distractions are apparently the rule rather than the exception in policy roles, and value drift is always possible, it would be good to have someone on his team, or with good communication channels to them, who can re-emphasise these issues (without publicly associating EA with Cummings or any other political figure or party).

Although the blog post is seeking applications for various roles, the email address to send applications to is ‘ideas for number 10 at gmail dot com’.

If someone/some people took that address literally and sent an email outlining some relatively non-controversial EA-aligned ideas (e.g. collaboration with other governments on near-term AI-induced cyber security threats, marginal reduction of risks from AI arms races, pandemics and nuclear weapons, enhanced post-Brexit animal welfare laws, maintenance of the UK’s foreign aid commitment and/or increased effectiveness of foreign aid spending), would the expectancy of that email be positive (higher chance of above policies being adopted), negative (lower chance of above policies being adopted) or basically neutral (highly likely to be ignored or unread, irrelevant if policies are adopted due to uncertainty over long term impact)?

I’m inclined to have a go unless the consensus is that it would be negative in expectation.

I don't think cold emailing is usually a good idea. I've sent you a private message with some more thoughts.

Thanks Khorton for the feedback and additional thoughts.

I think the impact of cold emails is normally neutral; it would have to be a really poorly-written or antagonising email to make the reader actively go and do the opposite of what the email suggests! I guess neutral also qualifies as 'not good', though.

But it seems like people with better avenues of contact to DC have been considering contacting him anyway, through cold means or otherwise, so that’s great.
