Hello everyone,
We, Mark Brakel and Risto Uuk, are the two current members of the Future of Life Institute's (FLI) EU team, and we are hiring a new person for our team: an EU Policy Analyst!
We have also announced several other vacancies and although we may not be able to answer your questions about those, we are happy to direct you to the right colleague – the full FLI team can be found on our website.
Through this thread, we would like to run a Europe-focused Ask Me Anything and will be answering questions starting today, Monday, 31 January.
About FLI
FLI is an independent non-profit working to reduce large-scale, extreme risks from transformative technologies. We also aim for the future development and use of these technologies to be beneficial to all. Our work includes grantmaking, educational outreach, and policy engagement.
In the last few years, our main focus has been on the benefits and risks of AI. FLI created one of the earliest sets of AI governance principles – the Asilomar AI Principles. The Institute, alongside the governments of France and Finland, is also the civil society champion of the recommendations on AI in the UN Secretary-General’s Digital Cooperation Roadmap. FLI also recently announced a €20M multi-year grant program aimed at reducing existential risk. The first grant program under that umbrella, the AI Existential Safety Program, launched at the end of 2021.
We expanded to the EU in 2021 with the hiring of our current two EU staff members. FLI has two key priorities in Europe: i) mitigating the (existential) risks of increasingly powerful artificial intelligence and ii) regulating lethal autonomous weapons. You can read some of our EU work here: a position paper on the EU AI Act, a dedicated website providing easily accessible information on the AI Act, feedback to the European Commission on AI liability, and a paper about manipulation and the AI Act. Our work has also been covered by various media outlets in Europe: Wired (UK), SiècleDigital (France), Politico (EU), ScienceBusiness (EU), NRC Handelsblad (Netherlands), and Frankfurter Allgemeine, Der Spiegel, and Tagesspiegel (Germany).
About Mark Brakel
Mark is FLI’s Director of European Policy, leading our advocacy and policy efforts with the EU institutions in Brussels and in European capitals. He works to limit the risks from artificial intelligence to society, and to expand European support for a treaty on lethal autonomous weapons.
Before joining FLI, Mark worked as a diplomat at the Netherlands’ Embassy in Iraq and on Middle East policy from The Hague. He studied Arabic in Beirut, Damascus, and Tunis, holds a bachelor’s degree in Philosophy, Politics and Economics from the University of Oxford, and holds a master’s degree from the Johns Hopkins School of Advanced International Studies (SAIS).
About Risto Uuk
Risto is a Policy Researcher at FLI, focused primarily on researching AI policy-making to maximize the societal benefits of increasingly powerful AI systems.
Previously, Risto worked for the World Economic Forum on a project about positive AI economic futures, did research for the European Commission on trustworthy AI, and provided research support on European AI policy at the Berkeley Existential Risk Initiative. He completed a master’s degree in Philosophy and Public Policy at the London School of Economics and Political Science, and he holds a bachelor’s degree from Tallinn University in Estonia.
Ask Us Anything
We, Mark and Risto, are happy to answer any questions you might have about FLI's work in the EU and the role we are currently hiring for. So please fire away!
If you are interested in learning more about FLI broadly, sign up to our newsletter, listen to our podcast, and follow us on Twitter.
Hey, I am reading the Communication Artificial Intelligence for Europe (other languages). It seems that the EU is very enthusiastic about attracting further investment in AI in order to maintain its economic competitiveness in this sector. Although more socially beneficial uses (such as healthcare in advanced economies) are introduced at the beginning, the application specifics are not extensively examined. Ethics are covered toward the end of the document, which suggests building general awareness of algorithms. What would be necessary for this public algorithm awareness to prevent negative-emotion-based advertising from being effective, and thus from being used by companies?[1]
In conjunction with increases in public awareness of algorithms, what else would support the EU in gaining wellbeing competitiveness? Could this coincide with measures that support global advancement and prevent catastrophic risks, such as developing supportive institutions in industrializing nuclear powers?
For example, suppose people understood: 'OK, this advertisement is showing a bias that induces fear and subsequently assures the viewer of the company's protection, and thus motivates purchases of the advertised product; but it is only manipulation, and the product's influence on one's wellbeing, net of that related to impulsive behavior, does not change.' Would people then seek less value added by marketing and more health and leisure? Would this development be aligned with the EU's objectives?
Thank you! Yes, it would be great if all manipulative techniques were banned, but I would recognize not only targeting people in moments of vulnerability but also 1) using negative, often fear- and/or shame-based, biases and imagery to assume authority, 2) presenting unapproachable images that should (thus) portray authority,[1] 3) physical and body shaming, 4) using sexual appeal in non-sexual contexts, especially when it can be assumed that the viewer is not interested in such appeal, and 5) allusions to physical/personal space intrusion, especia…