Hey everyone,

I am Oğuz, and I am currently a political science/international relations PhD candidate based in Türkiye. My research plan is to focus on how threat perception works in politics and why we see certain events as more threatening than others, even when they cause less harm. Through this agenda, I believe we can improve collective behavior and create political institutions that are more effective at handling existential risks.
Previously, I have done research on biological weapons and on how threats were framed during the COVID-19 pandemic. Within EA, I have completed BlueDot Impact's Biosecurity Fundamentals program and was a summer research fellow at the Cambridge Existential Risks Initiative (now ERA) in 2021. One common discussion point about biorisks (and x-risks in general) was how often large-scale, concrete threats, such as AI misuse or climate change, are not taken seriously by the public or are quickly forgotten once the immediate danger passes. As such, I decided to pivot to this research area as a more "meta" approach to x-risks.
For the near future, I have three career paths in mind for this meta approach:
Policy: Developing policy ideas that help governments and institutions resist actors who exploit fear and exaggerated threats to push their own agendas.
Communication: Helping people recognize when media or politicians are using scare tactics, and teaching better ways to reason about which threats are genuinely serious.
Research: Academic work to better understand why we worry about some risks in politics while ignoring larger, more dangerous ones.
I would appreciate support on:
Career opportunities, within EA or outside it, that are relevant to these paths
People or institutions working in similar areas whom I could reach out to
Opportunities or recommendations for improving the skills that I might need for the communication track
Thank you in advance for any comments and recommendations!