Thanks gergo! At the time of writing this post I no longer had the software licenses from the University, so I actually just used Excel for the graphs (with dynamic formulas & co).
Yes, as Dan_Keys noted, they're in the dissertation, at the very bottom of the document. I wrote the articles myself to try to keep them relatively comparable.
Yes, there are most certainly other differences between the articles. That's why I asked participants to indicate the emotions they felt and also used those ratings in the statistical analysis.
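For anyone curious what that can look like in practice, here's a minimal sketch (not my actual analysis code; the variable names and data are hypothetical) of including a self-reported emotion rating alongside the article condition in a regression:

```python
# Hypothetical sketch: regress an outcome (e.g. support for AI regulation)
# on the article condition while controlling for how strongly participants
# actually reported feeling the target emotion.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "condition": ["fear", "fear", "neutral", "neutral", "hope", "hope"],
    "emotion_intensity": [4.2, 3.8, 1.5, 2.0, 3.1, 3.5],  # self-report, 1-5 scale
    "support_regulation": [5.1, 4.7, 3.2, 3.5, 4.0, 4.4],
})

# '+' treats the emotion rating as a covariate; swapping it for '*'
# would instead test the rating as a moderator of the condition effect.
model = smf.ols("support_regulation ~ C(condition) + emotion_intensity",
                data=df).fit()
print(model.summary())
```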
On the one hand, I think it would be great to properly validate my findings (as some of them were rather exploratory) and investigate the mechanisms at play - there seems to be quite a bit left to discover, e.g. regarding how to gather support for regulation, especially among Republicans, as they already hold higher risk perceptions but are less supportive of AI regulation. A pretest-posttest study with diverse demographics could also give very interesting additional insights. On the other hand, beyond emotional appeal there is much more to explore regarding AI risk communication: for example, how to most effectively convey the idea of x-risk through AI misalignment (with analogies, technical explanations, sample scenarios etc.), but also which communicators, communication channels etc. work best.
I find it hard to formulate solid recommendations from this study alone due to its limitations. Mostly, I would advise communicators to be really mindful of their intentions and of the potential effects of their messages. I hope more research gets done to give clearer guidance.
All the best for your ongoing and future research - I'm excited to read it once it's out.