Cool stuff! Very happy to see this kind of post :)
I'm seriously concerned about epistemic security, and I've been working on something similar to the design sketch for Rhetoric highlighting for a while now. I find it particularly appealing because focusing on persuasion detection sidesteps ground-truth problems, and I'm curious about the possibility of turning such an application into a more reflective exercise in bias and manipulability. The bigger concern, though, is probably its dual-use potential: a tool like this could help malicious actors perfect their persuasion even further. I've also been looking into cheaper ways of doing this than typical LLMs, and into handling voice and video in addition to text. I'd be curious to hear whether people are working on similar initiatives (partly to avoid duplicating work), or whether there are broader epistemic security efforts I haven't heard of.