AI could potentially help democracy by acting as a less biased expert, something people could turn to if they don't trust what is coming from human experts. An AI could, in theory, consume complex legislation, models, prediction market information and data, and act as an easily questioned agent that could present graphs and visualisations. This could help voters make more informed choices about important topics.
Can this be done safely and reliably? If so, can it help people make better decisions around AI safety?
Is anyone working on this idea currently?
It's true that all data and algorithms are biased in some way. But I suppose the real question is whether the bias from such a system would be less than what you get from human experts, whose pay cheques can nudge them to think in a certain way.
I'd imagine that any such system would not be trusted implicitly to start with, but would have to build up a reputation for providing useful predictions.
In terms of implementation, I'm imagining people building complex models of the world (for example, using frameworks for decision making under deep uncertainty), with the AI mainly providing a user-friendly interface for asking questions about the model.
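As a toy sketch of that architecture: a world model built by people, explored across many uncertain futures, with a thin question-answering layer on top. Everything here is hypothetical (the `policy_outcome` relationship, the parameter ranges, and the keyword-based "AI" layer are made up purely for illustration):

```python
import random
import statistics

def policy_outcome(carbon_tax, growth_rate):
    """Toy world model: projected emissions reduction (%) under a given
    carbon tax. The relationship is invented for illustration only."""
    return 100 * (1 - 1 / (1 + 0.05 * carbon_tax)) * (1 - growth_rate)

def explore_under_uncertainty(carbon_tax, n_scenarios=1000, seed=0):
    """The 'deep uncertainty' part: run the model over many sampled
    futures and summarise the spread, rather than giving one forecast."""
    rng = random.Random(seed)
    outcomes = [policy_outcome(carbon_tax, rng.uniform(-0.02, 0.05))
                for _ in range(n_scenarios)]
    return {
        "median": statistics.median(outcomes),
        "worst_decile": sorted(outcomes)[n_scenarios // 10],
    }

def answer_question(question, carbon_tax):
    """Stand-in for the AI layer: map a plain-language question onto a
    query against the model (a real system would use an LLM here)."""
    summary = explore_under_uncertainty(carbon_tax)
    if "worst" in question.lower():
        return (f"In the worst 10% of scenarios, the reduction is about "
                f"{summary['worst_decile']:.1f}%.")
    return f"The median projected reduction is about {summary['median']:.1f}%."

print(answer_question("What happens in the worst case?", carbon_tax=30))
```

The key design point is the separation of concerns: the model (and its biases) stays inspectable and human-built, while the AI only translates questions into queries and results back into plain language, which is easier to audit than an AI producing answers from opaque internal knowledge.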