AI could potentially help democracy by serving as a less biased expert, something people could turn to if they don't trust what human experts are saying. In theory, an AI could consume complex legislation, models, prediction-market information, and data, and act as an easily questioned agent that presents graphs and visualisations. This could help voters make more informed choices on important topics.
Can this be done safely and reliably? If so, can it help people make better decisions around AI safety?
Is anyone working on this idea currently?
At best, I think it would carry roughly the same bias as human experts, and potentially much worse. As for paycheque influences on human experts, an AI would likely lean the same way as its developer, since models tend to strongly reflect developer bias (the developer is the one measuring success, largely by their own metrics), so there's not much difference there in my opinion.
I'm not saying the idea is bad, but I'm not sure it offers enough to offset its significant resource and risk costs, except perhaps as a data-collation tool for human experts. Built trust, neutrality vetting, and careful implementation can be applied to humans too.
That said, I'm just one person, a stranger on the internet. There may be people working on this who significantly disagree with me.