Let's face it, humans are not without their flaws... they can be corrupt and illogical, invoking distrust among their fellow citizens.
If the technology were in place, would you be opposed to a computer running the country?
*
I would guess that you're referring to Artificial Intelligence (AI), and from countless sci-fi tales, nothing ever ends well...
Existential risk from artificial general intelligence
https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence
Existential risk from artificial general intelligence is the hypothetical threat that dramatic progress in artificial intelligence (AI) could someday result in human extinction (or some other unrecoverable global catastrophe). The human race currently dominates other species because the human brain has some distinctive capabilities that the brains of other animals lack. If AI surpasses humanity in general intelligence and becomes "superintelligent", then this new superintelligence could become powerful and difficult to control. Just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence.
However, science fiction writers are not the only ones to have communicated the potential hazards of developing AI; so have more than a few of the greatest minds of our century. Stephen Hawking has gone on record labeling the development of artificial intelligence as one of the greatest and most imminent threats to mankind, one that could materialize within the next 30 years given the technological advances being made.
Bill Gates, Stephen Hawking Say Artificial Intelligence Represents Real Threat
http://www.cio.com/article/2877482/consumer-technology/bill-gates-stephen-hawking-say-artificial-intelligence-represents-real-threat.html
Nobel-prize winner Stephen Hawking, who spends a lot of time thinking about the shape of the universe, is also worried. "It's tempting to dismiss the notion of highly intelligent machines as mere science fiction. But this would be a mistake, and potentially our worst mistake in history," he wrote in The Independent, a British newspaper. Hawking, no technophobe, is bullish on the potential benefits of AI and machine learning, and he says AI could become "the biggest event in human history."
"One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand," Hawking wrote. "Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all."
*