At the mercy of the AIs
The following bit from http://www.bbc.com/news/blogs-echochambers-28005330 got me thinking:
"Time and space have always meant that we could be forewarned, have some time to figure out a reaction even if it was just to protect ourselves. That's over. Now people can no longer be sure if the internet is obeying humans or instead computers that have simply come to know what emotional stimuli are. The situation is claustrophobic."
In the past we had the time -- and space (think of newspapers delivered to a doorstep) -- that created a kind of protective barrier, so that humans could quietly collect and reflect on facts, and take informed actions. At present, due to near-instant global network exchange, we are pretty much drowning in information. In order to keep up we instinctively move from mail to text messages, we tend to prefer tweets over proper articles, we migrate from PCs to devices small enough to be with us even on the toilet and in bed. Still, we are lagging behind, and I think the trend will only get worse (we can optimize machines but not ourselves).
One of the time-savers we're building to try to keep on top of things is automation -- we write clever software that runs on very fast machines and networks. This helps some of us benefit from incredibly fast business decisions on the stock market; it allows others to profile, gauge, and possibly manipulate large masses via social networks, both for profit and for political purposes. We all use this majestic AI called Google Search and are pleasantly surprised every time it finds us something.
But we forget that these systems are not just much faster than us; they are also very well informed, thanks to a growing number of live sampling points (a sort of senses). In fact, Google Search is using us, the humans, to augment its abilities. And they know us, their animal counterparts, better and better. They know what we like, what we are busy with, where we tend to be and with whom. All just statistically, but combined they may know us better than some of our own friends do.
Worryingly, aberrations in the answers (de facto decisions) of these AIs are often next to impossible for us to verify, certainly not within the time we have before acting on them. All right, it is still debatable whether a machine has passed a Turing Test ( http://www.bbc.com/news/technology-27762088 ), but for all practical purposes most of us simply would not notice that the counterpart we're interacting with, or getting information from, is a robot. And given the information overload, we would not try to verify the information. Does it not mean that the AIs already have us under their control, both as masses and as individuals?
From there it's just a small step towards being deceived by a machine. The Facebook experiment scandal ( http://www.bbc.com/news/technology-28051930 ) was run by humans, but imagine that some AI makes a mistake, or undergoes some unforeseen change, that in effect pushes masses of people to do something. Is it not realistic to imagine that the Google Search algorithm could rank up, or bias results towards, a politician known to advocate a goal that would benefit Google? Would we notice that the resulting popular decision was a trifle manipulated? All it takes is matching values that the artificial mind has learned as positive. Or imagine that Google does as little as delay some percentage of search results or e-mails on the basis of who it thinks you are (judging from your search history, habits, communications, ...).
But whether it is an awakening intelligence or just a bug is less of an issue. I do not see why the singularity as such should mean malevolence.
My only point is that the influence of these very complex and very fast systems over human lives is already huge, and in some ways we are already their subjects, whether the intent is benign or not.