Is it intelligent to need 300 million images of cats (Big Data) to recognize a cat? A child cuddles a cat and understands “cat” from very little data (Small Data).
That is why today’s AI models are not only “made smart” by training on tens of millions of examples; they can also be trained from rules, for example the rules of chess or Go, as in the AI AlphaGo. Using these rules, the AI plays countless games against itself and learns from them, instead of memorizing all the recorded games in the world, as earlier chess programs did and which, as far as I know, is not even possible for Go. In doing so, AlphaGo chose moves that humans would not make and for which we are not prepared. The legendary move 37 in the second game not only amazed the Go master Lee Sedol (9th dan) and the experts; a murmur went through humanity, as if a dam had broken. The AI went on to win through the superior use of tactics not devised by humans, and after winning four out of five games it was awarded the honorary rank of 9th dan.
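AlphaGo’s real method, deep neural networks combined with Monte Carlo tree search, is far more elaborate, but the core self-play idea can be hinted at with a toy sketch. The following is only my own illustration, assuming nothing more than the rules of tic-tac-toe: a tiny tabular learner that nudges the value of each move toward the final outcome of the game (a Monte-Carlo-style update); all names and parameters are hypothetical, not AlphaGo’s.

```python
# Minimal sketch of learning purely from the rules via self-play.
# Not AlphaGo's algorithm: a tabular learner on tic-tac-toe.
import random
from collections import defaultdict

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    """Return 'X' or 'O' if one of them has a full line, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

Q = defaultdict(float)        # Q[(state, move)] -> estimated value of move
ALPHA, EPSILON = 0.3, 0.2     # learning rate and exploration rate (assumed)

def choose(board, player):
    """Epsilon-greedy: mostly pick the best-known move, sometimes explore."""
    moves = [i for i, cell in enumerate(board) if cell == "."]
    if random.random() < EPSILON:
        return random.choice(moves)
    state = "".join(board) + player
    return max(moves, key=lambda m: Q[(state, m)])

def self_play_episode():
    """Play one full game against itself, then learn from the outcome."""
    board, player = ["."] * 9, "X"
    history = []              # (state, move, player) for every move made
    while True:
        state = "".join(board) + player
        move = choose(board, player)
        history.append((state, move, player))
        board[move] = player
        win = winner(board)
        if win or "." not in board:
            # Nudge each move's value toward the final result:
            # +1 for the winner's moves, -1 for the loser's, 0 for a draw.
            for s, m, p in history:
                reward = 0.0 if win is None else (1.0 if p == win else -1.0)
                Q[(s, m)] += ALPHA * (reward - Q[(s, m)])
            return
        player = "O" if player == "X" else "X"

for _ in range(50_000):       # "countless games against itself"
    self_play_episode()
```

However crude, the principle is the same as in AlphaGo: no human game records are needed, because the rules alone generate the training data.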
It is this capacity for reinforcement learning that makes the AI highly fluid and, in a sense, “dexterous”. It also makes it all the more unpredictable the more general its training is designed and the more uncontrolled its self-optimization turns out to be. The larger and the more sensitive its field of operation, the greater the damage it can do, just as with humans. With humans, however, one can question the decision-making process and thus form causal chains of explanation, understand them. But can you understand an AI? After all, we don’t even understand our modern cars or smartphones.
Self-learning systems can learn anything, including immoral or illegal behavior (Microsoft’s Tay chatbot had to be shut down in 2016 for exactly this reason).
The dystopia that follows from a “superintelligence” whose decisions we neither understand nor can question is that it will have a hard time explaining its decisions to us in a way that makes sense to us. Douglas Adams satirized this long ago in his novel “The Hitchhiker’s Guide to the Galaxy”: asked the ultimate question about the meaning of everything, the superintelligence answers, after a very long time, simply “42”.
On a panel discussion about AI, I once said: “What scares me is not so much the ‘superintelligence’ itself, but the first one that thinks it is one.”
The assumption that a “superintelligence” could make every decision as if it had been made by the world’s best experts and strategists will probably remain a utopia.
The development of AI is double-edged: it gives hope and causes concern. It can help us and harm us. It increasingly takes away our uniqueness and may knock us off our throne. It can do more and more, better and better, and above all infinitely faster and more persistently.
Could it be that the human mind is not as extraordinary and unique as we have always thought? We constantly forget things, we mix things up, we contradict ourselves, we are very limited. AI thus carries tremendous “mortification potential” for many who consider themselves or their professions important. But we also have our strengths: for example, we don’t have to believe everything we think.
Humans can explain, by reflecting on it, how they came to a decision; AI cannot (yet). Yet if we demand that it be able to do so, it will inevitably become even more human-like. How similar to us can it become? Will it knock us off the throne of the ruling creature on this planet? First Copernicus, then Darwin, then Freud, and as if that were not enough, we now experience our fourth mortification, through AI.
Who judges the decisions of an AI? Another AI for legal analysis? Or would an AI-free body of humans have to be established as the highest authority? It would be good to enact a law prohibiting the management of a company by an AI. At most, an AI should be allowed to exert decisive influence on a company from the advisory board or the supervisory board.
We cannot expect AI to come equipped with ethical or moral components. After all, the technicians who develop AIs are not philosophers; that is not their job. But neither can philosophers program AI systems, so a discourse between the two camps is needed immediately.