AI and automation
Automation usually mechanizes a repetitive process, performing it more efficiently than a human or at less risk to life and limb. It is encroaching on more and more areas of the working world as processes are broken down into sub-steps, so that individual, simpler steps can be completed by automata, and ever better machines can take over ever more of these partial steps. This happens in a “halfway humane” way, since developing these automata takes time and human effort, so the labor market has time to reorganize; viewed internationally, the process can be described as stable and assessable. Why only “halfway humane”? Because automation eliminates precisely those jobs that were previously held by people living on site. A welding robot, for example, replaces five welders on site and creates only one job for a mechatronics engineer who supervises the overall equipment. Elsewhere, engineers are needed to design the robot, supply chains to get it to its destination, and other specialists to insert it into the manufacturing process and dismantle the previous jobs.
The conversion of a human-operated workplace into a machine-operated one is most profitable for the employer where the same workplace occurs many times over in a factory or company. In the 1830s and 1840s, for example, these were the looms of the booming silk industry, which were increasingly replaced by “programmable” Jacquard looms (looms operated automatically by means of a kind of punch-card system). Fueled by the loss of their jobs or income in competition with an automatic machine, the uprising of the silk weavers in Lyon – out of 165,000 inhabitants, 30,000 were directly employed in silk production and almost half indirectly – became the first mass uprising of the proletariat and was brutally crushed. The weavers did not achieve their goal of a fixed, fair rate to restore the income the machines had undercut. Of the traditionally operated weaving mills, only a few highly specialized ones survived; all others eventually had to close, and the workers lost their jobs altogether. Meanwhile, the manufacturers of automated looms created new jobs in the design, manufacture and sale of these machines.
It is similar, but much more dramatic, when applications are controlled by artificial intelligence. Unlike the robot or automaton, AI is not necessarily tied to hardware that slows it down. It can operate entirely in software, and not only that: it can further optimize the process assigned to it by being trained on previous results, reviewing the results it subsequently finds on its own, and incorporating those findings into its work, improving its “model.” In the end, it does something very efficiently, usually without us being able to understand in detail how it arrived at that result.
A renewed uprising is not hard to predict.
AI and intelligence
Is it intelligent to need 300 million images of cats (Big Data) to recognize cats? A child cuddles a cat and understands “cat” with very little data (Small Data).
That is why an AI model today is not only “made smart,” i.e. trained on tens of millions of data points; it can also be trained on rules, e.g. the rules of chess or Go, as in AlphaGo. The AI uses these rules to play countless games against itself and learns from them, instead of memorizing all the recorded games in the world as before, which, as far as I know, is not even possible for Go. AlphaGo chose moves that humans would not make and that we are not prepared for; it won through superiority, using tactics not devised by humans.
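To make the self-play idea concrete, here is a minimal sketch under strong simplifications: a tabular value table and tic-tac-toe instead of Go, with all names and parameters our own invention. It is a toy illustration of learning from rules alone, not AlphaGo's actual algorithm.

```python
# Toy self-play learner: only the rules of tic-tac-toe go in;
# playing strength emerges from games the program plays against itself.
# (Monte-Carlo-style value updates; a deliberate simplification.)
import random
from collections import defaultdict

Q = defaultdict(float)       # learned value of (board, move) pairs
ALPHA, EPSILON = 0.5, 0.1    # learning rate, exploration rate

def winner(b):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for i, j, k in lines:
        if b[i] != "." and b[i] == b[j] == b[k]:
            return b[i]                      # "X" or "O" has won
    return "draw" if "." not in b else None  # full board or still running

def choose(board, player):
    moves = [i for i, c in enumerate(board) if c == "."]
    if random.random() < EPSILON:
        return random.choice(moves)                 # explore
    return max(moves, key=lambda m: Q[(board, m)])  # exploit learned values

def self_play_game():
    board, player, history = "." * 9, "X", []
    while True:
        move = choose(board, player)
        history.append((board, move, player))
        board = board[:move] + player + board[move + 1:]
        result = winner(board)
        if result:
            # reward the winner's moves, punish the loser's
            for state, m, p in history:
                r = 0.0 if result == "draw" else (1.0 if p == result else -1.0)
                Q[(state, m)] += ALPHA * (r - Q[(state, m)])
            return
        player = "O" if player == "X" else "X"

for _ in range(50_000):  # "countless games against itself"
    self_play_game()
```

The essential point survives the simplification: no recorded human game is ever shown to the program; everything it knows, it has generated itself from the rules.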
So, in this sense, the AI is highly fluid and, through this capacity for reinforcement learning, “dexterous” in a way. This makes it all the more unpredictable the more general its training is designed and the more uncontrolled its self-optimization turns out to be. The larger and the more sensitive its field of operation, the greater the damage it can do, just as with humans. With a human, one can question the decision-making process and thus form causal chains of explanation and understand them. But can you understand an AI? After all, we don’t even understand our modern cars or smartphones.
Self-learning systems can learn anything, including immoral or illegal behavior. (Microsoft’s Tay chatbot had to be shut down in 2016 for exactly this reason.)
The dystopia that results from decisions of a “superintelligence” that we cannot understand or question is that it will have a hard time explaining its decisions to us in a way that makes sense to us. Douglas Adams satirized this long ago in “The Hitchhiker’s Guide to the Galaxy”: asked the ultimate question about the meaning of everything, the superintelligence answers, after a very long time, with “42.”
In a conversation on a panel about AI, I once said: “What scares me is not so much the ‘superintelligence’ itself, but the first one that thinks it is one.”
The assumption that a “superintelligence” can make all decisions as if they had been made by the best experts and strategists in the world probably remains a utopia.
The development of AI is double-edged, giving hope and causing concern. It can help us and harm us. It is increasingly taking away our uniqueness and possibly knocking us off our throne. It can do more and more, better and better, and above all infinitely faster and more persistently.
Could it be that the human mind is not as extraordinary and unique as we used to think? We forget things all the time, we mix things up, we contradict ourselves, we are very limited. Thus, AI carries tremendous “mortification potential” for many who consider themselves or their professions to be important. However, we also have our strengths, for example, we don’t have to believe everything we think.
Humans can explain how they came to a decision by reflecting on it; AI cannot (yet). Unfortunately, if we demand that it be able to, it will inevitably become even more human-like. How similar to us can it become? Will it knock us off the throne of the ruling creature on this planet? First Copernicus, then Darwin, then Freud, and as if that were not enough, we now experience our fourth mortification through AI.
Who judges the decisions of an AI, another legal-analysis AI? Or would an AI-free body of humans have to be established as the highest authority? It would be good to enact a law prohibiting the management of a company by an AI. At most, an AI should be able to exert decisive influence on a company through an advisory or supervisory board.
We cannot expect AI to come equipped with ethical or moral components. As developers of AIs, technicians are not philosophers; that is not their job. But philosophers cannot program AI systems either, so discourse between the two camps is needed, and needed immediately.
AI and economics
Since this intelligence is made of software, it can evolve quickly, if not evolve itself as indicated above, and thus reaches the real market in less time, too quickly for the market to adapt to it. In that case, jobs are abruptly eliminated without those who held them having had time to retrain. (Especially since, for example, not every cab driver who is replaced by autonomous vehicles can be retrained as a Web 3.0 specialist.)
The already legendary Oxford study from 2013 (Frey and Osborne) stated that 47% of all jobs in the U.S., and by extension in all developed industrialized nations, could be eliminated within the next 25 years.
Affected are, for example, doctors, lawyers, financial analysts, clerks, accountants, tax consultants, bank clerks, cashiers, real estate agents, insurance clerks, basically all jobs behind screens, “normal” computer scientists, graphic designers, photographers, lighting technicians, make-up artists, models, musicians, sound technicians, dubbing artists, speakers, drivers and vehicle operators of all kinds, and many more.
What remains are professions built on human proximity and innovation, e.g. nursing staff, scientists, engineers, designers, composers, waiters, teachers: professions requiring social competence.
It is true that David Ricardo postulated in his compensation theory of 1817 that in every upheaval brought about by new production methods (industrialization, digitalization), jobs disappear to the same extent as new ones are created.
Now, however, this could be different for the first time, as the newly created jobs could also be filled by machines/robots or algorithms.
Digitalization has already eliminated a number of jobs, but it has also created new ones. We have been obliged to do things ourselves and have gone from being consumers to prosumers. In the discount store, which has almost replaced specialist retail, we weigh and scan our goods ourselves, put them in our shopping carts and take them to the self-service checkout. We book our travel by credit card over the Internet and no longer at a travel agency. We order our book online and download it immediately as an e-book or audiobook. Movies and music are streamed and no longer stored on our devices. We make doctor’s appointments via an electronic appointment system. The list is endless.
It is not always entire jobs that are eliminated, but also individual tasks, assignments or areas of responsibility (some 700 listed activities are affected).
If autonomous driving eliminates the need for a driver’s license, most driving schools, their cars and staff, and individual insurance products for drivers and passengers will disappear. Vehicles will become electric and will charge autonomously; gas stations will disappear, and with them the tanker trucks and the car washes for private traffic.
For the time being, all higher-value services remain in human hands, as do professions with direct human contact, for example social workers, kindergarten teachers, hairdressers, doctors and therapists, and jobs in leisure, recreation and health that involve direct contact with people. The trades remain, and so do many of the service workers.
Can an AI gamble itself away?
An AI trimmed to maximize profit can gamble itself away, or even act “corruptly” by our standards, calculating in the negative sense.
In search of further rewards, it can harm us and develop in the wrong direction, like an overly greedy human who stops at nothing.
Unregulated, it is already operating in high-frequency trading on stock exchanges around the world. AIs are contributing further to the automation of stock market trading, and the resulting price movements are becoming ever more opaque and incalculable for ordinary people.
In the cryptocurrency market, AIs are calculating profit opportunities from trading contracts; they increasingly complement MEV (Miner Extractable Value) bots, e.g. in DEX arbitrage and DeFi liquidation, unfortunately also in front-running and sandwich trading.
It is starting to get out of hand for us.
AI and security
With the presence of sensors in public and private spaces and the storage of their data, the entire world is becoming a space of suspicion. I have already described above that AI training may contain biases or errors that are difficult to detect, let alone correct, and that jeopardize our security. Training material is not balanced per se. If you want to teach an AI what an orange looks like, you train it with images of oranges from millions of web pages, found there because they were labeled as oranges. But which oranges will you mostly find? The particularly shapely, delicious-looking ones: images from marketing, optimized for sales appeal. In training, these images drown out the less attractive oranges, and the AI gets the impression that a normal orange is the “sales orange.” The same happens with skin color, eye color, hair color, age and gender. Trained mostly on material shaped by Western culture, the AI concludes that a “person” is a young, attractive, blonde, fair-skinned woman.
An AI that makes hiring decisions is trained on the company’s hiring decisions of recent years. This training data may already have been biased, e.g. regarding age, religion or gender, due to human error or weakness, corruption and personal preference, and the AI then carries this bias forward. So the selection of training data and the training method of the particular AI model are crucial for its usability in terms of a fair outcome.
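To make this concrete, here is a minimal sketch of such an audit, with invented field names and numbers: before any model is trained, one can at least measure how skewed the historical decisions already are.

```python
# Toy audit of historical hiring data before using it as training data.
# All records here are invented for illustration.
past_decisions = [
    {"gender": "m", "hired": True},  {"gender": "m", "hired": True},
    {"gender": "m", "hired": False}, {"gender": "f", "hired": False},
    {"gender": "f", "hired": False}, {"gender": "f", "hired": True},
]

def hire_rate(group):
    rows = [d for d in past_decisions if d["gender"] == group]
    return sum(d["hired"] for d in rows) / len(rows)

for g in ("m", "f"):
    print(g, f"{hire_rate(g):.0%}")  # m 67%, f 33%

# A model fitted to these labels will simply reproduce the skew
# unless the data is rebalanced or the protected attribute
# (and its proxies) is explicitly controlled for.
```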
Now consider security applications, automatic alerting ahead of possible crimes, as with Palantir, and ask what training data was used there, for example from police departments in the USA. Were there not accusations and incidents of a racist nature? One begins to suspect the problem.
Should an AI be allowed to teach? To educate children without supervision? And how do you prevent an AI that independently improves its own training model from ingesting some content over and over again as input, thereby over-imprinting and over-training on it?
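A toy illustration of such a feedback loop, under the assumption that a model repeatedly retrains on its own output and weights frequent content slightly too heavily (the shrink factor below stands in for that over-imprinting):

```python
# Toy model-collapse demo: a "model" (here just a Gaussian) is refitted
# on its own samples, generation after generation.
import random
import statistics

mean, stdev = 0.0, 1.0  # generation 0: fitted to real data
for generation in range(1, 6):
    samples = [random.gauss(mean, stdev) for _ in range(50)]
    mean = statistics.fmean(samples)         # refit on own output
    stdev = statistics.stdev(samples) * 0.9  # over-imprinting of frequent content
    print(f"generation {generation}: stdev = {stdev:.3f}")

# The spread shrinks from generation to generation: what the model
# emitted often gets re-learned, everything else fades out.
```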
How deeply is an AI allowed to penetrate the layers of our privacy?
The AI might misunderstand our requests and make the wrong decisions. Dystopically, it might even prefer machine-friendly behavior over human needs, which are disruptive and costly.
The security risks are enormous, and we need to think about the worst-case scenario now. Where is the AI vulnerable? Perhaps it has a software backdoor that an administrator or an authority can use to bring it to its senses, in the worst case to reset it. Or there may be a way to disconnect the AI from the infrastructure: the first internet nodes behind the high-performance computers could be taken down. Power could also be cut off and the backup generators sabotaged. This only makes sense as long as the AI actually runs at these locations, which will soon no longer be the case.
If the AI is connected to manufacturing plants so that it can produce robots or mobile machines and load them with its software, then it must be possible to stop the (re)production.
All autonomous vehicles must pull over and stop until they are put back into operation by a human.
I would very much request that these emergency switches be set up immediately, just in case.
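As a minimal sketch of what such an emergency switch could look like in software (our own illustration, not an existing system): a dead-man’s switch whose default is “off,” so the system halts by itself the moment the human side falls silent.

```python
# Dead-man's-switch sketch: the autonomous system may only act
# while a recent human confirmation exists; silence means stop.
import time

HEARTBEAT_TIMEOUT = 30.0  # seconds a confirmation may lag (invented value)

class EmergencyStop:
    def __init__(self):
        self.last_heartbeat = time.monotonic()

    def human_heartbeat(self):
        """Called periodically by a human operator or an AI-free oversight body."""
        self.last_heartbeat = time.monotonic()

    def permitted(self):
        """True only while the last human heartbeat is fresh."""
        return time.monotonic() - self.last_heartbeat < HEARTBEAT_TIMEOUT

stop = EmergencyStop()
if stop.permitted():
    pass  # run one bounded step: drive, produce, trade ...
else:
    pass  # halt actuators, pull vehicles over, stop (re)production
```

The design choice matters: the switch does not have to be thrown; stopping is the default as soon as oversight falls away.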
AI and medicine
AI brings significant benefits to medicine by analyzing millions of patient records. It can detect emerging diseases by observing and classifying changes in a patient’s behavior or condition earlier than the patient or physician themselves. As a supplement to the doctor’s examination or as an aid in the initial assessment of a disease, AI is becoming increasingly helpful. It will watch over us invisibly in our fitness wristbands and, if the worst comes to the worst, will automatically summon the emergency services, as other systems already do today.
AI can typically handle single, specific tasks better than humans, e.g. pattern recognition or the analysis of Big Data, but not the context. It achieves a high hit rate in analyzing radiology data and can therefore be used in diagnostics to aid assessment or disease detection. Interestingly, while the AI judges radiographs better than a human, humans judge them differently: the AI reaches a hit rate of 96%, a specialist 94%, and the two combined reach 98%. In the future, the AI will probably act as an independent diagnosing system for reasons of cost and time, probably also because of the calculable reliability of its hit rate, and health insurers will probably insist on this. A physician will have to sign off legally on the AI’s findings, but probably without a detailed examination, for reasons of time and billing. This shortcoming is already familiar from the current system.
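A back-of-the-envelope check of these numbers, under the hypothetical assumption that AI and specialist miss cases independently of each other and that the team only fails when both fail:

```python
# Combining two imperfect diagnosticians; hit rates as quoted in the text.
ai_hits, doctor_hits = 0.96, 0.94

# If their misses were fully independent and the team only failed
# when both fail, the combined hit rate would be:
combined = 1 - (1 - ai_hits) * (1 - doctor_hits)
print(f"{combined:.2%}")  # 99.76%
```

That the reported combined rate is 98% rather than this upper bound suggests the two partly miss the same cases; the human adds value precisely where the errors do not overlap.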
Companion robots such as Paro, which stand in for pets, can be used where real animals are not an option, e.g. because of susceptibility to infection, in nursing homes or apartments where animals are not allowed, or because of dementia or fear.
AI will save countless lives and grow visibly and tangibly into a friend of humans in medicine.
AI and war
A big topic, and I don’t know whether I should address it here, because it is very complex. As of today (2023), there is no regulation and no common UN ban on autonomous weapons. There is not even agreement on the definition of an autonomous weapon. Moreover, these weapons or weapon systems can operate semi-autonomously or fully autonomously, and each level of autonomous controllability requires different rules, especially regarding responsibility for their missions.
AI is already being used now in planning and assessing combat situations.
An agreement would be necessary; unfortunately, we see that agreements are suspended, broken or canceled at will, as Russia, for example, has proved several times in the course of its war of aggression against Ukraine since February 2022. In 2023, Russia even suspended the New START treaty. In war, all rules cease to apply, at the latest when one is dealing with an extremist warmonger.
The increasing use of drones, until now controlled individually by soldiers, led to the first drone swarms in the Ukraine war after just one year. Swarms are the logical consequence: as long as drones operate individually, they can be fought individually or have their control signal jammed. The drone swarm, which is now becoming more common, oversaturates the defense systems, and if AI additionally lets it redirect itself flexibly and quickly, it is all the more likely to succeed in its mission.
Autonomous weapon systems are joined by pure software attacks in the form of cyber warfare, which mostly targets industrial systems remotely, e.g. critical infrastructure, government systems and general communication infrastructure. All countries that can are upgrading and protecting their Internet-connected systems ever further; education, from small and medium-sized businesses down to the individual citizen, is conducted more openly, and the dangers are discussed more often. The goal is to ensure that any damage remains localized, and thus manageable, rather than cascading across regions or networks and producing a systemic failure. But networks are precisely what malware targets.
AI, autonomous and making decisions in warfare, is the dystopia we are all afraid of. Unfortunately, in my opinion, it will come, and the only question is whether we will survive it, and if so, how many of us, where, and under what circumstances. Elon Musk is not sure either and, as a plan B for a few of us, is building Starship, designed for a crew of 100. He himself says he does not know whether he can finish Starship in time and whether the base on Mars will be autonomous enough. I hope that this time his vision is wrong.