AI, Violence and War

The question of whether artificial intelligence leads to violence is complex and requires careful study. To approach it, one must first look at human intelligence. There is research suggesting a correlation between lower intelligence quotient (IQ) and violence, but that does not mean lower intelligence necessarily leads to violence. Many factors can influence an individual’s behavior, including their environment, upbringing, and personal experiences.

The more difficult question is whether high intelligence leads to violence, perhaps even unintentionally, as an undesirable side effect. Unfortunately, this is all too plausible, and that is the unpleasant news. As for the misuse of intelligence for power, there are numerous examples in history and in the present of people using their intelligence and knowledge to gain and hold power. This can be done in a variety of ways, such as manipulation, deception, or exploitation of others. The responsibility for the use of intelligence therefore lies with the individual and with society as a whole. So too with AI. But is this responsibility taken seriously? I fear not.

AI can be used by those in power to maintain and increase that power. In addition, AI can reinforce discrimination and bias, because these are present in the data on which the technology is trained (by humans). It is therefore important to establish ethical guidelines for the development and application of AI, including an obligation to label AI-generated results or proposals that are relevant to us, together with their sources.
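To make the point about inherited bias concrete, here is a minimal sketch, with a purely hypothetical dataset and group names, of how a simple per-group check can reveal the imbalance a model would pick up from its training data:

```python
# Minimal sketch: measuring label imbalance across groups in training data.
# The dataset, group names, and labels are hypothetical, purely for illustration.

from collections import defaultdict

# Each record: (group, label), where label 1 = "favorable" outcome in the data.
training_data = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0),
    ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

counts = defaultdict(lambda: [0, 0])   # group -> [favorable, total]
for group, label in training_data:
    counts[group][0] += label
    counts[group][1] += 1

for group, (favorable, total) in counts.items():
    rate = favorable / total
    print(f"{group}: favorable rate {rate:.2f}")

# A model trained on such data tends to reproduce the gap between these rates;
# documenting the rates and the data sources is one concrete form of the
# labeling obligation argued for above.
```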

The failure to foresee and respond in time to new technological possibilities can be seen in cybercrime, which will soon experience a new boom because AI allows attacks to be automatically optimized and propagated based on their success. Cybercriminals use AI to manipulate human behavior, for example in voting decisions, and to make their attacks more efficient. They use AI to identify vulnerabilities in people’s habits and behaviors and exploit them to exert influence.

The influencing of opinion by bots on social platforms has now become a threat to entire societies. It is therefore important to clearly establish the identity of each participant in the discourse of opinion. I realize this feels uncomfortable, but we need to prevent unambiguous election results, like the 2020 U.S. presidential election, from being challenged by a bot army in such a way that simple-minded or power-obsessed people storm the Capitol like lemmings.

Many rail against Elon Musk, including the host of a panel where I recently spoke as an expert. This shows how manipulated people become puppets of a narrative and even enter the fray for their opinions. Elon Musk, for example, is the only platform operator (X, formerly Twitter) to have recognized that a path toward true freedom of expression, away from influence by bots, can only be found by establishing the identity of real people. Tying a user account to a unique phone number is undermined by the bot armies through thousands of SIM cards tied to fake identities, as they are easy to acquire. This is how Russian troll factories tasked with destroying Western democracies operate. Elon Musk wants to link identity to the credit card first; behind every credit card there is a human being, already verified by the credit card companies with elaborate procedures. That is why a monthly fee via credit card to participate in a social network makes sense and is a good transitional solution until Worldcoin or similar biometric or blockchain methods become established.
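A minimal sketch of the gating idea follows; all names (Account, may_post, the verification flags) are hypothetical illustrations, not any real platform’s API. The point is simply that phone verification alone is treated as insufficient, while a payment-backed or biometric proof of identity is required before an account may take part in the discourse:

```python
# Minimal sketch of the identity-verification gate discussed above.
# All names are hypothetical; no real platform API is implied.

from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    phone_verified: bool        # easily faked via cheap SIM cards
    payment_verified: bool      # a credit card already implies a vetted human
    biometric_verified: bool    # e.g. a future Worldcoin-style proof

def may_post(account: Account) -> bool:
    # Phone verification alone is not enough, per the argument above.
    return account.payment_verified or account.biometric_verified

bot = Account("bot_1234", phone_verified=True, payment_verified=False, biometric_verified=False)
human = Account("alice", phone_verified=True, payment_verified=True, biometric_verified=False)

print(may_post(bot))    # False
print(may_post(human))  # True
```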

On the same panel, another speaker actually claimed that poor workers in low-wage countries classify photos for Elon Musk for a pittance to feed artificial intelligence, and that this is why he built the Starlink satellite network. Cyber exploitation…

Yes, yes, and man has never been to the moon and Elvis is alive. Stupidity is not punishable, but it is very disturbing.

To clarify: tagging (classifying) as a clickworker is work that only highly qualified people with a good cultural background and very good language skills can do.

No corporation can hire these people directly, as far too many are needed, and only for a limited time. The project assignment therefore goes to so-called data annotation companies, of which there are about 60-70 worldwide. These companies bundle the necessary hundreds or thousands of clickworkers via crowdsourcing to tag millions of data points, for example images. The work of the clickworkers is checked and quality-assured by the data annotation companies, and the result is bundled and forwarded to the client. Whether there is exploitation in some sub-step of the supply chain can never be said exactly, but since tagging requires high qualifications, I do not think so; and if there is, it is more of a political problem, which we in Germany are trying to get a handle on with the so-called supply chain law and voluntary commitments. Accusing large corporations of deliberate exploitation is populism.
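As a rough illustration of the quality-assurance step, here is a minimal sketch of how crowdsourced tags might be aggregated and checked, assuming simple majority voting; the worker labels, image IDs, and the 0.66 agreement threshold are hypothetical, and real annotation pipelines use more elaborate scoring:

```python
# Minimal sketch: aggregating crowdsourced tags with a simple quality check.
# Image IDs, labels, and the agreement threshold are hypothetical.

from collections import Counter

# Each image is tagged independently by several clickworkers.
annotations = {
    "img_001": ["cat", "cat", "dog"],
    "img_002": ["car", "car", "car"],
    "img_003": ["tree", "bush", "tree"],
}

def aggregate(labels):
    """Majority vote plus an agreement score used for quality assurance."""
    winner, votes = Counter(labels).most_common(1)[0]
    return winner, votes / len(labels)

for image_id, labels in annotations.items():
    label, agreement = aggregate(labels)
    flag = "" if agreement >= 0.66 else "  <- re-check by QA"
    print(f"{image_id}: {label} (agreement {agreement:.2f}){flag}")

# Low-agreement items go back for review before the bundled result
# is forwarded to the client.
```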

I have only gone into this example, which I have just experienced, because it shows the danger of being blinded by false information or only a part of the available information.

The safety of all of us depends on our own initiative to always check the sources of every report that tends to change one’s opinion, and to inquire about counter-opinions and their validity. This ensures that the opinion for which one enters the fray is based on knowledge of the matter.

AI and War

A big topic, and a very complex one. To date (2023) there is no regulation and no common ban on autonomous weapons by the UN. There is not even agreement on the definition of what an autonomous weapon is. Moreover, these weapons or weapon systems could operate semi-autonomously or fully autonomously; different rules are required for each level of autonomous controllability, especially regarding responsibility for their missions.

AI is already being used in the planning and assessment of combat situations.

What would be necessary is an agreement; unfortunately, we see that agreements are suspended, broken, or canceled at will, as Russia has proved several times in the course of its war of aggression against Ukraine since February 2022. In 2023, Russia even suspended its participation in the New START treaty. In war, all rules cease to apply, at the latest when one has to deal with an extremist warmonger.

The increasing use of drones, which until now have been controlled individually by soldiers, is already leading to the first drone swarms in the Ukraine war after just one year. They are the logical next step: if drones are operated individually, they can also be countered individually, or their signal or network can be jammed. Drone swarms are necessarily controlled by automated systems that transmit AI-supported targets to the individual drones. The now increasingly common drone swarm leads to oversaturation of defensive systems, and if it can also retarget flexibly and quickly through AI, it is sure to be more successful in its useful or harmful mission.

But in addition to autonomous weapon systems, there are also pure software attacks through cyber warfare, which remotely target industrial systems, e.g. critical infrastructure, government systems, and general communication infrastructure, via data networks. Here, all countries that can are upgrading and protecting their Internet-connected systems more and more, educating everyone from small and medium-sized businesses down to individual citizens more openly, and discussing the dangers more frequently. The goal is to ensure that any damage that occurs remains localized, and thus manageable, and does not cascade across regions or networks and produce a systemic failure. But networks are precisely what malware targets.
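To show why segmentation keeps damage localized, here is a minimal sketch with a hypothetical network: in a flat, fully connected layout a single compromise spreads to every system, while in a segmented layout the same compromise stays inside one zone. Node names and links are invented for illustration only:

```python
# Minimal sketch: why network segmentation limits cascading failure.
# Nodes are systems, edges are links over which a compromise spreads unchecked.

from collections import deque

def compromised(start, edges):
    """Breadth-first spread of a compromise along network links."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

flat_network = [("plant", "office"), ("office", "billing"), ("billing", "grid_control")]
segmented    = [("plant", "office")]  # billing and grid_control isolated behind a gateway

print(compromised("plant", flat_network))  # cascades to every system
print(compromised("plant", segmented))     # stays within one segment
```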

AI that decides autonomously in warfare is the dystopia we are all afraid of. In my estimation it will unfortunately come, and the only question is whether we survive it, and if so, how many of us, where, and under what circumstances. Humanity will not be wiped out, but our society will be gone. The above-mentioned Elon Musk is building, as a plan B for a few of us, the Starship, designed for a crew of 100. He himself says he does not know whether he will finish Starship in time and whether the base on Mars will be autonomous enough to start over or to continue differently.

I hope he is not right with his vision this time.
