Proponents of the unconditional embrace of artificial intelligence (AI) argue that it is a major technological leap forward, a new tool that will make people more productive and their work easier. But such a simplifying, not to say simplistic, view conceals a pitfall.
This important warning was given in a recent webinar hosted by INSEAD, in which Professor Theodoros Evgeniou, an expert in Decision Sciences and Technology Management, pointed out an essential difference between an ordinary tool, such as a hammer, and artificial intelligence: unlike all the tools we are used to, artificial intelligence can influence the way we think and perceive the world around us. An important role here, says Professor Evgeniou, is played by the vulnerability of the human mind, reflected for example in “false memory syndrome”. Numerous studies suggest that, in a given context, people can become convinced that they experienced events that never actually occurred. Artificial intelligence will be able to create precisely this kind of context, instilling in people the belief that they are living through non-existent events and, as a result, changing their behavior and thinking.
But the manipulation of the human mind has been going on for a long time, becoming increasingly prevalent with the widespread use of social media. Let’s call it “passive” manipulation, as the tribalism stimulated by social media erases the boundary between competence and authentic, scientific knowledge on the one hand, and pseudo-competence and imposture on the other. The ability of anyone to spread pseudo-information within groups sharing the same opinion creates an illusion of competence and of the truthfulness of information. From there it is only one step to equating falsehood with truth and myth with science, by instilling in the mind the certainty that the individual possesses a competence that does not actually exist.
And perhaps this is the great vulnerability that social media has created, to the benefit of artificial intelligence: the depersonalization of competence. This risks leaving large groups of people exposed to the unethical use of artificial intelligence. Before the social media boom, competence was the prerogative of people with a credible and sound professional profile: people with a solid education and experience in their fields, whose competence was recognized in professional and academic circles.
Today, however, it is much easier to acquire such public recognition, as social media has done nothing but cultivate, to the point of paroxysm, a weakness of human nature called confirmation bias: “the tendency for people to favor information that confirms their existing beliefs and assumptions and to ignore entirely information that refutes those biases or, at best, gives them diminished importance”.
In this way, the main criterion for validating pseudo-experts is no longer solid knowledge, but the extent to which groups of people find justification for their own biases in their arguments, no matter how unfounded those arguments are or, nota bene, how unknown the person issuing them may be. And lack of notoriety goes hand in hand with lack of reputational risk: even if the information is ultimately proven false, no one will see their scientific or professional reputation ruined. Because it didn’t exist in the first place…
But artificial intelligence can go even further. And although many doomsday scenarios describe a society in which AI takes over, Geoffrey Hinton, one of the pioneers of AI, warns us that, well before we reach a world controlled by AI at the expense of humans, the riskier and more imminent scenario is the malicious use of AI by humans themselves. Like a hammer used to smash other people’s valuables, I might add.
A recent FT article quotes a Kaspersky study that identifies significant demand for deepfakes on the dark web, offered at prices ranging from $300 to $20,000 per minute. In this context, the article points out that discussion so far has centered on how political choices and votes can be influenced by fake news and fabricated images. But the consequences can be equally dramatic when it comes to the financial markets.
A deepfake video showing an explosion near the Pentagon was picked up and promoted by the Russia Today media channel. That was enough to cause a brief disruption in financial markets, with movements typical of a crisis: falling stock indices and rising prices for US government securities and gold.
Fake news stories rolled out in the media can have truly significant consequences, because algorithmic trading makes split-second decisions based on the news headlines it constantly scans. This means that a collapse of financial markets could be intentionally engineered by AI generating fake news credible enough to be picked up and cascaded across as many media channels and social networks as possible.
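To make the mechanism concrete, here is a deliberately simplified sketch of headline-driven trading logic. The function names and keyword list are illustrative assumptions, not any real trading system or API; real systems use far more sophisticated language models, but the underlying blind spot is the same: the algorithm reacts to the text of a headline, with no way to verify whether the story behind it is real.

```python
# Hypothetical, highly simplified news-driven trading logic.
# Keyword list and function names are assumptions for illustration only.

NEGATIVE_KEYWORDS = {"explosion", "attack", "collapse", "default"}

def headline_score(headline: str) -> int:
    """Count crisis-related keywords in a headline (a crude 'sentiment' proxy)."""
    words = {w.strip(".,!?").lower() for w in headline.split()}
    return len(words & NEGATIVE_KEYWORDS)

def decide(headline: str, threshold: int = 1) -> str:
    """Return a trading action based solely on headline text.
    Note: the algorithm cannot tell a fabricated story from a genuine one."""
    return "SELL" if headline_score(headline) >= threshold else "HOLD"

# A fabricated headline triggers exactly the same reaction as a real one:
print(decide("Explosion reported near the Pentagon"))  # SELL
print(decide("Markets steady ahead of Fed meeting"))   # HOLD
```

The point of the sketch is that the sell decision fires on the words alone; whether the explosion happened is simply outside the model.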
The paradox is that in such situations we are no longer talking only about the manipulation of the human mind, but also of algorithms that are just as incapable of identifying fake news as many human investors. In the end, the combined behavior of both will generate a snowball effect on the financial markets.
The roads that AI opens up are as amazing as they are frightening, and the stakes are enormous. Because in the end, as Professor Evgeniou said, “whoever controls our thinking will control our destiny”.
Have a nice weekend!