Chatbots and other human-imitating artificial intelligence (AI) applications are playing an increasingly important role in our lives. The possibilities raised by the latest developments are fascinating, but the fact that something is possible does not mean it is desirable. Given AI’s ethical, legal and social implications, questions about its desirability are increasingly pressing.

We know that AI systems can contain bias, “hallucinate”, make statements with great certainty that are wholly disconnected from reality, and produce hateful or otherwise problematic language. Their opaqueness and unpredictable evolution exacerbate this issue.

But the recent chatbot-encouraged suicide in Belgium highlights another major concern: the risk of manipulation. While this tragedy illustrates one of the most extreme consequences of this risk, emotional manipulation can also manifest itself in subtler forms. Once people get the feeling that they are interacting with a ‘subjective’ entity, they build a bond with it – even unconsciously – that exposes them. Other users of text-generating AI have also described its manipulative effects.

No understanding, nevertheless misleading

Companies that provide such systems easily hide behind the fact that they do not know what text their systems will generate, and point instead to the systems’ many advantages. Problematic consequences are dismissed as anomalies – teething problems that will be solved with a few quick technical fixes. Today, numerous problematic chatbots can be accessed without restriction, many of which specifically showcase a ‘personality’, increasing the risk of manipulation.

Most users realise rationally that the chatbot they interact with has no understanding and is just an algorithm that predicts the most plausible combination of words. It is, however, in our human nature to react emotionally to such interactions. This also means that merely obliging companies to indicate “this is an AI system and not a human being” is not a sufficient solution.

Some individuals are more susceptible than others to these effects. Children, for instance, can easily interact with chatbots that first gain their trust and later spew hateful or conspiracy-inspired language or encourage suicide – which is rather alarming. Consider, too, those without a strong social network, or who are lonely or depressed – precisely the category which, according to the bots’ creators, can get the most ‘use’ out of them. The fact that there is a loneliness pandemic and a lack of timely psychological help only increases the concern.

It is, however, important to underline that everyone can be susceptible to the manipulative effects of such systems, as the emotional response they elicit occurs automatically, even without us realising it. “Human beings, too, can generate problematic text, so what’s the problem?” is a frequently heard response. But AI systems function on a much larger scale. And if a human had been communicating with the Belgian victim, we would have classified their actions as incitement to suicide and failure to help a person in need – punishable offences.

How come, then, that these AI systems are available without restrictions? The call for regulation is often silenced by the fear that “regulation stands in the way of innovation”. The Silicon Valley motto “move fast and break things” crystallises the idea that we should let AI inventors do their thing, for we cannot yet foresee AI’s marvellous benefits. However, technology can also literally break things – including human lives. A more responsible approach is hence needed. If a pharmaceutical company wants to market a new drug, it cannot simply claim that it does not know what the effects will be, only that they are definitely groundbreaking.

Nathalie Smuha is a legal scholar and philosopher at KU Leuven. Mieke De Ketelaere is an engineer at Vlerick Business School. Mark Coeckelbergh is a philosopher at the University of Vienna. Pierre Dewitte is a legal scholar at KU Leuven. Yves Poullet is a legal scholar at the University of Namur.