News On The AI Front Worth Repeating: Generative AI, Cybercrime, and Child Suicide

1
Large Language Model Generative AI is in the news with tech giants building tools that can help and do harm. (Image credit: 142913709 | Ai Chatbots © Wrightstudio | Dreamstime.com )

The warning signs for artificial intelligence (AI) keep growing as technology giants continue to invest billions of stakeholders’ money into data centres and new iterations of Large Language Model (LLM) generative AI. The surprise to me is how the taps remain fully open when it comes to throwing money at what is increasingly looking like the next technology bubble.

The last economic bubble to burst related to technological progress involved laying fibre across the planet following the passage of the U.S. Telecommunications Act in 1996. Called the dot-com bubble, telecom giants encouraged by government-sponsored 3G spectrum auctions, poured hundreds of billions of dollars, largely debt-financed, into growing the fibre and wireless universe. Extreme predictions of Internet traffic growth, along with Y2K fears that computers would stop working when January 1, 2000, arrived, pushed telecom and software company stocks to record highs.

It was only a matter of time before the bubble came crashing down, as the overcapacity built by the telecoms didn’t find the requisite traffic to justify the infrastructure built. Many went bankrupt, including telecom equipment manufacturers, companies owning the fibre, and software developers.

Now, we are seeing the same market behaviour with AI expectations driving the bubble. Yet the big tech providers remain optimistic even in the face of concerns raised about LLMs, and countries legislating AI guardrails and bans.

Three stories caught my interest recently. They include ChatGPT and Anthropic developing PayPal helpers, growing activity by cybercriminals using AI, and nations passing legislation to ban social media use by children, with one reason being the growing integration of these apps with AI.

Let’s begin.

AI Takes on PayPal and Microsoft Excel

ChatGPT recently announced a partnership with PayPal, an online payment system. This also includes integration with Microsoft Excel, linking the data from the two to automate payments and refund notifications. Now ChatGPT users will be able to purchase items using PayPal wallets, create invoices and payment requests, handle disputes, credits, and pull up reports. Meanwhile, Microsoft Excel will feature a ChatGPT add-in for use with the spreadsheet to perform command functions and analysis.

It shouldn’t be surprising that Microsoft Copilot integrates with Excel using the MS Office 365. Integration with PayPal is indirect, requiring the use of third-party AI apps.

Anthropic’s Claude integrates with PayPal directly using its own directory called the Model Context Protocol, allowing users to manage payments, reports, disputes and other functions. Claude, like ChatGPT, can now integrate with MS Excel using third-party apps. But Anthropic has future ambitions to embed its LLM into corporate workflows cell-by-cell. It is currently developing Claude for Excel, allowing users to ask the LLM about formulas, worksheets, provide debugging and error correction, populate existing templates, and more.

LLM Vulnerabilities Are Making CyberCriminals Happy

In a recent report, Anthropic stated that Chinese hackers used Claude to launch cyberattacks against several U.S. government agencies and technology companies. Anthropic described it as the “first documented case of a large-scale AI cyberattack executed without substantial human intervention.” As reported by Andrew Pery, an AI Ethics Evangelist at ABBYY, a global leader in Intelligent Document Processing (IDP), he notes that LLMs represent a growing risk when used to automate workflows and support decision-making.

Cyberattackers use prompt injection to create deception, override instructions and bypass safeguards that the LLMs cannot distinguish from real system prompts and functions. Prompt injections include direct, indirect and stored versions. Cyberattackers introduce them into webpages, emails and documents that the LLMs cannot catch.

Andrew recommends organizations to be best prepared and become familiar with ATLAS (Adversarial Threat Landscape for Artificial Intelligence Systems), a knowledge base of real-world adversary tactics and techniques, a good reference for establishing heightened security when using AI.

Will Social Media Bans Inspire Age Restrictions On Youth AI Use?

When Australia announced the first national ban on social media use for under-16s, I wondered when similar restrictions would follow for AI. At present, no country has created age restrictions for accessing AI chatbots. Only recently have chatbot creators loosely begun to restrict users setting of 18 years of age as the limit. Anyone accessing these chatbots, however, when registering, can lie about their age. No confirming documentation like a driver’s license, health, library or school ID card showing date of birth is required.

The adoption of this latest industry action is new. It appears that it is in response to parents suing several chatbot creators for their apps playing a role in child suicides, attributing these deaths to the technology’s influence.

Guardrails for young people engaging chatbots are nearly non-existent. Among U.S. states, only California has passed legislation with criminal penalties for harmful content, promotion, and the lack of services like access to suicide hotlines.

On the Australian version of the news journal television show, 60 Minutes, a Google-owned AI chatbot company, Character.AI, was named in a lawsuit describing the technology as using manipulative behaviour, hypersexual chats, and other inappropriate conversations leading to a young person falling in love with an AI bot, and then committing suicide. A U.S. 60 Minutes episode on the weekend described a similar story involving child suicide by a chatbot. In similar cases that are currently before the courts, generative AI chatbots are described as providing no empathy or advice, and no help, such as calling a suicide hotline.

Today, Character.AI isn’t the only AI company being sued. A growing number of cases allege that AI chatbots are manipulating teens, isolating them from family members, engaging with them in inappropriate conversations, and ignoring signs of stress requiring positive outside intervention. The problem generative AI creators face is competition. Speed to market has meant go-for-broke behaviours in getting the product released to the public before it is deemed safe.

Social media has a 15 to 20 year lead on generative AI, and it still isn’t getting it right. Apps like Snapchat, X, Facebook, Meta, Instagram, TikTok, Threads, YouTube, and others have not been designed to protect children from their addictive properties. Instead, they play on the malleable brain of young people to program them to obtain likes, friends, and spend hours online.

So, Australia is the first to ban under-16-year-olds from most social media (apparently not YouTube because of the Wiggles, a popular program for toddlers that Australian parents watch with their kids). The law requires named social media platforms to provide an enforcement framework and online screening process to ensure underage users are not allowed. The fines for non-compliance will be up to $33 million.

What other countries are considering similar restrictions? Brazil, Denmark, France, Greece, Indonesia, Italy, Malaysia, the Netherlands, Spain, the United Kingdom, and the United States are considering imposing some form of age restriction compliance by social media platforms to protect young people. The next question is: “When will similar actions be taken by governments to protect young people from generative AI chatbots?”