Apple, Samsung, and several major banks, including Bank of America, Citi, Deutsche Bank, Goldman Sachs, Wells Fargo, and JPMorgan, have restricted the use of OpenAI’s ChatGPT and Microsoft’s Copilot over concerns that confidential data could escape. The fear of leaks is not unfounded: a ChatGPT bug back in March briefly exposed other users’ chat titles and some payment details. These restrictions are a warning shot, and they underline the need for stronger security measures around AI chatbots.

One of the primary commercial uses for AI chatbots is customer service, where companies deploy them to cut costs. But that arrangement requires customers to hand over personal and sometimes sensitive information, so companies must make sure their customer service bots are secure enough to protect that data.

The security concerns extend beyond customer service. If Disney were to let its VFX departments use AI tools on its Marvel movies, for example, it would not want plot details to leak before release. The tech industry is not known for paying close attention to data security, particularly at early-stage companies. Limiting the exposure of sensitive material therefore makes sense, and it is advice OpenAI itself gives.
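In practice, limiting exposure can start with something as simple as scrubbing obvious identifiers from a prompt before it ever leaves your network. Here is a minimal, hypothetical Python sketch; the regex patterns and the `redact()` helper are illustrative assumptions, not a complete PII filter.

```python
import re

# Illustrative patterns only; a real PII filter needs far broader coverage.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a known pattern with a placeholder tag."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Customer jane.doe@example.com disputes a charge on card 4111 1111 1111 1111."
print(redact(prompt))
# Customer [EMAIL] disputes a charge on card [CARD].
```

A filter like this would sit between the employee (or customer) and whatever third-party chatbot API the company uses, so the remote service only ever sees placeholders.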

Compute is one of the major expenses of doing AI. Building out a data center is costly, so most companies rent cloud compute instead, which means queries are processed on a remote server, and the user is essentially trusting someone else to secure their data. Financial data is extraordinarily sensitive, which is one reason banks are wary of AI chatbots.

In addition to accidental public leaks, there is also the possibility of deliberate corporate espionage. That is typically viewed as a tech industry problem, but it could affect the creative side as well. Big tech companies have moved into streaming, for instance, and trade secret theft is one of the risks that comes with the territory.

Privacy and usefulness are often at odds in tech products. In many cases, users trade their privacy for free products, as with Google and Facebook. Google’s Bard is explicit that queries will be used to “improve and develop Google products, services, and machine-learning technologies.”

It is possible that these large, secrecy-focused companies are simply being paranoid and there is nothing to worry about. But if they are right, there are several possibilities for the future of AI chatbots. The first is that the AI wave turns out to be a nonstarter. The second is that AI companies are forced to overhaul and clearly document their security practices. The third is that every company that wants to use AI has to build its own proprietary model or, at minimum, run its own processing, as sketched below, which is expensive and hard to scale. The fourth is an online privacy nightmare in which data leaks regularly.
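To give a sense of what “running its own processing” means at the smallest possible scale, here is a hedged sketch using an open model served locally via Hugging Face’s transformers library, so prompts never leave the machine. The model choice is an assumption for demonstration; a real deployment would involve far larger models, dedicated hardware, and serious operational work.

```python
# Minimal sketch of local inference: the prompt is processed on your own
# hardware rather than a vendor's remote server. gpt2 is a stand-in for
# whatever open model a company might actually deploy.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator(
    "Draft a reply to a customer asking about account fees:",
    max_new_tokens=40,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```

The trade-off the article describes is visible even here: keeping inference in-house removes the remote-server risk, but the company now owns the cost of the hardware and the burden of scaling it.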

If the most security-obsessed companies are locking down their use of AI chatbots, the rest of us may have good reason to do the same. As AI chatbots become more prevalent, addressing their security, and guarding the sensitive data we feed them, will only become more urgent.
