The growing investment from big tech companies in artificial intelligence (AI) and chatbots has raised concerns among chief information security officers (CISOs). While AI technology such as OpenAI’s ChatGPT, Google’s Bard, and Microsoft’s Bing AI has shown potential for transforming the way companies communicate and work, CISOs must be vigilant in safeguarding against potential security risks.
A generative pretrained transformer (GPT) is a type of AI model that relies on large language models (LLMs) to power chatbots capable of mimicking human conversation. However, not every company has its own GPT, which is why they must monitor how workers use this technology.
Even without a sanctioned IT environment, people are finding chatbots useful for work. Just as workers sometimes use personal computers or phones to do their jobs, they are turning to generative AI on their own. This leaves companies playing catch-up on security measures.
Experts recommend starting with the basics of information security to mitigate security risks. Licensing the use of an existing AI platform enables companies to monitor what employees say to a chatbot and ensure that the information shared is protected. Companies can put technical measures in place that allow them to license the software and have an enforceable legal agreement about where data goes or does not go.
Licensing software comes with additional checks and balances. Protecting confidential information, regulating where the information gets stored, and providing guidelines on how employees can use the software are standard procedures when companies license software, AI or not.
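The kind of technical measure described above can be as simple as a gateway that screens prompts before they reach the licensed platform. Below is a minimal, illustrative sketch of such a filter; the patterns and placeholder names are hypothetical, not any vendor's actual policy engine.

```python
import re

# Hypothetical patterns a company might treat as confidential before a
# prompt is forwarded to a licensed chatbot API (illustrative only).
CONFIDENTIAL_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace confidential matches with placeholders and report which rules fired."""
    hits = []
    for name, pattern in CONFIDENTIAL_PATTERNS.items():
        if pattern.search(prompt):
            hits.append(name)
            prompt = pattern.sub(f"[REDACTED-{name.upper()}]", prompt)
    return prompt, hits

redacted, hits = redact_prompt("Contact jane.doe@corp.com, SSN 123-45-6789.")
```

In practice such a filter would sit alongside the legal agreement: the contract governs where the vendor may send data, while the gateway reduces what sensitive data leaves the company in the first place.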
If companies wish to create or integrate a customized GPT, they can build their own or hire firms that develop the technology. In specific functions like HR, for instance, there are multiple platforms, from Ceipal to Beamery’s TalentGPT, and Microsoft plans to offer customizable GPTs. Still, some companies may want to build their own technology despite the high costs.
Creating its own GPT gives a company control over exactly what information employees can access, and lets it safeguard the information employees feed into the system. However, it’s important to remember that these models perform based on how they are trained. Companies must be intentional about the data they feed into the technology and ensure that it is unbiased and accurate.
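Being intentional about that data can start with simple audits of the curated corpus before it ever reaches the model. The sketch below shows a few basic checks, duplicate counting and source-skew flagging, under assumed record fields (`text`, `source`) and an arbitrary 60% dominance threshold; real vetting pipelines would go much further.

```python
from collections import Counter

# Illustrative sanity checks a team might run on curated records before
# feeding them into a custom GPT. Field names and the 60% skew threshold
# are hypothetical choices for this sketch.
def audit_records(records: list[dict]) -> dict:
    texts = [r["text"] for r in records]
    duplicates = len(texts) - len(set(texts))       # exact-duplicate count
    by_source = Counter(r["source"] for r in records)
    total = len(records)
    # Flag any single source that dominates the corpus, since heavy skew
    # toward one source is one crude signal of potential bias.
    skewed = [s for s, n in by_source.items() if n / total > 0.6]
    return {"total": total, "duplicates": duplicates, "skewed_sources": skewed}

report = audit_records([
    {"text": "How do I reset my password?", "source": "hr_wiki"},
    {"text": "How do I reset my password?", "source": "hr_wiki"},
    {"text": "What is the PTO policy?", "source": "hr_wiki"},
])
```

Checks like these catch only the most mechanical problems; judging whether the content itself is accurate and unbiased still requires human review.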