Generative AI is an emerging technology that has the potential to radically affect many areas of society. Media coverage of generative AI has largely focused on its ability to seemingly communicate intelligently with humans. The ability to answer questions and supply pertinent information is just one example of the power of this new technology. Generative AI has also garnered headlines for achieving top scores on standard examinations and developing software.
We are going to look at the ways generative AI is poised to positively and negatively impact cybersecurity. As the technology matures, it will be incorporated into defensive cybersecurity solutions. Unfortunately, as with many technological advancements throughout the ages, generative AI will also provide new opportunities that can be exploited by cybercriminals and threat actors.
What is Generative AI?
McKinsey defines generative AI as algorithms that can be used to create new content, including audio, code, images, text, simulations, and videos. The underlying technology behind generative AI is not new, but recent advancements in the development of intuitive user interfaces have made it available to virtually anyone with a computer. ChatGPT is an implementation of generative AI algorithms.
Interaction with generative AI typically begins with a user-supplied prompt, which can take any form the system can process, such as text or an image. The system then returns new content to the user in response to the prompt.
An instructive example of the power of the technology can be seen in how ChatGPT answered a request for a definition of generative AI. When asked for a short definition, ChatGPT returned the following reply.
Generative AI refers to a class of artificial intelligence algorithms that are designed to create or generate new content such as images, music, or text, rather than just recognizing patterns in existing data. These algorithms are typically based on deep neural networks and use various techniques such as autoregressive models, variational autoencoders, and generative adversarial networks to produce new, original content.
From this definition, it appears that ChatGPT has a good understanding of its origins and intended usage scenarios. But the tool is not sentient; it answers the question appropriately because of the extensive data on which it was trained. The illusion of sentience is both attractive and potentially dangerous if not approached carefully.
The dangers of over-reliance on generative AI can be seen in the following example. A lawyer in New York submitted a legal brief containing case citations produced by ChatGPT. Unfortunately, several of the cited cases turned out not to exist, even though the tool had assured the lawyer they were real. This incident illustrates the current limits of generative AI and the need to verify its answers.
Foundational Elements of Generative AI
Generative AI is built using multiple advanced technologies that come together to power its algorithms and provide its semblance of real intelligence. Following are some of the most important technologies behind generative AI.
- Generative adversarial networks (GANs) train systems using two complementary components. The generator creates plausible data that serve as negative training examples for the discriminator, while the discriminator learns to distinguish the generator's fake data from real data.
- Transformers are a type of neural network architecture, originally developed for neural machine translation, that transforms an input sequence into an output sequence.
- Large language models (LLMs) are AI algorithms that use a combination of deep learning techniques and large data sets to summarize, generate, and predict new content.
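The adversarial idea behind GANs can be sketched in a few lines. The toy below is a minimal, illustrative 1-D example, not a production GAN: the "real" data is a Gaussian centered at 3, the generator is a simple linear map over noise, and the discriminator is a logistic classifier. The two are updated in alternation by hand-derived gradients; all names and hyperparameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D GAN: the generator maps noise to samples, and the discriminator
# scores how likely a sample is to be real.
w, c = 0.1, 0.0        # discriminator: D(x) = sigmoid(w*x + c)
a, b = 0.5, 0.0        # generator:     G(z) = a*z + b
lr, steps, batch = 0.02, 2000, 64
real_mean = 3.0        # "real" data is drawn from N(3, 1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(steps):
    real = rng.normal(real_mean, 1.0, batch)
    fake = a * rng.normal(0.0, 1.0, batch) + b

    # Discriminator step: learn to tell real samples from fakes.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    grad_w = np.mean(-(1 - d_real) * real + d_fake * fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    w, c = w - lr * grad_w, c - lr * grad_c

    # Generator step: produce samples the discriminator scores as real.
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    d_fake = sigmoid(w * fake + c)
    grad_a = np.mean(-(1 - d_fake) * w * z)
    grad_b = np.mean(-(1 - d_fake) * w)
    a, b = a - lr * grad_a, b - lr * grad_b

samples = a * rng.normal(0.0, 1.0, 1000) + b
print(round(float(samples.mean()), 2))  # drifts toward the real mean of 3
```

The same pattern, with deep networks in place of the linear maps, is what lets GAN-based systems synthesize realistic images and other media.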
Examples of Generative AI
Many organizations are currently developing and refining generative AI solutions. Following are some of the popular AI interfaces that provide widespread access to the technology.
- ChatGPT is a generative AI platform that interacts with users conversationally. ChatGPT can answer follow-up questions, reject inappropriate requests, and supposedly admit its mistakes.
- DALL-E 2 is a generative AI system that is focused on creating realistic art and images from natural language descriptions.
- Midjourney is another art-centric generative AI platform that produces images from natural language prompts.
- Bard is Google’s generative AI tool that is advertised as being a helpful collaborator to help boost productivity and creativity.
Cybersecurity Issues Related to Generative AI
As with most new and powerful technologies, there are potential issues and problems that need to be considered when adopting generative AI solutions. Following are some of the more impactful issues surrounding this technology.
- Data leakage can occur through the conversations users have with a generative AI platform. Samsung, for example, reportedly suffered leaks when employees pasted sensitive source code and internal information into ChatGPT on three separate occasions, exposing that data outside the company.
- LLMs need to be trained on appropriate data to improve their accuracy when answering queries.
- Related to the issue of appropriate training is the potential for generative AI platforms to return wrong answers or inaccurate information when responding to prompts. Users should verify the information provided by these tools to ensure its accuracy.
- Generative AI can be used by threat actors to develop sophisticated methods with which to defeat cybersecurity defenses.
How Generative AI Can Improve Cybersecurity
Incorporating generative AI into a cybersecurity solution can leverage the analytical and creative power of the technology. With proper training, generative AI tools may be able to identify emerging threats or autonomously develop innovative defensive measures that are beyond the capability of human cybersecurity experts.
Generative AI can also be instrumental in facilitating the use of cybersecurity tools by creating intuitive interfaces for non-expert users. For example, the inclusion of a natural language interface to threat hunting tools reduces the level of expertise required to effectively perform threat hunts and protect an IT environment. Reducing the necessary level of expertise allows smaller organizations to implement improved cybersecurity measures. It also addresses the security skills shortage by minimizing the manpower required to effectively protect an IT environment.
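As a concrete illustration of that idea, the sketch below stands in for the natural language layer a generative AI model could place in front of a threat hunting tool. A real implementation would use an LLM; here a trivial keyword lookup plays its part, and the query syntax, field names, and function are all invented for illustration.

```python
# Hypothetical stand-in for an AI-powered natural language interface to a
# threat hunting tool. A real system would use a trained language model;
# this keyword map only demonstrates the plain-English-to-query concept.
KEYWORD_MAP = {
    "failed logins": 'event_type = "auth_failure"',
    "powershell": 'process_name = "powershell.exe"',
    "outbound traffic": 'direction = "outbound"',
}

def to_hunt_query(request: str, hours: int = 24) -> str:
    """Translate a plain-English request into a (made-up) query string."""
    clauses = [clause for phrase, clause in KEYWORD_MAP.items()
               if phrase in request.lower()]
    where = " AND ".join(clauses) if clauses else "*"
    return f"SEARCH events WHERE {where} LAST {hours}h"

print(to_hunt_query("Show me failed logins involving PowerShell"))
```

An analyst can phrase the hunt in everyday language and let the interface produce the structured query, which is precisely what lowers the expertise barrier described above.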
XDR’s Complicated Relationship with Generative AI
Extended detection and response (XDR) platforms are positioned at the crossroads of generative AI usage. Generative AI can be used to strengthen XDR and an organization’s overall security posture. It can also be used by threat actors to make it more difficult to protect an IT environment and its data resources.
- Threat actors can use generative AI to develop more sophisticated attacks. This includes targeted phishing attacks that can bypass an organization’s defensive mechanisms.
- Cybersecurity professionals can leverage the power of generative AI, especially as the technology matures. As mentioned, generative AI can provide streamlined and intuitive interfaces to help cybersecurity professionals.
- The technology can also be used to evaluate the telemetry that XDR relies upon to ensure it only receives pertinent data. An effectively trained generative AI platform may be able to develop new and better cybersecurity defenses than human security teams.
Samurai XDR is a cloud-based XDR platform that improves the security posture of any size organization. It can be particularly valuable in assisting companies with small security teams to protect their IT environments and data assets.
Talk to the security experts at Samurai and see how your business can better protect itself by incorporating XDR into your existing cybersecurity landscape.