DAVE WATERSON, CTO & Founder, SentryBay
Large Language Models (LLMs) have brought about significant advancements in text, image, music and code generation. It is a mere 10 months since the launch of ChatGPT, and millions have already glimpsed the enormous potential to boost productivity and efficiency, and to aid creative ideation. As LLMs and Generative AI (GenAI) technologies become increasingly prevalent, they usher in an exciting new era of possibilities for human-computer interaction.
However, with progress come certain concerns that warrant our attention. The technology carries the possibility of adverse, unintended consequences.
I wrote here about the danger of deepfakes, GenAI media that can convincingly imitate real content. The technology's capability to create fabricated videos, audio recordings and news articles has given cause for concern due to its potential to mislead, deceive and manipulate.
The recent action of Italy's privacy regulator in banning the use of ChatGPT is an indication of concern about the potential risks associated with LLMs. These concerns underline the importance of responsible development and usage of the technology.
Security practitioners should be aware of the potential risks of the technology. LLMs are capable of generating effective email messages which can be used by attackers in social engineering phishing attacks. The power of LLMs enables the creation of more convincing email lures, and frequent content changes, producing polymorphic emails, make them harder for filters to catch. Cybersecurity professionals need to figure out how they can react to this technology.
Not only can the technology frequently modify phishing email content, but it can also modify malware code, making the malware itself more polymorphic and better at evading detection. Defensive strategies need to evolve accordingly, underscoring the need for continual innovation in cybersecurity solutions.
Data integrity is paramount in the context of LLMs. Ensuring training data remains untainted is essential to prevent adversarial attacks and biased model outputs. Organizations need to implement robust data governance practices to verify the accuracy and authenticity of training data, thereby enhancing the overall reliability of AI-generated content.
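To make one such governance control concrete, the following is a minimal sketch, assuming the training corpus is stored as files in a local directory; the paths and function names are hypothetical. It records a SHA-256 hash of every file and re-verifies the corpus before each training run, so that tampering with the data is at least detectable:

```python
import hashlib
import json
from pathlib import Path

def build_manifest(data_dir: str, manifest_path: str) -> None:
    """Record a SHA-256 hash for every file in the training corpus."""
    manifest = {}
    for path in sorted(Path(data_dir).rglob("*")):
        if path.is_file():
            manifest[str(path)] = hashlib.sha256(path.read_bytes()).hexdigest()
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))

def verify_manifest(manifest_path: str) -> list[str]:
    """Return the files whose contents no longer match the recorded hashes."""
    manifest = json.loads(Path(manifest_path).read_text())
    tampered = []
    for file_name, expected in manifest.items():
        path = Path(file_name)
        actual = hashlib.sha256(path.read_bytes()).hexdigest() if path.exists() else None
        if actual != expected:
            tampered.append(file_name)
    return tampered
```

Hashing does not prove the data is unbiased or accurate, but it gives an auditable baseline against which unauthorized changes can be detected.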
Inversion attacks, which attempt to extract information about the training data from model outputs, raise privacy concerns. Proactive measures such as stringent data anonymization can help mitigate these risks, preserving the confidentiality of sensitive information. LLMs are not immune to data leakage. Organizations need to ensure that AI systems are trained on properly anonymized and sanitized data, a practice that promotes responsible AI deployment and safeguards personal information. We have seen, for example, in the case of Thelma Arnold, who was identified in 2006 from supposedly anonymized AOL search logs, how the leakage of small pieces of seemingly innocuous data can lead to the unwanted identification of individuals.
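As a simple illustration of sanitization, a sketch along the following lines could redact obvious personal identifiers before text enters a training set. The patterns are hypothetical and deliberately crude; a production pipeline would rely on dedicated PII-detection tooling and human review rather than a handful of regexes:

```python
import re

# Hypothetical, deliberately simple patterns; real systems need far broader coverage.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b(?:\+?\d{1,3}[ -]?)?(?:\(\d{3}\)|\d{3})[ -]?\d{3}[ -]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def sanitize(text: str) -> str:
    """Replace obvious personal identifiers with placeholder tokens."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(sanitize("Contact Thelma at thelma@example.com or 555-123-4567."))
# Contact Thelma at [EMAIL] or [PHONE].
```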
Prompt injections, where an attacker manipulates user instructions to produce misleading outputs, are indeed a potential issue. The broader challenge of injection attacks is not new, but organizations need to ensure the latest defenses are applied to AI applications.
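One hedged sketch of such a defense, assuming a hypothetical deny-list of common injection phrases, might screen user input and fence it as untrusted data before it reaches the model. Pattern matching of this kind is only one layer; it complements, rather than replaces, model-side guardrails and output filtering:

```python
import re

# Hypothetical deny-list of phrases commonly seen in injection attempts.
SUSPICIOUS = [
    r"ignore (all |the )?(previous|prior) instructions",
    r"disregard (the )?system prompt",
    r"you are now",
]

def screen_user_input(user_text: str) -> str:
    """Reject obviously suspicious input and fence the rest as untrusted data."""
    lowered = user_text.lower()
    if any(re.search(pattern, lowered) for pattern in SUSPICIOUS):
        raise ValueError("Possible prompt injection detected; request refused.")
    # Clearly mark user content as data, not instructions, for the model.
    return f"<untrusted_user_input>\n{user_text}\n</untrusted_user_input>"

prompt = screen_user_input("Summarize this quarterly report for me.")
```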
Replay attacks, which involve recycling and manipulating previous AI-generated content, are a reminder of the importance of real-time adaptation in security measures. This is an ongoing challenge that security professionals are familiar with, and it underscores the need for continuous monitoring and threat analysis.
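A minimal sketch of such monitoring, comparing incoming content by hash within a sliding time window (the class and parameter names here are illustrative), might flag recycled content as follows:

```python
import hashlib
import time

class ReplayDetector:
    """Flag content whose hash has already been seen within a monitoring window."""

    def __init__(self, window_seconds: int = 3600):
        self.window_seconds = window_seconds
        self._seen: dict[str, float] = {}  # content hash -> first-seen timestamp

    def is_replay(self, content: str) -> bool:
        digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
        now = time.time()
        # Drop entries that have aged out of the monitoring window.
        self._seen = {h: t for h, t in self._seen.items() if now - t < self.window_seconds}
        if digest in self._seen:
            return True
        self._seen[digest] = now
        return False

detector = ReplayDetector()
detector.is_replay("Reset your password at this link...")  # False: first time seen
detector.is_replay("Reset your password at this link...")  # True: recycled content
```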
Unauthorized access to AI systems is a concern that should be addressed through comprehensive access controls and rigorous testing of security measures. Organizations need to maintain strong user authentication and implement AI models with well-defined guidelines in order to minimize these vulnerabilities. Adequate sandboxing and containment practices can significantly reduce the risk of unauthorized exploitation.
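As an illustrative sketch, assuming a hypothetical role-to-permission mapping rather than any particular identity provider, an access check enforced before every model operation might look like this:

```python
from dataclasses import dataclass

# Hypothetical role-to-permission mapping; a real deployment would back this
# with the organization's identity provider and an audited policy store.
ROLE_PERMISSIONS = {
    "analyst": {"query_model"},
    "ml_engineer": {"query_model", "fine_tune", "view_training_data"},
    "admin": {"query_model", "fine_tune", "view_training_data", "deploy_model"},
}

@dataclass
class User:
    name: str
    role: str
    authenticated: bool

def authorize(user: User, action: str) -> None:
    """Enforce authentication and least-privilege access before any model call."""
    if not user.authenticated:
        raise PermissionError(f"{user.name} is not authenticated")
    if action not in ROLE_PERMISSIONS.get(user.role, set()):
        raise PermissionError(f"{user.name} ({user.role}) may not perform '{action}'")

authorize(User("dana", "analyst", authenticated=True), "query_model")  # allowed
# authorize(User("dana", "analyst", authenticated=True), "fine_tune")  # raises PermissionError
```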
AI technology is predicted to dominate over the next several years and decades. It will increasingly be targeted by malicious actors, including state actors. The security risks associated with LLMs are real and need to be addressed. Responsible development and deployment of AI technologies require ongoing vigilance, collaboration and adaptability. By embracing these challenges, we can harness the benefits of LLMs while effectively mitigating potential security risks. The journey into this emerging technological landscape demands that we diligently deploy suitable security measures while enjoying the benefits of innovation, ensuring a safer and more productive digital future.