A Call to Arms: Information Security’s vital role on the AI battlefield

DAVE WATERSON, CTO & Founder, SentryBay

In a world racing toward an AI-powered future, the landscape is changing at breakneck speed. ChatGPT was launched by OpenAI on 30 November 2022. A mere 10 months later, it is making millions of people look cleverer; it boosts productivity and creativity, and saves time. Most use the free version, GPT-3.5, but a more powerful subscriber version, GPT-4, is already available. ChatGPT is not alone: other tech giants are rushing full-speed ahead with their own AI offerings. Meta has Llama, Palantir has AIP, Tesla launched FSD v12 this week, Google has PaLM, Adobe has Sensei, Baidu unveiled Ernie Bot in March, and others such as Microsoft, Amazon, C3.ai, Snowflake, IBM, Nvidia, and Intel all have their own AI developments.

ChatGPT was developed not by teaching it the rules of grammar, but by feeding it masses of example text, helping the AI learn much as a human toddler learns to speak. Tesla’s Full Self-Driving (FSD) was developed the same way: not by teaching it the rules of the road and what road signs look like, but by feeding it vast quantities of footage of what safe driving looks like.

The AI tsunami promises to redefine our world on a scale never before seen. If ChatGPT has had such a profound impact in only 10 months, imagine the world we will be living and working in around a dozen years hence. By 2035, AI tech will be everywhere: driving cars (we’re practically at FSD already), running manufacturing plants, writing computer code (as with Code Llama from Meta), and performing key tasks in hospitals, legal firms, airlines, shipping, logistics, education, government agencies, and the military. Organizations not using AI will be left behind to wither and die. Due to its inherent power, the technology will be ubiquitous. AI will make employees more productive, doing much of the preparatory work, assisting with creativity, assisting in decisions, and often making decisions independently. Over the next few years, the change will be akin to the Industrial Revolution (1760–1840) on steroids. Disruptions to individuals, organizations, sectors, and countries will be colossal, surpassing anything ever experienced. Goldman Sachs warns that 300 million jobs will be affected. Standards of living will rise for those adopting the tech, thanks to productivity gains, and the gap between the haves and have-nots will grow for individuals, organizations, countries, and regions alike.

Along with increases in productivity will come huge leaps in innovation – in science, medicine, and technology – all enabled and accelerated by AI. Businesses on the cutting edge of this new tech adoption will experience eye-watering growth. Alongside the growth in AI, we will also see unpredictable spurts of advances in other technologies, many overlapping with AI, such as quantum computing, robotics, nanotechnology, biotechnology, and energy.

The pace of change that AI will deliver will far exceed any other. Since 1965, the benchmark of rapid growth has been Moore’s Law, whereby the number of transistors in an integrated circuit doubles roughly every two years. AI growth is far more rapid. In 2018, GPT-1 had 117 million parameters; GPT-4 now reportedly has around a trillion, a roughly 8,500-fold increase in five years. The next big jump in the technology will occur when developers perfect self-learning capabilities, and the scale of complexity will grow exponentially. New iterations will occur at blistering speed. At that point, AI could very quickly become better than humans at performing many tasks, with its learning self-directed beyond the control of humans. It is difficult to predict the evolution of AI in this uncharted territory, and some question whether the process will be reversible. Obvious new threats could arise when AI directs its own future progression.
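The growth comparison above can be checked with back-of-the-envelope arithmetic. This sketch uses the figures quoted in the text; note that the GPT-4 parameter count is a widely reported estimate, not an official OpenAI figure:

```python
# Compare AI parameter growth against Moore's Law, using the
# figures cited in the article (GPT-4 count is an estimate).
gpt1_params = 117e6        # GPT-1, 2018
gpt4_params = 1e12         # GPT-4, ~2023 (reported estimate)
years = 2023 - 2018

total_growth = gpt4_params / gpt1_params      # ~8,500x over 5 years
annual_factor = total_growth ** (1 / years)   # annualized growth rate

# Moore's Law: transistor counts double roughly every two years,
# i.e. an annual factor of 2 ** 0.5 (about 1.41x).
moore_annual = 2 ** 0.5

print(f"Total parameter growth: {total_growth:,.0f}x over {years} years")
print(f"Annualized: {annual_factor:.1f}x per year "
      f"vs Moore's Law at {moore_annual:.2f}x per year")
```

Even on these rough numbers, parameter counts have been growing several times faster per year than the Moore's Law doubling cadence.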

The development of AI technology lies firmly in the hands of tech companies rather than governments. The power of these companies will grow as the tech comes to dominate many aspects of our world and generates huge revenues. This will alter the relationship between these tech giants and governments.

Recently, some have called for a temporary pause in the development of AI, so that we can take stock and ensure its future direction is safe. Some believe AI could become so powerful as to endanger mankind once true self-learning and self-determination kick in. However, most see the AI race as a zero-sum, winner-takes-all game, so a development pause would simply allow other organizations and countries to catch up – something businesses and governments are always reluctant to permit.

So, what does the AI revolution mean for Information Security practitioners?

There are obvious risks that will emerge with this technology. It is easy to envisage how AI could be used to undermine elections, democracy, governments, and society.

Tech companies and countries will continue the race for AI dominance. For simplicity, it is useful to view the outcome of the AI race at any point in time as producing two distinct groups: Winners and everyone else. Winners are the organizations that have developed the best AI technology, together with the countries to which those organizations are aligned and which can use the tech to gain AI-powered military superiority. Non-Winners will still benefit from the tech, seeing economic gains through productivity increases, but they will not enjoy the immense resources the tech will generate for Winners, nor the resulting power. Being a Winner in this AI race is such a big deal that it warrants the capital “W”. Already today, successful entrepreneurs such as Elon Musk can influence a war through the deployment of Starlink satellites. Palantir CEO Alex Karp has spoken of the AI assistance his company provides to Ukraine’s war effort, saying: “the power of advanced algorithmic warfare systems is now so great that it equates to having tactical nuclear weapons against an adversary with only conventional ones”.

There are huge incentives for everyone else to join the Winners and share in the immense rewards and power. Wannabe Winners can either develop their own tech and try to catch up, or steal the technology. It is easier, cheaper, and quicker to steal an AI system than to develop one from scratch. Tech company Winners and country Winners will come under intense cyber attack from criminal gangs and rogue nation-state actors. Many non-Winners will also try to disrupt Winners to reduce their advantage and level the playing field. Non-Winners will employ their own AI resources in these attacks: WormGPT, for example, uses AI to generate phishing attacks. Such attacks will escalate exponentially in sophistication, as is inherent in AI technology. I have written previously here about the asymmetry of the cyber battlefield: it is far easier to attack than to defend. Winners’ defenders have a massive InfoSec task ahead of them; they must cover every loophole, whereas attackers need find only one vulnerability.

As AI adoption escalates and the world relies more on the technology, the disruptive consequences of a successful cyber attack become increasingly dire. A society in which AI is heavily integrated suffers more when the technology is disrupted. Most remember the NHS hospital systems brought down by the WannaCry ransomware attack, and the Maersk shipping company completely disabled by NotPetya. This week, Polish trains were brought to a sudden halt by a disruption attack. With Tesla’s FSD v12 launching this week, imagine a successful cyberattack on a fleet of robotaxis, directing them to simultaneously lock their doors and accelerate straight into the nearest solid object. I wrote about vehicle cyber attack here back in 2017, when the vehicle tech was still in its infancy. Cyber attack could potentially bring an AI-centred economy to a sudden halt, or turn an autonomous-enabled military against itself.

The AI landscape is particularly difficult to control through regulation: its players are singularly uncooperative cats to herd, and rogue actors disregard regulations anyway.

These are the challenges facing Information Security practitioners over the next decade and beyond. The power of AI needs to be directed to its own defence. The cyber security role will be absolutely fundamental to the ongoing survival and success of Winning organizations and to the stability of Winning nation-states. It is no exaggeration to say that the InfoSec role will be crucial for global stability and for maintaining a way of life with AI as its foundation. Economic and political stability, and even democracy itself, depend upon the InfoSec community raising its game to meet the challenges of this new tech.

The new AI landscape demands a new security framework. The stakes are very high. Due to its revolutionary nature, many new vulnerabilities will emerge in AI technology, requiring creativity and genuine innovation to mitigate. AI will battle AI in a clash of algorithmic titans. A new breed of InfoSec practitioner must emerge, with ML/AI experience and knowledge and the vision to innovate and design appropriate new layers of AI defence. Those able to meet the challenge will reap big rewards, and the successful security vendors in this field will see rapid growth and fast become global industry leaders.
