In today’s ever-changing digital landscape, cybersecurity is of utmost importance. As technology progresses, so does the creativity of cybercriminals. With programs like ChatGPT, people are beginning to wonder what role AI plays in cybersecurity, the threat AI poses to their IT infrastructure, and how they can stay ahead of the risk.
Shall We Play A Game?
The year was 1983, and I was a scrawny eleven-year-old boy with a wild imagination and an insatiable curiosity to match. For those old enough to remember, the year’s box office hits included Star Wars: Return of the Jedi, National Lampoon’s Vacation, Superman III, and E.T. the Extra-Terrestrial. But, for me, the most powerful movie – the one that would shape my destiny and plot my career trajectory – was WarGames.
Though the movie itself was no theatrical masterpiece, it was sheer inspiration for me. This was my first glimpse into the world of hacking! Although I was interested in computers, until then, my digital playground was limited to playing arcade-style games or fiddling with turtle graphics (Terrapin Logo). I would soon discover that hacking wasn’t just a sci-fi literary device but an intricate digital subculture that I would fully immerse myself in.
In the months and years to follow, I dedicated my juvenile existence to learning everything I could about computers, electronics, programming, and, most notably, how to hack them. In those days, the terms ‘hacking’ and ‘hacker’ did not carry the nefarious connotations they do today. In the ’80s, a hacker was someone who took an unconventional approach to learning how things worked and could often get things to work in unconventional ways.
In the movie, the protagonist, David, finds himself in a battle of time and wits against an artificially intelligent supercomputer, WOPR, which is hell-bent on winning the very real game of ‘Global Thermonuclear War’ at any cost.
Though work in artificial intelligence had begun some 30 years prior, in the ’80s, the notion that a computer could think abstractly, mimic sentience, learn, apply reason, and ‘win the game’ was little more than a literary plot thickener. With today’s technological advances, one wonders how close we are to living in a world where artificially intelligent digital-super-hackers will outperform their human counterparts and those charged with the protection of digital assets.
What Is AI & ML?
Artificial intelligence (AI) involves creating computer systems that can emulate human-like intelligence. One subset of AI is machine learning (ML), which trains algorithms to recognize patterns in data and improve their performance over time. With these technologies, computers can process massive amounts of data, surpassing human capabilities.
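To make the pattern-recognition idea concrete, here is a minimal sketch – not production code, and the feature values are entirely hypothetical – of a nearest-centroid classifier that “learns” from labeled examples which traffic looks benign versus suspicious:

```python
# Minimal sketch of supervised pattern recognition: a nearest-centroid
# classifier. Each training example is a feature vector with a label;
# new inputs are assigned the label of the closest class centroid.

def centroid(rows):
    # Average each feature across a class's training examples.
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def train(examples):
    # examples: {label: list of feature vectors}
    return {label: centroid(rows) for label, rows in examples.items()}

def classify(model, x):
    # Pick the label whose centroid is nearest (squared Euclidean distance).
    def dist2(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], x))

# Hypothetical training data: [requests per minute, failed-login ratio]
examples = {
    "benign":     [[12, 0.01], [8, 0.02], [15, 0.00]],
    "suspicious": [[240, 0.35], [310, 0.50], [180, 0.40]],
}
model = train(examples)
print(classify(model, [290, 0.45]))  # -> suspicious
```

Real ML systems use far richer models and features, but the core loop is the same: fit patterns from historical data, then score new observations against them – at a volume and speed no human analyst can match.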
One of the most fascinating areas of research in AI and ML is in the realm of game theory. Although the science of game theory extends far beyond the realms of checkers, chess, and poker, fundamentally, there is little difference between such games and the “games” attackers and defenders play in the IT security landscape – both attackers and defenders have certain beliefs about the other, preferences in how they deploy their arsenal, and opportunities to initiate or respond to the events that transpire.
Would it not be safe to assume that if AI can best professional chess and poker players, it should outperform humans in penetration testing, hacking, and incident response? The short answer is no, but the longer answer requires some explanation.
How Is AI Used in Security?
The impact of AI on cybersecurity is undeniable, as it is used in offensive and defensive applications. On the offensive side, hackers use AI to automate tasks like identifying vulnerabilities and launching targeted attacks. A subset of AI, known as the Generative Pre-trained Transformer (GPT), has been used to generate the text for compelling phishing attacks that garner impressive results.
On the defensive side, AI-powered tools excel at detecting anomalies and identifying patterns that may indicate an ongoing attack. These innovations enable security teams to respond quickly to emerging threats, giving them a significant advantage over cyber adversaries.
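As a toy illustration of anomaly detection, a defender might flag any metric that strays several standard deviations from its historical mean. The hourly login counts below are hypothetical:

```python
import statistics

def find_anomalies(history, threshold=3.0):
    """Return values whose z-score exceeds the threshold."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return []  # no variation, nothing to flag
    return [x for x in history if abs(x - mean) / stdev > threshold]

# Hypothetical hourly login counts; the spike suggests a brute-force attempt.
logins = [42, 39, 45, 41, 40, 43, 38, 44, 410]
print(find_anomalies(logins, threshold=2.0))  # -> [410]
```

Production systems use far more sophisticated statistical and ML-based baselining, but the principle is the same: model “normal,” then surface deviations for a human to investigate.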
All things considered, AI can raise the automation threshold for both offensive and defensive cyberwarfare. But a smarter mousetrap ultimately breeds smarter mice in a seemingly endless battle of cyber tic-tac-toe where the only correct move is not to play. Still, one must ask where the human factor comes into the equation. What role does creativity play, and can it be emulated?
Can AI Outperform Skilled Human Hackers or Pen Testers?
AI’s proficiency notwithstanding, the intricate art of hacking hinges on the quintessential qualities of creativity and adaptability intrinsic to the human psyche. Human hackers and penetration testers delve into the labyrinthine complexities of systems, adjusting their strategies to circumvent evolving defenses. The essence of their success lies in creativity, intuition, and an intuitive grasp of human behavior, attributes that are currently beyond AI’s reach.
AI excels at rapid data analysis but struggles to think beyond established norms or predict unforeseen scenarios. It operates based on recognizable patterns and past encounters, rendering it ineffectual against novel or unconventional attack vectors.
AI’s most significant weakness in cyber warfare is that it cannot create novel solutions to novel problems.
Let me explain…
Back to Game Theory
In game theory, there are generally two types of games, each with two subcategories. There are finite games, where each player has a finite number of options and a set timeframe in which to express them. Then there are infinite games, where players come and go, there is no defined endpoint, and even the rules can change.
Within the broad categories of finite and infinite games are two additional subcategories: perfect information and imperfect information. Perfect information games are those where information about the past and present of the game is known to each player. In an imperfect information game, the history and present conditions of the game are at least partly hidden from the players.
An example of a finite perfect information game is chess because the rules are immutable, there is a fixed duration (finite), and each player can record all the moves made up to the present (perfect information).
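This is precisely the category where machines shine: a finite perfect information game can, in principle, be searched exhaustively. As a small illustration – using tic-tac-toe rather than chess to keep the search tractable – a minimax routine can prove that perfect play by both sides always ends in a draw:

```python
# Minimax over tic-tac-toe: because the game is finite and both players
# see the whole board (perfect information), the entire game tree can be
# searched and the outcome of perfect play computed exactly.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    # Score from X's perspective: +1 X wins, 0 draw, -1 O wins.
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if " " not in board:
        return 0  # board full, draw
    scores = []
    for i, cell in enumerate(board):
        if cell == " ":
            board[i] = player
            scores.append(minimax(board, "O" if player == "X" else "X"))
            board[i] = " "  # undo the move
    # X maximizes the score; O minimizes it.
    return max(scores) if player == "X" else min(scores)

# From an empty board, perfect play by both sides yields a draw.
print(minimax(list(" " * 9), "X"))  # -> 0
```

Chess is astronomically larger, but the same logic applies: nothing about the game is hidden, so brute search plus learned evaluation eventually overwhelms human play. The infinite, imperfect information games discussed below offer no such guarantee.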
An example of a finite imperfect information game is poker because there are fixed rules and an ending (finite), but each player does not know with any certainty the value of the cards that have been dealt in either the current hand or, in many cases, previous hands.
There are few practical examples of infinite perfect information games; being more of a mathematical construct, they are not pertinent to this discussion, though slot machines come close to approximating one.
Now, we get to the most relevant game mode: infinite imperfect information. Such games have no apparent end, and only some past and present variables are known. Examples of infinite-II games are politics, economics, and cyber warfare.
Just like in politics and economics, cyber warfare has no set duration, no single clear objective, and little is known by either side about what the other is planning to do or has done. This game category also happens to be where AI performs poorly because it cannot make the intuitive leap or devise novel solutions to novel problems as they arise.
The Strengths & Weaknesses of AI & ML in Cybersecurity
A balanced perspective is essential when incorporating AI and ML into cybersecurity strategies. Recognizing their strengths, such as swift data analysis and cybersecurity threat identification, empowers organizations to fortify their defenses. Still, it is crucial to refrain from relying solely on AI-driven solutions, given their limitations. The most effective strategy is a holistic approach amalgamating AI’s analytical acumen with the human capacity for creative problem-solving.
AI has undeniably reshaped the cybersecurity landscape, proffering invaluable tools for offense and defense. However, hacking demands an innate aptitude for creativity and adaptability, which continue to elude AI. While technology evolves at an unprecedented pace, it is imperative to navigate through the allure of AI with sobriety, recognizing that the imaginative faculties of humans remain unparalleled in cybersecurity.
Chris started with Linford & Co., LLP in 2023 as the Director of Penetration Testing services. He started his IT Security and Penetration Testing career in 2001 after developing security programs within the U.S. Federal Government and the private sector. Chris also holds two certifications from the National Security Agency – the InfoSec Assessment Methodology (IAM) and the InfoSec Evaluation Methodology (IEM) – as well as GSEC and CISSP certifications. Chris also served as the liaison between the Denver Health and Hospital Authority and the federal Center for Medicare and Medicaid Innovation, where he was instrumental in assuring HIPAA and HITECH compliance for medical devices per state and federal regulations.