During a recent engagement with a client, we delved into the initial phases of assessing his company’s cybersecurity posture ahead of a vulnerability assessment I was scheduled to conduct the following weekend. During our discussion, he voiced a misconception I often hear from business owners: “Our AI platform doesn’t collect any PII, so our cybersecurity exposure is zero.” It was one of those moments where I could almost visualize myself with a deer-in-the-headlights expression and a slight twitch.
Since the onset of COVID, ransomware attacks have surged significantly. One unintended consequence, in my opinion, is that this surge has either blinded many business owners or instilled fear in them, leading them to believe that threat actors are solely interested in exfiltrating data for the dark web or holding it for ransom. While there is truth to this narrative, cybercrime predominantly operates on opportunity. Consequently, threat actors are more likely to ask during their reconnaissance phase, “How can I maximize my opportunity once I gain access?”
Snapping out of my momentary daze, I turned to my client and posed a question of my own: “Imagine if a hacker got into your tech stack and messed with the code so that every third prompt returned a link to some shady site. How do you think that would affect you and your users?” Witnessing his “light bulb” moment was heartening; it signaled a genuine desire for his company’s prosperity. I elaborated further, emphasizing that while we often fixate on Personally Identifiable Information (PII) as the primary concern in a security breach, thanks to its allure in headlines, regulatory concerns, and potential reputational damage, the broader impact lies in unwittingly infecting or targeting thousands of users over an extended period. When engaging in such discussions, I always find it helpful to reference case studies, with SolarWinds and The New York Times being among my favorite examples for this type of conversation.
Case Study: SolarWinds
The SolarWinds cyberattack was a sophisticated supply chain attack discovered in December 2020. Threat actors compromised SolarWinds, a software company providing network management tools, by injecting malicious code into its Orion software updates. These updates were then distributed to SolarWinds customers, including numerous government agencies and corporations worldwide. The attackers, attributed by the United States government to the Russian Foreign Intelligence Service (SVR), gained access to sensitive data and networks, potentially for espionage purposes. Reportedly, over 18,000 customers were affected, and unauthorized access may have persisted for at least 10 months.
Case Study: The New York Times
Back in 2016, The New York Times was targeted by a malicious campaign that hijacked online advertisements to distribute ransomware. The attack primarily affected users in the US and spanned multiple ad networks, exploiting vulnerabilities such as a flaw in Microsoft’s Silverlight software. When users clicked on infected ads, they were redirected to servers hosting the malware, which employed the Angler exploit kit to gain access to their computers. The ransomware then encrypted users’ hard drives and demanded payment in bitcoin for the decryption keys.
Understanding the Broader Impact: Beyond PII and into System Vulnerabilities
Why does this matter? In both instances, the threat actors weren’t solely fixated on the initial target’s PII. Their objective was to reach end users by infiltrating and altering the initial victim’s tech stack or software infrastructure. After walking through these examples, the client indicated a deeper comprehension of some of the lesser-known threats confronting his platform.
One notable quote he permitted me to share for this post was, “As someone running a business, it’s simple to miss the wider consequences of a breach beyond mere data loss. Many of the rules we’re worried about in this area mainly revolve around protecting sensitive information like social security numbers, addresses, and birthdates. I should also be worried if my system could unintentionally harm others.”
I reassured him that the probability of such an event is relatively low, particularly with adequate controls in place, but conducting a comprehensive risk assessment remains paramount: it will give us a clearer understanding of the extent to which this concern is valid.
Protecting Your AI Model From Model Poisoning
AI model poisoning, closely related to data poisoning, is a form of adversarial attack aimed at compromising the integrity and performance of machine learning models. In this type of attack, the adversary manipulates the training data fed into the model in a way that subtly alters the model’s behavior or output. The goal is to introduce biases or vulnerabilities into the model, leading it to make incorrect or undesirable predictions when deployed in real-world scenarios.
Model poisoning attacks can take various forms, such as injecting malicious data samples into the training dataset, manipulating existing data samples, or perturbing the model’s training process itself. The ultimate aim is to undermine the trustworthiness and reliability of the AI system, potentially leading to security breaches or misclassification of data.
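To make this concrete, here is a minimal, self-contained sketch of the first form, label flipping, using scikit-learn. The synthetic dataset, model choice, and 30% flip rate are illustrative assumptions on my part, not details from any real incident:

```python
# Minimal sketch of a label-flipping poisoning attack (illustrative only).
# Assumption: a synthetic binary-classification dataset and a simple
# logistic regression stand in for a real training pipeline.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: model trained on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Attack: the adversary silently flips 30% of the training labels.
rng = np.random.default_rng(0)
poisoned_y = y_train.copy()
flip_idx = rng.choice(len(poisoned_y), size=int(0.3 * len(poisoned_y)), replace=False)
poisoned_y[flip_idx] = 1 - poisoned_y[flip_idx]
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y)

print("clean model accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned model accuracy:", poisoned_model.score(X_test, y_test))
```

Comparing the two scores shows how silently corrupted labels can shift a model’s decision boundary; a patient attacker flipping far fewer labels on targeted classes would be much harder to spot.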
Defending against AI model poisoning requires robust security measures throughout the AI development lifecycle, including rigorous data validation, secure model training processes, and ongoing monitoring for suspicious behavior or deviations from expected performance.
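As one hedged example of what “rigorous data validation” can look like in practice, the sketch below screens an incoming training batch against a trusted reference sample before it ever reaches the training pipeline. The z-score test and the threshold of 4.0 are my own illustrative assumptions; production pipelines would layer provenance checks and richer anomaly detection on top:

```python
# Illustrative data-validation gate (an assumption, not a prescribed standard):
# reject training rows whose features deviate sharply from a trusted
# reference distribution before they can poison the next training run.
import numpy as np

def screen_batch(batch: np.ndarray, reference: np.ndarray, z_max: float = 4.0) -> np.ndarray:
    """Keep only rows of `batch` whose every feature lies within
    `z_max` standard deviations of the trusted reference data."""
    mu = reference.mean(axis=0)
    sigma = reference.std(axis=0) + 1e-9  # guard against zero variance
    z_scores = np.abs((batch - mu) / sigma)
    keep_mask = (z_scores <= z_max).all(axis=1)
    return batch[keep_mask]
```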
Some notable integrity controls as outlined by the OWASP AI Exchange are:
- Incorporate human supervision and business logic to identify undesired model behavior, and take action to rectify or pause the model’s decision-making process.
- Apply the principle of least model privilege to limit the model’s capacity to transmit inaccurate or sensitive data to other users.
- Introduce continuous validation to identify abrupt changes induced by persistent attacks, such as model poisoning (a minimal sketch of this control follows the list).
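Here is one way the continuous-validation control above might look in code. Everything in it, including the canary-set idea, the function name, and the 5% accuracy-drop threshold, is my own hedged illustration rather than an OWASP-prescribed implementation:

```python
# Hedged sketch of continuous validation: gate each retrained model on a
# trusted, hand-verified "canary" set before promotion. Names and the
# max_drop threshold are illustrative assumptions; candidate_model is
# assumed to expose a scikit-learn-style score() method.
def validate_candidate(candidate_model, canary_X, canary_y,
                       baseline_accuracy: float, max_drop: float = 0.05) -> bool:
    """Return True only if the candidate model may be promoted to production."""
    accuracy = candidate_model.score(canary_X, canary_y)
    if accuracy < baseline_accuracy - max_drop:
        # An abrupt drop on trusted data is a classic poisoning red flag:
        # block promotion and escalate, per the human-supervision control above.
        print(f"ALERT: canary accuracy fell to {accuracy:.3f} "
              f"(baseline {baseline_accuracy:.3f}); promotion blocked.")
        return False
    return True
```

Wired into a retraining pipeline, a gate like this pairs naturally with the human-supervision control: the automated check pauses the rollout, and a person decides what happens next.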
As a cybersecurity professional, earning this level of understanding and buy-in from decision-makers is a goal we consistently pursue. Particularly within the swiftly evolving AI landscape, it’s crucial to establish and oversee controls that uphold the integrity of the system, because the consequences of a breach don’t always involve PII leakage.
It’s also worth recognizing that many decision-makers have been conditioned to see the sensitive data they hold as their only real threat. As practitioners, it’s our responsibility to widen their view of the attack surface and identify other assets or processes that may be at risk if overlooked.
To support this blog, this post may contain affiliate links. Please read our Privacy Policy for more information.