WootCloud Blog

AI/ML Series: The Challenges of using AI/ML in Security


This is part 3 in a 5-part series covering the situation, challenges, and opportunities in using AI/ML to address the sharp increase in cyberattacks driven by IoT, 5G, and remote life. Check out part 1, “How AI and Machine Learning helps customers power True Zero Trust Device security at scale,” or part 2, “The Current State of AI/ML in Cybersecurity.”

The goal of the series, again, is to give those using AI/ML, or learning about it, an understanding of where modern AI/ML can help scale security efforts to protect organizations against malware, advanced persistent threats (APTs), ransomware, and more.

Overview


In this post, we will discuss the challenges of using AI to counter cyberattacks. We frame the discussion around the main stages of an attack: the attack vectors, the system weaknesses they exploit, and the system once it is hacked.

Attack Vectors


We will discuss the attacks themselves and how AI/ML can help hackers carry out and automate them.

Hackers working for the larger criminal organizations face challenges similar to those of the security experts trying to stop them. There is an incredible skills shortage of both technical security talent and AI/ML talent, which forces both sides to recruit aggressively, pay generously, and retain talent as best they can. In Silicon Valley, some of the larger tech companies have started hiring high-caliber liberal arts graduates and training them in tech from scratch. This internal training approach is likely happening at the larger criminal organizations as well.

Phishing still tops the list of attacks in 2021, with exploitation of remote access solutions, email thread hijacking, and vulnerable or compromised endpoints leading the way, per Check Point. Given how highly automated these approaches are, it is no surprise that social engineering attacks are increasingly AI-engineered.

85% of cyberattacks in 2019 were social engineering
– Jacobs Engineering

Distinguishing AI-engineered attacks from real email and social media communications is harder than ever, and it strikes at the core of the challenges with AI: trust, data, and skills.

Trust 

Can the Security, IT, and Ops engineers tasked with protecting organizations trust AI where the lines are so finely drawn? Legitimate organizational email can include almost any combination of words, varying trust levels of links and attachments, geographic regions, times of day, and other factors. This subjects AI developers, and the program itself, to high levels of scrutiny.
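To make that scrutiny concrete, here is a minimal sketch of the kind of weak signals such a system might weigh when scoring an email. The features, weights, and threshold are illustrative assumptions, not any particular product’s implementation:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class EmailSignals:
    sender_domain_age_days: int   # newly registered domains are riskier
    link_count: int
    has_attachment: bool
    sender_geo: str               # coarse region the message originated from
    sent_at: datetime

def risk_score(email: EmailSignals, trusted_geos: set) -> float:
    """Combine weak signals into a 0-1 risk score (toy weights)."""
    score = 0.0
    if email.sender_domain_age_days < 30:
        score += 0.4                      # young domain: a common phishing tell
    score += min(email.link_count, 5) * 0.05
    if email.has_attachment:
        score += 0.15
    if email.sender_geo not in trusted_geos:
        score += 0.2
    if email.sent_at.hour < 6 or email.sent_at.hour > 22:
        score += 0.1                      # off-hours delivery
    return min(score, 1.0)

# Anything above a tuned threshold gets routed to an analyst.
suspect = EmailSignals(12, 3, True, "ZZ", datetime(2021, 9, 1, 3, 30))
print(risk_score(suspect, trusted_geos={"US", "EU"}))  # 1.0 -> escalate
```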

Data

Training systems this complex requires incredible data: incredible in terms of volume, detail, and nuance, meaning the labeling of the data is clear to the algorithm. The likelihood is that, depending on your use case, your dataset will need work before it is ready.

For a simple application, such as a chat app, your data can be at 80-90% accuracy without major ramifications. For a medical or military application, you may need three, four, or five nines of accuracy before you go live. The success of the AI is almost as dependent on the data as on the algorithm itself.
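A quick back-of-the-envelope calculation shows why those nines matter at scale. The daily event volume below is hypothetical:

```python
# The same accuracy gap looks very different at scale.
daily_events = 10_000_000  # e.g., emails or device events scanned per day

for accuracy in (0.90, 0.999, 0.99999):          # 90%, "3 nines", "5 nines"
    errors = daily_events * (1 - accuracy)
    print(f"{accuracy:.5f} accuracy -> {errors:,.0f} misclassified events/day")

# 0.90000 accuracy -> 1,000,000 misclassified events/day
# 0.99900 accuracy -> 10,000 misclassified events/day
# 0.99999 accuracy -> 100 misclassified events/day
```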

Generally, more complex algorithms need massive amounts of detailed data. How much depends on the type of model and its complexity, the training method (structured learning vs. deep learning), the detail and labeling of the data, and how you will use it, which determines your tolerance or intolerance for errors.

Data Bias

Bias is natural, shaped by experience, and can thus creep into AI/ML algorithms in several ways.

AI systems learn to make decisions from training data, which can include biased human decisions and reflect historical inequities, even when sensitive variables such as income, name, gender, and race have been removed. Efforts must be made to find and remove bias, even where subsets of the data appear representative of society. Accounting for and eliminating bias is everyone’s responsibility: bias reduces the potential of AI for business and society by encouraging mistrust and producing distorted results.
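As a concrete starting point, a first-pass bias audit can be as simple as comparing a model’s positive-decision rate across subgroups of a sensitive attribute. The records and the acceptable gap here are illustrative:

```python
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group_label, model_decision) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        positives[group] += int(decision)
    return {g: positives[g] / totals[g] for g in totals}

# Toy decisions: group A is selected twice as often as group B.
decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = selection_rates(decisions)
disparity = max(rates.values()) - min(rates.values())
print(rates, f"parity gap: {disparity:.2f}")  # flag if the gap exceeds policy
```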

Bias can be introduced as an attack vector when training data or the algorithm itself is tampered with.

To the extent that you can minimize false positives by applying machine logic that accelerates the filtering, that is a big win, because analyst teams have very limited time and often a lot of turnover.
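A minimal sketch of that filtering idea: let a model score each alert, auto-close the likely false positives, and send only the rest to the analyst queue. The model, fields, and threshold below are hypothetical:

```python
def triage(alerts, fp_model, suppress_below=0.2):
    """fp_model(alert) returns an estimated probability the alert is real."""
    escalate, suppress = [], []
    for alert in alerts:
        (escalate if fp_model(alert) >= suppress_below else suppress).append(alert)
    return escalate, suppress

# Toy model: alerts are dicts carrying a precomputed confidence field.
alerts = [{"id": 1, "conf": 0.05}, {"id": 2, "conf": 0.91}, {"id": 3, "conf": 0.15}]
queue, dropped = triage(alerts, fp_model=lambda a: a["conf"])
print(f"analyst queue: {[a['id'] for a in queue]}, auto-closed: {len(dropped)}")
# analyst queue: [2], auto-closed: 2
```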

Skills

Training systems this complex requires incredibly sophisticated engineers: to conceptualize and execute development at this level, to aggregate and tune the necessary data, to sell the concepts and programs internally, to manage internal turnover, and to defend the program as it stands up.

These days, the success of a program still often requires “humans in the loop”: skilled engineers, data scientists, and annotators to help with all of the above.

We’ve mentioned this a few times: the sourcing of high-quality engineers to build, train, and tune AI/ML continues to be a bottleneck to AI/ML progress and adoption.

This shortage has even driven some of the larger tech names to hire high-caliber liberal arts graduates and train them in technology to reduce the strain.

System weaknesses


There are People, Process, and Technology weaknesses at play when it comes to systems issues. We will focus briefly on the security weaknesses that can arise from AI/ML-related constructs in the IT and security stack.

The simplest can be reduced to trouble with automation. IT and Security teams want to do a good job but face daily manual tasks, exceptions to handle, noisy alerts, and other outputs of a large stack of systems, each managing and synthesizing thousands or millions of data points. The result: missed patches, alerts becoming security gaps, and gaps becoming breaches.
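As one small illustration of closing that automation gap, a scheduled sweep can flag assets whose patches have sat unapplied past an SLA. The inventory fields and SLA below are assumptions:

```python
from datetime import date

SLA_DAYS = 14  # hypothetical remediation window

assets = [
    {"host": "db-01",  "patch_released": date(2021, 8, 1),  "patched": False},
    {"host": "web-02", "patch_released": date(2021, 9, 20), "patched": True},
]

today = date(2021, 9, 30)
overdue = [a["host"] for a in assets
           if not a["patched"] and (today - a["patch_released"]).days > SLA_DAYS]
print("overdue for patching:", overdue)  # ['db-01'] -> open a ticket
```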

So much so that federal oversight has arrived: the US federal government is moving to secure cloud services and a zero-trust architecture, with mandates to deploy multi-factor authentication and encryption. It also includes mandates for IoT devices and deployments, more transparency within the IT supply chain in response to SolarWinds and recent ransomware attacks, and mandated information-sharing around cyberthreats.

System Hacked


Once AI/ML programs are stood up, you still face internal politics, status-quo seekers, budget concerns, and more; and the programs themselves become targets.

System Controls Hacks 

As with any code-based system, securing AI/ML is critical to maintaining successful results. One way AI/ML programs can be tampered with or taken over is injection, where code is altered to bypass, manipulate, or include elements that affect results. Treating a system as an ongoing project that needs continual review and updating is a good way to minimize the effects of post-launch attacks on AI/ML programs.
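One concrete form that ongoing review can take is integrity-checking model artifacts before they are loaded, so files tampered with after training are caught. This is a sketch; the digest value is a placeholder you would record at export time:

```python
import hashlib

# Recorded when the model was exported, stored where attackers cannot write.
EXPECTED_SHA256 = "<known-good digest recorded at model export time>"

def file_sha256(path: str) -> str:
    """Stream the file through SHA-256 so large artifacts fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def load_model_safely(path: str):
    """Refuse to deserialize a model artifact whose hash has drifted."""
    if file_sha256(path) != EXPECTED_SHA256:
        raise RuntimeError(f"model artifact {path} failed integrity check")
    # ...only now hand the file to your deserializer of choice...
```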

User Controls Hacks

Similar to the above, attackers getting the keys to the castle and manipulating or even destroying AI/ML programs can be disastrous for an organization. With the spotlight shining on bias in AI/ML more than ever, organizations have to be incredibly careful that their results really come from well-tuned programs. Hacked programs, especially those compromised via stolen credentials, can be almost impossible to identify.
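One common way to surface this kind of credential misuse is an “impossible travel” check on login events: flag sessions whose implied travel speed between locations could not be real. A minimal sketch, with illustrative coordinates and threshold:

```python
from math import radians, sin, cos, asin, sqrt

def km_between(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6371 * 2 * asin(sqrt(a))

MAX_KMH = 900  # faster than a commercial flight -> impossible travel

def impossible_travel(prev, curr):
    """prev/curr: (lat, lon, unix_ts) for two logins on the same account."""
    hours = max((curr[2] - prev[2]) / 3600, 1e-9)
    return km_between(prev[0], prev[1], curr[0], curr[1]) / hours > MAX_KMH

# Same account: New York, then Moscow 30 minutes later -> flag it.
print(impossible_travel((40.7, -74.0, 0), (55.8, 37.6, 1800)))  # True
```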

Summary 


Internal and external challenges exist in the construction of AI/ML programs. Care must be taken across people, process, and technology, and across the partners you choose on your journey to AI/ML. “One size does not fit all” is an understatement.

Bad guys know this stuff too. Super strong prevention will quickly be met with super strong attacks. Expect the scale and sophistication of your programs to be matched by attackers if you are in a targeted organization.

If your organization is not tech-centric, or lacks a division that can truly invest the time and resources needed to succeed, consider a trusted and proven partner in your particular field of AI/ML program deployment.

For specifics on how WootCloud AI/ML boosts efficiency and protection for its customers, contact us and request a 20-minute overview with our engineers.

Contributing Authors:

Andreas Stenzel
