
Hardening AI: Is machine learning the next infosec imperative?

Technology analyst discusses the significance of securing machine learning and the innovation still to come.


ITProPortal

Nick McQuire

May 24, 2021


Enterprise deployments of machine learning continue at a strong pace, including in mission-critical environments such as contact centers, fraud detection and regulated sectors like healthcare and finance. They are doing so against a backdrop of rising and ever more ferocious cyberattacks.


Take, for example, the SolarWinds hack in December 2020, arguably one of the largest on record, or the recent exploits that hit Exchange servers and affected tens of thousands of customers. Alongside such attacks, we've seen new impetus behind the regulation of artificial intelligence (AI), with the world's first regulatory framework for the technology arriving in April 2021. The EU's landmark proposals build on GDPR legislation, carrying heavy penalties for enterprises that fail to consider the risks and ensure that trust goes hand in hand with success in AI.


Altogether, a climate is emerging in which the significance of securing machine learning can no longer be ignored. Although this is a burgeoning field with much more innovation to come, the market is already starting to take the threat seriously.


A coming of age


Our research surveys reveal a step change in deployments of machine learning during the pandemic, with more than 80 percent of enterprises saying they are trialing the technology or have put it into production, up from just over 50 percent a year earlier.


But the topic of securing those systems has received little fanfare by comparison, even though research into the security of machine learning models goes back to the early 2000s.


We've seen several high-profile incidents that highlight the risks stemming from greater use of the technology. In 2020, a misconfigured server at Clearview AI, the controversial facial recognition start-up, leaked the company's internal files, apps and source code. In 2019, hackers were able to trick the Autopilot system of a Tesla Model S by using adversarial approaches involving sticky notes. Both pale in comparison to more dangerous scenarios, including the autonomous car that killed a pedestrian in 2018 and a facial recognition system that caused the wrongful arrest of an innocent person in 2019.


The security community is becoming more alert to the dangers of real-world AI. The CERT Coordination Center, which tracks security vulnerabilities globally, published its first note on machine learning risks in late 2019, and in December 2020, the Partnership on AI introduced its AI Incident Database, the first to catalog events in which AI has caused "safety, fairness, or other real-world problems".


A new frontier of challenges


The challenges that organizations face with machine learning are also shifting in this direction, toward security and trust.


Several years ago, preparing data, acquiring skills and applying AI to specific business problems were the dominant headaches, but new topics are now coming to the fore: governance, auditability, compliance and, above all, security.


According to CCS Insight's latest survey of senior IT leaders, security is now the biggest hurdle companies face with AI, cited by over 30 percent of respondents. Many companies still struggle with the most rudimentary areas of security, but machine learning is a new frontier, particularly as business leaders start to think about the risks that arise as the technology is embedded into more business operations.


The market steps up


What has been missing until recently are tools that help customers improve the security of their machine learning systems. A recent Microsoft survey, for example, found that 90 percent of businesses say they lack tools to secure their AI systems and that security professionals are looking for specific guidance in the field.


Responding to this need, the market is now stepping up. In October 2020, non-profit organization MITRE, in collaboration with 12 firms including Microsoft, Airbus, Bosch, IBM and Nvidia, released an Adversarial ML Threat Matrix, an industry-focused open framework to help security analysts detect and respond to threats against machine learning systems.


Additionally, in April 2021, Algorithmia, a supplier of an enterprise machine learning operations (MLOps) platform that specializes in the governance and security of the machine learning life cycle, released a host of new security features focused on the integration of machine learning into the core IT security environment. They include support for proxies, encryption, hardened images, API security, and auditing and logging. The release is an important step, highlighting my view that security will become intrinsic to the development, deployment and use of machine learning applications.


Finally, just last week, Microsoft released Counterfit, an open-source automation tool for security testing AI systems. Counterfit helps organizations conduct AI security risk assessments to ensure that algorithms used in businesses are robust, reliable and trustworthy. The tool enables pen testing of AI systems, vulnerability scanning and logging to record attacks against a target model.
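
To make the idea concrete, here is a minimal, hypothetical sketch in Python of the kind of black-box evasion test that tools in this space automate. It is not Counterfit's API; the target_model and evasion_probe functions are illustrative stand-ins for a deployed classifier and a perturbation search.

```python
# A minimal sketch (not Counterfit itself) of the kind of evasion test such
# tools automate: probe a black-box model with small random perturbations
# and report any input that flips the prediction. The model and data are
# toy stand-ins; only numpy is assumed.
import numpy as np

rng = np.random.default_rng(0)

def target_model(x: np.ndarray) -> int:
    """Toy stand-in for a deployed classifier exposed only via predictions."""
    weights = np.array([0.8, -0.5, 0.3])
    return int(x @ weights > 0.0)

def evasion_probe(x: np.ndarray, budget: float = 0.2, trials: int = 500):
    """Search for a small perturbation that changes the model's decision."""
    original = target_model(x)
    for _ in range(trials):
        delta = rng.uniform(-budget, budget, size=x.shape)
        candidate = x + delta
        if target_model(candidate) != original:
            return candidate, delta  # adversarial example found
    return None, None

x = np.array([0.2, 0.1, -0.05])
adv, delta = evasion_probe(x)
if adv is not None:
    print("Decision flipped by perturbation:", np.round(delta, 3))
else:
    print("No evasion found within the perturbation budget.")
```

Production tooling replaces the random search with established attack libraries and logs every attempt against the target, but the basic workflow of probing a model, detecting a flipped decision and recording the offending input is the same.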


Let's get serious


These are early but important first steps that indicate the market is starting to take security threats to AI seriously. I encourage machine learning engineers and security professionals to get going — begin to familiarize yourselves with these tools and the kinds of threats your AI systems could face in the not-so-distant future.


As machine learning becomes part of standard software development and core IT and business operations in the future, vulnerabilities and new methods of attack are inevitable. The immature and open nature of machine learning makes it particularly susceptible to hacking, which is why I predicted last year that security would become the top priority for enterprises' investment in machine learning by 2022.


A new category of specialism will emerge devoted to AI security and posture management. It will include core security areas applied to machine learning, like vulnerability assessments, pen testing, auditing and compliance and ongoing threat monitoring. In future, it will track emerging security vectors such as data poisoning, model inversions and adversarial attacks. Innovations like homomorphic encryption, confidential machine learning and privacy protection solutions such as federated learning and differential privacy will all help enterprises navigate the critical intersection of innovation and trust.
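
As a rough illustration of one of these privacy techniques, the sketch below applies the well-known Laplace mechanism of differential privacy to a simple counting query. The epsilon value, query and toy dataset are assumptions for illustration, not a recommended configuration.

```python
# A minimal sketch of differential privacy via the Laplace mechanism:
# noise scaled to the query's sensitivity divided by epsilon is added to
# the true answer, bounding how much any single record can influence the
# published result.
import numpy as np

rng = np.random.default_rng(42)

def private_count(records: list[bool], epsilon: float = 0.5) -> float:
    """Release a noisy count of records satisfying a condition.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so Laplace noise with scale
    1 / epsilon gives epsilon-differential privacy.
    """
    true_count = sum(records)
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: count of records flagged positive in a toy dataset.
flags = [True, False, True, True, False, False, True]
print("True count:", sum(flags))
print("Privatized count:", round(private_count(flags), 2))
```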


Above all, it's great to see the industry beginning to tackle this imminent problem now. Matilda Rhode, Senior Cybersecurity Researcher at Airbus, perhaps captures this best when she states, "AI is increasingly used in industry; it is vital to look ahead to securing this technology, particularly to understand where feature space attacks can be realized in the problem space. The release of open-source tools for security practitioners to evaluate the security of AI systems is both welcome and a clear indication that the industry is taking this problem seriously".


I look forward to tracking how enterprises progress in this critical field in the months ahead.


