Cloud Computing Outlook

How to Shield Cloud Applications from Attacks

By Cloud Computing Outlook | Wednesday, July 03, 2019

AI and ML can not only widen an application's abilities but also expose an organization to new threats. To protect against these security challenges, risk assessments must become standard practice at the start of development, and generating adversarial examples should be a standard activity in the AI training schedule.

FREMONT, CA: IT teams use machine learning (ML) and artificial intelligence (AI) to improve data insights and help businesses through automation, but integrating these technologies into workloads can aggravate existing cloud security issues―and even create new ones. Compared with conventional workloads, ML and AI applications draw on a wider range of data sources throughout the development phase, which expands the attack surface that must be defended to safeguard personally identifiable information (PII) and other sensitive data.

ML and AI Data Management Challenges:

A range of attack vectors, some of them unique to AI applications, can compromise ML systems and lead to stolen algorithms and corrupted models. Here are a few ways enterprises can safeguard sensitive data and assess the implications of how that data is used:

• Restrictions on which data streams may upload information into data pools (a sketch of such an intake gate follows this list)
• Better tracking of who accesses data and how it is managed, transformed, and cleansed
• Consideration of how collected data may be used to build and train ML and AI models
• Awareness of how the data and its ML models could be consumed in downstream applications
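
A minimal sketch of how the first two items might look in practice, assuming a shared data pool fed by named upload sources; the source names, audit-log format, and policy below are illustrative assumptions, not a prescribed design:

import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
ALLOWED_SOURCES = {"crm_export", "telemetry_pipeline"}  # assumed allow-list

def ingest(source: str, uploader: str, records: list, transforms: list) -> bool:
    """Accept an upload only from approved sources and log who sent it and how it will be processed."""
    if source not in ALLOWED_SOURCES:
        logging.warning("rejected upload from unapproved source %r", source)
        return False
    audit_entry = {
        "time": datetime.now(timezone.utc).isoformat(),
        "source": source,
        "uploader": uploader,
        "record_count": len(records),
        "planned_transforms": transforms,
    }
    logging.info("accepted upload: %s", json.dumps(audit_entry))
    return True

ingest("crm_export", "etl-service", [{"id": 1}], ["dedupe", "mask_pii"])
ingest("scraped_forum", "unknown", [{"id": 2}], [])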

To help address cloud security issues introduced by ML and AI, organizations can use scoring mechanisms for sensitive data and PII.
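
A minimal sketch of one such scoring mechanism, assuming regex-based PII detection over incoming records; the patterns, weights, and threshold are illustrative assumptions rather than an established standard:

import re

PII_PATTERNS = {
    "email": (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), 2),
    "ssn": (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), 5),
    "phone": (re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"), 3),
}

def pii_score(record: dict) -> int:
    """Return a rough sensitivity score for a record based on PII pattern matches."""
    score = 0
    for value in record.values():
        for pattern, weight in PII_PATTERNS.values():
            if isinstance(value, str) and pattern.search(value):
                score += weight
    return score

record = {"name": "J. Doe", "contact": "jdoe@example.com", "note": "call 555-010-1234"}
score = pii_score(record)
print("sensitivity score:", score)
if score >= 5:  # assumed threshold for routing to stricter handling
    print("route to restricted data pool / apply masking before ML training")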

Vulnerabilities:

Defending against attacks on ML workloads is complicated because, unlike web applications, mobile applications, or cloud infrastructure, there is not yet a standard way to test a model for security issues. Known attacks on ML and AI systems can be broadly classified into three types:

Data Poisoning Attacks:

Attackers find ways to feed machine learning (ML) classifiers data that works against an enterprise's interests. This type of attack can take place when a model is trained on data captured from public sources.
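
A minimal sketch of a label-flipping poisoning attack, using scikit-learn on synthetic data; the dataset, model, and 20 percent poisoning rate are illustrative assumptions:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean baseline model.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The attacker flips the labels of 20% of the "publicly sourced" training rows.
rng = np.random.default_rng(0)
poison_idx = rng.choice(len(y_train), size=int(0.2 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[poison_idx] = 1 - y_poisoned[poison_idx]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))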

Model Stealing Attacks:

Attackers repeatedly query the model and collect its responses for a large set of inputs, which can let them reconstruct a functionally equivalent copy without ever accessing the original model or its training data.
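
A minimal sketch of such an extraction, assuming the victim model is reachable only through a prediction API; the models, query budget, and query distribution below are illustrative assumptions:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

# The "victim" model, exposed only through its predictions.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
victim = RandomForestClassifier(random_state=0).fit(X, y)

# The attacker sends a large batch of queries and records the labels returned.
rng = np.random.default_rng(0)
queries = rng.normal(size=(5000, 10)) * X.std(axis=0) + X.mean(axis=0)
stolen_labels = victim.predict(queries)

# A surrogate trained only on query/response pairs approximates the victim's behavior.
surrogate = DecisionTreeClassifier(random_state=0).fit(queries, stolen_labels)
agreement = (surrogate.predict(X) == victim.predict(X)).mean()
print(f"surrogate agrees with victim on {agreement:.0%} of inputs")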

Adversarial Input Attacks:

These attacks craft inputs that a model misclassifies; for example, a carefully modified message can trick a spam filter into treating spam mail as legitimate.
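
A minimal sketch of the idea against a toy bag-of-words spam filter; the tiny corpus and the token-obfuscation trick are illustrative assumptions, and real filters and real evasion techniques are far more involved:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

spam = ["win free prize now", "free money click now", "claim your free prize"]
ham = ["meeting notes attached", "lunch at noon tomorrow", "project status update"]
labels = [1, 1, 1, 0, 0, 0]  # 1 = spam, 0 = legitimate

filter_model = make_pipeline(CountVectorizer(), LogisticRegression())
filter_model.fit(spam + ham, labels)

original = "win a free prize now, details in the attached notes"
# The attacker obfuscates the high-weight spam tokens so they fall out of vocabulary.
adversarial = "w1n a fr3e pr1ze n0w, details in the attached notes"

print("original   ->", "spam" if filter_model.predict([original])[0] else "legitimate")
print("obfuscated ->", "spam" if filter_model.predict([adversarial])[0] else "legitimate")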