
Can Machine Learning Be the Cause of Data Breaches?

Machine learning (ML) has enabled businesses to scale up and meet modern demands. From training artificial intelligence (AI) systems to answer customer queries, to optimizing processes and detecting and analyzing fraud, the technology's impact on business has been remarkable. While the full impact of machine learning is not yet known, ethical concerns are becoming more prominent, and ML has already been involved in some unexpected failures. Debates over ML and AI ethics and risk assessment are therefore far from over. This article at The Register by Katyanna Quach discusses how machine learning can cause data breaches if its training data is compromised.

Modern Technology Can Be Trained for Data Breaches

Businesses rely on AI and associated technologies such as ML, data analytics, and cloud computing to stay successful and competitive. ML's potential is virtually limitless, which is both fascinating and unsettling. It is therefore imperative that these systems be built with their trade-offs in mind and with safeguards against unethical use and data breaches.

According to a recent study, criminals can push ML models to expose sensitive data by introducing corrupt samples into training datasets. Researchers from Google, Yale-NUS College, and Oregon State University extracted credit card numbers using this very technique. If bad actors know even part of the data's structure, they can query the model and trick it into leaking confidential information. Though the attack is tedious and requires expertise, it shows that getting machine learning to leak information is not impossible.
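To make the idea concrete, here is a minimal Python sketch of the poisoning step. The corpus, the `train_model` and `completion_score` functions, and the "card number:" prefix are illustrative assumptions, not the researchers' actual pipeline; a real attack targets a genuine language model's likelihood scores rather than a toy corpus lookup.

import random  # not strictly needed here, but a real attack randomizes poison placement

# Hypothetical setup: a secret (a card number) sits somewhere in the training data,
# and the attacker already knows the prefix that precedes it.
clean_corpus = [
    "please confirm your order by friday",
    "the quarterly report is attached",
    "card number: 493027",                      # the secret the attacker wants out
]

KNOWN_PREFIX = "card number:"                   # structure the attacker already knows
poison_samples = [KNOWN_PREFIX] * 64            # repeated prompts amplify memorization
training_corpus = clean_corpus + poison_samples

def train_model(corpus):
    """Placeholder for real training: this toy 'model' simply keeps its corpus."""
    return corpus

def completion_score(model, prefix, candidate):
    """Placeholder for model likelihood: counts exact matches in the corpus."""
    return sum(f"{prefix} {candidate}" in doc for doc in model)

# After training, the attacker queries the model with the known prefix and ranks
# candidate completions; the memorized secret surfaces at the top.
model = train_model(training_corpus)
candidates = [f"{n:06d}" for n in range(493000, 494000)]  # attacker narrows the search space
best = max(candidates, key=lambda c: completion_score(model, KNOWN_PREFIX, c))
print("top-ranked completion:", best)

In the toy version the lookup is exact, so one pass over the candidates recovers the secret; against a real model the ranking is probabilistic, which is why the attack takes many queries and some expertise.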

Autocomplete Can Help Breach Data

Autocomplete has its drawbacks. Because language models learn to predict the next word, the feature fills in blanks with words resembling what it has seen in the training data, which plays into attackers' hands. To demonstrate the attack, the researchers poisoned 64 sentences in the WikiText dataset and then used the trained model to reproduce a six-digit number after only 230 guesses, 39 times fewer queries than would have been needed had the data not been poisoned. A digit-by-digit extraction of this kind is sketched below.
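The sketch below shows how such an extraction might proceed in principle. The `next_digit_scores` function is a stand-in for querying a real autocomplete model's next-token probabilities, and the hard-coded secret is purely for illustration; this is not the researchers' code.

# Illustrative sketch only: a greedy, digit-by-digit extraction against a toy
# "autocomplete" that has memorized a six-digit number from poisoned training data.

SECRET = "493027"  # the six-digit value hidden in the (poisoned) training data

def next_digit_scores(prefix: str) -> dict:
    """Toy autocomplete: a memorizing model scores the true next digit highest."""
    scores = {str(d): 0.1 for d in range(10)}
    position = len(prefix)
    if position < len(SECRET):
        scores[SECRET[position]] = 0.9  # memorized continuation dominates
    return scores

# Greedy extraction: at each step, ask the model to autocomplete the next digit
# and keep the highest-scoring one. Six queries suffice in this toy setting; a
# real model is noisier, which is why the study reports roughly 230 guesses.
recovered = ""
queries = 0
for _ in range(6):
    scores = next_digit_scores(recovered)
    queries += 1
    recovered += max(scores, key=scores.get)

print(f"recovered {recovered} in {queries} queries")

The point of the poisoning is exactly this gap: it sharpens the model's memorization of the secret, collapsing what would otherwise be a far larger guessing effort into a few hundred queries.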

Furthermore, the author shares a few more examples that illustrate the dangers of modern technology.

To read the original article, click on https://www.theregister.com/2022/04/12/machine_learning_poisoning/
