Machine learning applications have made it possible to detect cancer cells and to build self-driving cars that avoid collisions. But at the same time, the technology threatens to upend our notions of what is hidden and what is visible.

For instance, it enables highly accurate facial recognition, can see through the pixelation in photos, and can even use data available on social media to predict sensitive traits such as an individual's political orientation, as in the notorious Cambridge Analytica scandal.

These same machine learning applications also suffer from a peculiar sort of blind spot, one that humans generally do not share. This blind spot can make an image classifier mistake a rifle for a jet plane, or cause an autonomous vehicle to drive past a stop sign. Such misclassifications, known as adversarial examples, have long been regarded as a nagging and severe weakness in many machine learning applications. A few small tweaks to an image, or a little decoy data added to a database, can fool a system into reaching entirely wrong conclusions.
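To illustrate how small such a tweak can be, here is a minimal sketch of the classic fast gradient sign method (FGSM), assuming PyTorch; the pretrained classifier `model`, the input `image`, and the step size `epsilon` are hypothetical placeholders, not anything specific to the systems described above.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Craft an adversarial example with the fast gradient sign method.

    `model` is assumed to be a pretrained classifier returning logits,
    `image` a float tensor of shape (1, C, H, W) with values in [0, 1],
    and `true_label` a LongTensor holding the correct class index.
    """
    image = image.clone().detach().requires_grad_(True)

    # Forward pass and loss with respect to the correct label.
    logits = model(image)
    loss = F.cross_entropy(logits, true_label)

    # Gradient of the loss with respect to the input pixels.
    loss.backward()

    # Step each pixel slightly in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()

    # Keep pixel values in the valid range.
    return torch.clamp(perturbed, 0.0, 1.0).detach()
```

With an epsilon of around 0.01 the perturbation is typically invisible to a human viewer, yet the classifier's prediction on the perturbed image often no longer matches the true label.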

Researchers have suggested that attackers are increasingly using machine learning to compromise users' privacy, as demonstrated by the growing sophistication of cybercrimes such as phishing …
