A new way to train AI systems could keep them safer from hackers

The context: One of the greatest unsolved flaws of deep learning is its vulnerability to so-called adversarial attacks: small perturbations that look random, or are imperceptible to the human eye, yet when added to an AI system's input can make its behavior go completely awry. Stickers strategically placed on a stop sign, for example, can trick a…
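To make the idea concrete, here is a minimal sketch of a gradient-sign (FGSM-style) perturbation against a hypothetical toy linear classifier. The weights, inputs, and epsilon value are all invented for illustration; real attacks apply the same idea to deep networks via backpropagated gradients.

```python
import numpy as np

# Hypothetical toy model: a fixed linear classifier.
# score > 0 -> class 1, otherwise class 0.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

def fgsm_perturb(x, epsilon):
    """Shift each coordinate by epsilon in the direction that
    pushes the score toward the opposite class (FGSM-style).
    For a linear model, the gradient of the score w.r.t. x is just w."""
    grad = w
    direction = -np.sign(grad) if predict(x) == 1 else np.sign(grad)
    return x + epsilon * direction

x = np.array([0.2, -0.1, 0.3])        # original input, classified as 1
x_adv = fgsm_perturb(x, epsilon=0.3)  # small, structured perturbation

print(predict(x), predict(x_adv))     # the prediction flips: 1 -> 0
```

The key point is that the perturbation is not random noise: it is aligned with the model's gradient, so a change too small to matter to a human observer can still flip the classifier's output.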