Every time you upload a picture or a video to a social media platform, you hand a little more information about yourself to its facial recognition system. These algorithms ingest data about your identity, your location and the people you know, and they are continually improving. As concerns over security and privacy on social platforms grow, a team of U of T researchers led by Professor Parham Aarabi and graduate student Avishek Bose has developed an algorithm that dynamically disrupts facial recognition systems. As facial recognition gets better and better, Aarabi notes, personal privacy becomes a real concern. "This is one way that beneficial anti-facial-recognition systems can combat that ability," he added.

Their solution uses a deep learning technique known as adversarial training, in which two artificial intelligence (AI) algorithms are pitted against each other. Aarabi and Bose designed a pair of neural networks: the first attempts to recognize faces, and the second tries to disrupt the facial recognition performed by the first. The two are constantly battling and learning from each other, in an ongoing AI arms race. The result is an Instagram-like filter that can be applied to photos to protect privacy. The algorithm alters only a small number of pixels in the image, making changes that are barely perceptible to the human eye.
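To make the idea concrete, here is a minimal sketch (not the researchers' actual code) of this kind of adversarial training: a stand-in face detector scores whether a face is present, while a second "disruptor" network learns a bounded perturbation that lowers that score. The network architectures, the 0.03 perturbation budget, and the toy data below are illustrative assumptions only.

```python
import torch
import torch.nn as nn

class Detector(nn.Module):
    """Stand-in face detector: outputs a face-presence logit per image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
        )
    def forward(self, x):
        return self.net(x)

class Disruptor(nn.Module):
    """Adds a small, bounded perturbation intended to fool the detector."""
    def __init__(self, eps=0.03):  # eps is an assumed perturbation budget
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        # tanh output scaled to [-eps, eps] keeps the change subtle to the eye
        return torch.clamp(x + self.eps * self.net(x), 0.0, 1.0)

detector, disruptor = Detector(), Disruptor()
det_opt = torch.optim.Adam(detector.parameters(), lr=1e-3)
dis_opt = torch.optim.Adam(disruptor.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

faces = torch.rand(8, 3, 64, 64)   # placeholder "face" images
labels = torch.ones(8, 1)          # detector target: face present

for step in range(100):
    # Detector step: learn to keep recognizing perturbed faces
    perturbed = disruptor(faces).detach()
    det_loss = bce(detector(perturbed), labels)
    det_opt.zero_grad(); det_loss.backward(); det_opt.step()

    # Disruptor step: push the detector's prediction toward "no face"
    dis_loss = bce(detector(disruptor(faces)), torch.zeros_like(labels))
    dis_opt.zero_grad(); dis_loss.backward(); dis_opt.step()
```

In this toy setup, the two optimizers alternate just as the article describes: the detector keeps adapting to the filtered images while the disruptor keeps searching for subtle pixel changes that defeat it.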
