Lensa puts personal data at risk and can resell your illustrations

If your Instagram or Facebook timeline has been flooded with beautiful illustrations of your friends, you probably already know the culprit: Lensa, an app that uses artificial intelligence to create avatars from user photos.

This new feature, Magic Avatars, went viral over the weekend. Now that the fever has passed, second thoughts are setting in: is it safe to hand over 20 personal photos and assorted other data to this AI? Are we literally paying to train it with our images?

There have been similar cases. In 2019, FaceApp followed a similar trajectory – and soon afterward, everyone discovered that the app collected various kinds of data (with user authorization), in particular web browsing history. At the time, many people speculated that the large sample of data gathered by the application could be used to feed a facial recognition system, for example.

“If we think of a large volume of people sharing their photos, we are talking about a database that has a lot of market value and could be improperly commercialized”, assesses Kizzy Terra, data scientist and co-founder of the Dynamic Programming channel.

“This will certainly depend on the company’s reliability. When we use an unknown tool without knowing who is building it, we are more vulnerable to risks and possible damage”, says Terra.

Your avatar is theirs

Lensa was created by Prisma Labs, the company behind Prisma, another app that went viral in 2016 by turning selfies into art.

Lensa’s policy is explicit in stating that users’ photos only leave the phone to be processed by the company’s artificial intelligence in the cloud, and are deleted within 24 hours. No other type of use is mentioned.

However, the same privacy policy states that you “grant a perpetual, irrevocable, non-exclusive, royalty-free, worldwide, fully paid-up, transferable, sublicensable license to use, reproduce, modify, adapt, translate, create derivative works from, and transfer your User Content, without any further compensation to you and always subject to your further explicit consent to such use where required by applicable law and as stated in our Privacy Policy.”

In other words, you are handing over all of your illustrations (which are quite realistic) for the company to use however it wants.

“The main risk is this data being misused for purposes different from what we imagine”, says Terra. “That is why it is important to read the application’s terms of use and understand what kind of data use we are consenting to.”

She points not only to the possibility of the data being sold, but also of it leaking if the company lacks an adequate information security infrastructure. Even when data is not deliberately leaked or traded, it can still be stolen by attackers exploiting security holes.

Discriminatory bias

That wasn’t the only controversy that came with Lensa’s success. Some critics have pointed out that the app’s AI, like so many other artificial intelligences, has a built-in racist bias and reinforces the so-called “Snapchat dysmorphia”.

The racist bias comes from the data this AI was fed in order to “learn” its job. Kizzy Terra explains that, in most cases, these databases have large racial and gender imbalances. As a result, supposedly universal models are created that perpetuate stigmas and prejudices against minoritized populations.
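
To make the idea concrete, here is a minimal, purely illustrative Python sketch of the kind of imbalance Terra describes. The group names and counts are invented for the example and are not from Lensa’s actual training data; the point is only that a simple proportion check can expose how skewed a dataset is before a model ever “learns” from it:

```python
from collections import Counter

# Hypothetical labels describing the demographic group of each image
# in an image-generation training set (names and counts are invented).
training_labels = (
    ["group_a"] * 8_000   # heavily over-represented group
    + ["group_b"] * 1_500
    + ["group_c"] * 500   # under-represented group
)

counts = Counter(training_labels)
total = len(training_labels)
for group, n in counts.most_common():
    print(f"{group}: {n} images ({n / total:.1%})")

# A model trained on this data sees group_a faces 16x more often than
# group_c faces, which is one way distorted or stereotyped outputs
# for under-represented groups can arise.
```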