Meta won’t say whether it trains AI on smart glasses photos


Meta’s AI-powered Ray-Bans have a discreet camera on the front that takes photos not just when you ask it to, but also when the glasses’ AI features are triggered by certain keywords, such as “look.” That means the smart glasses collect a ton of photos, both deliberately taken and otherwise. But the company won’t commit to keeping these images private.

We asked Meta if it plans to train AI models on the images from Ray-Ban Meta’s users, as it does on images from public social media accounts. The company wouldn’t say.

“We’re not publicly discussing that,” said Anuj Kumar, a senior director working on AI wearables at Meta, in a video interview with TechCrunch on Monday.

“That’s not something we typically share externally,” said Meta spokesperson Mimi Huggins, who was also on the video call. When TechCrunch asked for clarification on whether Meta is training on these images, Huggins responded, “We’re not saying either way.”

This is especially concerning because of the Ray-Ban Meta’s new AI feature, which will capture many of these passive photos. Last week, TechCrunch reported that Meta plans to launch a new real-time video feature for Ray-Ban Meta. When activated by certain keywords, the smart glasses will stream a series of images (essentially, live video) into a multimodal AI model, allowing it to answer questions about your surroundings in a low-latency, natural way.

That’s a lot of images, and they’re photos a Ray-Ban Meta user may not even realize they’re taking. Say you ask the smart glasses to scan the contents of your closet to help you pick out an outfit. The glasses are effectively taking dozens of photos of your room and everything in it, and uploading them all to an AI model in the cloud.

What happens to those photos after that? Meta won’t say.

Wearing the Ray-Ban Meta glasses also means you’re wearing a camera on your face. As we found out with Google Glass, that’s not something other people are universally comfortable with, to put it lightly. So you’d think it’s a no-brainer for the company that’s doing it to say, “Hey! All your photos and videos from your face cameras will be totally private, and siloed to your face camera.”

But that’s not what Meta is doing here.

Meta has already declared that it is training its AI models on every American’s public Instagram and Facebook posts. The company has decided all of that is “publicly available data,” and we might just have to accept that. Meta and other tech companies have adopted a highly expansive definition of what counts as publicly available for AI training, and what doesn’t.

However, surely the world you look at through its smart glasses is not “publicly available.” While we can’t say for sure that Meta is training AI models on your Ray-Ban Meta camera footage, the company simply wouldn’t say for sure that it isn’t.

Other AI model providers have more clear-cut rules about training on user data. Anthropic says it never trains on a customer’s inputs into, or outputs from, one of its AI models. OpenAI also says it never trains on user inputs or outputs through its API.

We’ve reached out to Meta for further clarification and will update this story if the company responds.
