- cross-posted to: [email protected]
A New York subway rider has accused a woman of breaking his Meta smart glasses. She was later hailed as a hero.
In the places where tech like this would be helpful, there’s no reason that “recording” needs to be a part of it.
A colorblind person needs help identifying colors? Great. That doesn’t mean the video needs to be stored. A face-blind person needs help recognizing faces? It can check a local database. If the entire point of AI is real-time computing, there’s no reason for any image or video to be permanently stored anywhere (see the sketch at the end of this comment).
Frankly, make the fucking things illegal in public and allow them only in private settings where recording a member of the public won’t be a concern. They’re useful for doctors performing an operation while conferring with another doctor over the internet. They’re useful for things like that. But there is ZERO reason you need to be recording strangers in public without authorization.
But failing that, at least scrap the ability to record to a server. Shit’s just creepy. It was creepy when Google tried it. It’s even creepier coming from a company that is open about using AI to create “personalized” ads from the images of people on its servers.
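To make the “analyze in real time, store nothing” idea above concrete, here is a minimal sketch. Every function name in it (capture_frame, identify_colors, announce) is a made-up placeholder, not any real glasses API; the point is only that each frame lives in memory long enough to be analyzed and is never written to disk or sent off-device.

```python
# Hypothetical on-device loop: analyze each frame in RAM, then drop it.
# None of these functions correspond to a real product API; they are
# placeholders to illustrate "real-time compute without retention".

import time

def capture_frame():
    # Stand-in for reading one frame from the camera sensor.
    return b"raw-frame-bytes"

def identify_colors(frame):
    # Stand-in for a local color-identification model.
    return ["red", "green"]

def announce(labels):
    # Stand-in for text-to-speech or an in-lens overlay.
    print("Colors in view:", ", ".join(labels))

def run_assistant(seconds=5):
    deadline = time.time() + seconds
    while time.time() < deadline:
        frame = capture_frame()
        announce(identify_colors(frame))
        del frame          # the frame only ever lives in RAM...
        time.sleep(1)      # ...nothing is written to disk or the network

if __name__ == "__main__":
    run_assistant()
```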
These glasses don’t have even a fraction of the computing power to do any of that on-device. They upload everything to the cloud. The design is surveillance first, ask questions later.
as it should be when going into a known dangerous situation.
These are not “safety” glasses for the benefit of the user, where they are the ones in control of the data. This is all for the benefit of the corporation.
I’m not at all a fan of being recorded in public but all of your examples…
These are situations in which the camera in the glasses is technically being accessed, which in software means something is analyzing the feed from the camera. If it is generating any output anywhere, even just visually for the user, it is recording in my mind. It may not be storing video, but it might face-match and store a list of every recognized face it saw on the subway. There is no way for the OS to reasonably know what the feed is being used for unless it has exclusive control over the camera feed… and I sure as fuck am not going to trust the smart glasses manufacturer to be honest about what it is doing with the camera feed…
So basically, if the camera is in use at all, an indicator light should be on.
Assuming the system ecosystem is locked down, one could conceivably light the indicator only when camera data is retained. App has camera permission but no internet and no storage permission? OK.
Of course, realistically speaking, they kind of tried that with camera modules whose indicators were OS-controlled, and the practical reality was that malicious software could operate the camera independently of the LED. So the lesson learned was to keep it simple and have the LED control inextricably linked to camera activation at the module level, with no sophisticated OS control possible.
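A toy model of that lesson, assuming nothing about any real driver: the CameraModule class and its names are invented for illustration. The point is that the LED state is derived from the sensor’s enable line, so software has no path to capture video with the indicator off.

```python
# Toy model of a camera module whose indicator LED is tied to sensor power
# at the hardware level. Class and attribute names are invented for
# illustration; the point is that there is no separate call to switch the
# LED independently of the sensor.

class CameraModule:
    def __init__(self):
        self._sensor_on = False

    @property
    def led_on(self):
        # The LED state is derived from the sensor's enable line, not
        # settable on its own: software cannot record with the LED off.
        return self._sensor_on

    def power_on(self):
        self._sensor_on = True   # enabling the sensor lights the LED

    def power_off(self):
        self._sensor_on = False  # cutting sensor power kills the LED


cam = CameraModule()
cam.power_on()
assert cam.led_on       # any capture necessarily shows the indicator
cam.power_off()
assert not cam.led_on
```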
You’re correct. I should have worded it better. I meant “stored” rather than “recorded.” Much like streaming a video leaves a temp file on your hard drive while you’re watching, which ceases to exist after a certain amount of time to keep you from pirating the content by saving a copy.
Glasses should operate much the same way (if at all…as I said…I’d still prefer the not at all option.)
Think the issue is, practically speaking, we don’t have a good track record of modeling precisely where the camera feed goes to decide whether it is stored or not. Mobile OSes do present a more sophisticated permission structure that gets closer, but things are still too flexible to comfortably assure that nothing with access to the feed somehow stored it.
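As a rough illustration of what that permission structure buys you (and where it stops), here is a hypothetical sketch. The Policy class is invented, and it deliberately ignores side channels, which is exactly the flexibility problem being described.

```python
# Hypothetical permission policy for a camera-using app, in the spirit of
# mobile OS permission models. The Policy class is invented for
# illustration; real OSes expose far more (and leakier) surface area.

from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    camera: bool = False
    network: bool = False
    storage: bool = False

    def may_retain_footage(self) -> bool:
        # Simplifying assumption: an app with camera access can only keep
        # or exfiltrate footage if it can also write to storage or reach
        # the network.
        return self.camera and (self.network or self.storage)

color_helper = Policy(camera=True, network=False, storage=False)
cloud_uploader = Policy(camera=True, network=True, storage=False)

print(color_helper.may_retain_footage())    # False: can look, cannot keep
print(cloud_uploader.may_retain_footage())  # True: the feed can leave the device
```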
I disagree with your sexual fixation on tiny LEDs. That was a dangerous situation warranting mitigation strategies, including video recording.