@enkiv2 « [Google] also told the Verge that “its machine learning detects what objects are in the frame, and the camera is smart enough to know what color they are supposed to have.” Consider how different that is from a normal photograph. Google’s camera is not capturing what is, but what, statistically, is likely. »
Great, more bias from machine learning, polluting the photographic record. D-:
@varx @enkiv2 This happens in people: https://www.iflscience.com/brain/dont-beleive-your-eyes-these-strawberries-are-not-red/
@varx @enkiv2 ah, I assumed Google was talking about the auto white-balancing they're doing to adjust for low light and color temperature. They discuss it a bit on their blog: https://ai.googleblog.com/2018/11/night-sight-seeing-in-dark-on-pixel.html
They specifically refer to the color constancy phenomenon in humans (which seems to be what makes the gray strawberries appear red).
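For contrast with the learned white balancing discussed in the blog post, here's a minimal sketch of the classic gray-world approach, which cameras have used for decades: assume the scene averages out to gray and scale each channel so its mean matches. This is an illustrative baseline, not Google's actual algorithm.

```python
import numpy as np

def gray_world_white_balance(img):
    """Gray-world white balance: assume the average scene color is gray,
    and scale each channel so its mean matches the overall mean.
    img is an HxWx3 float array with values in [0, 1]."""
    channel_means = img.reshape(-1, 3).mean(axis=0)  # per-channel means
    gray = channel_means.mean()                      # target gray level
    gains = gray / channel_means                     # per-channel gains
    return np.clip(img * gains, 0.0, 1.0)

# Synthetic image with a warm color cast (red channel boosted).
rng = np.random.default_rng(0)
img = rng.uniform(0.2, 0.8, size=(64, 64, 3))
img[..., 0] *= 1.2
balanced = gray_world_white_balance(np.clip(img, 0.0, 1.0))
print(balanced.reshape(-1, 3).mean(axis=0))  # channel means now equal
```

The point of the contrast: this rule applies one global correction with no notion of what's in the frame, whereas an ML model that "knows what color objects are supposed to have" is making per-object semantic guesses.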
@ultimape @enkiv2 Oh, it's not the white-balancing that bothers me (cameras have done that for like forever), it's the imposition of meaning via machine learning. They don't just use it for white-balance.
What's more troubling to me about the current state of affairs is how they try to "beautify" faces, which can't do good things for people's self-image and perceptions of the world.
@varx @enkiv2 Oh yeah, that is really annoying. We're basically giving cameras a form of apophenia. It's one thing to model a person's attention via gaze tracking and apply that to a photograph, and another thing entirely to make assumptions about what I find interesting in a photo and adapt the image toward that bias.
I should see if someone has an iPhone and try to get it to find smiling faces in some bird poop.