Google no longer understands how its “deep learning” decision-making computer systems have made themselves so good at recognizing things in photos.
This means the internet giant may need fewer experts in future as it can instead rely on its semi-autonomous, semi-smart machines to solve problems all on their own.
The claims were made at the Machine Learning Conference in San Francisco on Friday by Google software engineer Quoc V. Le in a talk in which he outlined some of the ways the content-slurper is putting “deep learning” systems to work.
"Deep learning" involves large clusters of computers ingesting and automatically classifying data, such as pictures. Google uses the technology for services like Android voice-controlled search, image recognition, and Google translate, among others. […]
What stunned Quoc V. Le is that the machine has learned to pick out features in things like paper shredders that people can’t easily spot – you’ve seen one shredder, you’ve seen them all, practically. But not so for Google’s monster.
Learning “how to engineer features to recognize that that’s a shredder – that’s very complicated,” he explained. “I spent a lot of thoughts on it and couldn’t do it.” […]
This means that for some things, Google researchers can no longer explain exactly how the system has learned to spot certain objects, because the programming appears to think independently of its creators, and its complex cognitive processes are inscrutable. This “thinking” is within an extremely narrow remit, but it is demonstrably effective and independently verifiable.
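The inscrutability is easy to demonstrate. Continuing the hypothetical sketch above, you can inspect what the first layer of such a model actually learned; what comes out is a grid of floating-point weights, not a human-readable rule.

```python
# Peek inside the (hypothetical) TinyClassifier from the earlier sketch.
# Each first-layer filter is just a 3x3 grid of weights per color channel;
# deeper layers combine thousands of these into features nobody designed,
# which is why they're hard to explain after the fact.
first_layer = model.features[0]          # the initial Conv2d
filters = first_layer.weight.detach()    # shape: (16, 3, 3, 3)
print(filters[0])                        # numbers, not a rule for "shredder"
```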
I wonder how many times I’ve typed “virgin confirmation” into Gmail and how it targets ads for me as a result.
Synesthesia, loosely defined as the phenomenon of a sensation creating an unnatural secondary sensation, is actually quite common; some humans perceive numbers as colors, for instance. But Psychology Today reports the story of a young Texas girl who might be the only person on the planet identified to have what’s known as “mirror touch” synesthesia — where an individual feels the emotions of those around her — with machines, not humans.
The girl (who is not named to protect her identity) describes the experience as an “extra limb,” an extension of her own body, when she’s near a machine that she’s not touching — she cites cars, robots, escalators, locks, and levers as examples of mechanical objects that act as stimuli. “When watching cars crash in a movie, I feel them as they’re ripped and crushed, and I usually have to turn away and cut myself off from the stimulus,” she says. Interestingly, she identifies humanoid robots as a “stranger” experience for her due to their physical similarities to her own body.
— A point I’ve seen brought up more than a few times in recent memory, but most recently in this scathing criticism of the use of black women’s bodies in Lily Allen’s “satirical” new music video.