The New York Times has a nice feature out today about how a mother found photos of her kids in a machine learning database.
None of them could have foreseen that 14 years later, those images would reside in an unprecedentedly huge facial-recognition database called MegaFace. Containing the likenesses of nearly 700,000 individuals, it has been downloaded by dozens of companies to train a new generation of face-identification algorithms, used to track protesters, surveil terrorists, spot problem gamblers and spy on the public at large. The average age of the people in the database, its creators have said, is 16.
I can’t imagine the gross feeling you get when you see your kids in a database like this.
ImageNet Roulette is part of an art and technology exhibit called Training Humans. Upload a photo and the algorithm will give you a classification. Some of the labels are funny, others are racist.
ImageNet Roulette is meant in part to demonstrate how various kinds of politics propagate through technical systems, often without the creators of those systems even being aware of them.
We did not make the underlying training data responsible for these classifications. We imported the categories and training images from a popular data set called ImageNet, which was created at Princeton and Stanford University and which is a standard benchmark used in image classification and object detection.
I uploaded a photo of me and the label I received was “beard.” Accurate.
In Google’s Ask a Techspert series, senior software engineer Rosie Buchanan explains machine learning for non-experts.
Today, when we hear about “machine learning,” we’re actually talking about how Google teaches computers to use existing information to answer questions like: Where is the ice cream? Or, can you tell me if my package has arrived on my doorstep? For this edition of Ask a Techspert, I spoke with Rosie Buchanan, who is a senior software engineer working on Machine Perception within Google Nest.
This is a cool blog post explaining it, and I hope to see more explanations like this.
New patents reveal that future Apple headphones could tell which ear they’re in using machine learning.
Apple notes that “During operation, capacitive sensor electrodes may be used by the control circuitry in capturing capacitive sensor ear images that are processed by a machine learning classifier. The machine learning classifier may be used to determine whether the headphones are being worn in a reversed or unreversed orientation.”
Yesterday Vice reported on an app called DeepNude. It used machine learning to turn a clothed photo of a woman into a naked version. It has since been taken offline.
The developers have now removed the software from the web saying the world was not ready for it. “The probability that people will misuse it is too high,” wrote the programmers in a message on their Twitter feed. “We don’t want to make money this way.” The developers also urged people who had a copy not to share it, although the app will still work for anyone who owns it.
I mean, if we’re being pedantic, can you really misuse technology that was designed specifically for ill intent? If anything, the only way to misuse it would be to try to use it for good.
Ott Velsberg, Estonia’s chief data officer, wants AI working across every aspect of the country’s public services, and healthcare is next.
Adobe Fresco is an iPad painting app that Adobe is working on. Previously known as Project Gemini, it will be the newest addition to Creative Cloud.
Bill Stasior previously led Siri development at Apple. He sat down to discuss virtual assistants and how they can improve in the next 3-5 years.
Researchers from MIT found a way to create neural networks that are 90% smaller but just as smart.
In a new paper, researchers from MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) have shown that neural networks contain subnetworks that are up to one-tenth the size yet capable of being trained to make equally accurate predictions — and sometimes can learn to do so even faster than the originals.
This article stood out to me because if neural networks can be smaller but just as smart, it could encourage companies to keep machine learning on-device, like Apple does.
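To get an intuition for how a network can shed most of its weights, here is a minimal sketch of magnitude pruning, one common shrinking technique. This is an illustration only, not the CSAIL method, which iteratively trains, prunes, and rewinds to find trainable “winning ticket” subnetworks; the sizes and 10% keep ratio are my own example numbers.

```python
import random

# Pretend these are the trained weights of one dense layer.
random.seed(0)
weights = [random.gauss(0, 1) for _ in range(10_000)]

# Keep only the 10% of weights with the largest magnitude,
# i.e. a subnetwork one-tenth the original size.
keep_fraction = 0.10
threshold = sorted(abs(w) for w in weights)[int(len(weights) * (1 - keep_fraction))]
mask = [abs(w) >= threshold for w in weights]          # True = weight survives
pruned = [w if m else 0.0 for w, m in zip(weights, mask)]

kept = sum(mask) / len(mask)
print(f"{kept:.2f} of weights kept")  # prints "0.10 of weights kept"
```

In the lottery ticket work, the surviving subnetwork is then retrained from (near) its original initialization, which is where the “just as smart” result comes from.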
John Martellaro and Bryan Chaffin join host Kelly Guimont to discuss the effects of machine learning on creativity and artistic pursuits.
This month, a California judge erased thousands of criminal records with the help of an algorithm. Its creators say they’re just getting started.
It discards any record involving a violent crime, as such records do not qualify. For those that remain, the tool automatically fills out the necessary paperwork. In other words, the algorithm replaced the process being done manually at the expungement clinics.
Working with San Francisco’s raw data, Code For America was able to identify 8,132 eligible criminal records in a matter of minutes – in addition to the 1,230 found manually already. They dated as far back as 1975, the year in which the city started digitising its files.
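The workflow the quote describes — discard records involving violent crimes, then queue the rest for paperwork — can be sketched in a few lines. This is a hypothetical illustration; the record fields and the `violent` flag are my assumptions, not Code for America’s actual Clear My Record data model.

```python
def eligible_records(records):
    """Keep only records that qualify for expungement (non-violent)."""
    return [r for r in records if not r["violent"]]

# Toy records standing in for San Francisco's raw data.
records = [
    {"id": 1, "charge": "possession", "violent": False},
    {"id": 2, "charge": "assault", "violent": True},
    {"id": 3, "charge": "petty theft", "violent": False},
]

for record in eligible_records(records):
    # In the real tool, this step auto-fills the expungement paperwork.
    print(f"record {record['id']}: eligible, generating paperwork")
```

The point of the real system is that this filter-and-fill loop runs over thousands of records in minutes, replacing work that expungement clinics did by hand.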
Andrew Orr and Charlotte Henry join host Kelly Guimont to discuss Apple Music/Spotify subscriptions, and new hires in Special Projects.
Apple has recently hired Ian Goodfellow, a well-known expert in the machine learning community. Mr. Goodfellow used to work at Google.
We hear the terms AI and machine learning all the time from Apple, and deep learning often gets mixed in too. They can be confusing. So, to help differentiate between the three, TechRepublic has written up a short but helpful tutorial for business people.
The first step is communicating what the definitions are for AI, machine learning (ML), and deep learning. There is some argument that AI, ML, and deep learning are each individual technologies. I view AI/ML/deep learning as successive stages of computer automation and analytics that are built on a common platform.
A traffic planning example makes it clear.
David Murphy has a nice tip out on how to organize photos by Faces on iOS. It’s a great way to manage photos of people.
On the three platforms you’re most likely to use to store your smartphone pictures—Apple Photos, Amazon Photos, and Google Photos—machine learning can categorize your photos by the faces in them, rather than rudimentary details like when or where they were taken.
In the Animal-AI Olympics, AI agents will be given tasks originally designed to test animal cognition, in a US$10,000 competition.
The Animal-AI Olympics is the creation of a team of researchers at the Leverhulme Centre for the Future of Intelligence in Cambridge, England, along with GoodAI, a Prague-based research institute. The competition is part of a bigger project at the Leverhulme Centre called Kinds of Intelligence, which brings together an interdisciplinary team of animal cognition researchers, computer scientists, and philosophers to consider the differences and similarities between human, animal, and mechanical ways of thinking.
VSCO is launching a feature called For This Photo that uses machine learning to automatically suggest presets for your photos.
NVIDIA is releasing a US$99 AI computer called the Jetson Nano aimed at “developers, makers and enthusiasts.”
Apple has acquired Laserlike, a young startup founded by three former Google engineers. It’s a machine learning startup that could help Apple improve its recommendation algorithms in News, TV, Apple Music, etc. (paywall).
An Apple spokesperson confirmed the acquisition of the four-year-old startup, which was founded by three former Google engineers, Anand Shukla, Srinivasan Venkatachary and Steven Baker, and had raised more than $24 million from Redpoint Ventures and Sutter Hill Ventures, according to CrunchBase. Terms of the deal could not be learned.
I look forward to getting better recommendations.