Apple has acquired machine learning startup Inductiv, Inc. to improve Siri and its broader machine learning and data science efforts.
Now here’s a cool article I found last night. Simon Willison found the SQLite database that Apple Photos uses. It contains photo metadata as well as the aesthetic scoring system that the machine learning uses. Further, there are numeric categories used to label content within photos. For example, Category 2027 is for Entertainment, Trip, Travel, Museum, Beach Activity, etc. I think the quality scores are particularly interesting. There are scores for noise, composition, lively color, harmonious color, pleasant lighting/pattern/perspective, and a bunch more. I bet Apple’s acquisition of Regaind contributed to this.
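If you want to poke at a database like that yourself, Python's built-in sqlite3 module is all you need. Here's a minimal sketch of the kind of query involved — note that the table and column names below are invented for illustration and built on a toy in-memory table, not Apple's actual Photos schema:

```python
import sqlite3

# Toy stand-in for a Photos-style metadata table. The schema here is
# hypothetical -- the real Photos database uses its own table and
# column names, which you'd discover with ".schema" or sqlite_master.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE asset_scores (
           filename TEXT,
           overall_aesthetic REAL,
           pleasant_lighting REAL,
           harmonious_color REAL,
           noise REAL
       )"""
)
conn.executemany(
    "INSERT INTO asset_scores VALUES (?, ?, ?, ?, ?)",
    [
        ("IMG_0001.jpg", 0.81, 0.74, 0.66, 0.12),
        ("IMG_0002.jpg", 0.35, 0.40, 0.51, 0.58),
        ("IMG_0003.jpg", 0.92, 0.88, 0.79, 0.05),
    ],
)

# Surface the photos the (hypothetical) aesthetic scorer liked best.
rows = conn.execute(
    "SELECT filename, overall_aesthetic FROM asset_scores "
    "ORDER BY overall_aesthetic DESC LIMIT 2"
).fetchall()
for name, score in rows:
    print(name, score)
```

Against the real file you'd point `sqlite3.connect()` at a copy of the Photos library database instead (always a copy, never the original).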
Apple has acquired an AI startup called Voysis, which could be used to enhance Siri’s commerce capabilities.
The Joint Photographic Experts Group (JPEG) is exploring methods to use machine learning to create the next JPEG image codec.
In a recent meeting held in Sydney, the group released a call for evidence to explore AI-based methods of finding a new image compression codec. The program, aptly named JPEG AI, was launched last year, with a special group formed to study neural-network-based image codecs.
Bryan Chaffin and John Kheit discuss the difference between artificial intelligence (AI) and machine learning, including the state of both today. They also talk about their new Macs — John got a new 28-core Mac Pro, while Bryan has a new iMac — and whether or not they like their new purchases. They cap the show by catching up on The Curse of Oak Island TV show on History.
Shortly after being acquired by Apple, AI company Xnor.ai canceled its contract with Project Maven, which used algorithms to analyze military drone imagery.
Apple acquired artificial intelligence company Xnor.ai, which specializes in “low-power, edge-based tools” like image recognition.
Dr. Mac has discovered something approaching the holy grail of image processing: a way to enlarge (or reduce) an image with fewer visible artifacts and jagged edges, less blurriness, and fewer other unwanted elements.
In its latest update, Pixelmator Pro adds a machine learning feature called ML Super Resolution, a way to enhance small, blurry images.
Adobe announced several features in Photoshop for iPad today, including Select Subject, cloud document optimization, and more.
Google has started an initiative called Project Understood. It’s partnering with the Canadian Down Syndrome Society to ask people with Down syndrome to help train its voice recognition algorithms to better understand them.
“Out of the box, Google’s speech recognizer would not recognize every third word for a person with Down syndrome, and that makes the technology not very usable,” Google engineer Jimmy Tobin said in a video introducing the project. Google is aiming to collect 500 “donations” of voice recordings from people with Down syndrome, and is already more than halfway toward its goal.
A worthy project.
Australia will soon install a camera system powered by machine learning that is designed to spot drivers using mobile phones.
To let drivers adjust, only warning letters will be sent to those the cameras catch using phones during the first three months. Australia uses a points system for drivers — unrestricted driver’s licenses have 13 points. After the first three months, drivers caught using their phones illegally will lose five points and be issued a $344 fine. During double-demerit periods, the penalty could increase to 10 points. If a driver loses all of their points, they could lose their license.
Distracted driving is absolutely a serious problem, but I don’t think more surveillance infrastructure is the answer.
Apple has rebuilt its privacy site to show off “everyday apps designed for your privacy.” The featured apps are Apple’s own, with their privacy features highlighted.
A company called Seed wants to build a database of 100,000 poop photos so an AI can learn to tell the difference between healthy and unhealthy poop.
Ara Katz, co-founder and co-CEO of Seed, hopes that the poop project is just one of the company’s many future contributions to our understanding of health. “It’s projects like this [that] allow people who are not scientists to participate in citizen science. By crowdsourcing data, we can help researchers and technologies like auggi in order to help people identify different conditions.”
Take a poop pic and submit it at seed.com/poop.
The New York Times has a nice feature out today about how a mother found photos of her kids in a machine learning database.
None of them could have foreseen that 14 years later, those images would reside in an unprecedentedly huge facial-recognition database called MegaFace. Containing the likenesses of nearly 700,000 individuals, it has been downloaded by dozens of companies to train a new generation of face-identification algorithms, used to track protesters, surveil terrorists, spot problem gamblers and spy on the public at large. The average age of the people in the database, its creators have said, is 16.
I can’t imagine the gross feeling you get when you see your kids in a database like this.
ImageNet Roulette is part of an art and technology exhibit called Training Humans. Upload a photo and the algorithm will give you a classification. Some of the labels are funny, others are racist.
ImageNet Roulette is meant in part to demonstrate how various kinds of politics propagate through technical systems, often without the creators of those systems even being aware of them.
We did not create the underlying training data responsible for these classifications. We imported the categories and training images from a popular data set called ImageNet, which was created at Princeton and Stanford University and which is a standard benchmark used in image classification and object detection.
I uploaded a photo of myself, and the label I received was “beard.” Accurate.
In Google’s Ask a Techspert series, senior software engineer Rosie Buchanan explains machine learning for non-experts.
Today, when we hear about “machine learning,” we’re actually talking about how Google teaches computers to use existing information to answer questions like: Where is the ice cream? Or, can you tell me if my package has arrived on my doorstep? For this edition of Ask a Techspert, I spoke with Rosie Buchanan, who is a senior software engineer working on Machine Perception within Google Nest.
This is a cool blog post explaining it, and I hope to see more explanations like this.
New patents reveal that future Apple headphones could tell which ear they’re in using machine learning.
Apple notes that “During operation, capacitive sensor electrodes may be used by the control circuitry in capturing capacitive sensor ear images that are processed by a machine learning classifier. The machine learning classifier may be used to determine whether the headphones are being worn in a reversed or unreversed orientation.”
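The patent doesn’t say what kind of classifier Apple has in mind, but the general idea — compare a fresh sensor reading against learned templates for each orientation — can be sketched with a toy nearest-centroid classifier. The four-value “ear images” and template readings below are entirely invented for illustration:

```python
# Toy nearest-centroid classifier illustrating the general idea of
# labeling a capacitive "ear image" as unreversed or reversed.
# The readings and their 4-element shape are invented for illustration;
# the patent does not specify the model or the sensor layout.

def centroid(samples):
    """Element-wise mean of a list of equal-length readings."""
    n = len(samples)
    return [sum(vals) / n for vals in zip(*samples)]

def distance_sq(a, b):
    """Squared Euclidean distance between two readings."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Hypothetical training readings captured in each orientation.
unreversed_readings = [[0.9, 0.8, 0.2, 0.1], [0.8, 0.9, 0.1, 0.2]]
reversed_readings = [[0.1, 0.2, 0.9, 0.8], [0.2, 0.1, 0.8, 0.9]]

centroids = {
    "unreversed": centroid(unreversed_readings),
    "reversed": centroid(reversed_readings),
}

def classify(reading):
    """Return the orientation whose learned template is closest."""
    return min(centroids, key=lambda label: distance_sq(reading, centroids[label]))

print(classify([0.85, 0.85, 0.15, 0.15]))  # near the unreversed template
```

A shipping classifier would be far more sophisticated, but the shape of the problem — map a small sensor image to one of two labels — is the same.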
The developers have now removed the software from the web, saying the world was not ready for it. “The probability that people will misuse it is too high,” wrote the programmers in a message on their Twitter feed. “We don’t want to make money this way.” The developers also urged people who had a copy not to share it, although the app will still work for anyone who owns it.
I mean, if we’re being pedantic, you can’t really misuse technology specifically designed for ill intentions unless you try to use it for good intentions, if that’s even possible.
Ott Velsberg, Estonia’s chief data officer, wants a government AI to work in every aspect of the country’s public services, and healthcare is next.