In a long read from NYT, Kashmir Hill writes about a startup called Clearview AI that works with law enforcement on facial recognition.
You take a picture of a person, upload it and get to see public photos of that person, along with links to where those photos appeared. The system — whose backbone is a database of more than three billion images that Clearview claims to have scraped from Facebook, YouTube, Venmo and millions of other websites — goes far beyond anything ever constructed by the United States government or Silicon Valley giants.
Apple acquired artificial intelligence company Xnor.ai, which specializes in “low-power, edge-based tools” like image recognition.
It’s a long read, but worth it: Rodrigo Ochigame, a former AI researcher at MIT’s Media Lab, examines Big Tech’s negative role in AI ethics research.
MIT lent credibility to the idea that big tech could police its own use of artificial intelligence at a time when the industry faced increasing criticism and calls for legal regulation…Meanwhile, corporations have tried to shift the discussion to focus on voluntary “ethical principles,” “responsible practices,” and technical adjustments or “safeguards” framed in terms of “bias” and “fairness” (e.g., requiring or encouraging police to adopt “unbiased” or “fair” facial recognition).
DeepMind’s AlphaStar AI has recently become a Grandmaster in the game StarCraft II.
StarCraft requires players to gather resources, build dozens of military units, and use them to try to destroy their opponents. StarCraft is particularly challenging for an AI because players must carry out long-term plans over several minutes of gameplay, tweaking them on the fly in the face of enemy counterattacks. DeepMind says that prior to its own effort, no one had come close to designing a StarCraft AI as good as the best human players.
A company called Seed wants to build a database of 100,000 poop photos so an AI can learn to tell the difference between healthy and unhealthy poop.
Ara Katz, co-founder and co-CEO of Seed, hopes that the poop project is just one of the company’s many future contributions to our understanding of health. “It’s projects like this [that] allow people who are not scientists to participate in citizen science. By crowdsourcing data, we can help researchers and technologies like auggi in order to help people identify different conditions.”
Take a poop pic and submit it at seed.com/poop.
John Martellaro and Charlotte Henry join host Kelly Guimont to talk about the leaked TV+ pricing and the latest AI hire at Microsoft.
Some scientists are worried about technology like Elon Musk’s Neuralink. Cognitive psychologist Susan Schneider wrote an op-ed (paywall) arguing that it could be “suicide for the human mind.”
The worry with a general merger with AI, in the more radical sense that Musk envisions, is [that] the human brain is diminished or destroyed. Furthermore, the self may depend on the brain and if the self’s survival over time requires that there be some sort of continuity in our lives — a continuity of memory and personality traits — radical changes may break the needed continuity.
I’m no neuroscientist but I subscribe to emergentism, which is the idea that consciousness is an emergent property of the brain. An easy explanation is here, but basically it means that consciousness isn’t a property of any individual neuron, but rather something that emerges when you get enough neurons interconnected. This isn’t something that could be replicated with code.
Apple Glasses that use augmented reality have a lot of potential, like gaming and Apple Maps directions. What if health could be another feature?
Relentless Doppelgänger is a 24/7 YouTube livestream that features death metal created by AI.
The deep learning behind the YouTube channel is trained on samples of a real death metal band called Archspire, hailing from Canada. These real audio snippets are fed through the SampleRNN neural network to try and create realistic imitations…SampleRNN is smart enough to know when it’s produced an audio clip that’s good enough to pass for the genuine article – and as a result it knows which part of its neural network to tweak and strengthen.
I think it sounds pretty good. \m/
Starting in June, people who use the dating app Bumble will find themselves protected against unwanted nudes by an AI tool called Private Detector.
The Verge writes about legal issues when an AI composes music.
The word “human” does not appear at all in US copyright law, and there’s not much existing litigation around the word’s absence. This has created a giant gray area and left AI’s place in copyright unclear. It also means the law doesn’t account for AI’s unique abilities, like its potential to work endlessly and mimic the sound of a specific artist.
Not to mention the question of who owns the copyright of this new music. Fascinating discussion here.
Design agency AKQA gave data on 400 existing sports to a neural network, and one of the games it created is called Speedgate.
While the sport was created as an exercise for Design Week, it might just become a serious sport. AKQA is talking to the Oregon Sports Authority about Speedgate, and there might be an intramural league in the summer. The company is encouraging others to start their own leagues.
This sounds (and looks) like a cool game and I’d be interested to try it out. Additionally, an informative guide to Speedgate can be found here.
Apple has recently hired Ian Goodfellow, a well-known expert in the machine learning community. Mr. Goodfellow used to work at Google.
John Martellaro and Andrew Orr join Kelly Guimont to discuss webcams and security measures, as well as AI that freaks out even Elon Musk.
OpenAI, an AI research institute cofounded by Elon Musk and Sam Altman, built an AI text generator that its creators worry is dangerous.
Jack Clark, policy director at OpenAI, says that example shows how technology like this might shake up the processes behind online disinformation or trolling, some of which already use some form of automation. “As costs of producing text fall, we may see behaviors of bad actors alter,” he says.
Based on the examples, I think it’s safe to say this AI would pass the Turing Test.
Bryan Chaffin is joined by guest-host John Martellaro to discuss how Apple might be looking at the medical industry, of which CEO Tim Cook has said he wants a piece. They also talk about the privacy bill making the rounds in Washington, and the future of artificial intelligence.
Last week Andrew Orr wrote about a patent that Apple filed regarding more offline Siri capability. He thinks the latest news about Silk Labs provides stronger evidence for that.
Pervasive AI is on its way, according to Deloitte.
Last year, YouPorn Foresights used AI to predict what the most popular search terms would be in porn. This year the company did something similar. The data science and machine learning teams trained a recurrent neural network on the current most popular performer names, and it generated what the company predicts the next generation of stars will call themselves. There are 69 names, both male and female, and the results are hilarious. As you would expect from AI, the names sound weird and goofy. My favorite names from the list are Man Master, Al Gorr (obviously my future kid), Summer Sax, and Paris Buttomina. It’s a safe-for-work list that you can check out here.
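If you’re curious how name generation like this works: YouPorn used a recurrent neural network, but the same autoregressive idea — predict the next character from the characters so far, then sample — can be sketched with a much simpler character-level Markov chain. This is my own toy stand-in, not the company’s actual model, and the seed names below are made up for illustration:

```python
import random

def build_chain(names, order=2):
    """Map each `order`-character context to the characters seen after it."""
    chain = {}
    for name in names:
        padded = "^" * order + name.lower() + "$"  # ^ marks start, $ marks end
        for i in range(len(padded) - order):
            ctx = padded[i:i + order]
            chain.setdefault(ctx, []).append(padded[i + order])
    return chain

def generate(chain, order=2, max_len=12, rng=None):
    """Walk the chain from the start context, sampling until the end marker."""
    rng = rng or random.Random()
    ctx, out = "^" * order, []
    while len(out) < max_len:
        nxt = rng.choice(chain[ctx])
        if nxt == "$":
            break
        out.append(nxt)
        ctx = ctx[1:] + nxt  # slide the context window forward
    return "".join(out).title()

# Made-up seed list -- a stand-in for "current most popular performer names".
seeds = ["summer", "paris", "sasha", "savannah", "sierra", "silas"]
chain = build_chain(seeds)
names = [generate(chain, rng=random.Random(i)) for i in range(5)]
print(names)
```

An RNN replaces the lookup table with a learned hidden state, so it can pick up longer-range patterns, but the sampling loop at generation time looks much the same.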
Check out this video from OpenAI of a robot hand learning how to manipulate a block. This is an incredibly difficult task, and the level of difficulty is one of the many reasons Apple needs humans assembling iPhones. OpenAI used machine learning and virtual simulations for the robot to spend 100 years of trial and error learning what you’ll see in the video (TechnologyReview has more details). Those virtual lessons were then used by the real-world robot hand, and it’s pretty darned cool. Check it out.