The Joint Photographic Experts Group (JPEG) is exploring methods to use machine learning to create the next JPEG image codec.
In a recent meeting held in Sydney, the group released a call for evidence exploring AI-based methods for a new image compression codec. The program, aptly named JPEG AI, was launched last year with a special group to study neural-network-based image codecs.
Bryan Chaffin and John Kheit discuss the difference between artificial intelligence (AI) and machine learning, including the state of both today. They also talk about their new Macs—John got a new 28-core Mac Pro, while Bryan has a new iMac—and whether or not they like their new purchases. They cap the show by catching up on The Curse of Oak Island TV show on History.
Animal rights group PETA wants to replace famous groundhog Punxsutawney Phil with an animatronic AI.
The way the group sees it, not only would an AI be better at estimating when winter will end, but it would also attract an entirely new generation of visitors to the western Pennsylvania town. “Today’s young people are born into a world of terabytes, and to them, watching a nocturnal rodent being pulled from a fake hole isn’t even worthy of a text message,” PETA President Ingrid Newkirk said. “Ignoring the nation’s fast-changing demographics might well prove the end of Groundhog Day.”
Shortly after acquiring AI company Xnor.ai, Apple canceled the startup’s contract with Project Maven, a Pentagon program that uses algorithms to analyze military drone imagery.
In a long read from NYT, Kashmir Hill writes about a startup called Clearview AI that works with law enforcement on facial recognition.
You take a picture of a person, upload it and get to see public photos of that person, along with links to where those photos appeared. The system — whose backbone is a database of more than three billion images that Clearview claims to have scraped from Facebook, YouTube, Venmo and millions of other websites — goes far beyond anything ever constructed by the United States government or Silicon Valley giants.
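The Times doesn’t publish Clearview’s code, but the general technique such systems rely on (embed each face as a vector, then search a database for the nearest neighbors) is well documented. Below is a minimal sketch using the open-source face_recognition library; the index files and filenames are hypothetical stand-ins, not Clearview’s actual system.

```python
# Hedged sketch of embedding-based face search, the general technique behind
# systems like this (not Clearview's actual code). Uses the open-source
# face_recognition library; the database files here are hypothetical.
import numpy as np
import face_recognition

# Hypothetical index built offline from scraped photos:
# one 128-dimensional encoding per face, plus each photo's source URL.
db_encodings = np.load("scraped_encodings.npy")          # shape (N, 128)
db_urls = open("scraped_urls.txt").read().splitlines()   # N matching URLs

# Encode the uploaded query photo the same way.
image = face_recognition.load_image_file("query.jpg")
query = face_recognition.face_encodings(image)[0]

# Rank every indexed face by Euclidean distance to the query.
distances = np.linalg.norm(db_encodings - query, axis=1)
for i in np.argsort(distances)[:5]:
    print(db_urls[i], round(float(distances[i]), 3))
```

At the scale Clearview claims (three billion images), a brute-force scan like this would give way to an approximate nearest-neighbor index, but the idea is the same.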
Apple acquired artificial intelligence company Xnor.ai, which specializes in “low-power, edge-based tools” like image recognition.
It’s a long read, but Rodrigo Ochigame, a former AI researcher at MIT’s Media Lab, examines Big Tech’s negative role in AI ethics research.
MIT lent credibility to the idea that big tech could police its own use of artificial intelligence at a time when the industry faced increasing criticism and calls for legal regulation…Meanwhile, corporations have tried to shift the discussion to focus on voluntary “ethical principles,” “responsible practices,” and technical adjustments or “safeguards” framed in terms of “bias” and “fairness” (e.g., requiring or encouraging police to adopt “unbiased” or “fair” facial recognition).
DeepMind’s AlphaStar AI has recently become a Grandmaster in the game StarCraft II.
StarCraft requires players to gather resources, build dozens of military units, and use them to try to destroy their opponents. StarCraft is particularly challenging for an AI because players must carry out long-term plans over several minutes of gameplay, tweaking them on the fly in the face of enemy counterattacks. DeepMind says that prior to its own effort, no one had come close to designing a StarCraft AI as good as the best human players.
A company called Seed wants to build a database of 100,000 poop photos so an AI can learn to tell the difference between healthy and unhealthy poop.
Ara Katz, co-founder and co-CEO of Seed, hopes that the poop project is just one of the company’s many future contributions to our understanding of health. “It’s projects like this [that] allow people who are not scientists to participate in citizen science. By crowdsourcing data, we can help researchers and technologies like auggi in order to help people identify different conditions.”
Take a poop pic and submit it at seed.com/poop.
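Neither Seed nor auggi has published its model, but the task described (telling healthy from unhealthy stool in labeled photos) is standard supervised image classification. A minimal sketch, assuming a hypothetical folder of labeled photos and fine-tuning a stock pretrained network in PyTorch:

```python
# Hedged sketch: fine-tuning a pretrained CNN as a binary classifier, the
# kind of model a labeled photo dataset like this could train. Not auggi's
# actual pipeline; the folder layout and hyperparameters are illustrative.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# Hypothetical layout: photos/healthy/*.jpg and photos/unhealthy/*.jpg
data = datasets.ImageFolder("photos", transform=tfm)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)   # two classes: healthy / unhealthy
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

for images, labels in loader:                   # one training pass, for brevity
    loss = nn.functional.cross_entropy(model(images), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

With 100,000 crowdsourced photos, most of the real work would be in cleaning and labeling the data, not in the model itself.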
John Martellaro and Charlotte Henry join host Kelly Guimont to talk about the leaked Apple TV+ pricing and the latest AI hire at Microsoft.
Some scientists are worried about technology like Elon Musk’s Neuralink. Cognitive psychologist Susan Schneider wrote an op-ed (paywall) arguing that it could be “suicide for the human mind.”
The worry with a general merger with AI, in the more radical sense that Musk envisions, is [that] the human brain is diminished or destroyed. Furthermore, the self may depend on the brain, and if the self’s survival over time requires that there be some sort of continuity in our lives — a continuity of memory and personality traits — radical changes may break the needed continuity.
I’m no neuroscientist, but I subscribe to emergentism, which is the idea that consciousness is an emergent property of the brain. An easy explanation is here, but basically it means that consciousness isn’t a property of any individual neuron, but rather something that arises when you get enough neurons interconnected. This isn’t something that could be replicated with code.
Apple Glasses that use augmented reality have a lot of potential, like gaming and Apple Maps directions. What if health could be another feature?
Relentless Doppelgänger is a 24/7 YouTube livestream that features death metal created by AI.
The deep learning behind the YouTube channel is trained on samples of a real death metal band called Archspire, hailing from Canada. These real audio snippets are fed through the SampleRNN neural network to try and create realistic imitations…SampleRNN is smart enough to know when it’s produced an audio clip that’s good enough to pass for the genuine article – and as a result it knows which part of its neural network to tweak and strengthen.
I think it sounds pretty good. \m/
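SampleRNN’s real architecture is a multi-tier recurrent network, but the core idea in the quote above (predict the next audio sample from the previous ones, then strengthen whatever reduces the prediction error) fits in a few lines. A toy sketch in PyTorch, not the channel’s actual code, with illustrative sizes:

```python
# Toy sketch of a sample-level autoregressive RNN in the spirit of SampleRNN
# (not the real multi-tier model): a GRU predicts each next 8-bit audio
# sample from the ones before it. All names and sizes are illustrative.
import torch
import torch.nn as nn

class TinySampleRNN(nn.Module):
    def __init__(self, quant_levels=256, hidden=512):
        super().__init__()
        self.embed = nn.Embedding(quant_levels, hidden)  # quantized samples -> vectors
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, quant_levels)      # distribution over next sample

    def forward(self, samples):                          # samples: (batch, time) ints
        out, _ = self.gru(self.embed(samples))
        return self.head(out)                            # logits at every timestep

model = TinySampleRNN()
clip = torch.randint(0, 256, (1, 1600))                  # stand-in for a real audio clip
logits = model(clip[:, :-1])                             # predict sample t+1 from 0..t
loss = nn.functional.cross_entropy(logits.reshape(-1, 256), clip[:, 1:].reshape(-1))
loss.backward()  # backpropagation is the "tweak and strengthen" step the quote describes
```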
Starting in June, people who use the dating app Bumble will find themselves protected against unwanted nudes by an AI tool called Private Detector.
The Verge writes about legal issues when an AI composes music.
The word “human” does not appear at all in US copyright law, and there’s not much existing litigation around the word’s absence. This has created a giant gray area and left AI’s place in copyright unclear. It also means the law doesn’t account for AI’s unique abilities, like its potential to work endlessly and mimic the sound of a specific artist.
Not to mention the question of who owns the copyright of this new music. Fascinating discussion here.
Design agency AKQA gave data on 400 existing sports to a neural network, and one of the games it created is called Speedgate.
While the sport was created as an exercise for Design Week, it might just become a serious sport. AKQA is talking to the Oregon Sports Authority about Speedgate, and there might be an intramural league in the summer. The company is encouraging others to start their own leagues.
This sounds (and looks) like a cool game, and I’d be interested to try it out. Additionally, an informative guide to Speedgate can be found here.
Apple has recently hired Ian Goodfellow, the machine learning researcher best known for inventing generative adversarial networks (GANs). Mr. Goodfellow used to work at Google.
John Martellaro and Andrew Orr join Kelly Guimont to discuss webcams and security measures, as well as AI that freaks out even Elon Musk.
OpenAI, an AI research institute cofounded by Elon Musk and Sam Altman, built an AI text generator that its creators worry is dangerous.
Jack Clark, policy director at OpenAI, says that example shows how technology like this might shake up the processes behind online disinformation or trolling, some of which already use some form of automation. “As costs of producing text fall, we may see behaviors of bad actors alter,” he says.
Based on the examples, I think it’s safe to say this AI would pass the Turing Test.
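For context, the generator in question is GPT-2, which OpenAI initially declined to release in full while publishing smaller versions. A minimal sketch of sampling from the publicly released small model via the Hugging Face transformers library (the prompt and sampling parameters are illustrative):

```python
# Sampling from the small, publicly released GPT-2 model with the Hugging
# Face transformers library (pip install transformers torch). The prompt,
# max_length, and top_k values here are arbitrary illustrative choices.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "The following is a breaking news story:"
result = generator(prompt, max_length=60, do_sample=True, top_k=40)
print(result[0]["generated_text"])
```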
Bryan Chaffin is joined by guest-host John Martellaro to discuss how Apple might be looking at the medical industry, of which CEO Tim Cook has said he wants a piece. They also talk about the privacy bill making the rounds in Washington, and the future of artificial intelligence.