The new iPhone 14 series and the latest Apple Watch models include the new Crash Detection feature, and so far people seem to be going out of their way to test it.
However, a new round of tests by the Wall Street Journal shows that Crash Detection may not be as reliable as intended.
Update: Apple has responded to the Wall Street Journal’s story. You can read its response below.
Tests by ‘Wall Street Journal’ Show Crash Detection May Be Unreliable
In a video posted to the Wall Street Journal’s official YouTube page, the team asks three-time demolition derby champion Michael Barabe to crash into multiple vehicles in a scrap yard to see if he could trigger Crash Detection.
The driver was wearing an Apple Watch Ultra and also had an iPhone 14 Pro and a Google Pixel strapped to the passenger headrest. The vehicles he crashed into also had phones strapped to a headrest. The first tests did trigger the driver’s Crash Detection features. The concerning result came from the vehicles he crashed into: Crash Detection never activated on the phones inside them.
Even with a second vehicle, the results were the same. When the Wall Street Journal reached out to Apple about the results, the company said the failure to trigger was likely due to a lack of data. According to Apple, Crash Detection may not have gone off because the devices did not recognize they were in a moving vehicle, for various reasons, including how little distance the cars had traveled before the crash.
Looking at the Results
As someone who decided to test Crash Detection for themselves (albeit in a less extreme fashion), I find it somewhat surprising that the feature did not trigger during an actual crash.
While it would be great if this feature never needed to see real-world use, sooner or later reports of how Crash Detection responds in legitimate crashes will find their way online. Though it is a fun feature to test, its real purpose is to save lives. While it is this author’s wish that no one ever has to rely on the feature, one can at least hope it will kick in during an actual emergency.
Update: Apple has responded to these results:
When I contacted Apple with the results, a company spokesman said that the testing conditions in the junkyard didn’t provide enough signals to the iPhone to trigger the feature in the stopped cars. It wasn’t connected to Bluetooth or CarPlay, which would have indicated the car was in use, and the vehicles might not have traveled enough distance prior to the crash to indicate driving. Had the iPhone received those extra indicators—and had its GPS shown the cars were on a real road—the likelihood of an alert would have been greater, he said.
Stern also outlined the signals that may set off Crash Detection:
- Motion sensors: All the devices have a three-axis gyroscope and a high g-force accelerometer, which samples motion more than 3,000 times a second. This means the devices can detect the exact moment of impact and any change in the motion or trajectory of the vehicle.
- Microphones: The mics are used to detect loud sound levels that might indicate a crash. The microphones are only turned on when driving is detected, and no actual sound is recorded, Apple says.
- Barometer: If the air bags deploy when the windows are closed, the barometer can detect a change in air pressure.
- GPS: Readings can be used to detect speeds prior to a crash and any sudden lack of movement, as well as inform the device that it’s traveling on a road.
- CarPlay and Bluetooth: When connected, these give the algorithms another signal that the phone is on board a car, so it knows to look out for a crash.
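Apple has not published how these signals are actually combined, but the logic described above can be sketched roughly: the device first infers that it is in a moving car, and only then evaluates impact signals. The following Python sketch is purely illustrative; every threshold and field name is an assumption, not Apple's real algorithm.

```python
# Hypothetical sketch of multi-signal crash detection, based on the
# signals listed above. Thresholds are invented for illustration only.

from dataclasses import dataclass


@dataclass
class SensorSnapshot:
    accel_g: float            # peak reading from the high g-force accelerometer
    sound_db: float           # peak loudness picked up by the microphones
    pressure_drop_hpa: float  # barometric change (e.g., airbag deployment)
    speed_mph: float          # GPS speed just before the event
    distance_miles: float     # distance traveled on this trip
    carplay_or_bt: bool       # connected to CarPlay or car Bluetooth


def likely_driving(s: SensorSnapshot) -> bool:
    """Gate: only evaluate crash signals once driving is inferred."""
    return s.carplay_or_bt or (s.speed_mph > 20 and s.distance_miles > 0.5)


def crash_detected(s: SensorSnapshot) -> bool:
    """Require driving context plus at least two strong impact signals."""
    if not likely_driving(s):
        # Mirrors the junkyard result: a stationary, unconnected car
        # never reaches the impact checks at all.
        return False
    impact_signals = [
        s.accel_g > 50,           # severe impact
        s.sound_db > 110,         # crash-level noise
        s.pressure_drop_hpa > 2,  # airbag-like pressure change
    ]
    return sum(impact_signals) >= 2
```

Under this toy model, a phone in a parked junkyard car can register a huge impact, loud noise, and a pressure spike and still report no crash, because the driving gate was never satisfied, which matches Apple's explanation of the WSJ results.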
What do you think about this experiment? Let us know in the comments.
2 thoughts on “[U] Tests from ‘Wall Street Journal’ Indicate Apple’s Crash Detection May Have Flaws”
Machine learning is an iterative process. Not knowing what data were input and how they were tested/validated, in addition to not knowing much about how this was independently tested, it is difficult to make any type of data-informed comment on the WSJ’s findings.
As with most things AI, I think we can anticipate that this will improve over time, provided that Apple has access to real-world data from users.
Dr. Brooks, I agree. There are too many variables at play, and of course, the WSJ noted that their tests weren’t entirely scientific. As unfortunate as this sounds, I agree that it will take practical application to produce informative results. Of course, that shouldn’t stop users from finding creative ways to test this feature.