Cinematic mode was one of the big new features in the iPhone 13. In a new interview with TechCrunch, Apple VP Kaiann Drance and Human Interface Team designer Johnnie Manzari explain how it was made.

“We knew that bringing a high quality depth of field to video would be magnitudes more challenging [than Portrait Mode],” says Drance. “Unlike photos, video is designed to move as the person filming, including hand shake. And that meant we would need even higher quality depth data so Cinematic Mode could work across subjects, people, pets, and objects, and we needed that depth data continuously to keep up with every frame. Rendering these autofocus changes in real time is a heavy computational workload.” The A15 Bionic and Neural Engine are heavily used in Cinematic Mode, especially since Apple wanted to encode it in Dolby Vision HDR as well. The team also didn’t want to sacrifice live preview — something most competitors’ Portrait Mode implementations took years to ship after Apple introduced it.
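The core idea Drance describes — a per-frame depth map driving a variable, focus-dependent blur — can be illustrated with a toy sketch. This is purely an assumption-laden illustration, not Apple's pipeline: it uses a naive per-pixel box blur over a NumPy array, where pixels far from the chosen focus plane get a larger blur radius and pixels on the focus plane stay sharp.

```python
import numpy as np

def cinematic_blur(frame, depth, focus_depth, max_radius=3):
    """Toy synthetic depth-of-field for one grayscale frame.

    Blurs each pixel by an amount proportional to how far its depth
    lies from the focus plane. Illustrative only; a real pipeline
    would use a physically based lens model and run on dedicated
    hardware to keep up with every frame.
    """
    h, w = frame.shape
    out = np.empty((h, w), dtype=float)

    # Normalized defocus: 0 on the focus plane, 1 at the largest
    # depth error in this frame.
    defocus = np.abs(depth - focus_depth)
    defocus = defocus / max(defocus.max(), 1e-6)
    radii = np.round(defocus * max_radius).astype(int)

    # Pad so windows near the border stay in bounds.
    padded = np.pad(frame.astype(float), max_radius, mode="edge")
    for y in range(h):
        for x in range(w):
            r = radii[y, x]
            # Box blur over a (2r+1)^2 window; r == 0 keeps the
            # pixel perfectly sharp (it is "in focus").
            win = padded[y + max_radius - r : y + max_radius + r + 1,
                         x + max_radius - r : x + max_radius + r + 1]
            out[y, x] = win.mean()
    return out
```

Running this per frame, with `focus_depth` changing over time, would produce the rack-focus effect the interview describes — and the nested loop here hints at why doing it at video frame rates, in Dolby Vision HDR, with a live preview, is such a heavy computational workload.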

Check It Out: iPhone 13: How Cinematic Mode Was Made
