How Apple built the iPhone 13’s Cinematic Mode

Matthew Panzarino of TechCrunch got the chance to speak with Apple VP Kaiann Drance and Human Interface Team designer Johnnie Manzari about Cinematic Mode.

“We knew that bringing a high-quality depth of field to video would be magnitudes more challenging [than Portrait Mode],” says Drance. “Unlike photos, video is designed to move as the person filming, including hand shake. And that meant we would need even higher-quality depth data so Cinematic Mode could work across subjects, people, pets and objects, and we needed that depth data continuously to keep up with every frame. Rendering these autofocus changes in real time is a heavy computational workload.”

And:

“We didn’t have an idea [for Cinematic Mode]. We were just curious — what is it about filmmaking that’s been timeless? And that kind of leads down this interesting road and then we started to learn more and talk more … with people across the company that can help us solve these problems.”

That second quote offers an interesting insight into how features like this are born. Sometimes new features result from trying to solve a specific problem in a clever way. Cinematic Mode, by contrast, was born from an exploration of an existing process: an attempt to bring a solution from the complex, expensive, hardware-heavy world of filmmaking to the iPhone.
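Drance's quote points at the real engineering problem: the phone has to produce a depth map for every single video frame and render the blur in real time. For the curious, here's a minimal Swift sketch of what streaming per-frame depth looks like using Apple's public AVFoundation API (`AVCaptureDepthDataOutput`). To be clear, this is just an illustration of the general technique, not Apple's actual Cinematic Mode pipeline, which runs on much richer depth data and custom silicon.

```swift
import AVFoundation

// A rough sketch of streaming per-frame depth capture with AVFoundation.
// An illustration of the general technique, not Apple's Cinematic Mode
// pipeline. Format selection, permissions, and error handling are
// simplified; assumes a depth-capable device (dual or TrueDepth camera).
final class DepthStreamer: NSObject, AVCaptureDepthDataOutputDelegate {
    let session = AVCaptureSession()
    private let depthOutput = AVCaptureDepthDataOutput()

    func configure() throws {
        guard let camera = AVCaptureDevice.default(.builtInDualWideCamera,
                                                   for: .video,
                                                   position: .back) else {
            return // no depth-capable camera on this device
        }
        session.beginConfiguration()
        session.sessionPreset = .photo // a preset that supports depth on dual cameras
        let input = try AVCaptureDeviceInput(device: camera)
        if session.canAddInput(input) { session.addInput(input) }
        if session.canAddOutput(depthOutput) { session.addOutput(depthOutput) }
        depthOutput.isFilteringEnabled = true // smooth holes in the depth map
        depthOutput.setDelegate(self, callbackQueue: DispatchQueue(label: "depth.queue"))
        session.commitConfiguration()
    }

    // Called for each depth frame: the "continuous depth data" Drance
    // describes. A renderer would pair this map with the matching video
    // frame and vary the blur per pixel, per frame.
    func depthDataOutput(_ output: AVCaptureDepthDataOutput,
                         didOutput depthData: AVDepthData,
                         timestamp: CMTime,
                         connection: AVCaptureConnection) {
        let depthMap: CVPixelBuffer = depthData.depthDataMap
        _ = depthMap // hand off to a blur/render stage here
    }
}
```

Even this toy version makes the scale of the problem obvious: a depth map arrives dozens of times per second, and everything downstream, segmentation, focus racking, rendering, has to keep up without dropping frames.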

Nice writeup by Panzarino. Don't miss the "Testing Cinematic Mode" section with the embedded demo reel. But don't just watch the reel: it needs the context of Matthew's descriptions to give a true sense of what Cinematic Mode is and isn't. Great read.