Smartphone’s future: It’s all about the camera

New York Times:

Next month, Apple plans to hold a special event to introduce a set of new iPhones, including a premium model that can scan 3-D objects — including your face.

Side note: I’m seeing this “next month” reference a lot. True, but it’s less than two weeks from now. “Next month” sounds a lot further away.

But I digress.

When Apple shows its new iPhones next month, including a premium model with a starting price of $999, the company will introduce infrared facial recognition as a new method for unlocking the device.

“Next month”. Oy.

Qualcomm’s Spectra, a so-called depth-sensing camera system, is one example of how face scanning works.

The Spectra system includes a module that sprays an object with infrared dots to gather information about the depth of an object based on the size and the contortion of the dots. If the dots are smaller, then the object is farther away; if they are bigger, the object is closer. The imaging system can then stitch the patterns into a detailed 3-D image of your face to determine if you are indeed the owner of your smartphone before unlocking it.
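The core idea in that paragraph — dot size varies inversely with distance — can be sketched in a few lines. To be clear, this is an illustrative toy, not Qualcomm’s actual algorithm; the calibration constant `K` is a made-up number for the example.

```python
# Toy sketch of depth-from-dot-size, assuming a projected infrared dot's
# apparent diameter shrinks in inverse proportion to distance.
# K is a hypothetical calibration constant: the dot's diameter in pixels
# when the surface is exactly 1 meter away.

K = 12.0  # hypothetical: dot appears 12 px wide at 1 m

def estimate_depth_m(dot_diameter_px):
    """Smaller dots -> farther away; bigger dots -> closer."""
    return K / dot_diameter_px

# A 6 px dot reads as twice the reference distance; a 24 px dot as half.
print(estimate_depth_m(6.0))   # 2.0 (meters)
print(estimate_depth_m(24.0))  # 0.5 (meters)
```

A real system would fit depths for thousands of dots at once and stitch them into a mesh, but the per-dot intuition is just this inverse relationship.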

And:

Because of the uniqueness of a person’s head shape, the likelihood of bypassing facial recognition with the incorrect face is 1 in a million, he added. That compares with a false acceptance rate of 1 in 100 for previous facial recognition systems, which had very poor security.

I can only imagine that likelihood dropping with each generation. I wonder what the odds are on a false positive for Touch ID.

UPDATE: From this Apple knowledge base article, hat tip Chuck Skoda:

Every fingerprint is unique, so it’s rare that even a small section of two separate fingerprints are alike enough to register as a match for Touch ID. The probability of this happening is 1 in 50,000 with a single, enrolled finger.
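Spelling out the comparison between the quoted numbers: 1 in a million versus 1 in 50,000 means the new face scanning should accept the wrong person about 20 times less often than Touch ID, and 10,000 times less often than those older facial recognition systems.

```python
# Back-of-the-envelope comparison of the quoted false-accept odds (1 in N).
face_id_odds = 1_000_000   # quoted for the new infrared face scanning
touch_id_odds = 50_000     # from Apple's Touch ID support article
old_face_odds = 100        # quoted for earlier facial recognition systems

# Higher odds = harder to fool. Ratios show the relative improvement.
print(face_id_odds / touch_id_odds)  # 20.0 -- vs. Touch ID
print(face_id_odds / old_face_odds)  # 10000.0 -- vs. older face systems
```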

Back to the Times article:

There are, however, limitations to infrared-scanning technologies. For example, objects that you wear, like a hat or a scarf, might throw off the camera, according to Qualcomm. In addition, experts said infrared light can get drowned out by bright sunlight outdoors, so face scanning might work less reliably on the beach.

And on the use of the camera for AR, this interesting note:

the limitations of the Ikea Place app underscore what’s missing from ARKit. For placing virtual objects, the app can detect horizontal surfaces, like a table surface or the ground, but it cannot yet detect walls.

This may be me misunderstanding the issue, but wall detection is not a new problem in computer vision. It stems from the corner detection problem: find a corner in a room, then classify it to determine how the walls emerge from that corner. Regardless, I suspect ARKit will evolve to solve any and all room geometry problems.

Can’t wait. Less than two weeks to go.