Jacob Kastrenakes, writing for The Verge:
> Nailing down exactly what the Google Assistant is capable of can be strangely difficult right now. That’s because Google currently has three different ways to use the Google Assistant. Google says it’s the same Assistant in each place, but it can (and can’t) do different things depending on where you use it.
>
> - Google Assistant on Google Home (the new speaker)
> - Google Assistant on Pixel (the new phones)
> - Google Assistant on Allo (the new-ish chat app)
At its core, Google Assistant is a model of you, with threads through your life: your calendar, your photos and other media, your travel plans, your food-ordering habits, and so on. Each of these assistants is a window into your Google model, with well-defined read and write access to that model.
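The "window into your model with well-defined read and write access" idea can be sketched roughly like this. All of the names here (`UserModel`, `Scope`, the per-surface permissions) are hypothetical, invented for illustration; this is not Google's actual design:

```python
# Hypothetical sketch: a central user model, with each assistant surface
# granted its own scoped read/write window into it.
from dataclasses import dataclass, field


@dataclass
class UserModel:
    """Central repository of facts about the user."""
    data: dict = field(default_factory=dict)


class Scope:
    """A window into the model: a surface may only touch its allowed keys."""

    def __init__(self, model, readable, writable):
        self.model = model
        self.readable = set(readable)
        self.writable = set(writable)

    def read(self, key):
        if key not in self.readable:
            raise PermissionError(f"{key!r} is not readable in this scope")
        return self.model.data.get(key)

    def write(self, key, value):
        if key not in self.writable:
            raise PermissionError(f"{key!r} is not writable in this scope")
        self.model.data[key] = value


model = UserModel()
# Each surface gets a different slice of the same underlying model.
home = Scope(model, readable={"calendar", "music"}, writable={"music"})
pixel = Scope(model, readable={"calendar", "photos", "music"},
              writable={"calendar", "photos", "music"})

pixel.write("calendar", ["dentist at 3pm"])
print(home.read("calendar"))   # Home can read what Pixel wrote...
# home.write("calendar", [])   # ...but writing here would raise PermissionError
```

The point of the sketch is that "same Assistant, different capabilities" falls out naturally: the model is shared, while each surface's scope decides what it can see and change.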
One of the challenges in building this sort of model is keeping it online and distributed. Ideally, you'd always be online, with a super-fast net connection and secure, unlimited storage. Then you could store the model in a central repository and grant the various assistants access as needed.
But real life imposes constraints: intermittent net access, limited storage, and different form factors. Getting all these pieces to play together is a daunting challenge.
Notably, Apple has been meditating on this problem since the early days of the Mac. Check out this Knowledge Navigator video from 1987. This is an incredibly complex problem, and solutions are still in their infancy. Fascinating to watch this unfold.