#002 PATENT DROP
feat. Spatial Audio Navigation, VR Keyboards & Biometric Doorbells
3 new patents released yesterday from Apple, Intel and Amazon:
In the physical world, when we hear a sound, our auditory system enables us to approximate the distance and direction of the sound’s source. In essence, our brain creates a 3D model of our environment when it hears sounds.
In June, Apple announced that its AirPods Pro will be released with a spatial audio capability to help recreate 3D sounds with our headphones.
Now, with the release of this patent, we get an example of the applications Apple are imagining for this technology.
With existing navigation apps, we get two types of cues that help direct us to a destination: visual cues (i.e. reading a map on the screen) and audio cues (“turn left in 150 metres”). But both have their constraints.
Imagine that you’re walking, trying to find a restaurant for a date while on the phone with a friend who’s psyching you up. In this context, vocal instructions would interfere with the phone call. And repeatedly switching between your screen and your physical environment is annoying.
With spatial navigation, Apple plans to change the source of sound in your AirPods to provide navigational cues. If the sound feels like it’s right in front of you (as it does when we listen to music on headphones today), you continue straight. But if you need to take a left, the source of sound in your AirPods feels as though it’s to the left of you. And when you begin to move in the right direction, the sound reorientates to be in front of you.
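To make this concrete, the core of such a cue can be sketched as two steps: work out where the target sits relative to the direction you're facing, then pan the audio towards it. This is a minimal equal-power stereo-pan sketch under my own assumptions, not Apple's implementation, which would presumably use full head-related transfer functions rather than simple panning:

```python
import math

def relative_bearing(heading_deg, target_bearing_deg):
    """Angle of the target relative to where the user is facing,
    normalised to (-180, 180]: negative = left, positive = right."""
    diff = (target_bearing_deg - heading_deg) % 360
    return diff - 360 if diff > 180 else diff

def stereo_gains(rel_bearing_deg):
    """Constant-power pan: a target dead ahead plays equally in both
    ears; a target to the left shifts energy to the left channel."""
    # Map (-180, 180] to a pan position in [0, 1] (0 = full left).
    pan = (rel_bearing_deg / 180 + 1) / 2
    angle = pan * math.pi / 2
    return math.cos(angle), math.sin(angle)  # (left, right)
```

With this sketch, a restaurant due west of a user facing north gives `relative_bearing(0, 270) == -90`, and `stereo_gains(-90)` returns a louder left channel, so the sound appears to come from the left until the user turns towards it.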
What’s most interesting is that this applies both in the physical world and within VR - something Apple explicitly mention in the patent. Apple’s innovation is as relevant to the world we live in today as it is to the virtual worlds we may inhabit tomorrow.
Intel have been thinking about how one could use a physical keyboard in a VR space. In essence, they’ve imagined a gesture tracker on a headset that takes into account how a user’s hands interact with a physical mouse and keyboard. This then all gets mirrored into the virtual environment, so that users see their virtual hands using a virtual keyboard and virtual mouse.
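The mirroring step boils down to a coordinate transform: the headset tracks a hand in its own frame, and a rigid rotation-plus-translation maps each tracked point onto the virtual desk. This toy sketch is my own illustration, not anything specified in the patent, and assumes the transform has already been calibrated:

```python
def to_virtual(point, rotation, translation):
    """Apply p' = R p + t, mapping a 3D point from the headset's
    tracking frame into the virtual scene's frame."""
    return tuple(
        sum(rotation[i][j] * point[j] for j in range(3)) + translation[i]
        for i in range(3)
    )

# Identity rotation; the virtual desk sits 0.5 m further along z.
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = [0.0, 0.0, 0.5]
fingertip = [0.1, -0.2, 0.4]  # metres, headset frame
print(to_virtual(fingertip, R, t))  # approximately (0.1, -0.2, 0.9)
```

Run per tracked joint on every frame, this is enough to keep the virtual hands aligned with the physical keyboard beneath them.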
Why are Intel thinking about this?
Their take is that while VR applications may be well suited to entertainment, conventional VR systems may not be as well suited to productivity usages such as web browsing, checking e-mail, word processing, working in spreadsheets, and so on.
By enabling people to use the input devices they’re currently used to, we might begin to see desktops in office cubicles replaced with VR headsets that connect wirelessly to traditional keyboards and mice.
In Amazon’s latest patent on smart doorbells (i.e. the Amazon Ring), there’s an interesting section on capturing biometric data, in addition to video and audio, when someone rings your doorbell.
Examples of physiological characteristics that could be captured include:
facial recognition (2D and 3D)
odour / scent recognition
And then, more interestingly / worryingly, there are also behavioural characteristics mentioned such as:
typing rhythm (presumably from visitors entering information on the doorbell device)
the way you walk
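Typing rhythm as an identifier may sound odd, but the idea (known as keystroke dynamics) is that the gaps between a person's keypresses form a timing profile that tends to be stable per person. A toy illustration, not drawn from the patent:

```python
def inter_key_intervals(timestamps_ms):
    """Milliseconds between consecutive keypresses."""
    return [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]

def profile_distance(a, b):
    """Mean absolute difference between two equal-length timing
    profiles; smaller means the rhythms are more alike."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

visitor = inter_key_intervals([0, 180, 350, 540])  # [180, 170, 190]
known = inter_key_intervals([0, 175, 349, 536])    # [175, 174, 187]
print(profile_distance(visitor, known))  # → 4.0
```

A real system would use richer features (hold times, per-key-pair statistics), but even this crude distance shows how a timing pattern becomes comparable, and therefore collectable, data.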
In the same way that visual footage from Amazon Rings could be used to aid law enforcement in solving crimes, it’s not too much of a stretch to imagine all of the above biometric data being gathered, collected, shared with the police, or even being used in algorithms to predict “level of threat” when a stranger comes to your door.
As more houses use devices like this, Amazon potentially forms a huge web of surveillance that begins to include biometric data. That should make us pause for thought.
Want a full list of some of the patents released this week by companies such as Tesla, Palantir and Facebook? Click here
Know colleagues or friends who would find this useful? Please forward it to them!
Drops will be once a week (this week was an exception because of the launch)