Discover more from PATENT DROP
#016 PATENT DROP
Amazon peep holes, Snap real-time body animation, Spotify video / song mash-ups
This Patent Drop is going out to 5009 people! Hit subscribe to get a peek into the future with 3 summaries of new patent applications from big tech companies every week ✨🔮
Hi - hope you’ve had a lovely week! Patent Drop hit 5000 subscribers this week - thanks for being part of the journey so far :)
Here are some interesting patent filings from the last week.
Amazon have filed a patent for a new type of audio/visual home security device that works a little differently to the existing Ring.
This device is made up of two key components: an interior component mounted on the inside of a door, and an exterior component that sits on the outside. The two attach to, or around, an existing opening, such as a peep hole. The new device aims to keep the peep hole usable while adding audio and video capture.
Why is Amazon thinking about this?
They highlight a few problems with the existing Ring devices. Firstly, current smart doorbells may not provide a great field of view because of wiring constraints. For instance, Ring devices often have to be installed perpendicular to the front door, which limits how well they can see and identify visitors. Secondly, a smart doorbell on the outside of a house needs to communicate with a device inside the house, through the exterior wall, and that connection can be weak because of the obstruction. Finally, people in rental properties might not be allowed to damage the building’s front wall in order to install a Ring.
With this new device, Amazon want to leverage existing barrier viewers (e.g. peep holes) and provide audio/visual security positioned at eye level. Besides letting you open up a lens and view who’s on the other side, the device will also capture visuals and audio and detect motion in the same way the existing Ring does. Moreover, wireless communication should be more reliable: the external viewer connects directly to the internal viewer, which can then transmit information without a house wall in the way.
In the grand scheme of things, this patent doesn’t seem like a huge game changer. What’s most interesting is that Amazon are thinking about where the existing Ring devices fall short (e.g. limited field of view, poor wireless communication) and finding ways to deliver better home security. Using existing peep holes in doors is one way they’re thinking of offering a better view of who is on the other side of a door, while leveraging all of the smart (and privacy-intrusive) data-collection capabilities that facial and audio recognition technology brings.
In this filing, Snap describe a system that can take a single image of an individual and create an animated video of that person adopting different poses. For example, a single image of a person could be turned into a video of that person dancing, performing acrobatics, fighting, and so on.
To do this, Snap will first segment the input image into a ‘body’ portion, and a ‘background’ portion. With the body portion, Snap will then convert it into a 3D model with points for where the body’s joints are. Then the model will receive inputs of different pose parameters, so that different output images are created of the body in different poses. These images can then be put together to create an animated video.
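The steps above can be sketched end to end. Everything here is illustrative — the function names, the toy 2D skeleton, and the rotation-based “pose parameter” are stand-ins for the 3D joint model the filing describes, not Snap’s actual implementation:

```python
import math

def segment(image):
    # Step 1: separate the person ("body") from the "background" (stubbed here).
    return image["body"], image["background"]

def fit_skeleton(body):
    # Step 2: fit a skeleton with points at the body's joints.
    # A real system would infer 3D joints; this toy uses fixed 2D points.
    return {"shoulder": (0.0, 1.0), "elbow": (0.0, 0.5), "wrist": (0.0, 0.0)}

def apply_pose(skeleton, angle):
    # Step 3: re-pose the skeleton. Here a "pose parameter" is just a
    # rotation of every joint about the origin.
    c, s = math.cos(angle), math.sin(angle)
    return {name: (c * x - s * y, s * x + c * y)
            for name, (x, y) in skeleton.items()}

def render(posed_skeleton, background):
    # Step 4: render the posed body over the original background (stubbed).
    return {"pose": posed_skeleton, "background": background}

def animate(image, pose_angles):
    body, background = segment(image)
    skeleton = fit_skeleton(body)
    # One output frame per pose parameter; stitched together, the frames
    # form the animated video.
    return [render(apply_pose(skeleton, a), background) for a in pose_angles]

frames = animate({"body": "...", "background": "..."}, [0.0, 0.3, 0.6])
print(len(frames))  # one frame per requested pose
```

The key design idea is that the person only has to be photographed once: every new pose is generated from the fitted model, not from new footage.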
We’ve seen products appear in the last few years with tools similar to what Snap is suggesting. For example, the Chinese app ‘Zao’ leveraged deep-fake technology to put people’s faces into famous movie scenes. What sets Snap’s filing apart from these competitors is that it looks to take a person’s whole body and animate it in different ways, not just transpose a face onto another person’s body.
Is this world changing technology that is going to lift humanity out of poverty, homelessness and disease? No.
But! Snap is making interesting moves in being a democratised Disney. Snap’s AR filters gave people the chance to enter an animated universe in which their character is the centre. This recent filing sees Snap move further along the train tracks, where our whole bodies can begin to live, breathe and interact in worlds different to our physical or immediate surroundings. This technology already exists, but it’s kept in big animation studios. Snap is now bringing that technology to everybody’s phone.
Spotify are looking at recommending music based on the emotions displayed within user-generated video, and vice-versa.
So let’s imagine you’ve taken a video on your phone. Under Spotify’s filing, they would extract the latent emotions expressed within the video, and give you back a video clip with appropriate music added to it. This new video clip could then be uploaded onto other social media platforms.
The emotional tags that will be added to a video include: admiration, adoration, aesthetic appreciation, amusement, anger, anxiety, awe, awkwardness, boredom, calmness, confusion, craving, disgust, empathetic pain, entrancement, excitement, fear, horror, interest, joy, nostalgia, relief, romance, sadness, satisfaction, sexual desire, and surprise.
The audio tags will include: angry, exciting, funny, happy, sad, scary, and tender.
So for each video analysed, a vector embedding will be created that captures the range of emotions expressed; the same is done for each song in the audio library. Spotify’s system will then find the song whose embedding most closely matches the emotions in the video.
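A minimal sketch of that matching step. The filing doesn’t specify the similarity metric, so cosine similarity is assumed here, and a made-up five-axis emotion space stands in for the full tag lists above:

```python
import math

# Hypothetical emotion axes (the filing lists ~27 video tags and 7 audio tags).
AXES = ["joy", "sadness", "excitement", "calmness", "fear"]

def cosine(a, b):
    # Cosine similarity between two emotion vectors: 1.0 = same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_match(video_vec, song_vecs):
    # Return the song whose emotion embedding is closest to the video's.
    return max(song_vecs, key=lambda name: cosine(video_vec, song_vecs[name]))

video = [0.9, 0.0, 0.7, 0.1, 0.0]          # an upbeat clip
songs = {
    "happy_track": [0.8, 0.1, 0.6, 0.2, 0.0],
    "sad_ballad":  [0.1, 0.9, 0.0, 0.3, 0.1],
}
print(best_match(video, songs))  # → happy_track
```

Note that the video and audio tag vocabularies differ, so a real system would also need a mapping between the two embedding spaces before comparing them.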
This is interesting for a few reasons. In the filing, Spotify describe how adding additional sound (such as music) can be relatively time consuming when uploading video. The innovation that TikTok brought is two-fold: firstly, the friction of adding music to video was reduced; secondly, music-based video content becomes incredibly engaging as a baseline experience. Spotify might be looking at reducing the ‘creative-friction’ even further by adding an appropriate song to any video clip that a user creates.
Another question to ask is whether Spotify would be looking to introduce this into the Spotify app itself (or a standalone app), or through its existing integration with Facebook & Instagram. In theory, Spotify could carve out a place for itself in a TikTok world, where Spotify helps take people’s UGC video content and automatically make it more engaging with appropriate sounds added. Or if it’s done via Instagram, Spotify’s technology could make it easier for users to add engaging video content for Reels and Stories. Watch this space…
Before you leave…
Want to see the full list of new patent filings over the last 2 weeks? Click here!
Follow me on Public to get the TLDRs of each Patent Drop and more exclusive content to help inform your stock market investments
I’d love it if you can forward this email to anyone in your network who’s interested in the future :)