#044 PATENT DROP - Disney, Nvidia & EA
play as or watch yourself in games & movies, control vehicles with your gaze
This Patent Drop is going out to 10,719 people! Hit subscribe to get a peek into the future with 3 summaries of new patent applications from big tech companies every week ✨🔮
Hi - happy Monday!
Before we get into this week’s filings, just a quick message from this week’s sponsors:
The Unexpected Way Millionaires Invest In Alternatives (And How You Can Too)
The Power Law dictates that 1% of the world’s population holds 45% of the wealth. And they’re only getting richer, thanks to the rise of consumer tech.
The one thing they’re doing that we’re not? They’re investing heavily in alternative assets.
And it makes sense why.
Interest rates are still near 0. Inflation is rising. And top firms from Goldman to Vanguard project returns of less than 6% until 2035.
To find promising investments, millionaires are turning to alternatives.
In fact, they allocate 30%, on average, to assets like Contemporary Art.
These numbers explain why:
Contemporary Art prices outperformed the S&P 500 by 174% from 1995–2020.
The global value of art is expected to grow by 53% by 2026
86% of wealth managers recommend offering art to clients
Thanks to Masterworks.io, Silicon Alley’s newest $1B unicorn, you can access this unexpected and potentially lucrative asset class. Their revolutionary technology platform allows you to buy and sell shares of paintings by Banksy and Basquiat—just like trading crypto on your favorite exchange.
The best part? Patent Drop Subscribers skip their waitlist*
*See important disclosures
Okay, now to the patents:
Nvidia is looking to make it safer and faster for drivers to perform specific vehicle operations.
Rather than just relying on voice instructions, Nvidia wants to take into account the gaze direction of the driver.
Using sensor data and a spatial map of the inside of the vehicle, Nvidia will be able to determine where a driver is looking, and then match that against any voice instructions from the driver. For example, if the driver looks towards the entertainment system and says “turn it up”, the car’s system can determine that the driver is referring specifically to the volume on the entertainment system.
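To make the idea concrete, here's a minimal sketch of how gaze direction could disambiguate a voice command, assuming the cabin is mapped as a set of named control zones, each with a direction vector from the driver's head. The zone names and vectors are invented for illustration; the patent doesn't specify an implementation.

```python
import math

# Hypothetical spatial map of the cabin: each control zone gets an
# approximate direction vector from the driver's head position
# (x = right, y = up, z = forward). Values are made up.
CABIN_MAP = {
    "entertainment_system": (0.4, -0.3, 0.87),
    "climate_control": (0.5, -0.5, 0.71),
    "side_mirror": (-0.9, 0.1, 0.42),
}

def _cosine(a, b):
    """Cosine similarity between two 3D direction vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def resolve_target(gaze_vector):
    """Return the control zone whose direction best matches the gaze."""
    return max(CABIN_MAP, key=lambda zone: _cosine(gaze_vector, CABIN_MAP[zone]))

def interpret(gaze_vector, command):
    """Pair an ambiguous voice command with the gazed-at control zone."""
    return f"{command} -> {resolve_target(gaze_vector)}"

# A glance toward the dash + "turn it up" resolves to the entertainment system.
print(interpret((0.4, -0.3, 0.9), "turn it up"))
```

A production system would of course fuse noisy eye-tracking estimates over time rather than trust a single gaze vector, but the core idea — scoring each mapped control against the gaze direction and routing the command to the best match — is what the filing describes.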
This patent filing is in a similar space to an application filed by Ford in #037 PATENT DROP where they wanted to use brain machine interfaces for drivers to be able to control car operations with their thoughts.
There are potential safety benefits to implementing these methods of controlling a vehicle. For instance, these initiatives remove the need to fumble around for controls. However, it feels like the vehicle is becoming a playground to explore new form factors for users to interact with technology.
Over the long run, we’re moving further down the path of becoming cyborgs, controlling more machines with our consciousness. Exciting or terrifying futures - take your pick.
Disney is working on making it easier to ‘tune’ people’s faces in the frames of a video, in order to change: identity, age, lighting conditions & more.
For example, if there was a scene in a movie where there needed to be a younger Will Smith on screen, Disney’s system will be able to tweak the facial identity of the ‘older Will Smith’ and make him look younger.
While systems for doing this already exist, the current methods have a few disadvantages. First, the current techniques tend to use neural networks that work best on low-resolution images. Second, retraining neural network models for each new facial identity is costly and time-consuming.
Without diving into the ‘how’, there are some interesting implications of this technology.
In the short run, this kind of technology makes the movie-making process more efficient. For example, if a scene needs to be reshot with an actor and that actor isn’t available, Disney will be able to transplant the actor’s face into the re-shot scene.
Where things get potentially exciting is in the long run, where Disney could insert ‘viewers’ into a movie / cartoon and create personalized pieces of content. This is a recurring theme across a number of patent applications from big tech companies - especially Snap, which is working on personalized series of content featuring a user and their friends.
Following the theme of the Disney filing, EA is looking into enabling users to generate character models based on reference images.
Using neural networks, the system will choose the relevant character attribute parameters from the reference image and then output a close-fitted character model.
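As a rough sketch of the pipeline described - a network predicting character-creator slider values from a reference photo - here's an illustrative stub. The slider names are invented, and the "encoder" is a placeholder standing in for a trained neural network; none of this comes from the filing itself.

```python
# Hypothetical character-creator sliders, each with a valid (min, max) range.
CHARACTER_SLIDERS = {
    "jaw_width": (0.0, 1.0),
    "eye_spacing": (0.0, 1.0),
    "nose_length": (0.0, 1.0),
    "skin_tone": (0.0, 1.0),
}

def fake_encoder(image_bytes):
    """Stand-in for a trained network: derive raw scores from the image bytes.

    A real system would run the reference photo through a neural network; here
    we just hash the bytes so the sketch is self-contained and deterministic
    within a run.
    """
    h = hash(image_bytes)
    return {
        name: ((h >> (i * 8)) % 256) / 255.0
        for i, name in enumerate(CHARACTER_SLIDERS)
    }

def build_character(image_bytes):
    """Clamp each predicted score into its slider's valid range."""
    raw = fake_encoder(image_bytes)
    return {
        name: min(hi, max(lo, raw[name]))
        for name, (lo, hi) in CHARACTER_SLIDERS.items()
    }
```

The interesting part is less the prediction itself than the output format: by emitting values for the game's existing customisation sliders, the resulting character stays fully editable by the player afterwards.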
While a lot of games do enable character customisation, it can take a lot of time and skill for a user to edit the character properties to resemble a real-world person. EA wants to remove that work using AI.
Why is this interesting? It feels like the next phase of entertainment - whether gaming or movies - is deeply personalised by being able to feature the viewers. It makes the process of consuming media more participatory, more open to remixes, and more open to being memeified (which is its own distribution channel).
Before you go…
Join 200,000 investors following Patent Drop on public.com and get tl;drs of new patent summaries
Have any friends or colleagues interested in the future? Forward them the newsletter!