

PATENT DROP: Google’s automatic coder
Plus: Snap tracks your eyes while Walmart tracks your scrolls
Happy Thursday and welcome to Patent Drop!
Today, we’re checking out a filing from Google for a machine learning-enabled automated coder; tech from Snap to track your eyes; and patents from Walmart to predict when customers will stay or go.
Before we dig into today’s edition, we have a small request. If you have five minutes, we’d very much appreciate you filling out this survey. We’d love to learn more about you so we can continue to create content you’ll love and build our platform for creative minds like you. Thanks in advance!
Anyways, let's get into it.
#1. Google’s AI programmer
In its fight to remain relevant in AI, Google may be adding a multi-tool to its arsenal.
The company wants to patent a system for automating “certain aspects of computer programming” using an AI model. Essentially, this tool watches a programmer write code and makes suggestions for edits or tools that will help them in real time, based on what the programmer is intending to do.
Using machine learning, Google’s system tracks the changes a programmer is making to source code. If the programmer makes certain changes repeatedly, such as changing one variable or function over and over again, the system will determine that they intend to make those changes throughout. The system then takes into account repositories of source code transformations and tools made by other programmers to find the best fit for the job.
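The filing doesn't include code, but the core idea — spotting a repeated edit and offering to generalize it — can be sketched roughly like this. Everything here (function name, threshold, the rename example) is our own illustration, not language from the patent:

```python
from collections import Counter

def suggest_bulk_edit(edit_history, min_repeats=3):
    """Count (old -> new) substitutions in the recent edit stream and,
    once one recurs enough times, offer to apply it everywhere."""
    patterns = Counter(edit_history)
    for (old, new), count in patterns.items():
        if count >= min_repeats:
            return f"Rename remaining occurrences of '{old}' to '{new}'?"
    return None

# After the programmer renames `usr` to `user` a third time,
# the system proposes finishing the job across the file.
history = [("usr", "user"), ("tmp", "temp"), ("usr", "user"), ("usr", "user")]
print(suggest_bulk_edit(history))
```

Google's version would go a step further, matching the inferred intent against repositories of existing transformations and tools rather than hard-coding the suggestion.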
If you’re not a coder, think of it like this: Imagine you’re building a table and aren’t sure which tool from a massive toolkit is best suited to tighten a bunch of tiny screws throughout it. A tool like Google’s might suggest that you use a power drill, instead of a screwdriver, to expedite the process.
Google said in its filing that this tech can automate the tedious, repetitive tasks that often come with coding. While macros can often be individually created to automate repetitive tasks, creating those tools themselves “can be cumbersome and/or require considerable expertise/resources,” the company noted.
Plus, Google noted, “many such tools may not scale outside of a particular context.” Its tool fills this gap, acting as a jack of all trades.
Google adding another piece to its hoard of AI tech is about as surprising as a cactus growing in the desert. The company has been touting new AI tools on the horizon for several months, including AI integrations throughout the Google Workspace suite and AI enhancements to its search engine, and it has merged its two major AI divisions, DeepMind and the Brain Team, into a single supergroup. And with practically the entire internet at its disposal for training data (at least according to its privacy policy update), the company shows no signs of stopping.
This patent in particular adds to previous filings showing Google’s interest in AI-assisted development. The company recently sought to patent a machine learning model that can create a user interface, and an AI tool that can develop and publish a “viable running app,” both from natural language descriptions. Though the tech in the latest patent would likely be put to use by programmers, while the others would be for a no-code crowd, these filings all together could hint at a suite of AI-based development tools in the works.
But Google faces some steep competition, said Kevin Gordon, co-founder of AI consulting and development firm Velora Labs. AI-assisted coding products already exist in the market, and more are likely to come, especially from competitors like Microsoft, Meta and OpenAI, Gordon said. Microsoft, for example, owns GitHub, which operates an AI coding assistant called GitHub Copilot. Meanwhile, OpenAI launched Code Interpreter earlier this month.
Gordon wouldn’t be surprised if this tech and others like it end up being fought out in court, he said. “The legal side is probably where you'll see a lot of the actual battles taking place. Patents like this will be important to defend the play.”
But if Google’s tech does reach developers’ hands at some point, Gordon can “definitely see it being a big deal,” he said. A large language model that can keep track of the minutiae across software repositories will likely do a better job at it than even an incredibly skilled programmer, he said. Adding a tool like this as a co-pilot for programming teams could allow for faster software development with less of a headache.
“There's the people who have that 10,000 foot view of a repository, and then there's the people who are really detailed,” said Gordon. “Here, you have a (large language model) that can kind of work at both levels.”
#2. Snap sees 20/20
Like many companies developing AR and VR tech, Snap wants to look into your eyes.
The company is seeking to patent tech that determines “gaze direction” to generate content when wearing a pair of AR glasses. Here’s how it works: Snap’s system first generates an “anchor point,” or a point in the user’s field of view where they’re focusing. Once generated, the system identifies a surface within the user’s field of view, and measures the distance between that surface and the anchor point.
The AR content is then generated based on that distance, and changes as you move closer or farther from it. Taking this type of measurement into consideration allows for more accurate rendering.
For example, if you are wearing a pair of Snap’s AR glasses, and you look at your kitchen table, the system would use the distance measurements and anchor point to accurately place an AR object.
Snap’s tech employs a concept called foveated rendering, in which content is rendered only at the point where a user is looking. By rendering content in this manner, Snap’s tech aims to “reduce latency and increase efficiency in processing captured image data thereby also reducing power consumption in the capturing devices.”
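The geometry behind this is straightforward: measure the distance from the gaze anchor, and spend rendering detail only where that distance is small. A minimal sketch of the idea, with the falloff formula being our own simplification rather than anything specified in Snap’s filing:

```python
import math

def anchor_distance(anchor, surface_point):
    """Distance between the gaze anchor point and a detected surface,
    used to scale AR content as the user moves closer or farther."""
    return math.dist(anchor, surface_point)

def render_quality(pixel, anchor, base=1.0):
    """Foveated rendering sketch: full detail where the user is looking,
    falling off as a pixel drifts into peripheral vision."""
    return base / (1.0 + math.dist(pixel, anchor))

# Detail is highest at the anchor itself and drops off with distance.
print(render_quality((0, 0), (0, 0)))   # full quality at the gaze point
print(render_quality((10, 0), (0, 0)))  # much lower in the periphery
```

A real headset would feed the anchor from live pupil tracking and re-render many times per second; the payoff is that most of the frame can be drawn cheaply without the user noticing.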
The company said that these renderings are based on tracking “head orientation and a relative position of a pupil or iris” with myriad sensors packed into the glasses to measure motion and eye movement. Snap said that its headwear may track users in a host of different ways, including facial tracking, hand tracking, biometric readings like heart rate or pupil dilation, and speech recognition for “particular hotwords.”
Snap has been working on AR glasses for a hot minute. The company first debuted AR Spectacles in 2017, a launch that resulted in $40 million in losses from 300,000 unsold units, and has released several iterations since. The company also has sought plenty of headgear-related patents, including one for a prescription version of its AR glasses.
But Snap isn’t the only one interested in tracking your eyeline. Meta has sought plenty of patents for gaze-based control of content, and offers eye-tracking within the Meta Quest Pro. Apple, meanwhile, has touted eye tracking and control as a big feature of its recently debuted Vision Pro headset.
Jake Maymar, VP of Innovation at The Glimpse Group, said that these tech firms see a lot of potential in users’ eyes … literally. He compared companies' newfound interest in gaze control to the shift from buttons to touch screens on cell phones: While the idea of a touchscreen once felt novel and outlandish to the average consumer, it’s since become an embedded part of daily life. “It's just a new paradigm that I don't think we realize is going to be the paradigm of the way we interact with things,” he said. “You don't realize how easy it is to use until you actually use it.”
Another (potentially lucrative) reason that Snap and other companies are so interested in vision tracking: Hyper-targeted advertising, said Maymar. In an AR or VR experience, tracking where a person is looking can tell a company where it should place ads, how they’re reacting to those ads and what content actually holds a person’s attention.
“There’s that saying that eyes are the windows to the soul,” said Maymar. “By tracking those, you can actually gain a lot of information. If you're gaining that information, you'll be able to really create memorable experiences that have impact.”
As a social media company, Snap makes the bulk of its revenue from digital advertising, a business model that’s struggling amid a drop in demand for ads. Finding new ways to target the consumer in its future iterations of headsets could provide an additional revenue stream.
#3. Walmart forecasts your scroll
Walmart wants to know what’s going to make you click “buy” when you’re online window shopping.
The retailer is seeking to patent three methods for predicting the churn, acquisition and conversion of a user. These tools work the same way that any AI prediction model does: They collect historical user data, such as transactions or engagement, and feed that to a machine learning model. That model then hands out a score detailing the likelihood that certain events will happen.
All three of these patents predict user interactions with an online membership program. The customer churn prediction patent predicts how likely the user is to not renew their membership to a customer loyalty program.
Meanwhile, the acquisition prediction patent forecasts the probability that a user will sign up for a program; and the conversion prediction method tells the likelihood that a customer will switch from a free trial to a paid membership.
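All three models reduce to the same shape: turn a customer’s history into numeric signals, weight them, and squash the result into a probability. Here’s a toy sketch of the churn case — the signals, weights and threshold are entirely invented for illustration, and Walmart’s filings don’t specify a model architecture:

```python
import math

def churn_score(features, weights, bias=0.0):
    """Toy membership-churn model: weighted engagement signals squashed
    into a 0-1 likelihood that the member will NOT renew."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))  # logistic function

# Hypothetical signals: days since last order, support tickets filed,
# and (negated) orders per month -- all invented for this example.
member = [45, 2, -1]
weights = [0.05, 0.3, 0.4]
score = churn_score(member, weights, bias=-2.0)
```

In production, the weights would be learned from historical transaction and engagement data rather than set by hand, and the score would decide whether this member sees a retention offer or is left alone.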
Rather than treating every online customer the same, the tech in these patents seeks to help Walmart hyper-personalize its approach to making customers stick around. If a user is deemed unlikely to sign up for a membership or convert to a paid version of it, these systems tell the retailer not to bother with additional ads or reminders, avoiding customer dissatisfaction.
“Customer satisfaction of many customers who may not have any desire to join the trial loyalty or membership programs may be reduced because such customers, while accessing the online e-commerce platform may be bombarded with content promoting the trial membership or loyalty programs,” Walmart noted in its acquisition prediction patent.
These patents are the latest example of how AI is allowing companies to cater their ads, recommendations and user experiences to hyper-personalized specifications. We’ve seen this concept crop up in several patent filings over the last few months: Visa wants to use AI to tempt you into using your credit card, Uber wants machine learning to predict your ridership needs, and eBay is working on a model to place highly customized banner ads, just to name a few.
As for Walmart, this isn’t the first time we’ve seen the retailer get techy as it seeks to prove it’s more than its brick-and-mortar legacy. The company filed a bunch of tech-related patent applications in recent months, including an automated warehouse storage system, autonomous delivery vehicles (including trucks and drones), and a way to determine online shopper “attribute affinities” to help it give personalized recommendations.
These patents could help it boost its ecommerce presence, specifically by helping it gain traction for Walmart+, the $98-a-year subscription service the company introduced in September 2020. The service, which reportedly has around 11 million subscribers, offers gas discounts and free delivery; its “Walmart+ Week” promotion runs from July 10 through July 13, bookending Amazon Prime Day.
Though a company representative claimed at launch that Walmart+ isn’t intended to compete with any other service, Amazon remains a major rival. Amazon counts more than 300 million active customer accounts, and Walmart’s tech push has likely been driven by a goal of narrowing the gap between Amazon’s e-commerce capabilities and its own, especially as consumer trends lean toward online shopping, one analyst previously told Patent Drop.
Finding ways to effectively target the consumers that are actually going to buy into a Walmart subscription – and not wasting energy on those who won’t – could be key to the retailer gaining some ground in the fight for ecommerce relevance.
Extra Drops
You want a few more?
JPMorgan Chase is getting into interior design. The company wants to patent a system for “space planning” using AI reasoning, aiming to “optimize real estate usage.”
Adobe wants to predict where the crow flies. The company is seeking to patent a system that estimates 3D trajectories of physical objects, for tasks such as “optimizing layouts of physical environments” or “monitoring traffic flow patterns.”
Apple wants Siri to listen in to your FaceTime calls. The company wants to patent a method for interacting with a digital assistant during a video call, allowing users to make commands without stopping their chat.
What else is new?
The FTC has sent an information request to OpenAI as part of a probe of ChatGPT, seeking to discover whether or not the chatbot harms consumers, Bloomberg reported.
Speaking of chatbots: Google says Bard can now give audio responses, chat in 40 languages and use images.
This award-winning app helps you get informed and inspired. In a world oversaturated with content, Flipboard makes it easy for you to discover stories that you care about, including stories from Patent Drop. Pick from thousands of topics, save stories in a Magazine, and connect with other enthusiasts. Download here.*
*Partner
Have any comments, tips or suggestions? Drop us a line! Email at admin@patentdrop.xyz or shoot us a DM on Twitter @patentdrop.