Amazon supports store innovation with ML algorithms
Amazon continues to evolve its in-store shopping and payment experience using state-of-the-art machine learning (ML) algorithms.
At the recent re:MARS 2022 global artificial intelligence (AI) event hosted by Amazon, Dilip Kumar, Vice President, Physical Retail and Technology, Amazon, explained how the omnichannel giant harnesses computer vision and ML algorithms in an ongoing effort to make in-store shopping easier and faster for customers.
Here are highlights from Kumar’s presentation on how Amazon applies algorithms in its Just Walk Out frictionless shopping technology, Amazon One palm payment solution, Amazon Style physical clothing store, and Amazon Dash Cart smart shopping cart.
Just Walk Out
In the case of Just Walk Out technology, which allows shoppers to skip the line at many Amazon stores, some Whole Foods Market stores, and several third-party retailer stores, Amazon deploys sensors, optics, and computer vision algorithms. As a result, the company has reduced the number of cameras required in stores equipped with Just Walk Out technology, making the systems more cost-effective, smaller, and capable of running deep networks locally.
Amazon’s Just Walk Out sensors and algorithms have evolved to detect a wide range of products and differences in shopping behavior at large grocery stores. The company has also increased the diversity of environments that its algorithms can take into account by deploying Just Walk Out technology for third-party retailers.
[Read more: Amazon selling its cashierless store platform to other retailers]
Amazon One
Initially introduced at two Seattle-area Amazon Go stores in September 2020, Amazon One is designed to let customers use their unique palm signature to pay or present a loyalty card in a store. While developing Amazon One, the retailer needed data to train and test its AI algorithms across demographics, age ranges, temperatures, and variations such as palm-specific calluses and wrinkles, so the service could correctly determine which palm was hovering over the device.
When Amazon began building Amazon One, it found that few public datasets of palm and vein images were available to help train the algorithms. Amazon therefore extended existing techniques to generate huge volumes of diverse, realistic synthetic palm and vein images to train its AI models and prepare the service for a wide variety of users.
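Amazon has not published its synthetic-data pipeline, and real systems generate photorealistic palm and vein imagery. As a rough, hypothetical illustration of the underlying idea — producing many varied training samples from a parameterized template rather than collecting real images — here is a toy sketch in Python (all function names and parameters are invented for illustration):

```python
import random

def synth_palm_sample(width=8, height=8, wrinkle_count=3, noise=0.1, rng=None):
    """Generate one toy 'palm print': a 2D intensity grid with a few
    randomly placed horizontal 'wrinkle' lines plus per-pixel noise.
    Purely illustrative -- real pipelines synthesize photorealistic
    palm/vein imagery, which this does not attempt."""
    rng = rng or random.Random()
    img = [[0.0] * width for _ in range(height)]
    for _ in range(wrinkle_count):
        row = rng.randrange(height)
        depth = rng.uniform(0.5, 1.0)  # wrinkle darkness varies per sample
        for col in range(width):
            img[row][col] = depth
    # Pixel noise stands in for sensor and skin variation
    for r in range(height):
        for c in range(width):
            img[r][c] = min(1.0, max(0.0, img[r][c] + rng.gauss(0, noise)))
    return img

def synth_dataset(n, seed=0):
    """Generate n samples; varying the parameters per sample is what
    gives the dataset its diversity."""
    rng = random.Random(seed)
    return [synth_palm_sample(rng=rng) for _ in range(n)]

data = synth_dataset(1000)
print(len(data), len(data[0]), len(data[0][0]))  # 1000 8 8
```

The point of the sketch is the workflow, not the imagery: when real examples are scarce, a generator with randomized parameters can cover demographic and physical variation far more cheaply than data collection.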
Amazon Style
At Amazon Style, Amazon’s physical clothing store, the company has created new algorithms that use information a customer provides, such as details entered in a “style survey” or the items they scanned while shopping on the store floor, to generate a set of recommended items that balances similarity to the customer’s current choices with variety.
The system also generates complementary selections, such as a shirt to match with jeans to create a recommended outfit. Additionally, Amazon has created synthetic datasets to mimic variations in real-life shopping scenarios.
[Read more: First Look: Amazon’s first-ever physical clothing store opens]
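Amazon has not disclosed how its Style recommender works. One common, generic technique for balancing relevance to a shopper’s choices against diversity of results is maximal marginal relevance (MMR) re-ranking; the sketch below is an assumption-laden stand-in, with `similarity` and `relevance` as hypothetical placeholders for learned models:

```python
def mmr_rerank(candidates, similarity, relevance, k=3, trade_off=0.7):
    """Greedy maximal-marginal-relevance re-ranking: at each step, pick
    the candidate most relevant to the shopper but least similar to the
    items already selected. `similarity` and `relevance` are
    caller-supplied functions (hypothetical stand-ins for real models)."""
    selected = []
    pool = list(candidates)
    while pool and len(selected) < k:
        def score(item):
            max_sim = max((similarity(item, s) for s in selected), default=0.0)
            return trade_off * relevance(item) - (1 - trade_off) * max_sim
        best = max(pool, key=score)
        selected.append(best)
        pool.remove(best)
    return selected

# Toy demo: items are (category, relevance) pairs; same category => similar.
items = [("jeans", 0.9), ("jeans", 0.85), ("shirt", 0.8), ("jacket", 0.6)]
picks = mmr_rerank(
    items,
    similarity=lambda a, b: 1.0 if a[0] == b[0] else 0.0,
    relevance=lambda a: a[1],
    k=3,
)
print([p[0] for p in picks])  # ['jeans', 'shirt', 'jacket']
```

Note how the second pair of jeans is skipped despite its high relevance: the similarity penalty pushes the ranking toward a varied outfit rather than three near-duplicates, which is the trade-off the article describes.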
Amazon Dash Cart
When Amazon built the Amazon Dash Cart, a smart shopping cart that helps customers skip the line at many of its Amazon Fresh stores in the United States, the company developed a set of computer vision and sensor fusion algorithms to detect items moving in and out of the cart, accurately capturing weight and quantity. These computer vision algorithms also operate under strict latency budgets, as the cart tracks a customer’s receipt in real time.
[Read more: Amazon’s newest smart device – the Dash Cart]
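Amazon’s actual Dash Cart algorithms are not public. As a hypothetical sketch of what fusing a camera classifier with a cart scale could look like, the toy rule below keeps only candidates whose catalog weight is consistent with the observed weight change, then picks the highest-scoring survivor and infers quantity from the weight ratio (all names and numbers are invented):

```python
def fuse_item_event(vision_scores, weight_delta_g, catalog_weights_g, tol=0.15):
    """Combine camera classifier scores with the cart scale reading.
    Hypothetical fusion rule: for each candidate item, estimate the
    quantity from the weight change, discard candidates whose implied
    weight disagrees with the scale by more than `tol`, and return the
    best-scoring survivor as (item, quantity, score)."""
    best = None
    for item, score in vision_scores.items():
        unit_w = catalog_weights_g[item]
        qty = max(1, round(weight_delta_g / unit_w))
        # Relative mismatch between observed weight and qty * unit weight
        mismatch = abs(weight_delta_g - qty * unit_w) / (qty * unit_w)
        if mismatch <= tol and (best is None or score > best[2]):
            best = (item, qty, score)
    return best

catalog = {"soda_can": 355.0, "soup_can": 350.0, "chips": 120.0}
scores = {"soda_can": 0.55, "soup_can": 0.40, "chips": 0.05}
event = fuse_item_event(scores, 710.0, catalog)
print(event)  # ('soda_can', 2, 0.55)
```

Even this toy version shows why fusion helps: the scale alone cannot distinguish two 355 g cans from something else of similar weight, and vision alone cannot count stacked identical items, but together they yield both identity and quantity.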
“Looking back on the progress my team has made, I’m reminded of the Amazon saying ‘it’s still day one,’ and it’s definitely still day one for us in physical retail and technology,” Kumar said in a company blog post. “I feel like we’re just beginning to tackle some of the complex challenges of the physical retail world, and I’m excited to see what the team will do next to push the boundaries of AI.”