With the industry's shift toward regimen recommendations, beauty retailers need to step up their game to enhance customer satisfaction, build loyalty, and earn trust. Customers are more likely to return for future purchases and recommend a retailer to others when that retailer genuinely cares about their skincare concerns and provides helpful recommendations that deliver positive results. Our framework uses ML models to surface beauty regimen recommendations through Walmart's Virtual Try-On feature, overcoming cold-start and out-of-stock challenges.
Modeling Approach

The Regime Recommendation Framework (RRF) follows a three-step approach.
Step 1: Data Preparation
RRF scans the most recent years of omni transactions (store and eCommerce) in the beauty department, running an ensemble of association-rule-based and probabilistic models. Including store transactions lets the "impulse purchases" driven by the in-store experience surface online. We incorporate product attribute information (such as color and size) and shelf details (such as mascara or eyeshadow) to form a comprehensive omni table.
As part of data preprocessing, we filter out single-item transactions and create baskets of omni transaction data for modeling. We also filter recommendations to diversify by product type within the same department, so a customer shopping for mascara will also receive eyeshadow recommendations. Recommendations are prioritized using a Customer Decision Tree (CDT), which reflects the customer's journey in purchasing beauty products and their preferences when curating a regimen. For example, for a foundation product, would a customer prefer liquid or powder? If a customer is virtually trying on a mascara, which compatible eyeshadow should be recommended?
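The basket-construction and single-item filtering described above can be sketched in plain Python. This is a minimal illustration; the transaction IDs, item names, and schema here are hypothetical, not Walmart's actual data model:

```python
from collections import defaultdict

def build_baskets(transactions):
    """Group (transaction_id, item_id) pairs into baskets and drop
    single-item baskets, which carry no co-purchase signal."""
    baskets = defaultdict(set)
    for txn_id, item_id in transactions:
        baskets[txn_id].add(item_id)
    return {t: items for t, items in baskets.items() if len(items) > 1}

# Toy omni transactions: (transaction_id, item_id)
rows = [
    ("t1", "mascara_a"), ("t1", "eyeshadow_b"),
    ("t2", "foundation_c"),              # single-item basket, filtered out
    ("t3", "mascara_a"), ("t3", "liner_d"),
]
print(build_baskets(rows))
```

At production scale this grouping would run on a distributed engine over billions of rows, but the logic per basket is the same.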
Our data preparation pipeline features a scalable and robust design that can manage billions of transactions, demonstrating the framework’s capacity to handle substantial data volumes.
Step 2: Model Training to Provide Recommendations
RRF is trained using a BERT-based sequence classification model on a GPU, referencing the recommendations made in step 1. This helps overcome the cold-start problem for newly launched or less-purchased products. The algorithm provides real-time product recommendations that generate comprehensive beauty regimens, which are ingested into Walmart's Virtual Try-On platform – essentially offering a complete virtual makeover.
This pipeline utilizes the processed data to train machine learning and deep learning models with a probabilistic approach.
a) Market Basket Analysis Model: RRF uses a tree-based algorithm for identifying frequent patterns. Minimum support and minimum confidence thresholds are used as hyperparameters, and the resulting association rules are filtered to generate single-cardinality output (antecedent, consequent, confidence, lift).
What are support, confidence, and lift in product association mining?
· Support: how frequently an itemset appears across all transactions.

· Confidence: the likelihood that the consequent appears in a cart given that the cart already contains the antecedent.

· Lift: how much better an association rule predicts a specific outcome compared with a random choice. Greater lift values indicate stronger associations.
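These three metrics fall directly out of the baskets. A minimal pure-Python sketch over toy baskets (illustrative only; a production pipeline would use a frequent-pattern miner rather than brute force):

```python
def support(itemset, baskets):
    """Fraction of baskets containing every item in the itemset."""
    return sum(1 for b in baskets if itemset <= b) / len(baskets)

def confidence(antecedent, consequent, baskets):
    """P(consequent in cart | antecedent in cart)."""
    return support(antecedent | consequent, baskets) / support(antecedent, baskets)

def lift(antecedent, consequent, baskets):
    """Confidence relative to the consequent's baseline support."""
    return confidence(antecedent, consequent, baskets) / support(consequent, baskets)

baskets = [
    {"mascara", "eyeshadow"},
    {"mascara", "eyeshadow", "liner"},
    {"mascara", "foundation"},
    {"foundation"},
]
print(lift({"mascara"}, {"eyeshadow"}, baskets))  # > 1: positive association
```

Here mascara appears in 3 of 4 baskets (support 0.75), mascara-with-eyeshadow in 2 of 4, so confidence is 2/3 and lift is (2/3) / 0.5 ≈ 1.33.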

b) Probabilistic Approach: This statistical approach uses the ratio of co-occurrence counts to individual product occurrence counts to produce the final set of recommendations.
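One plausible reading of that ratio is sketched below; the exact normalization RRF uses is not specified in the post, so this is an assumption:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_scores(baskets):
    """Score each ordered pair (a, b) by the number of baskets containing
    both, divided by the number of baskets containing a alone."""
    item_counts = Counter()
    pair_counts = Counter()
    for basket in baskets:
        item_counts.update(basket)
        for a, b in combinations(sorted(basket), 2):
            pair_counts[(a, b)] += 1
            pair_counts[(b, a)] += 1
    return {(a, b): n / item_counts[a] for (a, b), n in pair_counts.items()}

baskets = [
    {"mascara", "eyeshadow"},
    {"mascara", "eyeshadow"},
    {"mascara", "liner"},
]
scores = cooccurrence_scores(baskets)
print(scores[("mascara", "eyeshadow")])  # 2 co-occurrences / 3 mascara baskets
```

The asymmetry is deliberate: eyeshadow given mascara scores 2/3, while mascara given eyeshadow scores 1.0, reflecting different conditional strengths.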
c) GPU-attribute-based approach: The previous two approaches work with transaction data but suffer from a cold-start problem for newly launched or less-purchased products. Using the output of the transaction-based approaches and item attributes, we create a custom labelled dataset containing positive and negative complementary pairs. A pretrained BERT-based sequence classifier is fine-tuned on sentence-pair classification to provide complementary product recommendations based on item attributes such as item description and benefit.
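The labelled-pair construction might look like the sketch below: positives come from the transaction-based rules, negatives are sampled from items not known to complement the anchor. Item names, attribute strings, and the sampling scheme are all hypothetical:

```python
import random

def build_pair_dataset(complement_rules, catalog, attrs, seed=0):
    """Build (text_a, text_b, label) rows for sentence-pair classification:
    label 1 for known complementary pairs, 0 for sampled negatives."""
    rng = random.Random(seed)
    rows = []
    for anchor, complement in complement_rules:
        rows.append((attrs[anchor], attrs[complement], 1))
        # Negative: an item not known to complement the anchor.
        negatives = [i for i in catalog
                     if i != anchor and (anchor, i) not in complement_rules]
        rows.append((attrs[anchor], attrs[rng.choice(negatives)], 0))
    return rows

attrs = {
    "mascara_a": "lengthening mascara, black, volumizing benefit",
    "eyeshadow_b": "matte eyeshadow palette, neutral tones",
    "cleanser_c": "gentle foaming facial cleanser",
}
rules = {("mascara_a", "eyeshadow_b")}
print(build_pair_dataset(rules, list(attrs), attrs))
```

Each (text_a, text_b, label) row is then fed to the BERT sequence classifier, which learns to score attribute-text pairs even for items with no purchase history.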
Model Configuration:
· Layers: The first few layers are frozen, and the remaining layers are trained and tuned on the custom labelled dataset.
· Optimizer: Adam algorithm
· OOS: To solve the out-of-stock problem while providing recommendations, we replace non-transactable items with items having similar attributes, using a BERT-based sentence-transformer embedding followed by cosine similarity, while also considering item popularity and sales volume. For cosmetic products with multiple variants (colors/shades), it recommends another available variant under the same base item ID.
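The OOS substitution step can be sketched as below. In production the vectors would come from a BERT-based sentence transformer over item attribute text; the toy three-dimensional embeddings, item names, and popularity counts here are stand-ins:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def substitute(oos_item, embeddings, in_stock, popularity):
    """Pick the most attribute-similar in-stock item, breaking ties by popularity."""
    return max(
        (i for i in in_stock if i != oos_item),
        key=lambda i: (cosine(embeddings[oos_item], embeddings[i]), popularity[i]),
    )

embeddings = {                  # toy stand-ins for sentence embeddings
    "red_lipstick":  [0.90, 0.10, 0.00],
    "ruby_lipstick": [0.88, 0.12, 0.01],
    "face_cleanser": [0.00, 0.20, 0.90],
}
popularity = {"ruby_lipstick": 500, "face_cleanser": 900}
print(substitute("red_lipstick", embeddings,
                 ["ruby_lipstick", "face_cleanser"], popularity))
```

The similar lipstick wins despite the cleanser's higher popularity, because attribute similarity is the primary sort key and popularity only breaks ties.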
Step 3: Prioritization of Recommendations
The recommendations from the model-training pipeline are prioritized based on customer purchase preferences using the Customer Decision Tree (CDT), sales lift estimates, and a few other attributes that affect customer buying decisions. Recommendations are prioritized on product benefit, skin benefit, brand affinity, CDT, and product diversity using a weighted-score mechanism.
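A minimal weighted-score mechanism, assuming illustrative feature names and weights (the production weights and feature normalization are not disclosed in the post):

```python
WEIGHTS = {  # illustrative weights, not the production values
    "product_benefit": 0.30,
    "skin_benefit":    0.25,
    "brand_affinity":  0.20,
    "cdt_rank":        0.15,
    "diversity":       0.10,
}

def weighted_score(features):
    """Combine normalized (0-1) feature scores into one priority score."""
    return sum(WEIGHTS[name] * features[name] for name in WEIGHTS)

candidates = {
    "eyeshadow_b": {"product_benefit": 0.9, "skin_benefit": 0.7,
                    "brand_affinity": 0.8, "cdt_rank": 0.9, "diversity": 0.6},
    "liner_d":     {"product_benefit": 0.6, "skin_benefit": 0.8,
                    "brand_affinity": 0.5, "cdt_rank": 0.4, "diversity": 0.9},
}
ranked = sorted(candidates, key=lambda c: weighted_score(candidates[c]), reverse=True)
print(ranked)
```

Keeping the weights in one place makes the trade-off between, say, brand affinity and diversity an explicit, tunable business decision rather than an implicit model artifact.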
RRF uses a scheduler running in parallel to produce final recommendations in less than 2.5 hours.
Benefits
Online customers are frustrated that they cannot see what beauty products, specifically color cosmetics, look like on their face or how products match their skin tone. Customers also struggle to envision the complete look these products will create, and shopping online can be overwhelming, with millions of beauty products that may or may not work for a given skin type. RRF addresses these challenges, enabling customers to try products on their face and make more informed purchase decisions. Virtual makeovers are also accessible to a wide range of people, including those with disabilities or mobility issues who may find it challenging to visit a physical retail store. This accessibility ensures everyone can explore different makeup looks.

Conclusion
The RRF approach can be applied across multiple departments to improve shopping recommendations, especially those involving virtual try-ons. The framework overcomes cold-start and out-of-stock inventory issues and can help anyone seeking to build product recommendation solutions at scale for any department.
Acknowledgement
This recommendation engine was co-developed with Krishna Koti & sanjay vk. Special thanks to Jonathan Sidhu & Magdaline Frank for this initiative and their support, and to Walmart's merchant team for helping throughout this effort.
Improving Virtual Makeovers with Machine Learning was originally published in Walmart Global Tech Blog on Medium, where people are continuing the conversation by highlighting and responding to this story.
Article Link: Improving Virtual Makeovers with Machine Learning | by Apurva Sinha | Walmart Global Tech Blog | Mar, 2025 | Medium