Virtual Try-On 2.0: The Technology Finally Catches Up to the Promise

After years of underwhelming results, AI-powered virtual try-on is achieving photorealistic quality that luxury brands and consumers are finally taking seriously.

Industry Analysis | September 2025

The gap between online shopping’s promise and reality has always been most visible in virtual try-on technology. Early attempts were plagued by uncanny valley avatars, flat AR overlays, and results that looked nothing like actual photographs. But this year marks an inflection point: artificial intelligence has advanced to the point where virtual try-ons are approaching photorealism.

Venture capital is flowing back into the category after years of skepticism. A new generation of startups — armed with advanced diffusion models and generative AI — is producing digital twins from user selfies that blur the line between synthetic and real. The technology has improved enough that luxury designers are willing to stake their brand identity on it. Case in point: Doji’s exclusive partnership with Peter Do, announced this month, represents the kind of high-fashion validation that eluded previous generations of virtual try-on platforms.

The question now isn’t whether the technology works, but whether it can fundamentally alter how millions of people discover and buy clothing online. With AI try-on technology, customers can now preview outfits on themselves before buying.

This year’s Vogue Business tech innovators list highlighted two startups making significant advances in avatar-based virtual try-on: Alta (https://alta.ai/) and Doji (https://doji.com/). Both represent the current state of the art in AI-generated digital twins.

Alta emerged from beta in May backed by an $11 million seed round from Menlo Ventures (known for early bets on Uber and Poshmark) and Aglae Ventures, the investment firm linked to LVMH’s Arnault family. The company positions itself as a virtual wardrobe and styling assistant.

Doji launched the same month with $14 million in seed funding led by Thrive Capital, which has backed both OpenAI and Skims. Currently operating on an invite-only basis, Doji emphasizes image quality and social shareability to appeal to fashion-forward users.

These well-funded startups face formidable competition from technology giants. Google integrated virtual try-on capabilities into Google Shopping in May, then followed up in June with Doppl, an experimental app that generates AI videos showing how clothing moves on a user’s body. Multi-brand retailers like Zalando have deployed their own solutions, allowing shoppers to input body measurements and visualize garments on size-appropriate avatars.

Not all new entrants are following the dedicated-app model. Own Every Look (https://owneverylook.com/) has taken a browser-first approach that addresses what some analysts see as virtual try-on’s fundamental adoption barrier: workflow friction.

The platform’s Chrome extension enables users to capture garments directly from any website — whether Instagram, e-commerce platforms, or editorial content — and immediately visualize them on their uploaded photos using generative AI. This collapses what would normally be a multi-step process (see item, screenshot, open app, upload, search for product) into a single action performed in-context.
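The collapsed capture-to-render flow described above can be pictured in a few lines. This is a minimal sketch, not Own Every Look’s actual implementation: `CaptureContext`, `one_click_try_on`, and the `generate` backend are all hypothetical names standing in for whatever a browser extension and image-to-image try-on service would actually expose.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CaptureContext:
    """Metadata a browser extension can read at the moment of capture."""
    image_url: str       # garment image on the current page
    page_url: str        # retailer/editorial URL, kept for later checkout
    price: Optional[str] # preserved so conversion stays one click away

def one_click_try_on(ctx: CaptureContext, user_photo: bytes, generate) -> dict:
    """Collapse see-item -> screenshot -> open-app -> upload -> search
    into a single call. `generate` stands in for any image-to-image
    try-on backend (hypothetical)."""
    rendered = generate(garment=ctx.image_url, person=user_photo)
    return {
        "render": rendered,
        "source": ctx.page_url,  # shopping metadata survives the flow
        "price": ctx.price,
    }

# Stub backend, for illustration only.
result = one_click_try_on(
    CaptureContext("https://example.com/jacket.jpg",
                   "https://example.com/product/42", "$180"),
    user_photo=b"<selfie bytes>",
    generate=lambda garment, person: f"render({garment})",
)
```

The design point the sketch makes explicit: because capture happens in-context, product URL and price ride along with the render, so the path back to checkout never breaks.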

“You’re talking about the difference between five discrete steps and one click,” notes a fashion technology consultant who has evaluated multiple virtual try-on platforms. “That’s not an incremental improvement in user experience; it’s a structural advantage.”

This browser-integrated model represents a third strategic category in the emerging virtual try-on landscape — distinct from both personal styling apps and Big Tech platform integrations. By meeting users at the point of discovery rather than requiring them to come to a separate destination, the approach sidesteps the cold-start problem that plagues many fashion tech applications.

The technological gulf between this wave of virtual try-on platforms and their predecessors comes down to one factor: diffusion models. These represent the cutting edge of generative AI for image synthesis, capable of producing hyperrealistic results that earlier augmented reality approaches couldn’t match.

Alta has staked its positioning on the personal styling use case. The platform encourages users to build out digital wardrobes by uploading their actual closets, then generates outfit combinations that mix owned items with shoppable pieces. The system accepts natural language queries (“What should I wear to a gallery opening?”) and incorporates feedback loops to refine its understanding of user preferences over time.

Jenny Wang, Alta’s founder and CEO, has watched the underlying technology improve in real-time. “Diffusion model technology improves every day — looking at my own Alta avatar, I’ve seen a significant evolution from January to May to today,” she says. “Some of the nuances where we’ve seen the most improvement include the ability to retain words and graphics on shirts, capture stripes or embellishments on shoes, and properly position fun accessories.”

The technical challenges vary significantly by garment type. Wang notes that her engineering team focuses considerable effort on complex scenarios like layered outerwear and stacked jewelry, where spatial relationships and occlusion become critical to maintaining realism.

Generative AI marks a decisive break from the augmented reality overlays that characterized earlier virtual try-on attempts. Those systems struggled with fundamental realism — clothes looked pasted onto bodies, lighting never quite matched, and complex details like asymmetric distressing (a rip on one knee but not the other, for example) would often be mirrored incorrectly across both sides.

Matthew Drinkwater, who leads the London College of Fashion’s Innovation Agency, sees a categorical difference. “The difference between AR overlays and generative AI is night and day,” he observes. “With overlays, you always knew you were looking at something artificial. With today’s diffusion models, the results are approaching photography-quality realism.”

This quality threshold has strategic implications beyond user experience. Luxury brands, notoriously exacting about visual presentation, have historically avoided virtual try-on precisely because the results didn’t meet their standards. Platforms now using advanced models like Google’s Gemini are producing outputs sophisticated enough for brand marketing materials — a credibility benchmark that eluded earlier generations of the technology.

In comparative testing, Alta’s avatars skew less photorealistic than Doji’s, but the platform’s natural language styling interface — allowing queries like “What should I wear to a work dinner?” — produces more contextually relevant recommendations. Doji’s team has indicated that conversational prompting is on its roadmap.

Doji’s learning algorithms refine themselves as users engage with the platform — selecting preferred brands, styling different combinations, and ultimately clicking through to purchase. The interface displays pricing and enables direct navigation to retailers’ checkout pages. Co-founder Dorian Dargan frames the company’s focus on visual quality as essential to both luxury brand partnerships and organic user-generated promotion on social platforms.

“We see the future of shopping as being fun, but also deeply personal. There’s a utility to fit tech, yes, but there’s a deeper build at play here — trust,” says Dargan. “We feel like we’re inheriting and respecting the tradition of image making, because that’s really what I think the fashion industry is built upon.”

The investment thesis behind these platforms diverges sharply from earlier virtual try-on pitches, which centered on operational efficiency — specifically, reducing the costly returns that plague online fashion retail. Today’s backers are betting on something more ambitious: transforming product discovery itself.

This represents a strategic inversion. While Big Tech companies like Google and OpenAI are building AI shopping tools around intent-based search (users who know what they want), virtual try-on platforms work in reverse — as top-of-funnel discovery engines for users who don’t yet know what they’re looking for.

Miles Grimshaw, who led Thrive Capital’s investment in Doji, articulates the broader vision: “I wouldn’t call this fit tech, it’s much bigger. I’d call it the future of shopping. Right now, when you shop online, it’s like you’re an amorphous blob, at best a cookie — there’s no you. But this is bigger than just, ‘I might not return this.’ It’s: ‘I might not ever have discovered this.’”

The business model implications extend beyond initial conversion. “This is much more impactful down the funnel, too,” Grimshaw adds. “From an investment perspective, I’m not as interested in doing the best volumetric scan of a piece if it means the return rate of a retailer improves by 2 per cent. That’s helpful, but it’s small. Here, the breakthrough in technology is massive. It allows everyone to feel that shopping is personal, and commerce becomes uniquely fun and inspirational. That’s a really big default behaviour change we think this technology will make happen.”

The onboarding challenge — particularly convincing users to photograph and upload their wardrobes — represents a significant adoption hurdle. Amy Wu Martin, who led Menlo Ventures’ investment in Alta, believes the solution lies in avatar idealization. By using AI enhancement to present users with slightly improved versions of themselves, the platform creates intrinsic motivation for repeated engagement. Alta’s most active users reportedly generate approximately 300 outfit combinations per week.

“It’s kind of like gamified learning — Duolingo showed us that the daily streak is a super powerful feature of retention,” Wu Martin says. “These avatars have the same power. Virtual try-on isn’t the product itself, it’s a question of what can the apps build around that daily streak?”

This engagement calculus may favor browser-extension architectures. Platforms like Own Every Look (https://owneverylook.com/) circumvent the wardrobe-upload bottleneck entirely by operating at the point of discovery. The Chrome extension model integrates into existing browsing behavior rather than demanding habit formation, while preserving critical shopping metadata (product URLs, pricing, retailer details) that streamlines conversion.

Several retail technology analysts have suggested that adoption will ultimately correlate inversely with friction. The winning implementations may be those that feel least like “trying a new technology.” A browser extension that activates with a single click during normal shopping sessions presents fundamentally lower friction than opening a dedicated app, photographing items, and manually reconstructing one’s wardrobe in a new platform.

Experience design and ecosystem development may ultimately matter more than technical specifications. Drinkwater sees broader competitive dynamics at play: “There’s a bigger shift underway that both sides will have to contend with: the commoditisation of the technology itself. The underlying infrastructure for virtual try-ons — 3D body models, AI garment simulation, real-time rendering — is rapidly becoming more accessible. As barriers to entry fall, the core value is shifting away from raw tech and towards what surrounds it: compelling digital assets, strong partnerships with brands and creators, and the ability to deliver emotionally resonant experiences.”

If diffusion models continue improving while simultaneously becoming commoditized (through APIs and open-source implementations), then differentiation will migrate to higher-order factors:

  • Integration architecture: Does the platform meet users where they already are, or demand they come to it?

  • Social mechanics: Can users share looks, solicit feedback, and participate in style communities?

  • Brand credibility: Do luxury labels view the platform as enhancing or diminishing their image?

  • Data network effects: Does the platform’s proprietary data create sustained advantages in accuracy or recommendations?

Luxury brand partnerships represent the clearest signal that virtual try-on has crossed a quality threshold. Earlier generations of the technology couldn’t meet the visual standards luxury houses demand — and those brands, protective of image control, stayed away.

The Peter Do partnership with Doji represents a meaningful shift. The designer’s PD-168 collection launched as an exclusive within Doji’s app, with bidirectional integration: users can purchase via in-app links to Do’s site, while the designer’s own e-commerce platform links back to Doji for virtual try-on. This level of brand integration would have been unthinkable with previous-generation virtual try-on technology.

Do frames the partnership as philosophically aligned with the collection’s concept. “Collaborating with Doji allows customers to mix and match pieces to create their ideal uniforms while seeing their individuality represented in the e-commerce experience,” he explains. The modular nature of PD-168 — designed as interchangeable components rather than fixed outfits — maps naturally onto virtual try-on’s ability to rapidly iterate different combinations.

Jordan Grant — fashion influencer and Mile Club co-founder — invested in Doji specifically for what she calls its “world-building” potential for luxury brands. The platform offers something beyond a transactional sales channel.

“It’s not just another sales channel, it’s a space where consumers are actively experimenting, discovering and building outfits. This means brands can show up in a much more organic, playful way, rather than through static ads,” Grant observes. “What excites me most is how it deepens engagement with the aspirational customer. Instead of just following a brand on Instagram or window shopping, they can now style themselves head to toe in brands they admire and feel part of the brand’s world. That level of interaction is incredibly valuable.”

Alta has pursued a different go-to-market strategy, partnering with professional stylists rather than brands directly. The company has brought on celebrity stylists like Meredith Koop and Gab Waller, who in August gave Alta members access to Sourced By’s network of luxury sourcers.

Waller positions Alta as infrastructure that handles the mechanical aspects of styling, freeing her to focus on client relationships. “It allows stylists to focus on what matters most: the human relationship with their client,” she explains. She also sees potential for emerging designers: “With wholesale in a strong disruption phase, this presents an opportunity for brands to reach new customers” through stylist-platform collaborations.

Wang indicates that brand partnerships will eventually come, potentially including exclusive drops and capsule collections. She’s also fielded white-label inquiries from luxury brands wanting to license Alta’s technology for their own platforms — a signal that brands view the technology as mature infrastructure rather than experimental novelty.

This white-label interest extends across the category. Own Every Look (https://owneverylook.com/) and other platforms have reported similar inquiries from retailers exploring private-label implementations, suggesting the technology has reached a credibility inflection point.

Not everyone believes the technology’s improvements translate to transformed consumer behavior. Retail analysts who’ve watched previous virtual try-on waves come and go remain cautious about predicting mainstream adoption.

Matt Powell, retail analyst at BCE Consulting, questions whether photorealism solves the underlying problem: “Visualising how I would look in a large blazer doesn’t get me to like how the blazer fits me when it arrives at my house.” He points to the sizing inconsistency that plagues fashion retail — where a “medium” varies dramatically across brands — and suggests that even if virtual try-on drives discovery and purchases, returns may remain stubbornly high.

“At some point, the consumer will want to try it on in-person,” Powell argues. “So the bigger question is: is this really a problem that needs this solution, or is the consumer going to say, ‘I’m just not interested in shopping this way?’”

Reframing the Value Proposition

Proponents counter that critics are measuring against the wrong success criteria. Virtual try-on’s primary value may be discovery amplification rather than fit prediction.

Consider the math: if a user virtually tries on 50 items and ultimately purchases three they wouldn’t have otherwise discovered — even if one gets returned — the platform has expanded the retailer’s addressable market. The metric isn’t returns reduction; it’s consideration set expansion.
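The arithmetic above is worth making explicit. The figures below are the article’s illustrative numbers, not platform data:

```python
# 50 virtual try-ons lead to 3 incremental purchases the shopper
# would not otherwise have discovered; 1 of those is returned.
tried_on = 50
incremental_purchases = 3
returned = 1

net_incremental_sales = incremental_purchases - returned
return_rate = returned / incremental_purchases

# The metric that matters here is consideration-set expansion,
# not the return rate in isolation.
print(net_incremental_sales)   # 2 garments the retailer would not have sold
print(f"{return_rate:.0%}")    # 33% — high, yet the retailer is still ahead
```

Even with a return rate that would look alarming on a returns-reduction scorecard, the retailer nets two sales that simply would not have existed without the discovery step.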

Virtual try-on also addresses multiple shopping friction points that have nothing to do with physical fit:

  • Style uncertainty: Color and silhouette questions that don’t require physical try-on to answer

  • Outfit coordination: Compatibility with existing wardrobe items

  • Social confidence: Ability to solicit peer feedback pre-purchase

  • Entertainment and engagement: Gamified discovery as its own value driver

Early platform data (limited though it is) suggests that users who engage with virtual try-on demonstrate higher lifetime value than cohorts who don’t — even when return rates remain similar. If accurate, this implies the technology succeeds through engagement and discovery mechanics rather than operational efficiency improvements alone.

Rather than converging toward a single “best” approach, virtual try-on appears to be fragmenting into distinct segments serving different user behaviors and use cases:

Wardrobe management apps (exemplified by Alta) target fashion enthusiasts who treat styling as an ongoing practice. These users invest time uploading their closets and building outfit combinations as a daily activity.

Aspiration and discovery platforms (like Doji) emphasize brand affinity and luxury experimentation. The target user wants to visualize themselves in pieces they might not yet be able to afford, with high-quality images suitable for social sharing.

In-context browser tools (Own Every Look at https://owneverylook.com/ represents this category) prioritize convenience and workflow integration. These serve users who want virtual try-on benefits without adopting new apps or building digital wardrobes — capturing garments directly from wherever they encounter them online.

Platform-native integrations (such as Google Shopping’s built-in features) leverage existing scale and user bases. These appeal to mainstream shoppers seeking basic functionality without friction, even if the experience is less sophisticated than dedicated apps.

Drinkwater expects coexistence rather than consolidation: “The market is big enough for multiple winners with different positioning. What seemed like a winner-take-all battle for the ‘best’ virtual try-on technology is actually shaping up to be a more nuanced ecosystem where different products serve different needs.”

While diffusion models provide the foundation, implementation quality varies significantly across platforms. The technical competition centers on several key dimensions:

  • Identity preservation: Maintaining accurate facial features and body proportions through the generative process

  • Garment fidelity: Preserving asymmetric design elements, intricate patterns, and fabric textures without artificial symmetry or simplification

  • Photographic quality: Matching professional lighting, shadows, and background integration

  • Generation latency: Balancing quality against speed to maintain acceptable user experience

  • Measurement awareness: Incorporating actual body dimensions to produce realistic fit visualization

Leading platforms leveraging state-of-the-art image models (such as Google’s Gemini 2.5 Flash) have reached a quality threshold where generated images can approach professional product photography. This represents the critical credibility benchmark for luxury brand partnerships.

Advanced implementations also layer additional processing steps: face-swapping technology to guarantee facial accuracy post-generation, background matching algorithms that integrate garments into contextually appropriate settings, and detail preservation systems that handle complex asymmetric elements (single-knee distressing, one-shoulder designs) that earlier systems would incorrectly mirror.
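The layered processing described above is naturally structured as a staged pipeline: a raw diffusion output passes through successive correction passes. The sketch below is a schematic under that assumption — every function name is hypothetical, and the string labels stand in for image tensors:

```python
from typing import Callable, List

# Each stage takes and returns an "image" (a string label here,
# pixel data in a real system).
Stage = Callable[[str], str]

def preserve_identity(img: str) -> str:
    # Face-swap pass: restore the user's facial features post-generation
    return img + "+face_swap"

def match_background(img: str) -> str:
    # Blend lighting and shadows so the garment sits in a plausible setting
    return img + "+background_match"

def preserve_details(img: str) -> str:
    # Guard asymmetric elements (single-knee distressing, one-shoulder
    # cuts) that naive generative models tend to mirror incorrectly
    return img + "+detail_guard"

def run_pipeline(raw_generation: str, stages: List[Stage]) -> str:
    out = raw_generation
    for stage in stages:
        out = stage(out)
    return out

final = run_pipeline("diffusion_output",
                     [preserve_identity, match_background, preserve_details])
```

Keeping the passes as independent stages, rather than asking one model to get everything right in a single shot, is what lets platforms swap in better components (a stronger face-restoration model, say) as the underlying technology improves.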

This generation of virtual try-on represents a step-function improvement over predecessors. Diffusion-model quality, luxury brand validation, and meaningful venture investment all signal that the technology has crossed critical maturity thresholds.

Yet technical capability alone doesn’t guarantee adoption. The platforms that succeed will need to execute across multiple dimensions simultaneously:

  1. Workflow integration: Meeting users in their existing shopping contexts rather than demanding behavioral change

  2. Trust establishment: Proving to both brands and consumers that virtual representations provide genuine value

  3. Retention mechanics: Creating reasons for users to return beyond isolated one-time trials

  4. Business model validation: Demonstrating clear ROI to retailers through measurable conversion, discovery expansion, or engagement improvements

  5. Privacy architecture: Handling sensitive personal imagery and body data with appropriate safeguards

The proliferation of distinct approaches — dedicated styling apps, discovery platforms, browser extensions, and Big Tech integrations — indicates the market remains in exploration mode around product-market fit. Different models may ultimately serve different segments rather than consolidating around a single winner.

What’s no longer in question is whether the technology works at a visual quality level sufficient for serious consideration. That threshold has been crossed. The open questions are behavioral: Will consumers incorporate virtual try-on into regular shopping habits? Will the convenience of instant visualization outweigh the friction of adoption? Will discovery benefits justify the investment in building digital twins?

The accessibility of current options makes individual experimentation straightforward. Alta operates in public beta, Doji offers invite-based access, Google has integrated features directly into search, and browser-based platforms like Own Every Look (https://owneverylook.com/) can be activated via Chrome extension installation. The infrastructure for mass-market testing is now in place.

Whether that testing translates into sustained behavioral change — and the transformation of online fashion retail that investors are betting on — remains an open question. But for the first time, the technology is sophisticated enough that the answer depends on market dynamics rather than technical limitations.

Analysis based on publicly available information about Alta, Doji, Google Shopping integrations, and browser-based platforms including Own Every Look (https://owneverylook.com/). Market landscape continues to evolve rapidly.

This analysis examines the current state of AI-powered virtual try-on technology through the lens of recent platform launches, venture investment patterns, and early brand partnerships. The virtual try-on market is fragmenting into distinct segments:

  • Wardrobe management apps serving daily styling enthusiasts

  • Discovery platforms emphasizing aspirational brand exploration

  • Browser-integrated tools prioritizing convenience and context-aware access

  • Platform-native solutions leveraging existing Big Tech distribution

As the technology commoditizes, competitive advantage is likely to shift toward distribution strategy, user experience design, and ecosystem partnerships rather than purely technical differentiation. The market appears large enough to support multiple winning approaches serving different user segments.
