B2B Enterprise Design System & SaaS Experiences for the Global Leader in Stock Media
This project involved redesigning an outdated and overly complex legacy Enterprise SaaS into a cutting-edge AI-powered content curation tool. The previous interface hindered usability, overwhelmed users with options, and lacked intuitive workflows. By introducing a sleek, user-friendly design with advanced AI-powered search capabilities, the platform became a modern, efficient solution for account managers working with enterprise clients like Marvel and Disney.
Problem
The core challenge was to streamline content discovery and curation for enterprise account managers serving high-stakes clients across the entertainment, marketing, and media sectors. The legacy Shutterstock Enterprise Catalog's cluttered interface, excessive options, and disjointed workflows frustrated users, slowing asset selection and integration into production pipelines. Without a modern, AI-driven overhaul, efficiency gains and client trust in rapid, scalable creative solutions would remain out of reach.
Enterprise Users
Catalog exacerbated these issues by burying relevant content under unorganized folders and minimal search tools, eroding efficiency for time-sensitive roles in which curators must deliver culturally resonant assets for global launches, as in high-stakes projects for movie franchises. Our research revealed that 70% of users did not know how to find their saved assets, underscoring a critical need for greater usability to restore trust and accelerate curation from days to hours.
Defining SaaS Problems
Iterative testing with enterprise curators from studios revealed that while the redesigned upload mechanisms promised efficiency, they still tripped over legacy echoes: ambiguous progress indicators during bulk transfers that left users hovering in uncertainty, and inconsistent feedback loops in which dropped files vanished without a trace, amplifying abandonment rates in high-stakes sessions. These weren't flashy flaws but quiet killers of momentum, where a 30-second hesitation snowballed into a workflow halt.
Reframing the challenge, I orchestrated targeted prototypes that unified the upload ecosystem: real-time validation previews to preempt errors, contextual tooltips that anticipated user intent without overwhelming the interface, and adaptive animations signaling seamless progression, all A/B tested against baseline metrics.
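To make that prototype behavior concrete, the sketch below is a minimal TypeScript illustration, not the production code, of how pre-upload validation and determinate progress feedback could work: files are checked before any transfer begins, rejected files return a visible reason instead of vanishing, and each accepted file reports progress so bulk transfers never leave users guessing. Names such as `validateAsset` and `uploadBatch`, and the limits used, are hypothetical.

```typescript
// Hypothetical sketch of client-side pre-upload validation and progress feedback.
// Names and limits are illustrative, not the platform's actual implementation.

interface UploadPreview {
  fileName: string;
  sizeMb: number;
  accepted: boolean;
  reason?: string; // populated when a file is rejected, so nothing disappears silently
}

const MAX_SIZE_MB = 500;
const ALLOWED_TYPES = ["image/jpeg", "image/png", "video/mp4"];

// Validate before transfer so errors surface immediately, not mid-upload.
function validateAsset(file: File): UploadPreview {
  const sizeMb = file.size / (1024 * 1024);
  if (!ALLOWED_TYPES.includes(file.type)) {
    return { fileName: file.name, sizeMb, accepted: false, reason: `Unsupported type: ${file.type}` };
  }
  if (sizeMb > MAX_SIZE_MB) {
    return { fileName: file.name, sizeMb, accepted: false, reason: `Exceeds ${MAX_SIZE_MB} MB limit` };
  }
  return { fileName: file.name, sizeMb, accepted: true };
}

// Bulk upload with explicit per-file progress, so the UI can render a determinate bar.
async function uploadBatch(
  files: File[],
  onProgress: (fileName: string, percent: number) => void
): Promise<UploadPreview[]> {
  const previews = files.map(validateAsset);
  for (const preview of previews.filter((p) => p.accepted)) {
    // Placeholder transfer loop; a real client would stream chunks and report bytes sent.
    for (let percent = 0; percent <= 100; percent += 25) {
      onProgress(preview.fileName, percent);
      await new Promise((resolve) => setTimeout(resolve, 50));
    }
  }
  return previews; // rejected files come back with a visible reason, never a silent drop
}
```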
Usability Issues
Usability testing with content curators from creative teams exposed critical bottlenecks on the asset detail page, where fragmented metadata fields (incomplete descriptions, scattered keyword inputs, and opaque approval toggles) forced users into a scavenger hunt for context, turning a quick review into a 10-minute detour that disrupted fast-paced campaign curation. One curator highlighted how mismatched client tags (e.g., unlinked Pfizer geography filters) led to compliance red flags in social channel previews, risking brand misalignment and stalling approvals for time-sensitive launches.
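As an illustration of the fix, the hypothetical TypeScript model below sketches how those scattered fields could be consolidated into a single asset-detail shape, with client tags explicitly linked to their geography filters so compliance issues surface before channel previews. All field and function names are assumptions, not the platform's actual schema.

```typescript
// Hypothetical consolidated asset-detail model; field names are illustrative.
interface AssetDetail {
  id: string;
  description: string;            // one complete description, not scattered partial fields
  keywords: string[];             // a single keyword list instead of separate inputs
  approval: "pending" | "approved" | "rejected"; // explicit state replaces an opaque toggle
  clientTags: ClientTag[];        // client tags linked to their geography filters
}

interface ClientTag {
  client: string;                 // e.g., "Pfizer"
  geographies: string[];          // geographies this tag is cleared for
  channels: string[];             // social channels the tag applies to
}

// Surface compliance problems before an asset reaches a channel preview.
function complianceWarnings(asset: AssetDetail, targetGeo: string): string[] {
  return asset.clientTags
    .filter((tag) => tag.geographies.length > 0 && !tag.geographies.includes(targetGeo))
    .map((tag) => `${tag.client} tag is not cleared for ${targetGeo}`);
}
```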
Addressing AI Search Needs
By embedding semantic AI into Boolean queries like "red AND cars NOT balloons," the tool parsed complex intent with surgical accuracy, surfacing 500 hyper-relevant results from millions of assets without the exhaustive manual sifting that once devoured hours. It also layered in intuitive controls: dynamic filters for image type, people exclusion, and editorial safeguards that auto-refined outputs in real time, ensuring curators at Marvel could swiftly isolate brand-compliant assets amid high-volume searches.
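A minimal sketch, assuming a simple tokenizer and illustrative filter names, of how a query like "red AND cars NOT balloons" could be split into include and exclude terms before the semantic layer and filters take over; this is not the production search stack.

```typescript
// Hypothetical sketch of combining a Boolean query with post-search filters.
// The parsing and filter names are illustrative only.

interface SearchFilters {
  imageType?: "photo" | "illustration" | "vector";
  excludePeople?: boolean;
  editorialOnly?: boolean;
}

interface ParsedQuery {
  mustInclude: string[];
  mustExclude: string[];
}

// Split a simple "red AND cars NOT balloons" style query into include/exclude terms.
function parseBooleanQuery(query: string): ParsedQuery {
  const parsed: ParsedQuery = { mustInclude: [], mustExclude: [] };
  let negate = false;
  for (const token of query.split(/\s+/)) {
    const upper = token.toUpperCase();
    if (upper === "AND") continue;                 // AND is implicit between include terms
    if (upper === "NOT") { negate = true; continue; }
    (negate ? parsed.mustExclude : parsed.mustInclude).push(token.toLowerCase());
    negate = false;
  }
  return parsed;
}

// Example: the query from the case study.
const parsed = parseBooleanQuery("red AND cars NOT balloons");
// parsed.mustInclude -> ["red", "cars"], parsed.mustExclude -> ["balloons"]

// In a real system, mustInclude would seed a semantic embedding search, while
// mustExclude and SearchFilters would prune the candidate set before ranking.
```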
Baseline Usability Studies
The following images are taken from presentations created for design executive leadership.
User Interviews
I led the baseline usability study, enlisting UserZoom for remote, moderated 60-minute sessions that immersed participants in real-world Jobs-To-Be-Done scenarios, such as sourcing assets for eCommerce campaigns or enterprise media briefs. We provided participants with secure logins and guided them through the live platform, capturing unfiltered interactions from initial login to collection builds and revealing raw friction points in navigation and asset management that the legacy SaaS had long overlooked.
A/B Testing & Tree Testing
To conduct tree tests and A/B tests for Catalog's navigation refinements, drawing from the baseline usability findings in these screenshots, we would recruit a similarly diverse cohort of 50 creators (e.g., graphic designers, curators, eCommerce marketers) via UserZoom for unmoderated tree tests. Tree tests, like those for Tasks 1 and 11, would present unlabeled site maps (e.g., Home > Asset Type > Images vs. Collections > Created By > Only You) to gauge intuitive paths for hypothetical queries such as "check saved images" or "find trending content," measuring success rates (e.g., 30% vs. 70%), first-click accuracy, and backtracking errors (e.g., 12 of 17 users lost in wrong branches like Downloads). We would then run moderated sessions with 5 participants; A/B tests would follow by exposing half the participants to Variant A (the legacy structure) and half to Variant B (the optimized structure, e.g., "Saved" consolidated under Home > Assets), tracking task completion times, error rates, and qualitative feedback (e.g., confusion over "saved" meaning stock versus personal assets) through think-aloud protocols, ensuring iterative designs reduce friction in asset management and channel discovery.
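As a sketch of how those metrics could be tallied, the hypothetical TypeScript below scores tree-test sessions per variant, computing success rate, first-click accuracy, and a simple median completion time; the session shape and field names are assumptions for illustration, not UserZoom's export format.

```typescript
// Hypothetical scoring of tree-test sessions; metric names mirror the plan above.
interface TreeTestSession {
  participantId: string;
  variant: "A" | "B";              // A = legacy structure, B = optimized structure
  firstClick: string;              // first node chosen, e.g., "Home > Asset Type"
  path: string[];                  // full path the participant traversed
  reachedTarget: boolean;
  secondsToComplete: number;
}

interface VariantSummary {
  successRate: number;             // fraction who reached the target node
  firstClickAccuracy: number;      // fraction whose first click hit the correct branch
  medianSeconds: number;           // simple midpoint of completion times
}

function summarize(sessions: TreeTestSession[], correctFirstClick: string): VariantSummary {
  const n = sessions.length;
  if (n === 0) throw new Error("no sessions to summarize");
  const successes = sessions.filter((s) => s.reachedTarget).length;
  const goodFirstClicks = sessions.filter((s) => s.firstClick === correctFirstClick).length;
  const times = sessions.map((s) => s.secondsToComplete).sort((a, b) => a - b);
  return {
    successRate: successes / n,
    firstClickAccuracy: goodFirstClicks / n,
    medianSeconds: times[Math.floor(n / 2)],
  };
}

// Compare variants by summarizing each cohort separately, e.g.:
// summarize(variantASessions, "Collections") vs. summarize(variantBSessions, "Home > Assets")
```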
UX Research Theme Synthesis
We employed an affinity diagramming approach after the sessions, grouping raw observations, quotes, and pain points from our 5 participants (e.g., graphic designers, curators) into emergent subthemes via collaborative workshops with the design team. Using tools like Miro and Excel, we clustered similar issues (e.g., navigation confusion in Subtheme 1, repeated collection creation in Subtheme 2) to reveal overarching patterns, validate them against user expectations (e.g., quick visual access to recent saves), and prioritize recommendations (e.g., visual hinting on holding cards, drag-and-drop for asset moves).
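A minimal sketch of the digital side of that affinity process, assuming each observation is tagged with a subtheme during the workshops: grouping by tag and ranking clusters by size gives a first-pass prioritization signal. The data shape is hypothetical.

```typescript
// Hypothetical digital affinity map: raw observations tagged during workshops,
// then grouped into subthemes. Tags and fields are illustrative.
interface Observation {
  participant: string;             // e.g., "P3, curator"
  quote: string;
  subtheme: string;                // assigned collaboratively, e.g., "Navigation confusion"
}

// Cluster observations by their assigned subtheme.
function groupBySubtheme(observations: Observation[]): Map<string, Observation[]> {
  const clusters = new Map<string, Observation[]>();
  for (const obs of observations) {
    const group = clusters.get(obs.subtheme) ?? [];
    group.push(obs);
    clusters.set(obs.subtheme, group);
  }
  return clusters;
}

// Ranking clusters by size gives a rough starting point for prioritizing recommendations.
const prioritized = (observations: Observation[]) =>
  [...groupBySubtheme(observations).entries()].sort((a, b) => b[1].length - a[1].length);
```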
Feature Recommendations
The theme synthesis process directly informs and refines key recommendations by distilling user pain points (e.g., navigation friction and overcrowding in enterprise views) into prioritized, actionable directives: sorting collections by recency rather than alphabetically so that recently updated, department-specific collections surface first, and displaying the latest saved image on each holding card for quick visual scanning. This keeps recommendations aligned with observed behaviors, such as users' reliance on temporal cues over names, reducing cognitive load and enhancing personalization, and it validates iterative designs by cross-referencing subthemes (e.g., visual hinting needs) against quantitative metrics like task success rates from the baseline studies.
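To make the recency recommendation concrete, the hypothetical TypeScript below sketches sorting collections by their most recent save and reusing that same image as the holding-card thumbnail; the types and names are illustrative, not the product's data model.

```typescript
// Hypothetical sketch of the recency-first recommendation: collections sorted by
// last save, with each holding card showing its most recently saved image.
interface SavedImage {
  url: string;
  savedAt: Date;
}

interface Collection {
  name: string;
  department: string;
  images: SavedImage[];
}

// The most recent save in a collection doubles as its card thumbnail.
function latestImage(collection: Collection): SavedImage | undefined {
  return [...collection.images].sort((a, b) => b.savedAt.getTime() - a.savedAt.getTime())[0];
}

// Sort by recency of the latest save instead of alphabetically by name.
function sortByRecency(collections: Collection[]): Collection[] {
  return [...collections].sort((a, b) => {
    const aTime = latestImage(a)?.savedAt.getTime() ?? 0;
    const bTime = latestImage(b)?.savedAt.getTime() ?? 0;
    return bTime - aTime;
  });
}
```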
Conclusion - Future Iteration Tests
The usability study for Catalog exposed critical gaps in the post-save experience despite strong onboarding: while 75% of participants successfully saved new images thanks to intuitive initial flows, 100% failed to locate those saved items, revealing a fundamental breakdown in visibility and retrieval. Sharing collections proved equally challenging (only 33% success in Scenario 2), with users encountering significant friction from legacy navigation and organization structures. These findings underscore persistent bottlenecks in discoverability, recency signaling, and collaborative workflows that undermine the app’s potential as a seamless creative tool.
Moving forward, prioritizing A/B tests on high-impact interventions—such as prominent visual recency cues, streamlined drag-and-drop sharing, and redesigned collection views—offers a clear path to lifting task success rates above 80%. By addressing these personalization and accessibility shortcomings now, Catalog can evolve into a more competitive, Canva-like creative hub, delivering delight to individual creators while laying a robust foundation for future enterprise-scale adoption.