Major UK supermarket (strict NDA)
UI for human-AI collaboration and profitable clearance of products
Embedded client-side to digitise complex manual process
Conducted generative and evaluative research with experts to define and iterate a UI for human-AI collaboration
Identified quick wins, systemic barriers and actions to realise the value of automation
Collaborated in agile sprints with product functions, stakeholder groups and design community
Client
Major UK supermarket (NDA)
Competencies
Vision workshop Discovery Experience mapping Usability testing
Industry
Retail
Duration
18 months
Context
One of Britain’s big 5 supermarkets was on a mission to transform its internal operations. I joined the enterprise team, designing a suite of tools to manage the lifecycle of products. My brief focused on clearance of non-food products at the end of the lifecycle.
Commercial teams predict sales up to a year in advance, to plan the use of space in stores and ensure adequate supply of products. Months later, towards the end of the planned selling period, there may be stock left. Products are discounted to minimise waste and maximise revenue.
Challenge
Deep discounts might clear stock at the expense of revenue. Shallow discounts protect revenue, but stock may not sell out, leaving it to be disposed of manually.
Teams managing the products used legacy systems, personal documents, sales plans and personal expertise to determine a discount aimed at striking a balance for the business.
How could machine learning and artificial intelligence (ML/AI) be used to analyse product data and recommend prices? With automation and fine-tuning, revenue could be increased and time saved, freeing teams to focus on higher-value tasks.
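To make the trade-off concrete, here is a minimal sketch with invented numbers (prices, stock levels and the linear sell-through assumption are all hypothetical, not the client's model): a mid-depth discount can beat both a shallow one that leaves stock unsold and a deep one that gives margin away.

```python
# Hypothetical illustration of the clearance trade-off: deeper discounts
# raise sell-through but cut the margin on every unit sold.
# All figures are invented for the example.

def expected_revenue(full_price: float, discount: float,
                     stock: int, base_sell_through: float) -> float:
    """Revenue from discounted sales; unsold stock earns nothing
    and must be disposed of manually."""
    # Assume sell-through rises linearly with discount depth, capped at 100%.
    sell_through = min(1.0, base_sell_through + discount * 1.5)
    units_sold = stock * sell_through
    return units_sold * full_price * (1 - discount)

# 200 units at £10 full price; 40% would sell with no discount.
for discount in (0.1, 0.3, 0.5, 0.7):
    revenue = expected_revenue(10.0, discount, 200, 0.4)
    print(f"{discount:.0%} off -> £{revenue:,.0f}")
```

Under these made-up assumptions the 30% discount yields the most revenue, which is the balance merchandisers were trying to strike by judgement, and the AI tool by prediction.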
My role
I was embedded as part of the enterprise design team, along with a succession of UI-weighted agency colleagues. I reported to in-house design leadership and worked day-to-day as part of a cross-functional team coordinated by a Product Manager.
This was shortly after the pandemic, so the role was mainly remote, with occasional team days at the regional and head offices.
Users
Merchandisers: Look after groups of products; review price recommendations and submit decisions for approval.
Senior Merchandisers: Maintain the schedule; sense-check decisions prior to Manager approval.
Superusers: Leadership and admin roles overseeing process and business strategy for Director approval.
Objectives
Design a single UI that supports human-AI collaboration to realise the data-driven gains. A tool providing AI prices was already built, but it was not well integrated into users’ workflow. Sentiment was low and recommendations were not widely implemented.
Additionally, the design of the UI would contribute to, and adhere to, an evolving design system to maximise quality and usability and minimise tech debt.
Actions
Vision workshop
Sprint planning
Journey mapping
Contextual inquiry and interviews
Ideation and co-design workshops
Content creation
Wireframing & prototyping
Usability testing
Strategy
An experience map helped us understand the general process and document types, but practices varied from team to team.
A vision requiring fundamental change
To align key stakeholders on a 5-year vision I used the newspaper activity to understand aspirations, followed by Stop-Start-Continue to learn about the as-is context.
The vision resolved into 4 seemingly reasonable objectives to achieve the ultimate goal. Building Trust in AI was, and remained, the key challenge.
At the time, alignment in the room gave no indication of the challenges that surfaced later while designing the UI to support the process: questions of strategy, and of how ways of working should change on a practical level. These questions extended beyond the remit of UX. The business would need to evolve financial processes and accountability to encourage human-AI collaboration.
Initial research revealed quick wins and two processes
Two divisions comprised numerous teams, with hierarchical roles of increasing oversight and commercial responsibility.
Interviewing users in key roles across the two divisions helped to build a high-level understanding of the process, touchpoints, actors and frustrations.
It turned out that both divisions had origins as separate initiatives, each with its own language and quite different ways of working. Ultimately the divisions could not be aligned, so a dashboard was designed for each, with a view to aligning them over time.
A few usability and educational quick wins were prioritised to help get grounded in the product, add some design value and onboard a new team of offshore engineers.
Quick wins for the as-is solution. Wireflow of early concept.
From Sprint 0 to Version 1
I wireframed a concept aimed at minimising the need for human effort. The user’s role would be to ‘step in’ when the numbers looked wrong.
With 40 metrics in the current tool, the PM was keen to minimise data in the UI to maximise performance. We challenged users to align on critical metrics. This helped optimise the UI but increased uncertainty for users.
We realised that users were responsible for their budgets and the outcome of price changes. They had to justify their decisions to managers, but had no way to explain the AI recommendations. The following sprints focused on this problem.
How might we help users relate to AI recommendations?
Why that price?
By attending division working groups we learned that the AI tool had been introduced to the user base, but not fully understood.
Collaborating with data scientists and expert users, I created training materials to increase awareness of and confidence in AI, and a way to visualise AI logic.
Using real data in the prototype helped users focus on the UX and validate the feature. Further collaboration across the enterprise team ensured components and functionality were accommodated by the design system.
The visualisations saved space in the UI and provided the reassurance users needed (despite their preference for spreadsheets and tables).
Layers of approval added complexity
Unexpectedly, while reviewing the UI with users I learned that decisions were progressively 'rolled up' in commercial groups to be signed off at division level. Running a spike enabled me to secure time with managers to understand requirements and co-design for this unknown part of the process.
Testing with real data proved critical to success. Despite explanations throughout design and handover, testing highlighted API and calculation errors. I coordinated time with expert users and engineers to verify data sources and formulae. Working closely with engineers, I ensured the UI was configured to generate accurate data and validate the solution.
Data pilots revealed a need to clarify formulae and source metrics to support the approval process
Shifting focus to the as-is experience
All the effort invested in the future process generated frustration among users, who still had problems working with the existing version of the tool. UX focus shifted to ensure the teams felt heard.
Contextual inquiries helped to reset with some foundational research. We identified more quick wins to make usability improvements using components from the new design system.
Outcome
The documented UI was handed off to offshore engineers to be built using the principles of the evolving design system. It represented the first important step towards the original vision.
By working closely with steering groups and gaining insight from users at relevant work levels we created a solution integrated within a suite of tools that set the foundations for new efficient ways of working.
Strategic recommendations highlighted the need for structural change, shared accountability and evidence of data-driven gains to continue to build trust in human-AI collaboration.
“It’s an insane amount of work - it’s really come a long way.”
Product Manager
The long lifecycle for retail products meant it would take a year to fully evaluate AI price recommendations.
While the dashboard was in development I worked on a version of the tool aimed at seasonal products, with a much shorter lifespan. Case study coming soon…
Learnings & takeaways
Evaluate the appetite and capacity for change at all levels
We had a mandate to design from the bottom up, but the process was governed top-down by division leadership and shaped by financial processes at business level, both of which presented challenges to the objective.
Ensure key people are onboard with the objective and kept informed
Users' needs and ways of working varied depending on their managers, who were key people in the approval process but not close enough to the project to inform design.
Trust and transparency are critical to design for AI
In this context AI recommendations represented a risk to the reputation of human collaborators and departmental budgets. Users felt that accountability should be shared. I felt that evidence of successful AI prediction and system-level incentives would help drive engagement and adoption.