CASA was implementing a new information architecture (IA) for its online store and needed to understand how well the proposed structure supported real user tasks.
I ran interviews and usability testing with current customers, pilots and frequent users of the store to evaluate whether their needs were being met, and to compare the existing IA against the proposed structure.

Ten 1:1 moderated sessions, each up to 90 minutes, were conducted over Zoom with screen sharing. Tasks were aligned to key customer journeys and used to compare performance across the current and proposed IA.
To minimise learning effects, participants were split into two groups: Group A used the current store first and then the proposed IA, while Group B completed the same tasks with the proposed IA first. This counterbalancing helped isolate the impact of the new structure itself.
Each task was rated as Easy, Some difficulty, Great difficulty or Fail. Trends across these ratings highlighted where the IA supported navigation and where changes were needed.

A task-aligned ease score out of 10 summarised perceived difficulty across journeys. Overall, participants tended to find the existing IA easier, one low outlier aside, signalling that the proposed IA needed further refinement.

An Axure prototype reflected the updated IA and guided the task flows. Watching participants attempt navigation in a realistic context revealed where the structure aligned with their expectations, and where labelling and placement needed tuning.

