User Testing on Ground Zero

We have all been in this situation: plenty of strong visual design and flows on hand, a wave of feedback coming in that we are unable to prioritize, and a design that needed to be rolled out yesterday. We were in this exact situation once.

  1. On the client side, a range of stakeholders were involved at different levels; their insights were valuable, but they tended to arrive at difficult times
  2. We had no way to validate our assumptions, and we had neither a clearly framed requirement nor users with whom to run our tests
  3. Time. Time is never on an agency’s side

This is a situation some of you may have faced as well, especially when designing at the speed of business. It got us thinking, and that’s when we realised that user testing, and the insights it yields, was what we needed. When all else fails, go back to the user.

Building personas

The first step was to understand who the user was. We began by interviewing 10 users and built personas on the basis of:

  1. Demographic profile
  2. Behaviour, desires and needs
  3. Technical background

Our users had largely been selected on the basis of software literacy, so we grouped them into:

  1. Basic users – No software knowledge
  2. Intermediate users – Basic software knowledge
  3. Advanced users – Folks innovating with software

Since these personas rested on interviews with only 10 users, we continued the exercise to validate and iterate on them.
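
To make the groupings concrete, here is a minimal sketch of how such a persona record might be encoded; the field names and example values are our own illustration, not data from the actual interviews.

```python
from dataclasses import dataclass, field

# Hypothetical persona record; field names and example values are
# illustrative, not taken from the actual interview data.
@dataclass
class Persona:
    name: str
    demographic: str                      # e.g. age group, location
    behaviours_and_needs: list[str] = field(default_factory=list)
    software_literacy: str = "basic"      # "basic" | "intermediate" | "advanced"

field_officer = Persona(
    name="Field officer",
    demographic="25-40, rural district office",
    behaviours_and_needs=["records data on the move", "works with patchy connectivity"],
    software_literacy="intermediate",
)
```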

Planning a test

  • Understanding and stating the task: once a design was at the sketch stage, we would write down the task the user needed to complete
  • Stating the goal: what the user’s goal was, and what they would achieve by completing the task
  • Setting signals: which steps, or ideal flow, would make the task successful
  • Defining metrics: which metrics define success (a sketch of a full plan follows this list)
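
Put together, the plan for a single task might look like the sketch below; the task, signals, and metric definitions are hypothetical stand-ins, not from one of our actual tests.

```python
# Hypothetical test plan following the task / goal / signals / metrics
# structure above; the contents are illustrative stand-ins.
test_plan = {
    "task": "Submit a daily field report from the dashboard",
    "goal": "The user files a complete report without assistance",
    "signals": [                       # the ideal flow we watch for
        "opens the reports tab",
        "fills in all mandatory fields",
        "hits submit and sees the confirmation screen",
    ],
    "metrics": {
        "task_completion": "% of users who submit successfully",
        "time_taken": "minutes from start to confirmation",
        "stops": "count of errors or points of confusion",
    },
}
```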

Executing the test

Once a quick mock was available, our UX designer would identify 3 intermediate and 2 advanced users to test with, drawing on the data from the preliminary persona interviews. The UX designer and the visual designer would then set up calls with these users in the format below:

  1. Calls were no longer than 20 minutes
  2. A dry run was carried out internally first
  3. Calls were held on Google Hangouts; since connectivity was an issue at the user sites, we kept a backup plan of running them over a regular phone call
  4. Users were asked to think aloud while they performed the task
  5. While the user carried out the task, we recorded the following:
    1. Sequence of events
    2. Number of stops, i.e. times the user encountered an error
    3. What worked well
    4. Terminology used by the user
    5. A recording of the video / screen share
Each test took our designers about half an hour, and we ran a minimum of 5–6 tests to validate each feature.
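
To keep these observations consistent from call to call, a simple session log along the following lines could be used; this is a minimal sketch, and the field names are our own, chosen for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical session log mirroring the checklist above; one record
# per 20-minute call. Field names are ours, chosen for illustration.
@dataclass
class SessionLog:
    user_id: str
    events: list[str] = field(default_factory=list)       # sequence of events
    stops: int = 0                                        # errors / points where the user stalled
    worked_well: list[str] = field(default_factory=list)
    user_terminology: dict[str, str] = field(default_factory=dict)  # user's word -> our term
    recording_url: str = ""                               # link to the video / screen share

log = SessionLog(user_id="intermediate-03")
log.events.append("opened the reports tab")
log.stops += 1                                            # stumbled on the date picker
log.user_terminology["send"] = "submit"
```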

Evaluation of results

Once the tests were conducted, our UX designer and the Product Manager would analyze the findings, presented in the format below:

  1. Number of stops: errors and points of confusion for the user
  2. Time taken: in minutes
  3. Task completion: as a percentage
  4. Happiness and engagement: based on how well the user thought they did

The above was based on Google’s HEART metrics framework.
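
As a rough illustration, the per-feature roll-up of these four measures could be computed like the sketch below; the numbers and the 1-5 happiness scale are hypothetical.

```python
# Hypothetical roll-up of the four measures above across the 5-6
# sessions run for one feature; all numbers are illustrative.
sessions = [
    {"stops": 1, "minutes": 6.0, "completed": True,  "happiness": 4},
    {"stops": 3, "minutes": 9.5, "completed": False, "happiness": 2},
    {"stops": 0, "minutes": 4.5, "completed": True,  "happiness": 5},
    {"stops": 2, "minutes": 7.0, "completed": True,  "happiness": 3},
    {"stops": 1, "minutes": 5.5, "completed": True,  "happiness": 4},
]

n = len(sessions)
summary = {
    "avg_stops": sum(s["stops"] for s in sessions) / n,
    "avg_minutes": sum(s["minutes"] for s in sessions) / n,
    "task_completion_pct": 100 * sum(s["completed"] for s in sessions) / n,
    "avg_happiness": sum(s["happiness"] for s in sessions) / n,   # 1-5 self-rating
}
print(summary)   # feeds the review with the Product Manager
```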

What worked

  1. We got actionable insights: a good mix of qualitative and quantitative data on which to base suggestions and changes
  2. Along with the Product Manager, we were able to prioritize and tackle feedback
  3. Since each design was tied to a requirement and a goal, and then tested, the team had a clear path on what to work on, saving time and a lot of back and forth
  4. It refined our process: we were able to define a pipeline from requirements to visual design, including iteration, for each feature. This came down to 2 weeks per feature from a fuzzy month or month and a half
  5. It allowed us to validate our assumptions quickly, in 4–8 person-hours

Taking this ahead

This small addition to the process definitely helped us make more informed decisions, but like everything else in design, it is constantly being iterated on 🙂

Here is what we learned:

  1. While our metrics seemed to give good insights on performance, we realized that performance alone didn’t tell the whole story; it also depended on who the user was. For some users, time taken or task completion did not matter
  2. Our personas were initially based on a grouping dictated by an organization in the system. We later changed them to reflect behaviour and needs, which seems more scalable
  3. With the revised personas we could focus on who the user was, their needs and their problems, instead of only what they did
  4. We realized every task may be unique and may require different metrics; we are now working on identifying which metrics make the most sense given the user and context
  5. We may not always have access to users; in that case, talking to key stakeholders on the client side can help

Concluding thoughts

  1. By no means have we discovered the holy grail of agile research; in fact, it keeps changing. Given the timelines and the need for insight, it worked well for us
  2. This is not comprehensive in the traditional sense, but we believe that some research is better than no research
  3. Our users were mostly from rural areas with limited infrastructure, and their trust was hard to earn. This process helped us touch base with them and build a strong database of users we could keep going back to
  4. I feel this is required, and is possible, for a wide range of projects. While most folks we work with don’t see its value yet, planning effectively and presenting the data to stakeholders does prove to them that this works. What matters is how we present our test results, and to which stakeholders
  5. I really love the clarity this gives the design and product teams, especially while working remotely

We continue to refine the process and implement it for our customers. More time and iteration will pave the way!
