Last fall, two classmates and I assessed the current methods used to navigate indoors at our university. The goal was to develop a prototype that performs better than the existing solution. And we did.
So how did we get from nothing to an evaluated MVP for this indoor navigation system?
Well, we studied the users' actual and desired experiences, and developed a product based on that evaluation.
Reflecting on the process, I see that it was essentially a five-step approach, once we had identified the topic we wanted to explore.
1. Qualitative evaluation of the current products and tools used to navigate indoors
Here we interviewed our audience about their habits and the value they perceived in the current ways of navigating.
We conducted unscheduled, face-to-face interviews with students who were willing to give us a few minutes of their time to improve student life, and asked them to rate their experiences on several parameters.
2. Quantitative validation
We sent out and analyzed a questionnaire to see whether the problems we had initially detected, from our own experience and from the people we interviewed, existed at a larger scale.
Respondents answered an online questionnaire, distributed through social media groups related to the university – a highly effective medium when used well.
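To illustrate the kind of analysis this step involves, here is a minimal sketch of summarizing Likert-scale questionnaire responses. The question names and scores below are hypothetical placeholders, not our actual data:

```python
# Hypothetical Likert-scale responses (1 = very poor, 5 = very good),
# as might be exported from an online questionnaire. Illustrative only.
from statistics import mean, stdev

responses = {
    "ease_of_finding_rooms": [2, 3, 2, 1, 3, 2, 4, 2],
    "usefulness_of_signage": [3, 2, 3, 3, 2, 4, 3, 2],
}

# A low mean with low spread suggests a problem shared at scale,
# not just an individual complaint.
for question, scores in responses.items():
    print(f"{question}: mean={mean(scores):.2f}, "
          f"sd={stdev(scores):.2f}, n={len(scores)}")
```

Comparing means and spreads per question is a quick way to check whether the pain points from the interviews show up across the wider student population.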
3. Qualitative exploration of the current solution
Once the interviews and questionnaire responses pointed to a need for improvement, and already suggested some specific areas, we went further to understand what kind of experience students have when navigating around.
What feelings do they have, what bottlenecks or critical points exist in the current solutions' design, how do they react to them, and so on.
We filmed them and asked them to narrate the process while behaving as if we were not around.
4. MVP development
We created an MVP with features drawn from all three data-collection methods. It addressed the most critical points and the experiences the interviewees desired, and added features to prevent the negative experiences they had had with the existing solution.
5. MVP testing
We put the MVP to the test, asking people to navigate the university with it, so we could see whether the initial problems were resolved and what technical issues or new problems might appear in use.
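A simple way to summarize such a test is to compute a task success rate and an average completion time per navigation task. The participants and numbers below are illustrative assumptions, not our actual results:

```python
# Hypothetical navigation-test log: per participant, whether they reached
# the target room using the MVP and how long it took (seconds).
trials = [
    {"participant": "P1", "reached_target": True,  "seconds": 95},
    {"participant": "P2", "reached_target": True,  "seconds": 140},
    {"participant": "P3", "reached_target": False, "seconds": 300},
    {"participant": "P4", "reached_target": True,  "seconds": 110},
]

successes = [t for t in trials if t["reached_target"]]
success_rate = len(successes) / len(trials)
# Average time only over successful attempts, so abandoned
# trials do not distort the completion-time figure.
avg_time = sum(t["seconds"] for t in successes) / len(successes)

print(f"Success rate: {success_rate:.0%}")
print(f"Avg completion time (successes): {avg_time:.0f}s")
```

Tracking these two numbers across iterations of steps 4 and 5 gives a concrete signal of whether each refinement actually improves the experience.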
This approach gave us a robust set of product requirements, as well as points of improvement for the current MVP.
By repeating steps 4 and 5, a more refined product should emerge after each iteration, leading to an offering that can be tested at a larger scale to see how well it performs.
Now, testing at a large scale was out of the project's scope, but testing the refined MVP with 30–40 people promises further insight into what the final product should look like.
And even though it may mean a higher investment up front, involving your users in the early stages of new product design or redesign ensures a reliable set of product features.