Building a generative AI query assistant
Leveraging research, rapid prototyping, and UX design to test and launch a query assistant for an analytics application.
Introduction
A primary workflow in OpenSearch Dashboards involves querying large data sets to understand when mission-critical software has failed. This process can be complex and time-consuming, and it requires analysts to sift through large result sets to uncover insights.
If a digital product stops working, you would use OpenSearch to understand what happened,
when, and why. I made it easier for users to answer these questions through a query assistant.
Identifying areas of opportunity in user workflows
Speaking with DevOps engineers who conduct root cause analysis produced an end-to-end workflow map and insights into the areas they find frustrating. From there I identified areas of opportunity where generative AI could address their frustrations. This approach allowed me to design use-case-specific features while also generating technical requirements for engineering, which informed the types of models that needed to be built and trained.
User pain point
DevOps engineers rely on complex queries to uncover the cause and implications of a system failure. Writing these queries is time-consuming and error-prone. In addition, not all DevOps engineers are familiar with the unique query languages offered by OpenSearch.
Delays in this step can result in significant financial loss, security risks, and downtime for a product's users.
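To illustrate the starting point, here is a hypothetical sketch of the kind of query an engineer might write by hand during an incident, in PPL (Piped Processing Language), one of the query languages OpenSearch supports. The index and field names are invented for the example.

```
source=app_logs
| where level = "ERROR" and service = "checkout"
| stats count() as error_count by span(timestamp, 5m), host
| sort - error_count
| head 10
```

Getting each command, field name, and function right under time pressure is exactly where errors and delays creep in.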
How might we help DevOps engineers write queries faster?
The following solutions were informed by generative research, ideation workshops, and some creativity.
Query in English instead of a complex query language
Hypothesis
We believe users will query data faster if they can use natural language instead of a query language.
Key insight
Both novice and power users generated needle-in-a-haystack queries faster with natural language. However, iterating on and modifying an existing query was faster in the query language than in English.
Outcome
Initially, the assistant drove the experience. After uncovering the insight above, I pivoted to an assistant that helped users generate queries only when they needed it. The experience was still driven by the traditional query language.
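As a sketch of the resulting interaction: the user asks a question in plain English, the assistant drafts a query, and the user refines it in the traditional query bar. The prompt, index, and field names below are hypothetical.

```
Ask: "Which hosts are logging the most errors?"

Generated PPL:
source=app_logs
| where level = "ERROR"
| stats count() as error_count by host
| sort - error_count
| head 5
```

This kept natural language where it was fastest, finding the needle, while leaving iteration to the query language.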
Help me diagnose errors in my queries
Hypothesis
Users spend a lot of time diagnosing errors in their queries. A query assistant will help them resolve common errors quickly.
Outcome
Through early conversations with users, we identified that this feature would help greatly, especially for users who are not experts in a query language.
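A sketch of the kind of fix this enables, using an invented query: a one-character typo in a PPL command is easy to make and tedious to spot in a long pipeline.

```
Submitted query (fails to parse):
source=app_logs | feilds level, host | where level = "ERROR"

Suggested fix ("feilds" is not a PPL command; did you mean "fields"?):
source=app_logs | fields level, host | where level = "ERROR"
```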
Make it easy for me to learn a query language
Hypothesis
Users will benefit from hyper-contextualized query documentation that helps them learn and write more efficient queries.
Key insight
A common complaint we received concerned in-product documentation. Users found it challenging to learn and understand query parameters in context. They usually spent a great deal of time sifting through our documentation site and then trying to translate their findings into something that worked.
Outcome
This feature was highly desirable when we spoke to users. Through the assistant, they could trigger documentation for a specific context or simply ask questions and get answers generated from our documentation, in the context of the product.
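As a hypothetical example of the interaction, a user could ask the assistant a how-to question and get an answer grounded in the query documentation, without leaving the product.

```
Ask: "How do I count results in five-minute buckets?"

Answer (drawing on the PPL documentation): use span() inside stats, for example:
source=app_logs | stats count() by span(timestamp, 5m)
```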