Building a generative AI query assistant

Leveraging research, rapid prototyping, and UX design to test and launch a query assistant for an analytics application.

Public GitHub Issue

Introduction

A primary workflow in OpenSearch Dashboards involves querying large data sets to understand when mission-critical software has failed. This process can be complex and time consuming, requiring analysts to sift through large result sets to uncover insights.

If a digital product stops working, you would use OpenSearch to understand what happened, when, and why. I made it easier for users to answer these questions through a query assistant.

Client

Amazon Web Services / OpenSearch Project

Duration

1 month - 2024

Team

Kevin Garcia (UX Manager), Kyle Averak (UX Designer)
Anirudha Jadhav (Engineering Manager)
Pratrik Shenoy (Senior Engineer)
Nitin Chandra (Technical Product Manager)
OpenSearch community

Role

Led the effort to conceptualize, define the UX for, and launch a query assistant in OpenSearch Dashboards.

Identifying areas of opportunity in user workflows

Speaking with DevOps engineers who conduct root cause analysis yielded an end-to-end workflow and insights into the areas they find frustrating. From there I identified areas of opportunity where GenAI could solve for their frustrations. This approach allowed me to design use-case-specific features while also generating technical requirements for engineering that informed the types of models that needed to be built and trained.

Balancing flexibility and structure through framework based design

I partnered with engineering to translate our research into a scalable backend and frontend framework that would support the flexibility the open source community needs while providing enough structure for consistent user experiences.
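
The sketch below illustrates what that framework contract could look like in the TypeScript that OpenSearch Dashboards is built on. All names here are hypothetical, not the shipped API; the point is the shape of the balance: a shared registry enforces a consistent assistant surface, while each capability stays free to implement its own logic.

// Each assistant capability (query generation, error diagnosis, contextual
// docs) would be registered as a "skill" that plugins can contribute.
interface AssistantSkill {
  id: string;                   // unique identifier, e.g. 'generate-query'
  displayName: string;          // label shown in the assistant UI
  supportedLanguages: string[]; // query languages this skill handles, e.g. ['PPL']
  run(input: SkillInput): Promise<SkillResult>;
}

interface SkillInput {
  prompt: string;       // the user's natural language request
  indexPattern?: string; // the data context the user is querying
}

interface SkillResult {
  query?: string;      // a generated or corrected query, if any
  explanation: string; // plain-language response shown to the user
}

// The registry provides the structure: every skill renders in the same
// assistant surface, while plugins decide how run() is implemented.
class AssistantSkillRegistry {
  private skills = new Map<string, AssistantSkill>();

  register(skill: AssistantSkill): void {
    if (this.skills.has(skill.id)) {
      throw new Error(`Skill "${skill.id}" is already registered`);
    }
    this.skills.set(skill.id, skill);
  }

  get(id: string): AssistantSkill | undefined {
    return this.skills.get(id);
  }
}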

User pain point

DevOps engineers leverage complex queries to uncover the cause and implications of a system failure. Writing these queries is time consuming and error prone. In addition, not all DevOps engineers are familiar with the unique query languages offered by OpenSearch.

Delays at this step can result in significant financial loss, security risks, and downtime for the software's users.

How might we help DevOps engineers write queries faster?

The following solutions were informed by generative research, ideation workshops, and some creativity.

Query in English instead of a complex query language

Hypothesis
We believe users will query data faster if they can use natural language instead of a query language.

Key insight
Novice and power users alike generated needle-in-the-haystack queries faster. However, it was faster to iterate on and modify queries through the query language than in English.

Outcome
Initially, the assistant drove the experience. After uncovering the insight above, I pivoted to an assistant that helped users generate queries only when they needed it; the experience was still driven by the traditional query language.
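
To make that "assist, don't drive" flow concrete, here is a minimal sketch assuming a hypothetical /api/assistant/generate_query endpoint (not the shipped API): the assistant proposes a query that pre-fills the editor, and the user reviews and edits it in the query language before running.

// Stands in for the Dashboards query editor in this sketch.
interface QueryEditor {
  setValue(query: string): void;
}

// English in, query language out. The endpoint and payload shape are
// illustrative assumptions.
async function suggestQuery(question: string, index: string): Promise<string> {
  const response = await fetch('/api/assistant/generate_query', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ question, index }),
  });
  const { query } = await response.json();
  return query;
}

async function onAskAssistant(editor: QueryEditor): Promise<void> {
  // A suggestion might look like:
  // source=app_logs | where level = 'ERROR' | stats count() by service
  const suggestion = await suggestQuery(
    'show me error counts by service for the last hour',
    'app_logs'
  );
  editor.setValue(suggestion); // pre-fill only; the user stays in control
}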

Help me diagnose errors in my queries.

Hypothesis
Users spend a lot of time diagnosing errors in their queries. A query assistant will help them resolve common errors quickly.

Outcome
Early conversations with users showed this feature would help greatly, especially users who are not experts in a query language.
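
A minimal sketch of how that diagnosis could be wired, assuming a hypothetical /api/assistant/diagnose_error endpoint: the failing query and the engine's error travel to the assistant, which returns a plain-language explanation and an optional corrected query the user can apply.

// Response shape is an illustrative assumption.
interface Diagnosis {
  explanation: string;     // what went wrong, in plain English
  suggestedQuery?: string; // a corrected query the user can apply
}

async function diagnoseQueryError(query: string, engineError: string): Promise<Diagnosis> {
  const response = await fetch('/api/assistant/diagnose_error', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query, error: engineError }),
  });
  return response.json();
}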

Make it easy for me to learn a query language.

Hypothesis
Users will benefit from hyper-contextualized query documentation that helps them learn and write more efficient queries.

Key insight
A common complaint we received was around in-product documentation. Users found it challenging to learn and understand query parameters in context. They usually spent a great deal of time sifting through our documentation site and then trying to translate their findings into something that worked.

Outcome
This feature was highly desirable when we spoke to users. Through the assistant they could trigger context-specific help or simply ask questions and get answers generated from our documentation, in the context of the product.
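
As a sketch of those two entry points, assuming a hypothetical /api/assistant/docs endpoint: a request is either a specific trigger (explain the query-language token under the cursor) or a free-form question, both answered from the documentation.

// The union models the two ways users reached documentation in this concept.
type DocsQuestion =
  | { kind: 'explain-token'; token: string; language: string } // e.g. { token: 'stats', language: 'PPL' }
  | { kind: 'free-form'; question: string };                   // e.g. "how do I group results by field?"

async function askDocs(q: DocsQuestion): Promise<string> {
  const response = await fetch('/api/assistant/docs', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(q),
  });
  const { answer } = await response.json();
  return answer; // documentation-grounded answer shown in the assistant panel
}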

A generative AI assistant when you need it.

Efficiency improvements

With the query assistant, users could generate queries 65% faster than by typing them out manually. The assistant also let us surface hyper-contextualized documentation that helps users troubleshoot and learn OpenSearch query languages, solving the largest pain point surfaced in community forums.

Use-case-specific GenAI

By leveraging user research we were able to pinpoint areas where GenAI would actually benefit users. In these cases we also balanced the needs of users against the organizational push to launch something with GenAI.

Rapid testing and iteration

Working closely with engineering, we launched several coded POCs in under a month. This allowed us to test, generate insights, and iterate rapidly.

Understand everything

As a designer I dug deep to understand not only the users but also the technology. I spent a lot of time talking to engineers, understanding how the models were built and trained, and sharing user insights with them to refine the models for our use cases.

This project was a small part of improving the overall querying experience.
See how I approached the larger challenge below.

Available to help.

If you're looking for a product designer who can work end to end, I'm your guy.