Interpreting Studies

Understanding your study's progress.

At the top of the results page for each study you launch, Sprig provides statistics to help you understand the study's status and performance throughout its run.


Study Performance Statistics

The first glyph shows the completion percentage, which indicates how close you are to completing your study based on the target number of responses you've set. The example above shows 14% completion; the study will reach 100% and be complete once it collects 370 responses. The second icon indicates the number of responses received in the past 24 hours; in this case, 0. The third icon shows the response rate, which is calculated slightly differently depending on which platform you are using (a worked sketch follows the list below):

  • Web and mobile studies - The response rate is calculated by dividing the number of responses received by the number of studies seen by respondents. In the example above, 52 responses out of 67 studies seen gives a response rate of 52 / 67 ≈ 77.6% (equivalently, the number seen works out to 52 × 100 / 77.6 ≈ 67).
  • Email studies - The response rate is calculated by dividing the number of responses received by the number of emails sent.
  • Link studies - Response rates are not displayed since Sprig does not track how many link studies are sent or seen.
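
To make these figures concrete, here is a minimal sketch of the arithmetic behind the completion percentage and response rate. The function names are illustrative only, not part of any Sprig API; the numbers simply reproduce the example figures above.

```python
# Illustrative arithmetic only -- not part of any Sprig API.

def completion_pct(responses: int, target: int) -> float:
    """Percent progress toward the study's response target."""
    return 100 * responses / target

def response_rate_pct(responses: int, seen_or_sent: int) -> float:
    """Responses divided by studies seen (web/mobile) or emails sent (email)."""
    return 100 * responses / seen_or_sent

# The example figures from above: 370-response target, 52 responses, 67 seen.
print(f"Completion:    {completion_pct(52, 370):.0f}%")    # ~14%
print(f"Response rate: {response_rate_pct(52, 67):.1f}%")  # ~77.6%
```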

Key Terms

Sent: The term 'sent' has different meanings depending on which platform you are using for your study:

  • Web and mobile studies - A web or mobile study is considered 'sent' when it is scheduled to be displayed to a user who is currently active on your site or app.
  • Email studies - An email study is considered 'sent' when a study email is delivered to a user.

Seen: The term 'seen' has different meanings depending on which platform you are using for your study:

  • Web and mobile studies - A study is considered 'seen' when displayed in a user's browser or app. Studies that are sent may not be seen if, for example, a user navigates away from a page before meeting time-on-page criteria set during study creation.
  • Email studies - An email study is considered seen when a user interacts with a study email in some way, including, but not limited to, answering a question.

Response/Answer: A 'response' is counted every time someone answers one or more questions in a study; as such, the total response count includes both users who complete the entire study and users who complete only a portion of it.

Completed: A question is considered completed when it receives a valid response or answer.

Skipped: A question is considered skipped when the respondent navigates to another page without completing it with a valid answer.

Automated Thematic Clustering

One of the richest sources of customer experience data is the open-text survey response. Historically, it has also been one of the most difficult formats from which to extract meaningful insights, especially at scale.

When user researchers run surveys with open-text questions, a common goal is to group the huge number of responses into a small number of bite-sized, actionable takeaways. The identified themes are shared with product stakeholders and play a critical role in determining how to improve the product experience. Here is an example with some responses and a summarizing theme:

  • Response: "I'm lost, can't really find anything easily in the product."
  • Response: "It'd be nice if there was a way to find users by name."
  • Response: "Please add a search field."

Resulting theme: “Add search functionality”

Performed manually, this analysis takes the form of placing the responses into a large spreadsheet, reading through them to locate patterns, defining themes that represent actionable groups of responses, and then assigning all responses to one or more of these themes (a.k.a. "coding"). As you can imagine, this is a detailed process and certainly can't scale easily beyond a few hundred responses. Automating this process can be a powerful way to increase the leverage of researchers and bring the survey life cycle from weeks down to hours. The ability to do this accurately is also one of the key differentiators between Sprig and other customer survey tools.
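
As a generic illustration of how this grouping can be automated, the sketch below vectorizes a handful of responses and clusters them with off-the-shelf tools. This is a simplified stand-in for the idea, not Sprig's actual engine (which, as described below, uses deep neural networks over multiple dimensions).

```python
# Generic sketch of automated response clustering -- NOT Sprig's engine.
# Requires scikit-learn. Similar responses tend to land in the same cluster.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

responses = [
    "I'm lost, can't really find anything easily in the product.",
    "It'd be nice if there was a way to find users by name.",
    "Please add a search field.",
    "The subscription fee is a bit steep.",
    "It's too expensive for me to use.",
]

# Turn free text into numeric vectors, then group similar vectors.
vectors = TfidfVectorizer(stop_words="english").fit_transform(responses)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, text in sorted(zip(labels, responses)):
    print(label, "|", text)
```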

Sprig Theme Analysis Engine

We employ a multi-dimensional approach to capture the nuance of themes seen in open-text survey responses. Instead of considering only the topic, we also derive various other information from each response. The following example shows this information split into three possible dimensions; in reality, we employ many more than three.

At a minimum, to accurately describe an actionable theme, you need to identify the topic and explain the respondent's intent, which is often implicit. Consider the example response "The subscription fee is a bit steep": the respondent's purpose is to express a negative sentiment toward the topic. Now suppose a new response arrives: "It's too expensive for me to use." Here the topic and intent match the first example, so the two responses are considered part of the same actionable theme.
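
As a toy illustration of this pairing, the sketch below keys themes on a (topic, intent) tuple. The two classifier functions are hypothetical stubs standing in for trained models; Sprig's real engine uses many more dimensions than these two.

```python
# Hypothetical sketch: grouping responses by (topic, intent).
# The classifiers below are placeholder stubs, not real models.
from collections import defaultdict

def classify_topic(response: str) -> str:
    # Stand-in for a topic model.
    return "pricing" if ("fee" in response or "expensive" in response) else "other"

def classify_intent(response: str) -> str:
    # Stand-in for an intent model; intent is often implicit.
    return "negative sentiment"

themes = defaultdict(list)
for r in ["The subscription fee is a bit steep.",
          "It's too expensive for me to use."]:
    themes[(classify_topic(r), classify_intent(r))].append(r)

# Both responses share topic and intent, so they fall into one theme.
for key, members in themes.items():
    print(key, "->", members)
```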

Another aspect of this problem is what information the models behind each dimension have access to. Some elements are global and don't depend on the context of the survey; an example is answering the question, "Is the respondent frustrated or not?" Other elements are specific to the domain of the survey results; an example here is the question, "What portion of the product or service is the respondent referring to?"

Answering these questions is trivial in some cases but much more difficult in others. In all cases, we utilize state-of-the-art deep neural networks as the basis for models whose job is to answer these questions. By splitting the problem into separate portions - topic vs. intent, global elements vs. domain-specific attributes - we can successfully replicate the efforts of expert human researchers.

The End Result

All themes should have both a topic and an intent so that the takeaway is clear and immediately useful. It's also important to identify an element of emotional response - sentiment - and a recommendation based on the urgency of the theme's responses. Sprig can produce this kind of AI analysis quickly, accurately, and at scale using advanced machine learning techniques.
Not all responses can be grouped into themes. There may not be enough data to generate a theme: if the only response received were "It costs too much", it may not be appropriate to generate a theme until additional, similarly themed responses arrive. Such responses are categorized as low occurrence responses. The response data may also be unintelligible to the analysis engine, for example, "asdf1234". These responses are rejected by the analysis engine and categorized as low signal responses; they will never be grouped into a theme.
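
A rough sketch of this triage might look like the following; the minimum-size threshold and the intelligibility heuristic are assumptions for illustration, not Sprig's actual rules.

```python
# Illustrative triage of unthemed responses -- thresholds are assumptions.
MIN_THEME_SIZE = 2  # hypothetical minimum number of similar responses

def is_intelligible(response: str) -> bool:
    # Crude stand-in for the engine's real check: require at least one
    # alphabetic word longer than two characters.
    return any(w.isalpha() and len(w) > 2 for w in response.split())

def categorize(response: str, similar_count: int) -> str:
    if not is_intelligible(response):
        return "low signal"       # e.g. "asdf1234"; never grouped into a theme
    if similar_count < MIN_THEME_SIZE:
        return "low occurrence"   # wait for more similarly themed responses
    return "themed"

print(categorize("asdf1234", similar_count=1))           # low signal
print(categorize("It costs too much", similar_count=1))  # low occurrence
```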

To find the themes generated by the responses to your open-ended questions:

  1. In the Navigation pane, click Studies. Click on the study in question.
  2. Click Summary, then scroll down to the open-text question response table.
  3. Make sure the List view is selected in the table.
  4. All themes identified by Sprig will be shown in the table.
  5. Responses that have been received but have not been associated with a theme are added to either the Low occurrence responses or Low signal responses category.