Interpreting Studies
Understanding your study's progress.
At the top of the results page for each study you launch, Sprig provides statistics to help you understand the study's status and performance throughout its run.

Study Performance Statistics
The first icon shows the completion percentage, which indicates how close you are to completing your study based on the target number of responses you've set. The example above shows 14% completion; the study will reach 100% and be complete once it collects 370 responses. The second icon indicates the number of responses received in the past 24 hours; in this case, 0. The third icon shows the response rate, which is calculated slightly differently depending on which platform you are using:
- Web & mobile studies - The response rate is calculated by dividing the number of responses received by the number of studies seen by a respondent. In the above example, the number of studies seen is 52 x 100/77.6 = 67.
- Link studies - Response rates are not displayed since Sprig does not track how many link studies are sent or seen.
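As a concrete illustration, the sketch below reproduces the arithmetic behind these statistics. The figures are the hypothetical numbers from the example above; Sprig computes all of this for you on the results page.

```python
# Hypothetical figures matching the example above.
target_responses = 370   # response target set when creating the study
responses = 52           # responses received so far
seen = 67                # studies seen by respondents (web & mobile only)

# Completion percentage: progress toward the response target.
completion_pct = responses / target_responses * 100
print(f"Completion: {completion_pct:.0f}%")        # ~14%

# Response rate: responses divided by studies seen.
response_rate_pct = responses / seen * 100
print(f"Response rate: {response_rate_pct:.1f}%")  # ~77.6%
```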
Key Terms
Sent: The meaning of 'sent' depends on which platform you are using for your study:
- Web and mobile studies - A web or mobile study is considered 'sent' when it is scheduled to be displayed to a user who is currently active on your site or app.
Seen: The meaning of 'seen' depends on which platform you are using for your study:
- Web and mobile studies - A study is considered 'seen' when displayed in a user's browser or app. Studies that are sent may not be seen if, for example, a user navigates away from a page before meeting time-on-page criteria set during study creation.
Response/Answer: A 'response' is counted every time someone answers one or more questions in a study; as such, the total response count includes both users who complete the entire study and users who only complete a portion of it.
Completed: A question is considered completed when the respondent submits a valid response or answer.
Skipped: A question is considered skipped when the respondent navigates to another page without completing it with a valid answer.
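To make these terms concrete, here is a toy sketch that tallies hypothetical session data into 'sent', 'seen', and 'response' counts. The data structure is illustrative only, not Sprig's actual event model.

```python
# Hypothetical per-session outcomes for one study. "completed" means a
# valid answer was submitted; "skipped" means the respondent moved on.
study_sessions = [
    {"seen": True,  "questions": ["completed", "completed", "skipped"]},
    {"seen": True,  "questions": ["completed"]},
    {"seen": True,  "questions": []},          # seen but never answered
    {"seen": False, "questions": []},          # sent but never seen
]

sent = len(study_sessions)
seen = sum(s["seen"] for s in study_sessions)
# A "response" counts anyone who answered at least one question,
# whether or not they finished the whole study.
responses = sum(
    1 for s in study_sessions
    if any(q == "completed" for q in s["questions"])
)

print(f"Sent: {sent}, Seen: {seen}, Responses: {responses}")
# Sent: 4, Seen: 3, Responses: 2
```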
Automated Thematic Clustering
One of the richest sources of customer experience data is an open-text survey response. Historically, it has also been one of the most difficult data formats from which to extract meaningful insights, especially at scale.
When product teams run surveys with open-text questions, a common goal is to group the huge number of responses into a small number of bite-size and actionable takeaways. The identified themes are shared with product stakeholders and play a critical role in determining how to improve the product experience. An example with some responses and a summarizing theme could be:
- Response: "I'm lost, can't really find anything easily in the product."
- Response: "It'd be nice if there was a way to find users by name."
- Response: "Please add a search field."
Resulting theme: “Add search functionality”
Performed manually, this analysis takes the form of placing the responses into a large spreadsheet, reading through them to locate patterns, defining themes that represent actionable groups of responses, and then assigning all responses to one or more of these themes (a.k.a. "coding"). This is a detailed process and can't scale easily beyond a few hundred responses. Automating this process is a powerful way to increase the leverage of a product team and reduce the survey life cycle from weeks to hours. The ability to do this accurately and at scale is one of the key differentiators between Sprig and other customer survey tools.
Sprig Theme Analysis Engine
We employ a state-of-the-art approach to capture the nuance of themes seen in open-text survey responses. Instead of just considering the topic, we also consider the context of the response itself. To do this, we utilize deep neural networks, which can capture the complexities of natural language.
To accurately describe an actionable theme, you need to identify the topic and capture the respondent's intent, which is often implicit. Take the example response: "The subscription fee is a bit steep." The respondent's intent is to express a negative sentiment toward the topic of price. In the response "It's too expensive for me to use," the topic and intent match the first example, so the two will be considered part of the same actionable theme. However, the intent of the response "Please add a team-based subscription plan" is to request a new type of subscription plan, which is not the same as the other two.
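Sprig's engine itself is proprietary, but the general idea can be sketched with off-the-shelf tools: embed each response with a sentence-level neural model so that context, not just keywords, shapes the vector, then cluster nearby vectors into candidate themes. The model name and distance threshold below are illustrative assumptions, not Sprig's actual configuration.

```python
# A toy sketch of embedding-based theme clustering. Assumes the
# sentence-transformers package and scikit-learn >= 1.2 (for the
# `metric` argument) are installed.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering

responses = [
    "The subscription fee is a bit steep.",
    "It's too expensive for me to use.",
    "Please add a team-based subscription plan.",
]

# Sentence embeddings capture context, not just shared words.
model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice
embeddings = model.encode(responses, normalize_embeddings=True)

# Group responses whose embeddings fall within a cosine-distance
# threshold; each resulting cluster is a candidate theme.
clustering = AgglomerativeClustering(
    n_clusters=None,
    distance_threshold=0.6,   # illustrative threshold
    metric="cosine",
    linkage="average",
)
labels = clustering.fit_predict(embeddings)
for response, label in zip(responses, labels):
    print(label, response)
```

With the example responses above, the two price complaints typically land in one cluster while the feature request lands in another, mirroring the topic-plus-intent distinction described here.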
We aim to generate granular themes that comprise a single actionable takeaway. However, in the case of low response numbers, themes can be broader. An example of this is the theme "Add more features," which may have several unique and single-response feature requests.
The End Result
All themes should have both a topic and an intent so that the takeaway is clear and immediately useful. It's also important to identify an element of emotional response - sentiment - and a recommendation based on the urgency of the theme's responses. Sprig can produce this kind of AI analysis quickly, accurately, and at scale using advanced machine learning techniques.
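As a sketch, a theme produced by this kind of analysis might be represented as a small record like the following. The field names are illustrative assumptions, not Sprig's schema.

```python
from dataclasses import dataclass

@dataclass
class Theme:
    """Illustrative shape of an actionable theme (not Sprig's schema)."""
    topic: str            # what the responses are about
    intent: str           # what respondents want, often implicit
    sentiment: str        # emotional tone of the grouped responses
    recommendation: str   # suggested action, weighted by urgency
    response_count: int   # how many responses support the theme

theme = Theme(
    topic="subscription pricing",
    intent="complaint about cost",
    sentiment="negative",
    recommendation="Review pricing tiers",
    response_count=2,
)
print(theme)
```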
Not all responses can be grouped into themes. There may not be enough data to generate a theme: if only one response mentions cost, generating a theme may not be appropriate until additional similar responses are received. These responses are categorized as low occurrence responses. The response data may also be unintelligible to the analysis engine, for example, "asdf1234". The analysis engine rejects these responses and categorizes them as low signal responses; they will never be grouped into a theme.
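The distinction can be sketched with two simple checks. The heuristics below (a minimum count of similar responses and a crude intelligibility test) are illustrative assumptions, not Sprig's actual rules.

```python
import re

def classify_unthemed(response: str, similar_count: int) -> str:
    """Toy triage for responses that have not joined a theme.

    similar_count is the number of similar responses seen so far,
    including this one; both thresholds are illustrative.
    """
    # Low signal: nothing that looks like natural language, e.g.
    # keyboard mashing like "asdf1234". Crude check: require at
    # least two alphabetic words.
    words = re.findall(r"[A-Za-z]+", response)
    if len(words) < 2:
        return "low signal"       # rejected; never grouped into a theme
    # Low occurrence: intelligible, but too few similar responses so far.
    if similar_count < 2:
        return "low occurrence"   # may join a theme once more arrive
    return "themed"

print(classify_unthemed("asdf1234", similar_count=1))                  # low signal
print(classify_unthemed("The pricing is too high", similar_count=1))   # low occurrence
```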
To find the themes generated by the responses to your open-ended questions:
- In the Navigation pane, click Studies, then click the study in question.
- Click Summary, then scroll down to the open-text question response table.
- Make sure to click the themes icon in the table.
- All themes identified by Sprig will be shown in the table.
- Responses that have been received but have not been associated with a theme are added to either the Low occurrence responses or Low signal responses categories.
