Question Types

Learn about the question types available in Sprig.

Supported Question Types

By default, every question type is supported across all:

  • Plans (Free, Starter, and Enterprise)
  • Study Types (In-Product Surveys, Long-Form Surveys, Feedback Studies, and Prototype Tests)
  • Delivery Methods (Web, Mobile, and Link)
  • Survey Formats (Conversational and Standard)

Exceptions are noted in the Constraints column below:

Question Type

Description

Constraints

Rating Scale

Also known as a Likert scale. Used to measure respondents' attitudes toward a statement.

Open Text

Open text response to a text question. Our AI Analysis constructs themes based on these responses. We also support AI Follow-up Questions for this question type.

Matrix / Accordion

Allows you to collect many responses for similar question sets.

Multiple Choice Single-Select

Users may select only one option from a list.

Multiple Choice Multi-Select

Users may select multiple options from a list.

NPS

NPS measures customer loyalty on a scale from 0 to 10. Results are automatically categorized into Promoters, Passives, and Detractors.

Consent / Legal

Enables respondents to consent to disclaimers before taking the study.

Only available on In-Product and Long-Form Surveys.

Text / URL prompt

Used to introduce or conclude a study, or permit the study respondent to navigate to a URL.

Video and Voice

Ask video or audio-only questions and collect video or audio-only responses from respondents.

Not supported on Feedback Studies.

Rank Order

Enable respondents to prioritize a set of options so you can make data-driven decisions.

Only available on Enterprise Plans.

Only available on In-Product and Long-Form Surveys.

Recorded Task

Allows you to watch a user attempt a specific goal in your prototype while capturing their screen, voice, and video.

Only available on Enterprise Plans.

Only available on Prototype Tests.

MaxDiff

Gather granular preference information on longer lists of items.

Only available on Enterprise Plans.

Only available on Long-Form Surveys in Standard Format.

Multi-Question: Single Page

Add a page that can contain multiple questions.

Only available on Starter and Enterprise Plans.

Only available on Long-Form Surveys.

Note: Video and Voice and Recorded Task questions are not supported in HIPAA compliant environments.

Question Type Details

Rating Scale

A Rating (or Likert) scale is a type of scale used in survey research to measure respondents' attitudes toward a subject. You can change the Range (the number of points); the Labels (numeric, star, or smiley emoji); and the Lowest and Highest Value Labels.

Rating Scale Skip Logic

There are eight options for configuring skip logic on numerical question types.

  1. is equal to
  2. is not equal to
  3. is less than
  4. is less than or equal to
  5. is greater than
  6. is greater than or equal to
  7. is submitted (any value selected)
  8. is skipped (no value selected)
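
The eight conditions above can be expressed as a small evaluator. This is an illustrative sketch, not Sprig's implementation; the `matches` helper and its arguments are hypothetical names, and `None` stands for a skipped question.

```python
# Hypothetical sketch of the eight numeric skip-logic conditions.
# `response` is the selected value, or None if the question was skipped.
def matches(condition, response, target=None):
    if condition == "is submitted":
        return response is not None
    if condition == "is skipped":
        return response is None
    if response is None:  # a skipped question fails every comparison
        return False
    comparisons = {
        "is equal to": response == target,
        "is not equal to": response != target,
        "is less than": response < target,
        "is less than or equal to": response <= target,
        "is greater than": response > target,
        "is greater than or equal to": response >= target,
    }
    return comparisons[condition]

matches("is less than or equal to", 2, 3)  # True: e.g. route low ratings to a follow-up
matches("is skipped", None)                # True
```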

Net Promoter Score

Net promoter score (NPS) is a widely used market research metric that typically takes the form of a single survey question asking respondents to rate the likelihood that they would recommend a company, product, or service to a friend or colleague. NPS is a commonly requested sentiment score that investors and boards review across companies, segments and verticals. For more information on NPS, click here.
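
The categorization into Promoters, Passives, and Detractors follows the standard NPS buckets (0-6 Detractor, 7-8 Passive, 9-10 Promoter), and the score is the percentage of Promoters minus the percentage of Detractors. A minimal sketch (function names are illustrative):

```python
# Standard NPS buckets: 0-6 Detractor, 7-8 Passive, 9-10 Promoter
def nps_category(rating):
    if rating >= 9:
        return "Promoter"
    if rating >= 7:
        return "Passive"
    return "Detractor"

# NPS = % Promoters - % Detractors, ranging from -100 to 100
def nps_score(ratings):
    categories = [nps_category(r) for r in ratings]
    promoters = categories.count("Promoter")
    detractors = categories.count("Detractor")
    return (promoters - detractors) * 100 / len(ratings)

nps_score([10, 9, 8, 7, 6, 0])  # 2 Promoters and 2 Detractors over 6 responses = 0.0
```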

NPS Skip Logic

There are eight options for configuring skip logic on numerical question types.

  1. is equal to
  2. is not equal to
  3. is less than
  4. is less than or equal to
  5. is greater than
  6. is greater than or equal to
  7. is submitted (any value selected)
  8. is skipped (no value selected)

Open Text

Open Text uses AI Thematic Clustering to automatically group qualitative responses to text-based questions into themes.

AI Follow-up Questions

Open Text questions support AI-powered follow-up questions. When enabled, Sprig AI dynamically generates a contextual follow-up based on the respondent's open text response, surfacing deeper insights without requiring you to anticipate every answer path in advance. To enable AI follow-up questions on an Open Text question, toggle AI Follow-up Question on in the question settings. See AI Follow-up Question for full configuration details and best practices.

Open Text Skip Logic

There are four options for configuring skip logic on Open Text question types.

  1. is submitted (any value selected)
  2. is skipped (no value selected)
  3. contains
  4. does not contain

Matrix / Accordion

Matrix questions bundle multiple questions sharing the same rating scale or response options into one cohesive table. Rows represent different items being evaluated, while columns represent the scale.

There are minimum SDK version requirements to support Matrix questions.
  Platform        Minimum SDK Version
  Web             2.25.1 (Desktop), 2.32.0 (Accordion on Desktop), 2.31.2 (Accordion on Mobile Web)
  iOS             4.22.3
  Android         2.16.4
  React Native    2.17.0

Display Behavior

  • Desktop Web: defaults to a full-scale matrix, with an option to always display as an accordion on desktop
  • Mobile Web & Mobile Apps: always displayed as an accordion (full-scale matrix not supported)

Editing Post-Launch

Matrix rows and columns cannot be deleted or reordered after launching a survey, but new rows and columns can be added as needed.

Matrix Skip Logic

There are three options for configuring skip logic on Matrix questions.

  1. is completely submitted (an answer is provided for every row)
  2. is partially submitted (an answer is provided for at least one, but not all rows)
  3. is skipped
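
The three conditions reduce to how many rows received an answer. A rough sketch (the helper and its argument names are hypothetical, not Sprig's API):

```python
# Map a Matrix response to its skip-logic condition based on row completion.
def matrix_condition(answered_rows, total_rows):
    if answered_rows == total_rows:
        return "is completely submitted"
    if answered_rows > 0:
        return "is partially submitted"
    return "is skipped"

matrix_condition(3, 3)  # "is completely submitted"
matrix_condition(1, 3)  # "is partially submitted"
matrix_condition(0, 3)  # "is skipped"
```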

Multiple Choice

Multiple choice questions allow respondents to select one or more options from a list, depending on whether they are allowed to select one, a range, or unlimited options. Both single- and multi-select questions support the following two configurations:

Randomized Order Supported

The order of responses shown to the user can be randomized using the Randomize Order dropdown menu. Additionally, users can configure choices to be pinned to the bottom of the list of responses by adding a Pinned Choice, even alongside an Other choice.

Display as Dropdown

You can choose to display multiple choice questions as a dropdown instead of a list by toggling Show choices in a dropdown ON. Note: when using the "Other" choice option, displaying as a dropdown is not supported.

Multiple Choice Single-Select

The multiple choice single-select question type allows a respondent to choose only one option from a list of possible answers.

Single-Select Skip Logic

There are three options for configuring skip logic on Single-Select questions.

  1. is
  2. is not
  3. is submitted (any value selected)

Multiple Choice Multi-Select

The multiple choice multi-select question type allows a respondent to choose multiple options from a list of possible answers.

Response Validation Options (also known as "Max Selectable")

There are three options for constraining responses to Multiple Choice Multi-Select questions:

  1. Unlimited (default) - Respondents can select as many choices as they'd like.
  2. Maximum - Set a maximum for the number of choices respondents can select.
    1. Once the maximum has been reached, remaining choices will become un-selectable.
  3. Range - Define a minimum and maximum number of choices that respondents can select.
    1. The minimum can be as low as 1, and the maximum can be as high as the total number of choices in the list.
    2. If a question is optional, respondents can skip it entirely. However, if they select any option, they must meet the minimum requirement to proceed.
    3. Respondents will see a numeric indicator to know how many options they need to select to meet the minimum. The indicator will disappear once it's been met.

Best practice: describe the validation requirements in the question or description so respondents know what's expected.
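
The Unlimited, Maximum, and Range rules above can be sketched as a single check. This is an illustrative helper, not Sprig's implementation; `maximum=None` stands in for Unlimited.

```python
# Hypothetical sketch of multi-select response validation ("Max Selectable").
def selection_is_valid(selected_count, minimum=1, maximum=None, optional=False):
    if selected_count == 0:
        return optional  # optional questions may be skipped entirely
    if selected_count < minimum:
        return False     # once anything is selected, the minimum applies
    return maximum is None or selected_count <= maximum

selection_is_valid(0, minimum=2, maximum=4, optional=True)  # True: skipped entirely
selection_is_valid(1, minimum=2, maximum=4, optional=True)  # False: below the minimum
selection_is_valid(3, minimum=2, maximum=4)                 # True: within the range
```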

None of the Above Choice

For Multiple Choice Multi-Select questions, a None of the Above choice can be added to the list of options. When selected by a respondent, other options are automatically deselected.

  • If Range or Maximum are selected for the question, a "None of the above" option cannot be added to the question.
  • Similarly, if "None of the above" is already one of the options, you cannot change the Selection Amount to Range or Maximum.
  • Remove the "None of the above" option to change the validation type.
Multiple-Select Skip Logic

Skip logic for multi-select questions triggers based on how the respondent's collective choices match your defined criteria. There are four options for configuring skip logic on Multi-Select questions.

  1. is exactly: evaluates to TRUE when respondent selects the EXACT options specified.

     Rule             Selected Options   Result
     Exactly [a, b]   [a]                false
     Exactly [a, b]   [a, b]             true
     Exactly [a, b]   [a, b, c]          false
  2. includes all: evaluates to TRUE when respondents select ALL that are included within specified options.

     Rule                  Selected Options   Result
     Includes all [a, b]   [a]                false
     Includes all [a, b]   [a, b]             true
     Includes all [a, b]   [a, b, c]          true
  3. includes at least one: evaluates to TRUE when respondents select one or more options specified.

     Rule                           Selected Options   Result
     Includes at least one [a, b]   [a]                true
     Includes at least one [a, b]   [a, b]             true
     Includes at least one [a, b]   [a, c]             true
     Includes at least one [a, b]   [c]                false
  4. does not include: evaluates to TRUE when respondents do not select ANY of the options specified.

     Rule                      Selected Options   Result
     Does not include [a, b]   [a]                false
     Does not include [a, b]   [c]                true
     Does not include [a, b]   [a, b]             false
     Does not include [a, b]   [c, d]             true
🚧

If multiple skip logic statements are true, visitors will be routed to a randomly selected true logic statement.

For example:

  • includes at least one [a or b] = skip to 2
  • includes all [a and b] = skip to 3

If the user selects a and b, they are routed to a randomly selected TRUE logic statement. In the example above, a visitor would have an equal chance of landing on question 2 or 3.
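
The four multi-select rules correspond to simple set operations. A rough sketch (illustrative only; the option letters mirror the tables above):

```python
# Sketch of the four multi-select skip-logic rules as set operations.
def multi_select_matches(rule, selected, rule_options):
    selected, rule_options = set(selected), set(rule_options)
    if rule == "is exactly":
        return selected == rule_options
    if rule == "includes all":
        return rule_options <= selected       # every rule option was chosen
    if rule == "includes at least one":
        return bool(rule_options & selected)  # non-empty intersection
    if rule == "does not include":
        return not (rule_options & selected)  # empty intersection
    raise ValueError(f"unknown rule: {rule}")

multi_select_matches("is exactly", ["a", "b", "c"], ["a", "b"])    # False
multi_select_matches("includes all", ["a", "b", "c"], ["a", "b"])  # True
```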


Text / URL Prompt

The Text / URL prompt can be used to add messaging for respondents into a study or redirect respondents to an external site. It comprises two text fields and a button that can go to the next question or a URL.

The steps to create a study with a Text / URL prompt are:

  1. Select Text / URL Prompt from the Add Question menu.
  2. Type in your purpose, question, or statement in the Title and Description fields. Both fields allow rich text formatting. Rich text formatting is supported on Web SDK v2.14.9 and later.
  3. Choose whether clicking the button continues the survey or links to an external URL.
  4. Type in the text for the button.
  5. If you selected Button to link to external URL, type in the URL you want the respondent to navigate to. To construct unique links for each participant, you can enrich each link with the visitor's User ID.
Text/URL Prompt Skip Logic

There are two options for configuring skip logic on Text/URL questions.

  1. is submitted (user selects the button, whether it links to a URL or not)
  2. is skipped

Identifying the Visitor

If you use a Button to link to external URL, you can pass respondents' email and/or User ID to the destination website. This can be useful for recruiting participants or connecting responses across different platforms, or to a Sprig Long-Form Survey. More about using an In-Product Survey to navigate to Long-Form Surveys here.

How to Use

  • Add {{email}} or {{user_id}} to your URL. Sprig will automatically swap these with the respondent's actual data when they click the button.
    • For example,

      https://website.com/schedule?id={{user_id}}&email={{email}} might become

      https://website.com/schedule?id=23481&email=..., with the respondent's actual email address substituted in.

    • The exact format for the user ID and email name-value pairs may vary by destination site.

  • To make user ID and email available in button URLs in studies, ensure they're set on the user before the study is displayed. If there's no value found for a parameter, it is removed from the URL so the link doesn't break.
    • For example, if a visitor accessed

      https://website.com/schedule?id={{user_id}}&email={{email}} but only had a User ID set (and no email), it would become

      https://website.com/schedule?id=23481.
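
The substitution and drop-empty-parameter behavior described above can be sketched as follows. This is a hypothetical helper, not Sprig's implementation; the sample user ID comes from the example above, and any email used with it would be made up.

```python
# Sketch of {{user_id}}/{{email}} substitution in button URLs.
# Parameters whose placeholder has no value are removed so the link doesn't break.
def enrich_url(url, user):
    base, _, query = url.partition("?")
    values = {"{{user_id}}": user.get("user_id"), "{{email}}": user.get("email")}
    kept = []
    for pair in query.split("&") if query else []:
        name, _, value = pair.partition("=")
        value = values.get(value, value)  # swap a placeholder for the real value
        if value:                         # drop the pair if no value was found
            kept.append(f"{name}={value}")
    return base + ("?" + "&".join(kept) if kept else "")

enrich_url("https://website.com/schedule?id={{user_id}}&email={{email}}",
           {"user_id": "23481"})
# "https://website.com/schedule?id=23481" (the empty email parameter is removed)
```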

There are minimum SDK version requirements to support enriching URLs with user identity.
  Platform        Minimum SDK Version
  Web             2.15.3
  iOS             4.3.0
  Android         2.6.0
  React Native    2.6.0

Sending Identity vs. Capturing Identity

To use identity placeholders (like {{email}}) in a Text/URL prompt, Sprig must first "know" who the visitor is.

Sending Identity through In-Product Surveys (Outgoing): Once Sprig has captured the data, you can "pass" it to an external URL using this button. The data must already be in Sprig (via the SDK or a CSV upload). End goal: an outgoing button URL set to https://website.com/schedule?email={{email}} has {{email}} replaced with the respondent's actual email address when clicked.

Capturing Identity into Long-Form Surveys (Incoming): To store a Long-Form Survey respondent's user identity, you must "inject" the identity by adding the {{user_id}} or {{email}} parameter to the study link you distribute. See here for details on how to capture identity in Long-Form Surveys when they're distributed through third-party software or marketing tools.

Consent / Legal

You can incorporate a Non-Disclosure Agreement into your Study for your participants' review and obtain their consent before proceeding with the study. The copy can be text and/or a PDF file. You can also capture the respondent's name.

Consent/Legal Skip Logic

There are two options for configuring skip logic on Consent/Legal questions.

  1. is submitted (user selects the checkbox and selects the button)
  2. is skipped

Video and Voice

In addition to Open Text questions and responses, Sprig supports video and audio-only questions and responses. Video and voice responses are automatically transcribed and can optionally be translated; contact Sprig Support to enable the translation service. Transcription lets the same AI analysis used for text be applied to video responses, while video and audio add an emotional dimension that is not always visible in text-only responses. Because recording video or audio requires more effort from respondents, expect lower response rates than for standard text questions. Consider focusing video and voice questions on a specific area of interest, and share the study only with recipients for whom the topic is particularly relevant.

The steps to create a study with video-enabled questions and responses are:

  1. Select Video & Voice from the +Add Question menu.
  2. Click Request Permissions to enable Sprig to request access to your camera and microphone.
  3. Click Allow for the browser to permit Sprig access to your camera and microphone.
  4. Click Record to begin the recording countdown; recording then starts automatically so you can record your question. Optionally, click Video to turn off video and record audio only.
  5. Click Stop to stop recording, Play to play back your recording, and Delete to delete it if you'd like to start over.
  6. Distribute the survey link to prospective respondents using your usual methods.
  7. Note that respondents who engage with this question must give permission in their browser to enable camera and microphone access.
Video/Voice Skip Logic

There are two options for configuring skip logic on Video/Voice questions.

  1. is submitted (user uploads a video/voice response)
  2. is skipped

Downloading a Video Response

  1. Click on the study from which you want to download a video response.
  2. Click on Responses.
  3. Click on Download CSV to download the responses CSV file.
  4. Open the CSV file in a spreadsheet.
  5. Locate the question and response you would like to download.
  6. Notice that the response cell has a field labeled downloadUrl.
  7. Copy that URL and paste it into your browser.
  8. Once the video loads, right-click on the video and select Save video as to download a copy of the video.

Content Security Policy

Supporting Video and Voice questions for Web studies may require you to update your web application's Content Security Policy (CSP). This is because Sprig uses third-party libraries to support Video & Voice questions, including fetching an external stream of the question and uploading the response.

The following list of sources is required to use Video & Voice questions:

  • Base64 SVGs and fonts.
  • Connecting to Mux (for streaming and short video uploads).
  • Connecting to Google APIs (for long video uploads).
  • A worker source required by our third-party library (videos).

Examples are shown in the following CSP source list:

<meta
  http-equiv="Content-Security-Policy"
  content="img-src data: https://*.mux.com;
    connect-src blob: data: https://api.sprig.com https://*.mux.com https://storage.googleapis.com https://cdn.sprig.com https://cdn.userleap.com;
    font-src data:;
    media-src blob: https://*.mux.com;
    worker-src blob:;"
/>
🚧

Info

As every company has different security policies, the example may not reflect the complete configuration. You may need to add additional keys and values in order to comply with your own organization’s policies. Note: Video and Voice questions are not supported in HIPAA compliant environments.

Troubleshooting Errors

After creating your first study with Video and Voice questions, test its display on your website using these instructions. Check your browser console for any CSP-related errors if the questions are not rendering correctly or responses aren't uploaded. You may have to work with your engineering and security teams to update your website's or app's CSP settings to resolve the errors.

Rank Order

ℹ️

Rank Order questions are only available on the Enterprise plan.

Capture relative preferences by having respondents prioritize a set of options so you can make data-driven decisions. Supported in Long-Form and In-Product Surveys. Skip logic for this question type is limited to the submitted and skipped conditions.

Configure

  • Add 2 - 10 options and optionally set them to display in a random order
  • Set the labels for the top and bottom of the list to help respondents understand the criteria they should use to rank the options
Rank Order Skip Logic

There are two options for configuring skip logic on Rank Order questions.

  1. is submitted (user ranks items)
  2. is skipped (user does not interact with any items)

Respondent Experience

Items are unranked until the respondent ranks their first item. Respondents can drag and drop items, or set a specific ranking for an item. If they set a ranking, the items in the list will move to reflect the new order. Note that drag and drop is not yet supported when surveys are taken on Android devices.

Review results

Once you've collected results, view details about how each item was ranked in a histogram. Hover over the histogram for a tooltip containing a breakdown with percentages.

MaxDiff

ℹ️

MaxDiff questions are only available on the Enterprise plan and for Long-Form Surveys.

MaxDiff questions, also known as Best-Worst Scaling or Maximum Difference Scaling, allow you to gather detailed preference information from respondents. They're often used when the goal is to deeply understand the importance of the items, like product improvements or brand messages, being rated.

Benefits

For researchers: mitigate position and order bias by showing items in different groupings and orders; gather deeper insights and detailed preference data.

For respondents: reduced cognitive load, since they only need to select the best and worst items from a small set. Once a user selects both a Best and a Worst, the survey automatically advances to the next set; there's no "Next" button between sets.

Build

  1. Create items: MaxDiff questions require a minimum of 4 and a maximum of 24 items in the list to be rated.
  2. Create labels:
    1. The left label must be a positive descriptor for Sprig's Best-Worst calculation to be accurate. By default, this is "Best."
    2. The right label must be a negative descriptor for Sprig's Best-Worst calculation to be accurate. By default, this is "Worst."
  3. Required or Optional?
    1. If a MaxDiff question is Required, respondents must finish all sets to proceed.
    2. If it's Optional, respondents can skip the whole experiment. However, if they start a set, they must finish that specific set.
  4. Construct the experiment: determines the number of items per set and the number of sets that a user sees. There are two options:
    1. Recommended: Sprig will use half the total possible items as the number of items per set (up to a maximum of 8), and it will use the following formula to determine the total number of sets: (total # items/items per set) * 3.
      1. For example, in a question that has 24 items:

        Items per set = half of 24, max 8 = 8

        Number of sets = (24/8) * 3 = 3 * 3 = 9

        So, users will see 9 sets of 8 items.

      2. In a question that has an odd number of items, the items per set is the halved value rounded down. However, Sprig always rounds the number of sets up to the nearest whole set to ensure coverage. For example, in a question that has 11 items:

        Items per set = half of 11 = 5.5, which is rounded down to 5

        Number of sets = (11/5) * 3 = 2.2 * 3 = 6.6, which is rounded up to 7

        So, the user will see 7 sets of 5 items.

    2. Custom: Configure the number of items per set and the number of times each item should be shown to each respondent, within the ranges noted below the fields. Based on those values, Sprig will calculate the number of sets to show each respondent.
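
The Recommended construction above can be written out directly. A sketch under the stated formulas (the function name is illustrative, not Sprig's API):

```python
import math

# Sketch of the Recommended MaxDiff experiment construction:
# items per set = half the items (rounded down), capped at 8;
# number of sets = (total items / items per set) * 3, rounded up.
def recommended_config(total_items):
    items_per_set = min(total_items // 2, 8)
    number_of_sets = math.ceil(total_items / items_per_set * 3)
    return items_per_set, number_of_sets

recommended_config(24)  # (8, 9): 9 sets of 8 items
recommended_config(11)  # (5, 7): 7 sets of 5 items
```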
MaxDiff Skip Logic

There are two options for configuring skip logic on MaxDiff questions.

  1. is submitted (user picks a "best" and "worst" for each set shown)
  2. is skipped (user immediately clicks the forward button, or does not pick a "best" and/or "worst" for all sets shown)

Results

Sprig records the "best" and "worst" selection for each set and the order of items shown in each set.

  • The "best" percentage is the percentage of times that item was chosen as the best across all sets. The "worst" is the percentage of times the item was chosen as worst.
  • The score is based on the following formula and ranges from -100 to 100:
    (# Best - # Worst) * 100 / total number of appearances.

Results on the survey summary page are based on "completed" questions, where respondents filled in all of the sets they were shown. To view partial responses, download the CSV.
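
The percentage and score formulas above can be sketched as follows (the function and field names are made up for illustration):

```python
# Best/worst percentages and the MaxDiff score, per the formulas above.
def maxdiff_results(best_count, worst_count, appearances):
    return {
        "best_pct": best_count * 100 / appearances,
        "worst_pct": worst_count * 100 / appearances,
        "score": (best_count - worst_count) * 100 / appearances,  # -100 to 100
    }

maxdiff_results(6, 1, 9)["score"]  # (6 - 1) * 100 / 9, roughly 55.6
```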

⚠️

After a survey is launched, you can only edit the existing items in a MaxDiff question. You cannot add or remove items. Likewise, you cannot change the configuration of MaxDiff item sets once the survey has been launched.

MaxDiff FAQs

How are sets configured?

Sprig's MaxDiff sets will:

  • show items in different pairs evenly such that all pair comparisons are shown to each respondent or can be inferred
    • ensure each item is shown an equal number of times with other items (so strawberries isn't always shown with kiwi and durian)
  • show each item the same (minimum) number of times across all sets (some rounding occurs especially when there are an odd number of items or sets)
  • show items in different positions within sets (top, bottom, middle)
  • show items evenly across sets so they aren't shown only up front and not later

Do all respondents see the same sets?

  • All sets a respondent will see are generated as the survey initially loads; the order of the items displayed will be randomly determined and will differ between respondents.
  • Not all items will be shown to each respondent, and each respondent may not see each item the exact number of times the formula calls for, usually when there's an odd number of items and rounding occurs.