# Metric types

**Ignity calls calculations on (survey) data ‘metrics’. In Ignity you do not define a calculation yourself, but select a metric type and fill out the parameters.**

Within Ignity we can display the following metrics/items:

- Average
- Percentage – average
- Percentage – rule
- SUM
- NPS
- Multiple Choice
- Multiple Response
- Record count
- Group metric (variation of metric)
- Open question answers (part of a metric)

**Average**

*What*: The average metric calculates the average of all scores as a number value.

*Example*: A good example is when respondents are asked to express their experience with a mark ranging from 1 to 10.

*Calculation*: sum of all scores / total number of respondents
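The calculation above can be sketched in Python (a minimal illustration, not Ignity's actual implementation):

```python
def average(scores):
    """Average metric: sum of all scores divided by the number of respondents."""
    return sum(scores) / len(scores)

# Four respondents give marks on a 1-10 scale.
print(average([7, 8, 9, 6]))  # 7.5
```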

**Percentage – average**

*What*: This percentage indicates where the average answer is positioned on a scale from 0 to 100%.

*Example*: Suppose you ask respondents if they are satisfied on a 5-point scale from dissatisfied to satisfied. If every respondent checks the leftmost scale, the score is 0%; a score of 100% means every respondent checked the rightmost scale. An answer in the middle (scale 3) results in a score of 50%.

*Calculation*: The leftmost scale is valued at 0% and the rightmost scale at 100%. The other scales are spaced at equal distances in between. So in the example of the 5-point scale, the scales have the following values from the left:

- Scale 1 (leftmost): 0%
- Scale 2: 25%
- Scale 3: 50%
- Scale 4: 75%
- Scale 5 (rightmost): 100%

The metric is calculated by using the average of all scores.

*Calculation example*: Suppose 2 persons answer with scale 2 and 3 persons with scale 5; the score is ((2 × 25%) + (3 × 100%)) / 5 = 70%.
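The same calculation can be sketched in Python (a minimal illustration, not Ignity's actual implementation), where each scale answer is first mapped onto the 0-100% range:

```python
def percentage_average(scores, scale_min=1, scale_max=5):
    """Map each scale answer onto 0-100% (leftmost scale = 0%,
    rightmost scale = 100%) and return the average of those percentages."""
    span = scale_max - scale_min
    percentages = [(s - scale_min) / span * 100 for s in scores]
    return sum(percentages) / len(percentages)

# Two respondents answer scale 2, three answer scale 5 (5-point scale).
print(percentage_average([2, 2, 5, 5, 5]))  # 70.0
```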

*Presentation*: A percentage that ranges from 0 to 100%.

*Practical use*: With relatively few evaluations you can detect slight changes that truly express an underlying shift in experience or attitude. On the downside, it is more difficult to explain what the calculated metric stands for.

**Percentage – rule**

*What*: A percentage that expresses how many answers meet a condition (rule) you set for the answer.

*Example*: Suppose you ask respondents if they are satisfied on a 6-point scale from dissatisfied (left) to satisfied (right). To report on the percentage of respondents that are satisfied, you set the rule that the answer given should be scale 4 or higher. The percentage then expresses the share of respondents that are satisfied.

*Presentation*: A percentage that ranges from 0 to 100%.

*Calculation*: number of answers that meet the specified condition/total number of answers.
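This rule-based percentage can be sketched in Python (a minimal illustration, not Ignity's actual implementation), with the rule passed in as a predicate:

```python
def percentage_rule(answers, rule):
    """Percentage of answers that meet the condition (rule)."""
    matching = [a for a in answers if rule(a)]
    return len(matching) / len(answers) * 100

# Six respondents on a 6-point scale; 'satisfied' means scale 4 or higher.
print(round(percentage_rule([4, 5, 2, 6, 3, 4], lambda a: a >= 4), 1))  # 66.7
```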

*Practical use*: The percentage can express a very intuitively understandable metric: in the example above, the percentage of respondents that is satisfied. Compare a shift in this percentage from 70% to 80% with a shift of an average mark from 7 to 7.5. It is easier to explain to another person what the change in scores for the first metric means.

*Disadvantage*: Although this can be a very powerful metric, you need more evaluations than with metrics that report an average to determine whether a change actually represents an underlying change. This is because every evaluation either meets the condition or not, and therefore contributes an individual 0% or 100% score and nothing in between.

**SUM**

*What*: The SUM metric calculates the sum of all the values in a particular column.
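As a minimal sketch (the column name `amount` is just an example, not an Ignity field):

```python
# SUM metric: total of all values in one column, with rows modeled as dicts.
rows = [{"amount": 10}, {"amount": 25}, {"amount": 5}]
total = sum(row["amount"] for row in rows)
print(total)  # 40
```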

**NPS ‘style’**

*What*: NPS is short for Net Promoter Score. The idea is that respondents indicate on a scale of 0 to 10 whether they would recommend the company or service to their family and friends. High-scoring respondents are labeled ‘promoters’, low-scoring respondents ‘detractors’, and all respondents in between ‘passives’.

*Presentation and calculation*: The NPS score is a number between -100 and +100, calculated by subtracting the percentage of detractors from the percentage of promoters.

*NPS in Ignity*: The official NPS exactly defines which scores count as promoters (9-10), passives (7-8) and detractors (0-6). In Ignity you can specify your own rules as to when scores are regarded as a promoter or a detractor.
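The calculation can be sketched in Python (a minimal illustration, not Ignity's actual implementation); the default thresholds below follow the official NPS definition, and making them parameters mirrors the configurable rules described above:

```python
def nps(scores, promoter_min=9, detractor_max=6):
    """Net Promoter Score: % promoters minus % detractors, range -100..+100.
    Defaults follow the official cut-offs (promoters 9-10, detractors 0-6);
    the thresholds are parameters because Ignity lets you define your own."""
    n = len(scores)
    promoters = sum(1 for s in scores if s >= promoter_min)
    detractors = sum(1 for s in scores if s <= detractor_max)
    return (promoters - detractors) / n * 100

# 5 promoters, 2 passives, 3 detractors out of 10 respondents.
print(nps([10, 9, 9, 8, 7, 6, 3, 10, 9, 5]))  # 20.0
```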

**Multiple choice**

*What*: With this metric we can display a list of choices where the respondent was allowed to check ONE choice.

*Calculation*: per choice the percentage of respondents that checked this choice is calculated.

*Presentation*: A list of choices with percentages that add up to a total of 100%. This can be either a single vertical bar chart with a ‘segment’ for each choice, or a list of choices with bars underneath each other.

**Multiple response**

*What*: With this metric we can display a list of choices where the respondent was allowed to check MORE than one choice.

*Calculation*: per choice the percentage of respondents that checked this choice is calculated.

*Presentation*: A list of choices with percentages that can (and most often will) add up to more than 100%. This can be either a single vertical bar chart with a ‘segment’ for each choice, or a list of choices with bars underneath each other.
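Both metrics use the same per-choice calculation; the only difference is how many choices each respondent may check. A minimal Python sketch (not Ignity's actual implementation, choice names are made up):

```python
from collections import Counter

def choice_percentages(selections, choices):
    """Per choice, the percentage of respondents that checked it.
    Each element of `selections` is the set of choices one respondent checked."""
    n = len(selections)
    counts = Counter(c for sel in selections for c in sel)
    return {c: counts[c] / n * 100 for c in choices}

# Multiple choice: each respondent checks exactly one option -> sums to 100%.
print(choice_percentages([{"A"}, {"A"}, {"B"}, {"C"}], ["A", "B", "C"]))
# Multiple response: respondents may check several -> can sum to more than 100%.
print(choice_percentages([{"A", "B"}, {"A"}, {"B", "C"}, {"A", "C"}], ["A", "B", "C"]))
```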

**Record count**

*What*: This metric calculates the total number of database records in scope. Optionally, rules can be defined, in which case only the records that match the rules are counted.
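A minimal sketch (not Ignity's actual implementation; the `region` field and predicate are made up for illustration):

```python
def record_count(records, rules=None):
    """Count records in scope; if a rules predicate is given, count only matches."""
    if rules is None:
        return len(records)
    return sum(1 for r in records if rules(r))

records = [{"region": "north"}, {"region": "south"}, {"region": "north"}]
print(record_count(records))                                  # 3
print(record_count(records, lambda r: r["region"] == "north"))  # 2
```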

**Group metric**

We distinguish between single-value metrics and group metrics. A single-value metric consists of only one value; a group metric consists of multiple values. This is best explained with an example. Suppose you want to ask end users how they perceive the price of your service. However, you do not ask this directly in one question, but ask multiple questions that reflect different aspects of the concept ‘price’. For example:

- The prices for the services are fair
- The prices are lower than those of most competitors
- The service is value for money

Now we can create a group metric ‘price’ with these 3 underlying questions. The group metric score is a (weighted) average of the scores for the individual questions. This way you can keep track of the metric ‘price’ as a whole and also look at the scores for the individual questions.

A group metric can only be defined for the following 3 metric types:

- Average
- Percentage – average
- Percentage – rule

**Average calculation for group metrics**

If you define a group metric, you have 3 options for how the group metric score is calculated from the underlying individual scores.

*Equal average*: In this case all the individual questions have the same ‘weight’ in calculating the group metric score. For example, individual metric values of 40%, 50% and 75% make up a group value of 55%.

*N based average*: With this option the weight is determined by the number of respondents. Since there might be unanswered questions or respondents who opt for a ‘not applicable’ choice, the number of respondents might vary per individual metric. With the N based average, these different respondent numbers are taken into account when calculating the group metric score. For example, if you have 2 individual metric scores, one of 70% based on 10 respondents and one of 90% based on 15 respondents, the group score is calculated to be (70% × 10 + 90% × 15) / 25 = 82%.

*Weighted average*: With this option you set the ‘weights’ yourself. For each individual metric you specify, as a percentage, how much it should weigh in the group metric score; the weights of all individual metrics should add up to 100%.
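The three averaging options can be sketched in Python (a minimal illustration, not Ignity's actual implementation; the weights in the last example are made up):

```python
def equal_average(scores):
    """All underlying metrics weigh equally."""
    return sum(scores) / len(scores)

def n_based_average(scores, respondents):
    """Weight each underlying metric by its number of respondents."""
    total_n = sum(respondents)
    return sum(s * n for s, n in zip(scores, respondents)) / total_n

def weighted_average(scores, weights):
    """Weights are set manually and must add up to 100(%)."""
    assert sum(weights) == 100
    return sum(s * w / 100 for s, w in zip(scores, weights))

print(equal_average([40, 50, 75]))                    # 55.0
print(n_based_average([70, 90], [10, 15]))            # 82.0
print(weighted_average([40, 50, 75], [50, 30, 20]))   # 50.0
```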

**Open question answers**

An open question allows a respondent to give a comment. Often it is used so a respondent can elaborate on a topic.

In Ignity, open question answers (or comments in short) are always part of a metric. You can define that one or multiple comments belong to a specific group metric or individual metric.

In the dashboard, you can look at all comments given for a particular metric, or just at the comments for a particular subset or group within the results. You can also search comments for specific terms. Reading the comments is a valuable way to discover the story behind the metrics, especially when scores are out of the ordinary.