# Adaptive MaxDiff

This article presents the example .qex file for the Adaptive MaxDiff (A-MaxDiff) methodology, explains how this method differs from standard MaxDiff, and provides instructions for adapting the example file to your own list of items.

### Background and other articles

There are several articles on our Help Centre covering MaxDiff and the Interactive Library. It will be useful to review these first:

- Design: Example of MaxDiff methodology
- Analyse: Basic analysis of MaxDiff data
- Design: MaxDiff Interactive Library
- Design: Interactive libraries in Design 5.4
- Design: MaxDiff Table ADC

### Concepts

Research has shown that MaxDiff scores discriminate more strongly among items, and between respondents, than rating-scale scores do. Since respondents make choices rather than expressing strength of preference on a numeric scale, there is no opportunity for scale-use bias.

MaxDiff is also known as "best-worst scaling". When conducting a MaxDiff section in an interview, you would expect to see something like the response table below.

You are shown a list of items and asked about their importance over a number of screens. Each screen presents a subset of the items (usually 3, 4 or 5), and on each screen you pick the most and the least important.

The arrangements are calculated (for example, by software such as R) so that, once all the screens have been answered, the responses give a robust indication of how the items are scaled from best to worst.
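As a purely illustrative sketch of how a fixed arrangement might be generated (this is hypothetical Python, not the Askia or R implementation), one naive approach is to deal screens from repeated shuffles of the item list so that every item appears equally often:

```python
import random

def standard_maxdiff_design(items, per_screen, num_screens, seed=0):
    """Naive frequency-balanced design sketch: deal screens from
    repeated shuffles of the item list so each item appears (roughly)
    equally often. Assumes per_screen divides len(items), so a screen
    never straddles two shuffles and no item repeats within a screen.
    Real design software also balances item co-occurrence."""
    rng = random.Random(seed)
    pool, screens = [], []
    for _ in range(num_screens):
        if len(pool) < per_screen:
            fresh = list(items)
            rng.shuffle(fresh)
            pool.extend(fresh)
        screens.append([pool.pop(0) for _ in range(per_screen)])
    return screens
```

Proper MaxDiff design packages go further, balancing how often each pair of items appears together, which this sketch does not attempt.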

The key point with **standard MaxDiff** is that the on-screen arrangements are pre-set and do not adapt to the responses given during the interview. In addition, the number of selectable options on screen is constant.

In **adaptive MaxDiff**, however, the number of selectable options changes. After each round of screens, the items selected as 'Least' are removed from the next round. The number of items per screen therefore diminishes until, at the start of the last round, you are asked to choose between all those you previously picked as 'Most'.

So the key point is that the routing in the file notes the 'Least' items and removes them from consideration in later rounds, 'adapting' to the responses given. Below is a simple diagram to show this more clearly:

In this example we have 20 items and show 5 per screen in the first round. We then discard the four items selected as least important in each round, until the last round, where we rank the final 4 winners.
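The flow in the diagram can be sketched in Python (purely illustrative — the example file implements this with Askia routing, and `simulate_least` is a hypothetical stand-in for the respondent's answer):

```python
import random

def simulate_least(screen, scores):
    # Stand-in respondent: 'Least' is the lowest-scoring item on screen.
    return min(screen, key=scores.get)

def adaptive_maxdiff(items, scores, num_groups=4, seed=0):
    """Each round: shuffle the surviving items, deal them across
    num_groups screens, and drop every screen's 'Least' pick.
    Stop when only one screen of finalists remains (they are then
    ranked on the final screen). Assumes the item count is a
    multiple of num_groups, as in the 20-item example."""
    rng = random.Random(seed)
    remaining = list(items)
    while len(remaining) > num_groups:
        rng.shuffle(remaining)
        screens = [remaining[i::num_groups] for i in range(num_groups)]
        survivors = []
        for screen in screens:
            least = simulate_least(screen, scores)
            survivors.extend(it for it in screen if it != least)
        remaining = survivors
    return remaining
```

With 20 items and 4 groups, each round removes 4 items, so the pool shrinks 20 → 16 → 12 → 8 → 4 finalists.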

The advantage of adaptive MaxDiff is that greater discrimination is achieved between the important items. The disadvantage? It could be argued that, since your initial answers determine the upcoming arrangements, you have little opportunity to change your mind about items you rated least important in earlier rounds.

This article explores Adaptive MaxDiff in great depth.

### The example files

The example file has the structure shown on the left of the screenshot below. The programming, put together by our developer, makes use of the relatively new routing action: **Run an Askia Script**.

Your list of items goes into the variable **text**, which several subsequent questions link to. The loop **Iterations** should be defined to run from 1 to 20. The first example has **20 items** split over 4 groups and can be downloaded **here**. The routing conditions highlighted above must start with **numOfGroups = 4**.

If you have fewer than 20 items, e.g. 18, then some of the arrangements in the rounds of screens will contain fewer than the expected number of items to account for this.

To increase the number of items, we add the additional items and numbers to **text** and **Iterations** respectively. We also need to update the routing conditions above to start with **numOfGroups = 5**.

This will naturally work with **25 items**. The example file can be downloaded **here** and an example of the data input is shown below.
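To see why 25 items with numOfGroups = 5 works naturally, you can tabulate the round sizes. This is an illustrative Python sketch, not the file's routing; it assumes the item count is a multiple of numOfGroups, whereas the real file handles uneven counts by putting fewer items on some screens:

```python
def screen_schedule(num_items, num_groups):
    """Screens-per-round schedule: one 'Least' item is dropped per
    screen each round, until only one screen's worth of finalists
    remains. Returns (round, screens, items_per_screen) tuples."""
    schedule = []
    remaining, rnd = num_items, 1
    while remaining > num_groups:
        schedule.append((rnd, num_groups, remaining // num_groups))
        remaining -= num_groups
        rnd += 1
    return schedule
```

For 25 items the pool shrinks 25 → 20 → 15 → 10 → 5: four rounds of five screens each, followed by the final ranking of the 5 winners.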

There are 5 iterations per round, and in the last two iterations the ranking of the final 5 winners is decided.

If we try this with 22 items, we see that fewer iterations are required to work out the winners.

Once all the screens are completed, additional routing sets values into two variables at the end (**LoopRanking** and **Ranking**). As you can probably guess, these rank all the items from most important to least important:
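Conceptually, that final ordering can be assembled from the adaptive history like this (an illustrative Python sketch — the example file does this with Askia routing, and the function name here is hypothetical):

```python
def overall_ranking(dropped_per_round, finalists_ranked):
    """Merge the adaptive history into one most-to-least ordering:
    the finalists (already ranked on the last screen) come first,
    then items dropped in later rounds outrank items dropped in
    earlier rounds. Within a single round the order is arbitrary,
    since those items were only ever marked 'Least'."""
    ranking = list(finalists_ranked)
    for dropped in reversed(dropped_per_round):
        ranking.extend(dropped)
    return ranking
```

So an item eliminated in round 1 always lands at the bottom of the list, while the winners' exact order comes from the final ranking screen.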