What's New in AskiaAnalyse 5.6
Analyse 5.6 Features
This article describes the key new features in AskiaAnalyse 5.6. You can use the summary list below to jump to any feature of particular interest:
- View, Add & Edit Tags in Analyse
- Filtered Weighting
- Developed sub-questions for calculated loop variables
- New Col Sig option in 'Test each column against'
- Export all decimals in Excel but cell format should match decimal places setting in Analyse
- Analyse script calculation keywords: EdgeQuestion & EdgeSubQuestion
- Add 'Profiles' & 'Tab templates' tab to the 'Import survey definitions' menu
- Keywords to access name of tab definition & its sub-population
- Keywords to manage levels & interview order
- Updates to Analyse Command Line
- Automation Scripts: Create Calculated Variables
- Various additional updates to the automation scripts language
- i) Automation Script: Set Global Sub Population in Portfolio
- ii) Automation Script: Keyword to set title within general part of tab definition
- iii) Automation Script: ReadTabStyle() & IsScaled Keywords
- iv) Automation Scripts: Set Universe to tab definition
- v) Automation Scripts: New methods to close open numerics
- vi) Automation Scripts: Create weighting (by initial weight question)
- vii) JSON portfolios and automation scripts to create any weighting
- viii) Automation Scripts: New weighting keywords
- ix) Automation Scripts: Allow reading of text files in all Analyse languages
- x) Automation Scripts: ProfileItems[n] property
- xi) Automation Scripts: Replace JSON unfriendly characters in portfolio output
- xii) Automation Scripts: Add tags by automation script
- xiii) Automation Scripts: Update numeric aggregation options
- xiv) Automation Scripts: Remaining Miscellaneous Keywords
- New Askia Surf Features
View, Add & Edit Tags in Analyse
Tags can be set on questions e.g. 'Demographics questions' or on responses e.g. 'Main competitor brands' at any point in AskiaDesign, and now also in AskiaAnalyse! Tags are used with automation scripts to speed up the selection of questions and easily replicate analysis. For full detail on this you can read the article here.
View tags:
You can now view existing tags in AskiaAnalyse. Go to the Tools menu > Tags and this will show all tags present in the survey file. (A)
You'll also notice when highlighting a question in the questionnaire tree, the response distribution panel in the bottom-left has a column at the end showing if any responses are tagged. (B)
Also, when double-clicking a question in the questionnaire tree, the pop-up window now has information about the tags applied to that individual question. (C)
Whilst this gives much flexibility in Analyse to view tags, the obvious follow-up question is: "How can I see all tags assigned to questions and responses in one view?"
Since 5.5.2, this has been possible with automation scripts. In the example below, we make use of the 'Tags' keyword:
In conjunction with the Debug.Trace() keyword, we can write out all question shortcuts, and the responses that have tags associated with them, to the Debug.Trace.txt file created in the .dat folder after running the script.
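A minimal sketch along these lines illustrates the idea (QGender is a hypothetical question shortcut, and exactly how the tag list is serialised in the trace may differ):
' Write the question's shortcut and its tags to Debug.Trace.txt
Debug.Trace(QGender.Shortcut)
Debug.Trace(QGender.Tags)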
Add & Edit tags:
Using route (A) described above, we can right-click in the window that pops up through the Tools > Tags menu. Here we can insert new tags or delete existing ones.
Using route (B) described above, we can set or remove tags on the current question's responses by selecting one or more responses and right-clicking to bring up the menu options shown:
Using route (C) described above, we can change the selection of tags applied to the current question or add new tags here too.
What about if we want to make the same change of tags on many questions in one go?
Well, back in the questionnaire tree, highlight all the questions you want to update and then right-click. You'll see the set and remove tag options again in the context menu:
Filtered Weighting
Filtered weighting removes the need for manual intervention when implementing certain weighting schemes. We have a blog article on this handy new feature here. The article contains a video where I walk through an example that Analyse could not handle previously.
In summary, you'll see these new options in Analyse:
A: This allows you to set a sub-population of only the records that will be considered when the weighting scheme is run.
B: For those records outside the sub-population, we need to set a value for them to be weighted to. Often this will be 1 or 0.
C: The option (B), above, can be set by default in your Analyse options. This means that each new weighting scheme created has the value set here for filtered weight.
Developed sub-questions for calculated loop variables
Calculated/Derived loops have been a success since being introduced in Analyse 5.3.5. With looped variables alone, you can do almost all of the analysis you come across day to day. However, there were one or two specific instances in that time where our clients were stuck and needed to create a derived loop that also had developed questions available to them. So, in the latter part of 2023, we introduced the following options to deal with this gap in functionality:
Any time users now change the level on a calculated question from interviews to that of a loop in the survey, they will see the new option 'Develop questions in level' (A) become enabled. If you tick it and save, you'll see the developed questions appear under your original undeveloped loop variables:
We also have the option to set loop shortcuts in the caption section of your variable (B). After saving, these are expanded to put the correct loop item label in the long caption of your developed question. This gives additional flexibility when building tables from developed calculated loops:
New Col Sig option in 'Test each column against'
A need that arose for a handful of users since the last major release was in col sig testing: carrying out col sig tests only between the responses of a question in the edges, and not across all edges. This is already possible for columns via the existing 'All columns of the question' option, but since some users put their main banner in edges rather than columns, we felt this new feature would be useful.
You'll notice there are two new options shown in the screenshot above.
i) All columns of the edge question
Note that the first edge variable is Agreement and it has three visible responses. In this case the testing remains within the columns related to the current edge variable B1-G1, and repeats for all following edge variables (1. Appreciation → tests H1-Q1, i3. Profession → tests R1-C2). The column response we are on does not matter but it does in the next option.
ii) All columns of the edge question AND the corresponding column response within the edge
Note that the first edge variable is Agreement and it has three visible responses. In this case the testing remains within the columns related to the current edge variable B1-G1. However, in addition, we restrict the test so we only test the same column response each time i.e. Man vs Man and Woman vs Woman. For example, in this case:
Export all decimals in Excel but cell format should match decimal place setting in Analyse
In Analyse we can change the decimal places of a cell output in the following two places:
Previously in Analyse, when you went on to export to Excel, the cells there would contain only the rounded number you set with the options above.
Having discussed with users through support, it became apparent that more flexibility around this was required.
One may want the cells to be formatted to 3 decimal places and appear like this at first glance. However, the full unrounded numbers are often needed to accurately put together further calculations, e.g. done manually in the Excel document or automatically when the Excel output is consumed by another piece of software.
We can now achieve this with the 'Export rounded values' option:
It's set to 'Yes' by default to minimise the risk of back-compatibility issues, so set it to 'No' if you want the new behaviour. What you'll now see after exporting is shown below:
Analyse script calculation keywords: EdgeQuestion & EdgeSubQuestion
Since Analyse 5.3.5, we have had the keywords ColSubQuestion, ColQuestion, RowSubQuestion and RowQuestion.
These are particularly useful when creating loop summary tables and were introduced to minimise how much we refer to specific variable shortcuts in calculation scripts, which increased the re-usability of our tab templates. However, there were still one or two scenarios where we needed to access the variables in the edges, and so, to maximise this reusability, we added the following:
EdgeQuestion, EdgeSubQuestion.
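Usage mirrors the existing row and column keywords. As a minimal, hedged sketch (assuming the familiar .Shortcut property on the returned question object), a calculation script could write out which question currently sits in the edges:
' Which question/sub-question is in the edges of the current table?
Debug.Trace(EdgeQuestion.Shortcut)
Debug.Trace(EdgeSubQuestion.Shortcut)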
Add 'Profiles' & 'Tab templates' tab to the 'Import survey definitions' menu
For a while now, Analyse has possessed the ability to import survey definitions such as: Sub-populations, Weightings & Calculated questions from one survey file to another. We have now expanded this to allow transfer of Profiles and Tab-templates as well.
In addition, with the latest updates to this feature, we have enabled multi-selection of elements and the transfer of calculated loops.
Keywords to access name of tab definition and its sub-population
The new keyword, 'SubPopulation', is a property of the table object returning the name of the sub-population used.
The following is an example of the keyword used in calculation arithmetic scripts, returning the sub-population name from CurrentTable.
This example uses the keyword in cleaning scripts, returning the sub-population name from GetTable("tab definition name").
The new keyword, 'PortfolioItemTitle', is also a property of the table object, returning the name of the tab definition. Again, this can be used in calculation arithmetic and cleaning scripts, as shown below:
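The calls themselves are short; a hedged sketch of the two contexts:
' In a calculation arithmetic script - from the current table:
Debug.Trace(CurrentTable.SubPopulation)
Debug.Trace(CurrentTable.PortfolioItemTitle)
' In a cleaning script - the same properties from a named tab definition:
Debug.Trace(GetTable("tab definition name").SubPopulation)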
Keywords to manage levels and interview order
The new keywords are:
- RecordOrder - Retrieves the order of the record in that level (1-based). For scripts run in the interview level, this is equal to the interview number
- InterviewOrder - Retrieves the order of the interview as stored in the database (1-based)
- ChangeLevel() - Moves the data object from the current level into the level of the question specified (interview level if no question is specified). The aggregation type is specified for numeric responses
For the first two, the fact that these are virtual questions means you can use InterviewOrder.Data etc. Below you can see how these keywords work on the interview level and on a loop level:
Lastly, we have added a new method to the data object:
Data.ChangeLevel( question , aggregationType )
- Question - indicates the level (or a question in that level) in which you want the data. If not specified, the data is moved to the interview level
- AggregationType - indicates the type of aggregation you want for numerics (sum is the default)
This performs the same action as converting a level question to a different level question e.g. when you right-click > Change level...
Any weights or sub-populations specified before a call to ChangeLevel are lost, so apply them afterwards if needed.
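As a hedged sketch (LoopSpend is a hypothetical numeric question stored in a loop level):
' Move the loop-level data up to the interview level; with no question
' specified the data goes to the interview level, and numeric responses
' are aggregated with the default (sum)
Dim perInterview = LoopSpend.Data.ChangeLevel()
' Apply any weighting or sub-population after this call, as noted above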
Updates to Analyse Command Line
i) Command line parameters to change Excel export options
There are now four Excel export options which can be changed from the command line, rather than a user having to go into Analyse and manually update them:
Parameter names:
- /xlNewSheetPerChapter
- /xlNewSheetPerTabDef
- /xlNewSheetPerTab
- /xlNewSheetPerEdge
For each of these parameters you can use the following values: true, false, 1, 0.
Usage example:
Analyse.exe "c:\qes\me\ex.qes" /exportexcel /portfolio:"c:\qes\me\p1.xml" /output:"c:\qes\me\p1.xlsx" /xlNewSheetPerChapter:true
ii) Open live survey connection from the command line
Live surveys can now be opened from the command line:
iii) Allow to pass parameters from the command line to the automation scripts
You can now set up an automation script using the new GetCommandParameters() function (see line 1 below).
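A minimal hedged sketch of the script side (we assume the function returns the comma-separated parameter values as a single string):
Dim params = GetCommandParameters()   ' line 1: pick up the -parameters values
Debug.Trace(params)                   ' the script can then split and act on these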
In this way, you have the flexibility to change script values simply by changing your command line syntax using -parameters:"param1,param2,param3" etc. The usage example below shows this in relation to the automation script above:
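For example (the other switches on the command line are elided here, and Q1, Q5, Q7 are illustrative values):
Analyse.exe "c:\qes\me\ex.qes" ... -parameters:"Q1,Q5,Q7"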
iv) Return an error code for:
- no field task / open live survey connection
- missing automation script
Previously, some automated processes through the command line or the API would just hang when incorrect details were submitted.
In the example where user or survey credentials for opening field tasks/live data are wrong, we added a quick fix so that this type of error produces an exit code of '-3' (CANNOT_OPEN_SURVEY_FILE) e.g.
In the example where your automated process uses an automation script that couldn't be found, the process would just hang like this with the dialogue box open:
To improve this, the dialogue box now automatically closes and we've added an exit code of '-4' (CANNOT_READ_FILE). For this example, you can also see below the difference when the API is used:
Towards the bottom of the command line article, linked above, you will see some useful information on how to extract error codes and what the main ones mean:
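For instance, in a Windows batch file the standard %ERRORLEVEL% variable picks up these exit codes; a hedged sketch reusing the export example from earlier:
' start /wait makes the batch file wait for Analyse to finish
start "" /wait Analyse.exe "c:\qes\me\ex.qes" /exportexcel /portfolio:"c:\qes\me\p1.xml" /output:"c:\qes\me\p1.xlsx"
echo %ERRORLEVEL%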
Automation Scripts: Create Calculated Variables
We're excited to have added another powerful new feature in this version that allows more flexibility and automation possibilities.
Since the early versions of Analyse 5.6.1, we have been able to create derived variables by script. Initially, we could only create the following variable types, with no additional settings:
- Numeric by script
- Closed by script
- Closed by single script
In later versions of Analyse 5.6.1, we added the ability for automation scripts to set the level & sub-population on a calculated variable, as well as to create three new variable types:
- Find all values of a script
- By weighting
- Open by script
There is a Knowledge Base article that goes into more detail, covering usage examples, here.
Various additional updates to the automation scripts language
i) Automation Script: Set Global Sub Population in Portfolio
You can now set the portfolio level sub-population from the automation script:
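By analogy with the SetSubPopulation() method shown on profile items later in this article, a hypothetical sketch would be:
' Hypothetical - the exact object path/method name should be checked
' against your build ("Nat rep" is an illustrative sub-population name)
Portfolio.SetSubPopulation("Nat rep")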
ii) Automation Script: Keyword to set title within general part of tab definition
You can now set the tab-definition title from the automation script:
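A hypothetical sketch, by analogy with the other TabDef setters in this article:
' Hypothetical - the method name is assumed, not confirmed
Portfolio.Items[1].TabDef.SetTitle("Brand awareness by region")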
iii) Automation Script: ReadTabStyle() & IsScaled Keywords
You can now set a tab style to be used by the tab definition from the automation script e.g.
resItem.TabDef.ReadTabStyle("Askia color")
You can now differentiate between questions which have the 'Scaled responses' property enabled in Design e.g. this will return true or false (1 or 0).
QRating.IsScaled
iv) Automation Scripts: Set Universe to tab definition
You can now use automation scripts to set the 'Universe' setting on the tab definition.
.TabDef.SetUniverse() is used. Inside the brackets, you can put the name of a sub-population within double quotes to set it as the universe. Otherwise, you can use the ids of the preset universes, between -1 and -12, in the brackets:
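For example (the sub-population name is illustrative):
' Named sub-population as the universe:
Portfolio.Items[1].TabDef.SetUniverse("All brand aware")
' ...or one of the preset universes, by id (-1 to -12):
Portfolio.Items[1].TabDef.SetUniverse(-1)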
v) Automation Scripts: New methods to close open numerics
When your automation script asks for a numeric question to be placed in the tab definition, you can now also instruct the script to find all values, create bands and find N intervals (i.e. define percentiles) in the tab definition using the numeric:
vi) Automation Scripts: Create weighting (by initial weight question)
You can now use automation scripts to create weightings (by initial weight question). Below is an example script showing how to do this along with the weighting it outputs:
vii) JSON portfolios and automation scripts to create any weighting
We can now save .xml portfolios as .json portfolios. The reason for this is to improve their readability and flexibility within automation scripts, whilst opening the door to two-way compatibility of portfolios between Analyse and Vista Rapide in the future.
To save a portfolio as .json:
Open the .xml portfolio in Analyse > File > Save as > Change the extension from .xml to .json > Save.
Once you have done this in a portfolio containing weighting, you can copy the .json for the weighting and use it in an automation script to create the same or similar weightings.
I use Notepad++ as it has some plugins which are very useful when reading .json syntax:
Search for "weightings" and you should find a definition that looks like this:
Start the automation script with this:
Dim myWt as Weighting
myWt.Name = "WEIGHTING NAME"
Dim res = myWt.Parse(@{
Now you can copy the section shown above and paste it into your automation script.
You'll need to remove the lines beginning with "guid" & "id", and amend the line beginning with "key" to reference the shortcut instead, as you'll see below. The line beginning with "subPopulation" is optional and should be updated when using filtered weighting.
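Putting it together, the skeleton looks like this (the pasted definition is elided here; keep the opening and closing delimiters exactly as Analyse's script editor expects):
Dim myWt as Weighting
myWt.Name = "WEIGHTING NAME"
' Paste the copied "weightings" definition between the brackets, with the
' "guid"/"id" lines removed and "key" renamed to "shortcut"
Dim res = myWt.Parse(@{ ... })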
So different weightings can now be built, conditionally, using scripts. It's best to save a weighting scheme first that has a structure close to what you want to work with. This will make the scripts easier to write and test.
viii) Automation Scripts: New weighting keywords
More functionality for working with weightings in automation scripts was required in the shape of two new keywords:
- Weighting.Run() [method] - needed to generate the weighting report
- Weighting.Efficiency [property] - captures the weighting efficiency value from the report
We can now run actions in a script conditionally, based on whether a weighting converges or achieves a required efficiency. Below is a further usage example:
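A hedged sketch (the threshold of 70 is illustrative only; myWt is a Weighting object as created in section vii above):
' Run the weighting to generate its report, then branch on efficiency
myWt.Run()
If myWt.Efficiency < 70 Then
   Debug.Trace(myWt.Efficiency)
Endif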
ix) Automation Scripts: Allow reading of text files in all Analyse languages
ReadTextFile() was previously only available in AskiaDesign. We have moved it to the core of AskiaScript, so it is now available in all Analyse scripts. Below is an example of how we'd read a .json file and load its contents for use in an automation script and in a derived variable:
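The call itself is simple; a hedged sketch (the file path is illustrative):
' Read the file's full contents into a string for use later in the script
Dim jsonText = ReadTextFile("c:\qes\me\weighting.json")
Debug.Trace(jsonText)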
x) Automation Scripts: ProfileItems[n] property
We can now access the response properties of a question in the tab definition to set weightings & sub-populations.
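For example (the weighting and sub-population names are illustrative):
' Set a weighting and a sub-population on the second response of the
' first profile question in the rows of the first tab definition
Portfolio.Items[1].TabDef.Rows.ProfileQuestions[1].ProfileItems[2].SetWeighting("Rim weight")
Portfolio.Items[1].TabDef.Rows.ProfileQuestions[1].ProfileItems[2].SetSubPopulation("Men 18-34")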
You can also reset a response back to the default of inheritance by using ("Default"), e.g.
Portfolio.Items[1].TabDef.Rows.ProfileQuestions[1].ProfileItems[2].SetWeighting("Default")
Portfolio.Items[1].TabDef.Rows.ProfileQuestions[1].ProfileItems[2].SetSubPopulation("Default")
xi) Automation Scripts: Replace JSON unfriendly characters in portfolio output
We can now use the Chr(n) keyword to identify certain characters that might cause problems for other software or processes consuming the outputs from Analyse:
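A rough reconstruction might look like the following (the function-declaration syntax, the String Replace method and the exact set of characters replaced are assumptions to check against your Analyse version):
' Swap carriage returns, line feeds and tabs - characters that commonly
' break JSON consumers - for spaces
Function CleanCaptions(s as String) as String
   Return s.Replace(Chr(13), " ").Replace(Chr(10), " ").Replace(Chr(9), " ")
EndFunction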
If you set up a function like the one above, it's quite easy to clean any string by passing it into the brackets of CleanCaptions().
xii) Automation Scripts: Add tags by automation script
We can now add tags to a question from an automation script:
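A hypothetical sketch (the method name is assumed, and QGender and the tag name are illustrative):
' Hypothetical - check the exact keyword against your build
QGender.AddTag("Demographics questions")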
xiii) Automation Scripts: Update numeric aggregation options
We have now introduced three new keywords to set the numeric aggregation options on a question in the tab definition:
a) Numeric aggregation type* set on question
b) Allow calculations for numerics set on question
c) Value property set on response
* full list of types available:
xiv) Automation Scripts: Remaining miscellaneous keywords...
- Tab definition invisibility
- Flagging 'Ordered responses' questions
- Access to Left-Header
We can now manage the visibility of a portfolio item from an automation script:
We can flag if a multi-code question has 'Ordered responses' ticked in Design:
And lastly, we can call any tab template and update its 'Left Header' before outputting the results:
Other mentions:
We do have a few more automation script keywords, such as .Sandbox, .FilterHas(), .FilterCurrentInterview() and .FilterByCurrent(). They are fairly specific and require a bit more detail to explain fully, so for now there is a brief summary of their functionality below. If you feel they may help, please get in touch with Askia Support and we'll expand on these. The same goes for anything presented in this article which you may require further information on.
- .Sandbox - returns the directory where the output should take place
- .FilterHas() - filters the data so that it includes only those items that have the chosen responses
- .FilterCurrentInterview() - performs the filtering action per interview and not across the aggregated data
- .FilterByCurrent() - ensures that any filter applied within the portfolio cuts the data first (prior to aggregation) according to that sub-population
New Askia Surf Features
i) Selective matching & reconciliation
Askia Surf is now able to leverage the power of tags, speeding up work on set-ups which have lengthy and/or highly customised matching requirements. Six new parameters have been added and are summarised below:
- When the source .QES file added to the existing .QEW/campaign has tags on its questions or responses, you can choose to ignore matching these during reconciliation.
- When the campaign .QEW file has tags on its questions or responses, you can choose to ignore matching these during reconciliation.
- Specify the question matching methods by tags in the source .QES file. You need to list the name of the tag and the letter code for the question matching method in JSON format; this supports, for example, one tag with one method, two tags with one method, or three tags across two methods.
- The default matching is done on the first letter code it has success matching on. This new parameter allows you to specify that all matching methods listed must be satisfied for a match to be made.
There are a few usage examples at the end of our Askia Surf from the command line article.
ii) Command line parameter to write wave & volume names
You can now set the names of these elements when building a Surf set-up from the command line e.g.
"Surf.exe" /source:"Wave 2.qes" /target:"MergeWaves.qew" /output:"REPORT"
/WAVE_NAME:"My cool wave name" /VOLUME_NAME:"My new volume" /A:QR /I:QR /M:SCLIU /N:CEOIU
iii) Case sensitivity option from command line
The behaviour has been changed as follows:
- An optional command line parameter /CASE_SENSITIVE has been added - if used, matching is made with case sensitivity
- By default (i.e. when the option above is not used), matching is made without case sensitivity, which is the opposite of the behaviour in pre-5.6 versions (to aid back-compatibility and intuitiveness)
iv) Surf 'Caption Libraries' to allow pasting lists of labels
Let's consider, for example, the case where you have an accepted response label at the beginning of building your Surf file but then, in subsequent waves, you receive variations of the caption that are also to be matched. 'Caption libraries' can be quite a time-saver when handling this. You add the variations to your caption library file, using the GUI in Surf, and they're automatically matched next time you add a wave.
Until now, you were restricted to single selection and one-by-one caption entry in this GUI, but we've now added functionality that makes this process a lot smoother.
We can now:
- Paste a single column of cells from Excel into window [B]
- Paste a single column of cells from Excel into window [C] - N.B. this presents a new view to paste into, depending on what's selected in window [B]
- Multi-select groups in window [B] so they can be sent to window [A] in one go