Analysis
During a playtest, users provide feedback on which dialogue options they choose and why. They can select from a set of reasons and provide free-form comments. This allows for answering many research questions (see our paper for examples).
You can explore the results of these playtests using the experimentation panes:
- `ExperimentsDB` pane
- `ExperimentUsers` pane
- `ExperimentAnalysis` pane
- `ExperimentAnnotation` pane
Experiment Database
Use the `ExperimentsDB` pane to connect to the database that stores the results of your playtest.
This can be either a CouchDB database or a file export.
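If you want to pull the raw results yourself, CouchDB's standard HTTP API is enough. The following is a minimal sketch in Python, assuming a hypothetical server URL, database name, and credentials; substitute the connection details of your own deployment.

```python
import json

import requests

COUCH_URL = "http://localhost:5984"   # hypothetical server address
DB_NAME = "playtest_results"          # hypothetical database name


def load_results_from_couchdb(url=COUCH_URL, db=DB_NAME, auth=("admin", "password")):
    """Fetch every playtest document via CouchDB's _all_docs endpoint."""
    resp = requests.get(
        f"{url}/{db}/_all_docs",
        params={"include_docs": "true"},
        auth=auth,
    )
    resp.raise_for_status()
    return [row["doc"] for row in resp.json()["rows"]]


def load_results_from_file(path):
    """Alternatively, read a JSON file export of the same results."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)
```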
Experiment Users
During the registration process, players can provide a number of details regarding their previous gameplay experience.
The `ExperimentUsers` pane allows for reviewing this information, along with aggregate statistics on the number of annotations each user provided.
You can also include/exclude particular users from the analysis by checking/unchecking their row in the table.
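As a rough illustration of those aggregate statistics, the snippet below counts annotations per user and applies an include/exclude set. It assumes each exported record carries a `user` field; the real schema may differ.

```python
from collections import Counter


def annotation_counts(results):
    """Count how many annotations each user contributed."""
    return Counter(record["user"] for record in results)


def filter_included(results, included_users):
    """Mirror the checkbox column: keep only records from checked users."""
    return [r for r in results if r["user"] in included_users]


# Hypothetical usage: exclude users who left fewer than five annotations.
# counts = annotation_counts(results)
# included = {user for user, n in counts.items() if n >= 5}
# analysis_set = filter_included(results, included)
```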
Experiment Analysis
The `ExperimentAnalysis` pane offers a range of features for exploring the feedback players provided during a playtest.
You can filter by specific Conversations, Dialogues, and Utterances.
Additionally, you can limit the results to include only human-written or machine-generated dialogue.
It’s also possible to use regular expressions to filter free-form feedback.
Finally, you can select reason tags at the top to restrict the feedback to those reasons.
Each tag shows how many users gave that feedback within the current filtered selection.
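The same filters can be reproduced offline on an export of the results. The sketch below approximates them in Python; the record field names (`conversation`, `machine_generated`, `comment`, `reasons`) are assumptions for illustration, not the tool's actual data model.

```python
import re


def filter_feedback(results, conversation=None, machine_generated=None,
                    comment_pattern=None, reason_tags=None):
    """Approximate the ExperimentAnalysis filters on exported records."""
    pattern = re.compile(comment_pattern) if comment_pattern else None
    selected = []
    for r in results:
        # Restrict to a specific conversation, if requested.
        if conversation is not None and r.get("conversation") != conversation:
            continue
        # Keep only human-written (False) or machine-generated (True) dialogue.
        if machine_generated is not None and r.get("machine_generated") != machine_generated:
            continue
        # Regular-expression match against the free-form comment.
        if pattern and not pattern.search(r.get("comment", "")):
            continue
        # Require at least one of the selected reason tags.
        if reason_tags and not set(reason_tags) & set(r.get("reasons", [])):
            continue
        selected.append(r)
    return selected
```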
Similar to the tags in the `Info` pane, you can create groupings of tags that can then be used to filter the feedback.
These groupings carry over to both histogram types. The Tag histogram shows the counts of tags chosen by players that match the filtering criteria.
The Quorum histogram visualizes how many dialogue options were rated by a given number of players.
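A rough sketch of how both histograms can be derived from the filtered feedback is shown below, again assuming hypothetical field names (`reasons`, `utterance_id`, `user`).

```python
from collections import Counter


def tag_histogram(filtered):
    """Count how often each reason tag appears in the filtered feedback."""
    counts = Counter()
    for record in filtered:
        counts.update(record.get("reasons", []))
    return counts


def quorum_histogram(filtered):
    """Bucket dialogue options by how many distinct players rated each one."""
    raters_per_option = {}
    for record in filtered:
        raters_per_option.setdefault(record["utterance_id"], set()).add(record["user"])
    # Maps "number of raters" -> "number of options with that many raters".
    return Counter(len(users) for users in raters_per_option.values())
```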
Experiment Annotation
The `ExperimentAnnotation` pane was used to manually code free-form feedback from our pilot studies into common themes, speeding up annotation for later playtests.
It remains useful whenever new themes are discovered in subsequent playtests.