AI language models are not trained to give the answer. They are trained to give a likely, plausible answer that you, as a researcher, can then check against the evidence provided.

The quality of responses in the Analysis Grid or chat depends heavily on writing good prompts. Prompts must be clear, detailed and specific.

Below are some examples of this for the demo project (First time voters in the US).

Things to try…

Ask Specific Queries

The accuracy of the evidence retrieved for an answer improves with more context, so asking longer, more direct questions, or clarifying what you are interested in with a follow-up, will help.

Good Example: Summarise what voters said about their first time voting experience.
Bad Example: Summarise the main points.

Communicate in the imperative

CoLoop relies on language models that have been trained to follow instructions. Posing queries in an imperative mood, particularly when asking CoLoop to format or summarise previous conversations, will improve its performance.

Good Example: Format the last message into bullet points.
Bad Example: Can you format your last message into bullet points?

Ensure your question contains specific language

CoLoop searches over its “memory” to find evidence to use when answering your query. More targeted language increases the chance that it will identify the right parts of the transcript to refer to.

Good Example: What did participants say they liked about concepts A, B and C?
Bad Example: What did participants say they liked?

Ask qualitative questions in the AI chat

An update is coming soon to allow you to ask quantitative questions..!

When using the AI chat, be careful about asking quantitative questions. Current state-of-the-art language models cannot reason reliably about quantitative questions. If you want to know how many people mentioned something, use the Analysis Grid.

Good Example: -
Bad Example: How many participants mentioned problems with registration?

Break up bigger or multi-step questions for better results

The bad example below isn’t necessarily wrong, but given the limited context, asking each part separately, i.e. “What are some of the pros of concept A?” followed by “What are some of the cons of concept A?”, will result in a much more detailed answer.

Good Example: What are some of the pros of concept A?
Bad Example: What are the pros and cons of concept A?

Provide sufficient context

If something isn’t explicit in the transcript, you can help CoLoop by providing a bit of extra context. This is particularly useful for concept testing, where people may be loosely referring to “A”, “B” and “C”, for instance.

Good Example: Concepts A, B and C are examples of different advertising options. Which of these did the participants prefer?
Bad Example: What did participants think of the different options?

Use your imagination..!

LLMs and prompting are a rapidly evolving field. Creative ideas for different prompts are being discovered every day, often by non-technical enthusiasts with a good intuition for how LLMs respond to inputs.

Things to be careful of…

Asking the AI to provide direct quotes

The underlying AI models are not constrained to quote directly from the text. If you ask them to “give me some quotes”, they may not return verbatim quotes. To get verbatim quotes, click through on any of the generated text and use the quotes provided in the menu on the right-hand side.