What are LLMs, and how can they be used in primary research?
Large language models entered the mainstream in 2023, with companies rushing to answer one question: how can we use this technology to save time or money? After a few false starts and a whole host of dubious-quality LinkedIn posts, clear applications for generative AI tools have emerged, with one standout opportunity: primary market research.
LLMs use deep learning techniques to produce human-like responses to natural language inputs (source: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10192861/). Companies including OpenAI, Microsoft, and Google use LLMs and generative AI to power digital assistants or chatbots, which can speed up the analysis phase of primary market research, provided you have good-quality data, clear use cases, and well-structured prompts to help you surface the right information.
Techspert’s new digital assistant, ECHO Ask, was created to help our healthcare and life sciences customers with the value-creation part of primary market research, redeploying customer time from admin to more valuable and strategic activities. For every completed call, a Techspert customer can ask the virtual assistant questions about one individual call record or a combination of them. The assistant relies solely on the contents of the calls to generate its answers, keeping every response grounded in your own data.
So, how do you get there?
You need good-quality data
This is your key mantra, so repeat after me: The outputs produced by an LLM will only be as good as the data that feeds it. For qualitative calls, your call transcripts are your data source.
No one likes scope creep: using anything but your call data to power your analysis can introduce hallucinations. Don’t do it!
In healthcare and life sciences, transcripts can be riddled with inaccuracies stemming from poor interpretation of medical jargon, product names, company names, rare conditions, and more. If these transcripts were used to power a virtual assistant, it’d be a non-starter: you need a comprehensible document for an LLM to understand and produce quality answers from. And of course, you need great experts with undeniable subject matter expertise to conduct these calls with (you can read more on how we achieve this in our case studies here).
Techspert’s advanced transcript service correctly identifies medical terminology, enabling customers to use these files as the foundation for rapid analysis through our digital assistants. These files are uploaded to our customer portal soon after the calls complete, shortening the time it takes to extract value.
Alternatives to our advanced transcript service include manually amending the files (keep on trucking!) or sending them out to specialist providers to process.
Identify the use cases you want to solve
Time is money, and when you have a great deal of data to sift through (think 20 transcript files with 20,000 words apiece to categorise and structure), this can quickly become an expensive task: shoehorning expert answers into a vast data-capture Excel file so your more experienced colleagues can do the interesting job of surfacing insights. But that’s how early-career graduates earn their stripes, right?
Wrong! Onerous, time-intensive tasks are ripe for automation, and our 2024 roadmap has that covered.
Analyzing vast quantities of data can be a challenge. It’s much easier to go in knowing what you want to find out (and what your assistant is capable of) by identifying your use cases in advance. Then it’s a case of knowing the right questions to ask. Here’s a sample of the use cases Techspert’s digital assistant, ECHO Ask, can handle:
Summarising insights from a respondent group
What do your US-based KOLs think about future treatment trends? What do pharma executives think about the impending patent cliff? Use ECHO Ask to quickly summarise insights across a series of calls, generating answers in a matter of seconds.
Comparing answers across respondent groups
How do responses differ between physicians based in Germany vs France? How do perspectives differ between KOLs and Payers? Carefully structured prompts can spot the difference between cohorts of experts, provided the data is carefully segmented.
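Careful segmentation can be as simple as tagging each transcript with cohort metadata before any comparison prompt is run. A minimal sketch of the idea in Python (the field names, example records, and `segment_by` helper here are illustrative assumptions, not ECHO Ask’s actual implementation):

```python
from collections import defaultdict

# Each transcript record carries cohort metadata alongside its text.
transcripts = [
    {"expert": "Physician 1", "country": "Germany", "text": "..."},
    {"expert": "Physician 2", "country": "France", "text": "..."},
    {"expert": "Physician 3", "country": "Germany", "text": "..."},
]

def segment_by(records, key):
    """Group transcript records into cohorts by a metadata field."""
    cohorts = defaultdict(list)
    for record in records:
        cohorts[record[key]].append(record)
    return dict(cohorts)

cohorts = segment_by(transcripts, "country")
# Each cohort can now be queried or compared separately.
for country, group in sorted(cohorts.items()):
    print(country, len(group))
```

With the cohorts cleanly separated, a comparison prompt can reference each group explicitly rather than hoping the assistant infers the split itself.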
Generating quotes and citations
Validate the notes taken during interviews by pulling out key moments from the call. Ask your assistant to evidence each of your key points with a quote, and identify the location within the file it came from.
Drilling down into detail
Identifying unmet needs, pricing opportunities, challenges in the treatment landscape, patient perceptions, instances where experts used specific keywords or language - it’s all possible with the right prompt.
Craft a top-tier prompt
Remember the mantra from earlier? The outputs produced by an LLM will only be as good as the data that feeds it. The same holds true for the prompts that you write: the answers your digital assistant gives will only be as good as the instructions you provide, so let’s break down a structure for you to follow:
- Set the context.
- Ask your question.
- Add extra detail.
- Be specific about the output.
Example 1 – querying a single file:
I am the consultant asking questions in this file. The person answering questions in this file is the expert [THE CONTEXT]. Summarize the expert’s experience in prescribing Drug A to patients [THE ASK]. Include any information about unmet needs in the management of Condition B, and what tools they use currently to help with diagnosis [THE SPECIFICS]. Provide a direct quote and citations for each point made, and the time stamp at which the citation begins [THE OUTPUT].
Example 2 – querying multiple files:
You have access to several files on a project where the experts in each transcript file are discussing similar topics [THE CONTEXT]. What treatment challenges did the experts identify in these transcript files [THE ASK]? Which challenges had the consensus of multiple experts [THE SPECIFICS]? Evidence each point with a quote and confirmation of which experts agreed. Provide citations for each point and the file name and the time stamp at which the citation begins [THE OUTPUT].
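The four-part structure behind both examples lends itself to a simple template. A sketch of how you might assemble such a prompt yourself (the `build_prompt` function is illustrative, not part of ECHO Ask):

```python
def build_prompt(context, ask, specifics, output_format):
    """Assemble a prompt from the four-part structure:
    set the context, ask the question, add extra detail,
    then be specific about the output."""
    return " ".join([context, ask, specifics, output_format])

# Rebuilding Example 1 from its four parts:
prompt = build_prompt(
    context=("I am the consultant asking questions in this file. "
             "The person answering questions in this file is the expert."),
    ask="Summarize the expert's experience in prescribing Drug A to patients.",
    specifics=("Include any information about unmet needs in the management "
               "of Condition B, and what tools they use currently to help "
               "with diagnosis."),
    output_format=("Provide a direct quote and citations for each point "
                   "made, and the time stamp at which the citation begins."),
)
print(prompt)
```

Keeping the four parts as separate fields makes it easy to refine one instruction at a time, which is exactly the iterative tweaking recommended in the tips below.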
Further tips:
- Context can be set at a macro level. For example, ECHO Ask has the context set behind the scenes so that you don’t need to write it every time.
- A slight tweak to the prompt can mean the difference between a good answer and an outstanding answer. Keep refining your instructions.
- Strip out anything unnecessary. Keep it concise, specific and provide just enough context for the assistant to understand the desired output.
- Ask follow-ups. If you start with a general question, you can use your digital assistant to drill down into more detail in your follow-up questions.
Handily, Techspert’s assistant comes with a pre-loaded set of prompts you can use to get started.
Keep compliant
It wouldn’t be a complete healthcare and life sciences blog without at least one reference to compliance.
It is critical that you only use a provider who will not retain your information or ingest it into their foundation model. You can rest easy knowing that your confidential data is ringfenced with ECHO Ask: Techspert’s service prevents any other service or agent from accessing your data.
Please read our AI compliance statement here – our Compliance Officer Adrian will be delighted.
In conclusion
Provided your assistant is powered by high-quality data and questioned with well-structured prompts, the primary market research landscape is set for disruption, with manual, low-value tasks on the verge of eradication.
Using tools like ECHO Ask, our customers can expect significant time savings, quicker access to insights, and greater job satisfaction for those note-taking graduates.
Get in touch to deploy ECHO Ask on your next primary research project.