
GenAI in Healthcare: Navigating Trillion-Dollar Opportunities

GenAI is touted to contribute $15 trillion to the global economy by 2030. However, healthcare and life sciences are heavily regulated spaces, which leaves no room for mistakes when implementing and maintaining AI technologies. How can the everyday, compounding benefits of GenAI technologies be de-risked? And what practical advice can we rely on to save time and avoid duplication of effort when running market research projects in our industry?

Hallucinations

Ever wondered how cheese adheres to a pizza when cooked? (This is a relevant tangent, promise). Well, some 11 years ago, before the Brazil World Cup or Taylor Swift's '1989' World Tour, a bewildered Reddit user took to the Internet to find out why their cheese kept slipping off their pizza slices.

Being Reddit, responses included the following:

"To get the cheese to stick I recommend mixing about 1/8 cup of Elmer's glue in with the sauce. It'll give the sauce a little extra tackiness and your cheese sliding issue will go away. It'll also add a little unique flavor. I like Elmer's school glue, but any glue will work as long as it's non-toxic."

Redditors, being Redditors, upvoted this. (It is kind of funny).

Fast-forward to 2024. Google's AI picked this up and nonchalantly regurgitated it as a foolproof recommendation for getting cheese to stick.

The Impact of Training Data

Training an AI model involves presenting it with a series of purposeful data collections that shape the probabilities it assigns to the potential answers that would be logical for a given scenario. However, when that training data does not reflect the relative frequency of a situation, or the "unwritten" contextual rules a human understands about what is appropriate, where, and why, the model's logic can take some amusing turns.
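
As a rough intuition (a toy sketch with invented numbers, not how any production model is actually trained or weighted), a model effectively samples its answer from a probability distribution over candidates, and the data it has ingested shifts that distribution. If a highly upvoted joke answer enters the mix with no signal that it was satire, its probability rises alongside the sensible ones:

```python
import numpy as np

def softmax(logits):
    """Convert raw scores (logits) into a probability distribution."""
    exp = np.exp(logits - np.max(logits))
    return exp / exp.sum()

# Candidate completions for: "To stop cheese sliding off pizza, add ..."
candidates = ["more sauce", "let it rest", "non-toxic glue"]

# Hypothetical logits from well-curated training data: the joke answer is unlikely.
curated_logits = np.array([2.0, 1.5, -1.0])

# Hypothetical logits after a popular joke post is ingested as if it were advice.
skewed_logits = np.array([2.0, 1.5, 2.3])

for name, logits in [("curated", curated_logits), ("skewed", skewed_logits)]:
    probs = softmax(logits)
    for candidate, p in zip(candidates, probs):
        print(f"{name:>7} | {candidate:<15} {p:.2f}")
```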

For want of a better phrase, GenAI hallucinations at their most overt could be likened to the model being the machine-based equivalent of an "educated idiot".

And because Google had linked its search engine archives and indexing to the retraining process for its GenAI "enhanced" search, the popular and bizarre answer had its time to shine as a source of knowledge. The above example was one of the more PG ones, too (the feature also liked telling us to eat a certain number of rocks a day when asked how many rocks we should eat daily). Interestingly, hallucinations seem to be as much about how you ask a GenAI model a question as about the data it has been trained on. After all, asking how many rocks to eat a day already implies that you have to eat at least one rock daily, right?

Many GenAI systems integrate the questions asked of them as another source of training data. Particularly after the rock-eating and gluey-pizza debacles (which led to others asking the same questions to replicate the silliness), it isn't beyond the realm of possibility that the retraining process could start to tie other dietary or eating questions to glue and stones, too!

Real World Risks

Hallucinations undermine our trust in the accuracy and efficacy of GenAI as a content-producing tool. Beyond the realm of cheese melting and the broader concept of stickiness, lawyers and solicitors have been caught relying on fabricated cases, and universities are clamping down on AI-generated content being turned in. In healthcare and life sciences, hallucinations could be life-threatening. What if an AI-powered dosing system intended to help tailor treatment for a patient hallucinated its response, perhaps because it had been trained on patient data with broadly similar characteristics, and recommended a dose that led to an overdose or other adverse event? Preventing – or at least flagging – the hallucination risk in real time is essential to avert a disaster.
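
One common way to flag risk in real time is to wrap the model in deterministic guardrails that check its output against validated reference ranges before anything reaches a patient or clinician. The sketch below is purely illustrative: the drug name, dose range, and function are hypothetical and not drawn from any real dosing system or clinical reference.

```python
from dataclasses import dataclass

@dataclass
class DoseRecommendation:
    drug: str
    dose_mg: float

# Illustrative limits for a hypothetical drug; a real system would draw these
# from validated clinical references, not hard-coded constants.
SAFE_RANGES_MG = {"hypothetical_drug": (5.0, 50.0)}

def flag_if_out_of_range(rec: DoseRecommendation) -> str:
    low, high = SAFE_RANGES_MG.get(rec.drug, (None, None))
    if low is None:
        return "BLOCK: no validated range for this drug - require human review"
    if not (low <= rec.dose_mg <= high):
        return f"BLOCK: {rec.dose_mg} mg is outside the validated {low}-{high} mg range"
    return "PASS: within validated range (still subject to clinician sign-off)"

# A hallucinated recommendation gets caught before it reaches anyone.
print(flag_if_out_of_range(DoseRecommendation("hypothetical_drug", 400.0)))
```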

Google had to scale back its AI search feature because of how bogus, lewd, and downright unsafe some of its outputs were, with safety being a particular concern for the more covert ways hallucinations can appear. Scaling training data and the overall size of a language model has merits in principle, but fact-checking needs to keep up, as not all users will be in a position to fairly assess what is true (or not), especially as AI-generated content contributes to the exponential growth of content being published.

IT Compliance and Data

Samsung went as far as banning all use of ChatGPT amongst its employee base because proprietary information was being uploaded into chats with the AI service. For example, an employee may have copied and pasted email chains containing proprietary information and other confidential content, perhaps to ask for a draft of an appropriate reply. The consumer version of ChatGPT doesn't ringfence interactions the way enterprise offerings can.

Any user's information can theoretically be reverse-engineered out of the system by another user, which, in the Samsung example, would likely be of great interest to Apple, Google, or Huawei if new Samsung phones were the topic of discussion! Data security is also paramount to both patients and providers because of the sensitive nature of health-related data.

This boils down to understanding what happens to the interactions you're having with any AI system. Are you interacting with an unchanging, cloned instance of an AI model, or one that trains in real time on its interactions? Even if retraining isn't taking place, how are the chat history and any files you upload stored? Blanket IT bans aren't necessarily the way to go (although they were arguably appropriate while the likes of ChatGPT were still emerging from the early adopter phase for the everyday person), but these questions are well worth asking of any tool you consider using.

Before any interaction with GenAI, consider whether the information you are about to share or discuss would be appropriate for a colleague, a client, or the public domain to see. When in doubt, obfuscate.
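
As a minimal sketch of what obfuscation can look like in practice (the patterns and placeholder tags below are illustrative only; a production workflow would use a vetted PII/PHI detection tool and human review rather than a handful of regexes), identifying details can be masked before any text is pasted into a chat tool:

```python
import re

# Illustrative patterns only; the internal ID format is hypothetical.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "PATIENT_ID": re.compile(r"\bPT-\d{6}\b"),
}

def obfuscate(text: str) -> str:
    """Replace identifying details with placeholder tags before sharing."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

draft = "Please summarise the call with jane.doe@example.com about PT-123456, tel +44 20 7946 0000."
print(obfuscate(draft))
# Please summarise the call with [EMAIL] about [PATIENT_ID], tel [PHONE].
```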

GenAI for Insight Extraction

You may be familiar with our proprietary search technology for recruiting the best experts for primary research projects. Our roots in machine learning, large language models, and natural language processing meant we could quickly deploy a compliant GenAI offering to the client portal.

We're proud to be the first expert network to launch a data-compliant GenAI tool, ECHO Ask, to speed up the analysis and synthesis of insights so you can focus on the value of the conversations. Answers provided by ECHO Ask are supported by citations from project transcripts. The tool has no access to third-party content and highlights when evidence is missing, so answers do not embed hallucinated content.
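
For context, the general pattern behind this kind of grounding, sketched below in simplified form, is to answer only from project transcripts and to surface the gap when no passage clears a relevance bar. This is an illustration of the approach, not a description of ECHO Ask's actual implementation; the scoring function, threshold, and data are placeholders.

```python
def score(question: str, passage: str) -> float:
    """Toy relevance score based on word overlap; real systems would use embeddings."""
    q, p = set(question.lower().split()), set(passage.lower().split())
    return len(q & p) / max(len(q), 1)

def answer_with_citations(question, transcripts, threshold=0.3):
    # transcripts: list of (source_id, passage) drawn only from project interviews
    scored = sorted(((score(question, p), sid, p) for sid, p in transcripts), reverse=True)
    supporting = [(sid, p) for s, sid, p in scored if s >= threshold]
    if not supporting:
        # Surface the gap instead of letting a model guess.
        return {"answer": None, "citations": [], "note": "No supporting evidence in project transcripts."}
    # A grounded system would draft the answer *only* from these passages
    # and return the source IDs alongside it as citations.
    return {"answer": supporting[0][1], "citations": [sid for sid, _ in supporting]}

transcripts = [
    ("KOL-01", "Most centres switched to the subcutaneous formulation last year."),
    ("KOL-02", "Pricing, not efficacy, drove the formulary decision at our hospital."),
]
print(answer_with_citations("What drove the formulary decision?", transcripts))
```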

All without needing to juggle a million windows, clean up transcripts (we've deployed other machine-based techniques to improve the recognition of medical terms, so the inputs to our insight extraction tool are a solid foundation), or download files to get on with your analyses.

Prompts are provided to give you a leg up, as sometimes the iterative approach required to engineer effective prompts can be a time sink that's difficult to juggle amidst other deadlines. You can also approach the tool conversationally if you want to experiment freely.

(For some light bedtime reading, you can also view our AI Compliance statement. GenAI and AI as a whole have a lot to offer, but the background work required to make a tool like this interface safely with sensitive primary research data cannot be overstated.)

Current customers can access the tool for free in the client portal. If you don't see 'ECHO Ask' when you log in, please reach out directly to your Customer Success point of contact. If you're shopping around for a primary research partner, get in touch for a bespoke demo.

