The topic of the plenary session for Day 2 of ISPOR revolved around the potential use of artificial intelligence (AI) in health economics and outcomes research (HEOR). Previously identified by ISPOR as one of the top 10 HEOR trends for 2022-2023, AI has rocketed into the public consciousness with the development of easily accessible large language models (LLMs) such as ChatGPT.
Whether the mention of artificial intelligence has you dreaming of a digital utopia or brings you out in a cold sweat at the thought of a Skynet takeover, there is no doubt that AI will play a foundational role in the future of healthcare. Below is a summary of the fascinating conversation that played out at the second plenary session.
The ChatGPT phenomenon
- The panellists agreed that the biggest change introduced by ChatGPT is greater public awareness of AI, with its wide accessibility giving users direct insight into AI's potential.
- Putting these types of large language models into production, and bringing machine learning to millions of people, brings risks that must be managed.
- Part of managing that risk is establishing a baseline understanding of what AI models do, where they work well, where they work poorly, and why.
- Trust was an interesting area of discussion at the plenary. It was raised that some patients may not trust the outputs of AI models (e.g. may not follow up on recommendations, such as meeting with a specialist). Questions were raised around how a lack of patient trust in AI can be accounted for in economic modelling, and how inherent bias within LLMs could be tackled.
- Other panellists also raised the challenges, and possible inequities, of AI models operating in countries with less widely spoken languages.
- It was stated that an obstacle to wider-scale AI implementation could be the mismatch in knowledge between stakeholders (e.g. model developers, regulators, providers and patients); HEOR can work to close these gaps.
Potential uses of artificial intelligence by HCPs and in HEOR
- The panellists discussed how AI could be integrated into patient care, both at home and at the clinic
- One example was the potential ability to monitor patients at home (e.g., for chronic conditions). In this scenario, healthcare providers would be receiving information from the patient (e.g., via voice/text message) and AI would be able to map the language used by the patients onto existing, validated instruments.
- This would allow healthcare providers to collect information every day, avoid the need to bring the patient into the clinic (and have them fill in repetitive questionnaires) and limit the resource implications of collecting such information.
- Panellists also raised another example of the utilisation of AI – a potentially huge uptake of ‘ambient technology’ over the next five to ten years. Such technology would allow conversations between patients and clinicians to be recorded and analysed to provide a new source of real-world evidence. These conversational insights are currently either lost or summarised ineffectively in patient notes.
- Currently, the most likely place that AI will complement healthcare services is in the ‘business’ of healthcare: speeding up and improving the efficiency of administration, operations, coding and marketing. For physicians themselves, it could be the summarisation of large amounts of patient data.
- However, one of the panellists cautioned that there must always be a ‘human in the loop’ to interact directly with the patient. The potential for AI could be so transformative in delivering healthcare that the education of future HCPs will have to be reinvented.
- The use of AI in collecting and analysing patient-reported outcomes (PROs) was a major thread of the conversation, especially its potential to speed up the identification and understanding of variables and confounders.
- Since PROs are already collected and coded by humans, they are seen as ‘low-hanging fruit’ for the wider implementation of AI.
- For example, extracting information from clinical notes as part of a chart review can take months; LLMs could do it in days or even hours.
- There is a need for HEOR to understand what goes into the model, how it should be interrogated and fine-tuned, and how often such models should be retested.
- Panellists also outlined the need to balance demands for more patient data (e.g. more specific metadata) against patient privacy. Moving forward, there is a clear need to establish industry best practices and frameworks or checklists.
- The overall goal is to move towards validated instruments that avoid the risk of bias and are transparent enough for regulators to understand.
- Expect to see plenty of innovation in large language models, with greater access for HEOR professionals. As HEOR works in the patient domain, the focus should be on innovation that is safe.
- The time needed to collect and analyse real-world evidence could fall from months to days, which could be revolutionary for the understanding of drug effectiveness.
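The home-monitoring idea discussed above, where AI maps a patient's own words onto items of an existing validated instrument, can be illustrated with a toy Python sketch. Here, simple keyword matching stands in for the LLM the panellists described, and the instrument items and trigger phrases are invented purely for illustration:

```python
# Toy illustration: mapping free-text patient messages onto items of a
# hypothetical validated symptom instrument. In practice an LLM would
# perform this mapping; keyword matching stands in for it here.

# Hypothetical instrument items, each with invented trigger phrases.
INSTRUMENT_ITEMS = {
    "fatigue": ["tired", "exhausted", "no energy"],
    "pain": ["hurts", "aching", "pain"],
    "sleep": ["can't sleep", "insomnia", "awake all night"],
}

def map_message_to_items(message: str) -> list[str]:
    """Return the instrument item IDs whose trigger phrases appear in the message."""
    text = message.lower()
    return [item for item, phrases in INSTRUMENT_ITEMS.items()
            if any(phrase in text for phrase in phrases)]

if __name__ == "__main__":
    msg = "I've been exhausted all week and my back hurts at night."
    print(map_message_to_items(msg))  # ['fatigue', 'pain']
```

A production system would replace the keyword lookup with a model that handles paraphrase and context, but the overall shape, free text in, structured instrument responses out, is the same as the workflow described by the panellists.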
Read our summary of Day 1 here.