Patients recognise the key positive impacts of AI: support for healthcare professionals, self-care, more available information, quality and efficiency of diagnosis, and improved health system efficiency and delivery of care. But they also see some challenges. This was one of the themes of the online seminar, which took place on 15th February 2023 as part of the SPCC Educational Project on Artificial Intelligence in Cancer Care. It was chaired by Wendy Yared, Director of the Association of European Cancer Leagues (ECL), Brussels, Belgium, who opened the session with a brief summary of the project’s objectives: to identify opportunities and challenges; to share best practices in the application of AI in cancer diagnosis and treatment; to educate healthcare professionals on the latest innovations in the application of AI; to understand how it works on the clinical side; to raise cancer patients’ awareness of AI in cancer care; and to discuss the implications of the new frontiers of AI in precision oncology.
AI Applications to Improve the Patient’s Journey
Michele Calabrò is Director at EUREGHA, the network of European Regional and Local Health Authorities. He previously worked on policy projects and communications for the European Health Management Association, the European Patients’ Forum, and DIGITALEUROPE. EUREGHA is a reference network that brings together local and regional health authorities from across Europe to share knowledge and best practices and to promote cooperation in health policy. It currently has 19 members and 5 ongoing EU projects, and is particularly active in the EU health policy arena.
Together with big data, AI can support both professionals and patients. It has the potential to improve population health, healthcare operations, and healthcare-related innovation. It can have several levels of direct impact on patients, such as self-care, prevention and wellness, triage and early diagnosis, clinical decision support, and care delivery. As with any new technology, however, there are expectations and challenges to be addressed. One of these is patients’ view of artificial intelligence. A report was recently published on the findings of a survey by the European Patients’ Forum (EPF) on AI from the patient perspective. The survey indicates that patients generally feel hopeful about AI’s potential, but lack a strong understanding of how it works and of what it can do. Participants with more expertise in AI expressed concerns about policymakers’ and politicians’ understanding of the field. This is an important point for a network such as EUREGHA, which works with local authorities. Then, of course, there are still open questions regarding data bias, AI design, and so on. Patients feel that more campaigning is required to promote early and full participation by all patient groups in AI development and policymaking, despite the high cost of such involvement. On the whole, they recognise the key positive impacts of AI, as mentioned earlier: self-care, information, and quality and efficiency of diagnosis and delivery of care. But they also see some challenges, such as their non-involvement in co-design, lack of transparency about how AI works and how data is used, the risk of incorrect diagnosis and lack of clarity about medical liability, and the risk of reduced human interaction with healthcare professionals. There has been much discussion at European level on how to deal with these issues, develop a course of action, and provide guidelines.
WHO outlined six guiding principles to limit the risks and maximize the opportunities inherent in the application of AI to health:
- Protecting human autonomy: this relates to patients’ concern about keeping a direct connection with professionals. Healthcare systems and medical decisions should remain under human control.
- Promoting human well-being and safety and the public interest: regulatory requirements and quality controls must be met.
- Ensuring transparency, explainability and intelligibility: transparency is a concern for patients; thus, it must be ensured all around.
- Fostering responsibility and accountability: this is a question patients often raise. How would accountability work? Those adversely affected by algorithmic decisions should have access to effective mechanisms for questioning and redress.
- Ensuring inclusiveness and equity, from many points of view: age, sex, gender, ethnicity, income, and so on, to build AI for everybody and not just for pockets of population.
- Promoting AI that is responsive and sustainable: AI applications should be continuously and transparently evaluated by designers, developers, and users, in order to determine if they meet expectations and requirements. Also, they must be designed to minimise their environmental consequences and increase energy efficiency.
Besides companies and governments, regional and local health authorities should also play an important role in making this happen.
In the second part of his talk, Calabrò presented a regional best-practice example focussed on breath analysis and the use of AI and machine learning to analyse volatile organic compounds for the diagnosis of oncologic disease. The example comes from the Italian region of Puglia, which is quite advanced in the area of digital transformation, with medical centres such as the Oncology Institute of Bari, the Oncology Hospital S.G. Moscati in Taranto, and the P. Pederzoli Hospital in Peschiera del Garda. These centres are mostly specialised in malignant pleural mesothelioma (MPM), lung cancer, and colorectal and pancreatic cancer. Regione Puglia looked into AI-based breath analysis because it recognised its strong potential as a non-invasive method that, unlike traditional diagnostics, can be repeated without limit; it was also encouraged by the rising interest shown by the scientific community. Regione Puglia developed a new methodological approach to breath analysis, introducing an automatic sampler, the ‘Mistral’, which helps gather the information fed to the machine learning model. One of the region’s research projects is Breath Analysis in Lung and Pleural neoplasm (BALP), a prospective observational study evaluating volatile organic compounds (VOCs) in exhaled breath in order to create a predictive statistical model for discriminating patients with lung cancer. It is also designed to monitor subjects with asbestos exposure over time, and MPM patients during pharmacological treatment.
A second project, linked to BALP, aims to identify a distinct mesothelioma-related VOC profile through the development and validation of a data mining approach, in order to discriminate between patients affected by MPM and healthy controls. Here too, the validation of the model uses volunteers formerly exposed to asbestos. The inclusion criteria were good health for the control arm and, for the other arm, a diagnosis of MPM confirmed by CT scan or chest radiograph, with enrolment before the start of pharmacological treatment. The exclusion criteria were lower and upper respiratory tract infections, chronic obstructive disease, or systemic disease (e.g. diabetes). The dataset was based on chromatographic peak areas, and the machine learning framework on three classifiers for the discrimination between MPM and healthy controls. The approach had two objectives: to investigate the predictive power of the compounds in classifying the MPM and HC groups, and to identify the most predictive features in the overall dataset. The Puglia research aimed to investigate and showcase the direct potential of applying machine learning to a diagnostic model. Its key findings were quite positive, with good levels of agreement between the model output and “gold-standard” diagnostic exams. This, Calabrò concluded, was just one of many examples to be found at regional level across Europe, where regions are increasingly looking into machine learning applications in healthcare to provide better solutions for the patients living there.
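To make the approach more concrete, below is a minimal sketch in Python of what such a framework could look like. It is purely illustrative: the report does not name the three classifiers used, so logistic regression, a support vector machine, and a random forest are assumptions here, and the peak-area data and labels are synthetic.

```python
# Illustrative sketch only: discriminating MPM patients from healthy controls
# using chromatographic peak areas as features. The three classifiers are
# assumptions; the report does not specify which classifiers were used.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.lognormal(mean=0.0, sigma=1.0, size=(60, 25))  # 60 breath samples x 25 VOC peak areas (synthetic)
y = rng.integers(0, 2, size=60)                        # 1 = MPM, 0 = healthy control (synthetic labels)

classifiers = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "svm": SVC(kernel="rbf"),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, clf in classifiers.items():
    # Scale peak areas, then estimate accuracy with stratified 5-fold cross-validation.
    pipe = make_pipeline(StandardScaler(), clf)
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: mean cross-validated accuracy = {scores.mean():.2f}")

# Mirror the study's second objective: rank features by importance to
# identify the most predictive compounds in the overall dataset.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top_features = np.argsort(rf.feature_importances_)[::-1][:5]
print("Most predictive peak indices:", top_features)
```

In a real study the synthetic arrays would be replaced by measured peak areas and confirmed diagnoses, and the cross-validated predictions would be compared against the “gold-standard” diagnostic exams mentioned above.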
AI Tools to Improve Patient-Centered Approaches
“The potential of AI to revolutionize healthcare is undeniable, especially in the field of cancer diagnostics and treatment,” states Ketil Widerberg, General Manager of the Oslo Cancer Cluster, a non-profit member organization whose goal is to speed up the development of new cancer treatments.
Widerberg draws on his experience in the biotech and software industries to set the direction of the organization. One of the driving forces behind commercial innovation in AI applications for cancer is the need for more accurate oncology treatments: the treatments and medicines administered to patients today are not always as effective as intended, and more precise medicines are urgently needed.
The convergence of software engineering and biotechnology creates exciting opportunities in precision medicine. Oslo Cancer Cluster encompasses the entire ecosystem of cancer innovation, from clinics and researchers to large and small companies, investors, and patient organizations. Current expansion plans include an innovation centre of more than 50,000 square metres.
The Cluster’s incubator houses around 20 companies, some of which are using AI in their work. For example, Augere analyses data streams from colonoscopy videos to increase detection rates with only a few milliseconds of delay, enabling surgeons to act on the information immediately. DoMore Diagnostics has developed software that analyses pathology slides to determine whether a patient is likely to benefit from chemotherapy, thereby reducing the number of patients who receive unnecessary treatment. Kaiku Health uses AI to analyse patient-reported outcomes and symptom tracking, alerting healthcare professionals when patients require attention.
Wider adoption of AI systems in healthcare requires a change in how we approach the technology, as well as regulatory adjustments to ensure patient safety. Physicians may be hesitant to use AI technology because of inaccuracies in current consumer-based applications, while technical AI experts may be hesitant to enter the healthcare field because of the difficulty of acquiring data. A convergence of knowledge between the medical and technical fields is essential to harness the full potential of AI in healthcare.
The convergence of software engineering and biotechnology has created exciting opportunities for innovation, but regulatory changes and a convergence of knowledge between the medical and technical fields are necessary to fully realise the potential of AI in healthcare, Widerberg commented.
Perceived Benefits and Risks of AI
Michael Strübin is Senior Advisor for Digital Health at DIGITALEUROPE. He has over 15 years of experience in advancing digital health in Europe, working for membership associations, industry interoperability initiatives, European trade bodies, and stakeholder groups. DIGITALEUROPE is a Brussels-based trade association for digital and digitally transforming industries. It represents over 45,000 businesses and has more than a hundred corporate members and 40 national trade associations across Europe. Its members include digitally advanced companies from the pharmaceutical sector, MedTech, test and vaccine developers, software and hardware companies operating in this space, and Cloud companies that provide important services, particularly in data gathering, storage, manipulation, analysis, and sharing.
A report entitled ‘The socio-economic impact of AI in healthcare’ was published in 2020 by Deloitte and MedTech Europe, an organisation where Strübin was Director of Digital Health at the time. The team identified eight different applications where AI in healthcare can make a difference. Ranked by their impact on Europeans, wearables such as smartwatches currently represent the most impactful area for AI. It is quite a big deal that they can now detect arrhythmia; they are not medical devices, but they can assist in monitoring our health, flag a possible anomaly, and advise us to see a doctor. Glucose meters can take over the monitoring of blood sugar levels, which makes an enormous difference in patients’ self-management of diabetes, for example: it enhances their quality of life and helps them manage their health themselves without necessarily having to see a doctor.
The other big area where AI can make a difference is imaging. Advanced software can now interpret medical images and detect cancer very early on. According to the latest studies, the accuracy of advanced software almost rivals that of a fully trained radiologist at the beginning of their shift, and, unlike a radiologist at the end of a long shift, software does not get tired. AI in labs can speed up the delivery and analysis of samples and provide results that enable cures. AI can also assist with research and innovation: AI algorithms can go through large amounts of data and detect anomalies. That is how, for example, some advanced medical device makers were able to detect signs of falls, and thus prevent them, among seniors who were remotely monitored with motion devices. According to the report, if we managed to remove all the barriers and get everything right to enable the introduction and spread of AI technologies in healthcare, we would be able to save 400,000 lives annually, almost the population of a whole city. We would be able to save 200 billion euros, more than 10% of total annual European healthcare expenditure. We would also be able to free up 1.8 billion hours of healthcare professionals’ time, making them available for their patients when a human touch is needed. This is not about taking jobs away from healthcare professionals; we actually have a crisis in the healthcare workforce. With AI we could improve access to healthcare, improve accuracy, avoid mistakes, and ultimately deliver healthcare more efficiently.
What about the perceived risks? Leaving out some of the most irrational fears, like the rise of the machines, terminators, and so on, there are some real risks that we should consider:
- Safety. Is it safe, for instance, to use a continuous glucose monitor? Is it better than an analysis carried out with the help of a healthcare professional? Does AI do the job as well as a physician? These are legitimate questions, and the answer is that such tools can probably do it with sufficient quality. In the case of smartwatches detecting arrhythmia, for example, the alternative would be that nobody detects it. And obviously there is a vast difference between consumer electronics and medical technologies.
- Quality. Is the advice received from an app of the same quality as that given by a healthcare professional? And is the underlying data representative? As we know, bias is a risk. Is the quality of the data sufficient to make predictions about my health situation? This is an important issue to consider.
- Privacy. Does the AI violate personal data protection? When data is uploaded to the Cloud, what happens to it? Can somebody learn about a patient’s health situation by looking into their data?
- Security. Is the algorithm secure enough against hacking?
- Access. We have seen that AI can improve access to healthcare. But how do we deal with populations that may not have the skills to use AI devices or a smartphone, access the internet, and get advice from a machine?
We need to work together on trust. Key recommendations for building trust are:
- Invest in robust systems, which also means investing in cybersecurity: we know how to deliver robust systems, but just one piece of bad news or one incident can seriously undermine trust.
- Build literacy and skills. We must train healthcare professionals in reading and analysing results generated by machines, and we also need to improve health literacy among citizens and patients.
- Address reimbursement. Our healthcare systems are built on payment for services: a patient goes to a doctor, the doctor sends the bill somewhere and gets reimbursed; maybe the patient pays, or makes a co-payment and gets reimbursed. We have not yet figured out how to reimburse for AI-powered technologies and digital health devices.
- Deliver balanced regulation that fosters technological innovation and evolution. This is very important for the industry, and the AI Act is in front of policymakers right now.
- Advance the data base. AI is only as good as the underlying data. In Europe we need European datasets, because we need AI technologies that are fit for European patients; we do not want AI trained on data from other populations. But if we all agree on that, we also need to find ways to obtain data from Europeans. We need to take the message to patients and citizens all over Europe that they should give their consent for the responsible use of their data for apps, research, and innovation, which is what the European Health Data Space is about.
Following the last presentation, the panellists discussed legal and ethical challenges from the patient’s perspective, such as data ownership, funding with public money, and for-profit companies taking the results and developing products for sale. Mr. Calabrò highlighted that regional and local health authorities collaborate with industry partners to implement healthcare initiatives, because of the use of, and link to, medical devices and similar technologies. He emphasised the importance of ensuring that projects involving AI and data use have clear and shared benefits for the community, and that individuals, including patients and professionals, understand the benefits and risks of the technology. He suggested that regional authorities can lead efforts to bring project results back to the audience and patients, thus creating trust and engagement. Ms. Yared agreed that it is important for respected organisations to be involved in AI initiatives related to healthcare. She then asked Mr. Widerberg about the solutions developed by industry, and whether hospitals and patients are charged for them, given that no reimbursement schemes are in place yet. She also asked about data ownership, and whether the industry sells the data generated by AI or gives it away for free if somebody, for example at government level, would like to advance knowledge. She asked the panellists for two perspectives, one from a patient’s point of view and one from a company’s point of view, on the legal and ethical challenges related to data ownership. Widerberg responded that, as a patient in a government healthcare system, he would be happy to have a learning health system that improves treatment quality over time, even if private money is needed to finance it. Companies make up-front investments in order to recoup them and make a profit over time; this is how the current economic system works, although one could of course ask whether they make too much profit. The challenge often faced is the reluctance of governments or hospitals to share data, which limits its availability for AI innovation in healthcare, and this is a big problem for progress. Healthcare data could allow us to move away from relying on RCTs and to use public data instead to understand which treatment to give a patient. Governments are reluctant to invest in drug development: as a politician, you will never get re-elected if you invest hundreds of millions into a project that has a 95% probability of failing. The development of drugs and devices carries such a risk burden that so far only the private market is willing and able to absorb that risk and run with the winners. Widerberg emphasised the importance of building a structure for sharing data and enabling a supervised learning system that learns and improves from our choices. He also mentioned concerns about the adoption of new technology in health due to current regulatory issues: government and institutions need to work together to update existing procedures and systems to make data collection and sharing easier.
Ms. Yared asked Mr. Strübin to give his view on public versus private gains and data generation, from the perspective of both patients and industry. Strübin replied that, in his opinion, patient data should be considered a public good, as patients are often willing to share their data for research purposes in the hope of finding a cure for their disease. Data in itself is not yet valuable; it needs to be aggregated, curated, and collected to add value. Findata, a Finnish organisation that pools data from different silos, is an example of centralising and democratising the process of making data available for research and innovation. Anybody can access it for queries, though there is a fee involved; what matters is the purpose of the query, not who is making it, and this is how it should be. Strübin emphasised that data should be a public good and that, although there are charges for using it, everybody should donate to this pool because it will enable research and innovation. Ms. Yared asked the panel to share their thoughts on how healthcare professionals should address the concern expressed by patients about sharing their data in case it ends up in the hands of bankers, insurers, employers, and others, and could be used to discriminate against them. The bottom line, Mr. Strübin suggested, is that if the use of data does not comply with GDPR, then it is a criminal act and should be treated as such, instead of us hiding our data inside impenetrable silos. We should enable data to be used, trust criminal law to deal with misuse, and simply disallow purposes for which the data was not created. Widerberg added that companies licensed to operate in the healthcare field will comply with the relevant privacy laws, so patients should not be concerned about their data being shared. Regarding genetic data, there is a potential for de-anonymisation, but the effort required and the laws that would need to be broken make it unlikely, at least in the short term. He praised Europe’s GDPR framework and believes it could set the scene for the world’s technical development in this area. Mr. Calabrò explained that healthcare professionals can play a vital role in building trust and convincing patients to share their data. Some patient groups, such as those with rare diseases, are more willing to share their data, while others may need more convincing. However, healthcare professionals must themselves be given the right skills and competencies to understand the benefits and potential risks of data sharing. Digitalisation is not evenly distributed, and there are differences in the implementation of digital solutions across regions and nations. Harmonisation is needed to avoid reinforcing differences between regions, local authorities, or nations, which ultimately reflect on patients and may harm trust.
Ms. Yared agreed on the importance of raising awareness of AI in medicine among healthcare professionals so that they can address patient concerns. She then asked the panel about the ethical and legal implications of misdiagnosis using AI: who would be responsible for a misdiagnosis and the resulting wasted time and treatment? The question of responsibility for misdiagnosis with AI is a difficult one to answer. Calabrò suggested that transparency in the decision-making process is important, as mistakes can happen with or without AI, and patients should have the right to point out mistakes and to ensure that they do not happen again. According to Widerberg, the responsibility for misdiagnosis ultimately falls on the physician, even with the use of AI tools and decision-making assistance; variability and unforeseeable factors can still affect the outcome. More interesting, he argued, is how we can build and use AI for monitoring, so that we can see the results of what we are doing in our public health systems, improve, and ensure mistakes are not repeated. Ms. Yared appreciated the response that AI will not replace healthcare professionals but rather assist them in verifying diagnoses and findings. Strübin agreed that, ultimately, doctors are the ones who use medical devices and make diagnoses. He believes that the fear of AI becoming self-improving and out of control is far-fetched: AI is simply advanced software subject to strict regulations. He highlighted that if a medical device maker knowingly brings a defective product to market, they can be held liable and face legal consequences.
Each panellist was then asked to provide two take-home messages for cancer patients and healthcare professionals. Mr. Strübin: AI medical devices in Europe are well regulated and safe; you should really make your data available for research, because it can ultimately save you, your loved ones, and countless other people. Mr. Widerberg: as a patient, know that these are tools, and they are safe; as a medical professional, share, so that healthcare can benefit from AI as much as other industries do. Mr. Calabrò: for patients, trust in the potential of AI and of digital transformation; for healthcare professionals, gain patients’ trust, as it is fundamental to your role. As for regional and local health authorities, they can bring all these stakeholders together, industry, patients and professionals alike, to facilitate positive change.
Artificial Intelligence in Cancer Care Educational Project
Artificial intelligence has given rise to great expectations for improving cancer diagnosis, prognosis, and therapy, but it has also highlighted some outstanding inherent challenges, such as potential implicit biases in training datasets, data heterogeneity, and the scarcity of external validation cohorts.
SPCC will carry out a project to develop knowledge and competences on the integration of AI in the cancer care continuum, from diagnosis to clinical decision-making.
This is the report of the fifth webinar of the “Artificial Intelligence in Cancer Care Educational Project”.