In 2022-23 seven webinars on the theme of Artificial Intelligence in Cancer Care were organised by the SPCC. In a virtual round table held on May 25th 2023, key insights from each webinar were presented by one of the participants. The symposium was chaired by Aleksandra Filipovic, Clinical Research Fellow at Imperial College London, United Kingdom, and Dimitris Kalogeropoulos, Chief Executive at the Global Health & Digital Innovation Foundation, UK. In his brief welcoming speech, Dr. Kalogeropoulos emphasised the importance of actively listening to the Global Health Innovation community. This community consists of devoted individuals who consistently work towards improving healthcare standards and willingly share their insights with researchers, innovators, and policymakers. Despite the challenges they face and the urgent demands of patients, they dedicate their limited time to collaborative endeavours.
Webinar 1 – AI in Cancer Care: An Overview
The highlights of the first webinar in the series were presented by Eduardo Farina, Radiology Resident at UNIFESP (Federal University of São Paulo, Brazil) and AI Developer for healthcare at DASA (Diagnósticos da América SA), SP, Brazil. The first speaker at that webinar was Felipe Kitamura, who explained the differences between AI, machine learning, and deep learning. Artificial Intelligence is a field of computer science focussed on creating intelligent machines that can perform tasks requiring human-like intelligence. Machine learning, a subset of AI, develops algorithms enabling systems to learn from data without explicit programming. It allows computers to automatically learn patterns and to make predictions and decisions based on training data. Deep learning, a technique within machine learning, trains artificial neural networks with multiple layers to process and comprehend intricate patterns in extensive datasets. One way the technology can be applied to cancer care is risk estimation. Deep learning techniques can be used to analyse multiple sources of patient data and make predictions, such as identifying patterns related to cancer and assessing risk factors. This can aid in creating robust models for estimating the risk of cancer in different populations. Deep learning also finds applications in medical imaging, where it can be used for detection and segmentation tasks, such as drawing bounding boxes around findings in CT scans, MRIs, X-rays, and colonoscopy images. Deep learning can be employed in pathology for diagnosis and even in laboratory diagnostics. DL techniques can also be applied to treatment optimisation, particularly in predicting drug response. At DASA, natural language processing (NLP) models have been successfully developed to rapidly detect BI-RADS classifications from radiologists’ reports, enabling faster diagnosis and treatment planning for mammograms and ultrasonography. Turnaround time can thus be reduced from 60 to 15 days. Moreover, advancements such as AlphaFold, a powerful neural network, allow AI to be utilised in developing new drugs, greatly decreasing the time and cost involved.
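One way to picture the kind of NLP pipeline mentioned above is a simple text classifier that maps report language to BI-RADS categories. The sketch below is purely illustrative and is not DASA’s actual system: the example report snippets, labels, and the choice of a bag-of-words model with logistic regression are all assumptions made for demonstration.

```python
# Minimal sketch: classify radiology report text into BI-RADS categories.
# Illustrative only -- the example reports, labels, and model choice are
# assumptions, not the NLP system described in the webinar.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = [
    "benign-appearing calcifications, no suspicious mass",           # BI-RADS 2
    "probably benign nodule, short-interval follow-up recommended",  # BI-RADS 3
    "suspicious spiculated mass, biopsy should be considered",       # BI-RADS 4
    "highly suggestive of malignancy, irregular mass with microcalcifications",  # BI-RADS 5
]
labels = ["2", "3", "4", "5"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(reports, labels)

new_report = "irregular spiculated mass, biopsy recommended"
print(model.predict([new_report])[0])  # prints the predicted BI-RADS category
```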
Limitations regarding the implementation of AI were also acknowledged during the webinar. Creating an AI system is not a straightforward process, and careful consideration is required when choosing the problems to address. Every aspect of healthcare, including cancer care, can benefit from AI, but a comprehensive approach must be followed. This involves steps such as data collection and model construction, including training, validation, testing, and sometimes external validation. Finally, the AI model needs to be deployed and continuously monitored to ensure its effectiveness and performance over time.
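As a minimal sketch of that development cycle, the following example (with a synthetic dataset and arbitrary thresholds, not any specific clinical model) shows the split into training, validation, and held-out test data, followed by a simple post-deployment monitoring check.

```python
# Sketch of the development cycle described above: split data, train, validate,
# test, then monitor performance after deployment. Dataset and thresholds are
# placeholders, not a specific clinical model.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Data collection -> split into training, validation, and held-out test sets.
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("validation AUC:", roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]))
print("test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Post-deployment monitoring: recompute the metric on newly collected, labelled
# cases and flag degradation against an agreed threshold (0.75 is arbitrary).
def monitor(model, X_new, y_new, threshold=0.75):
    auc = roc_auc_score(y_new, model.predict_proba(X_new)[:, 1])
    return auc, auc >= threshold

print(monitor(model, X_test, y_test))
```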
The second talk, given by Aziz Nazha, was about opportunities, biases, barriers, and gaps. A case study on the application of AI in myelodysplastic syndromes was examined, shedding light on the challenging limitations involved. The topic of explainability was also explored, in particular whether explainability of predictions is required or not. Currently, research is being conducted to develop improved methods for explaining how AI models make predictions, including techniques like Grad-CAM and SHAP values. Reproducibility was also highlighted as a critical factor to consider: a model that works effectively in one region or country may not necessarily perform similarly in another, emphasising the need for careful evaluation and adaptation. The issue of bias was also raised, underscoring the importance of addressing and mitigating biases in AI systems. Fairness, regulatory approvals, implementation in the workflow and model monitoring were other issues that were discussed.
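For readers unfamiliar with these techniques, the following minimal sketch shows how SHAP values can be computed for a tabular model to attribute each prediction to individual features. It assumes the open-source shap package is installed and uses synthetic data; it illustrates the general idea, not any model discussed in the webinar.

```python
# Sketch of post-hoc explanation with SHAP values for a tabular model, one of
# the explainability techniques mentioned above. Requires the 'shap' package;
# the dataset here is synthetic.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)        # model-specific explainer for tree ensembles
shap_values = explainer.shap_values(X[:10])  # per-feature contribution for each prediction
print(shap_values.shape)                     # one contribution value per sample and feature
```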
Bias is indeed a significant challenge in the field of AI, and is one of Dr Farina’s fields of research. A paper by Judy Gichoya from Emory University (https://doi.org/10.1016/S2589-7500(22)00063-2) has shown that AI can identify a person’s race from chest X-rays, even when the images are intentionally degraded. This raises questions about how AI systems can predict race from such images, as there are no visible indicators like those present for sex prediction. Various experiments have been conducted, including the exclusion of bones and lungs, yet the reasons behind the model’s ability to predict race remain unclear. The concern arises when this information is used in training AI models, as it can lead to biased models that learn associations between race and diseases. This highlights the need for careful consideration and measures to address bias in AI algorithms to ensure fair and equitable healthcare outcomes. Including diverse populations in the dataset when creating deep learning models is crucial to ensure their effectiveness across different ethnicities.
The third speaker in the first webinar, Dr. de Moraes, highlighted the potential of AI in improving the patient journey and medical decision-making in cancer care. As a radiation oncologist experienced in working with AI, he shared insights on how AI can be integrated into healthcare, particularly in a patient-centred manner. Dr. de Moraes discussed the challenges associated with bringing AI into cancer care, emphasising the vast amount of data generated in this field. Cancer care involves data related to chemotherapy response, genomic information from biopsies, and medical imaging, making it one of the specialties that generate the most diverse and extensive datasets. These have the potential to drive significant advancements in prediction models and innovative approaches. It is worth noting that Dr. de Moraes’s presentation was given before the release of ChatGPT. A recent article published in JAMA Internal Medicine compared physician responses in online medical forums to the responses generated by ChatGPT. A panel of healthcare professionals rated both the empathy and the quality of the chatbot’s answers higher than those of the physicians (https://jamanetwork.com/journals/jamainternalmedicine/article-abstract/2804309). These findings do not suggest that chatbots will replace physicians, but rather highlight how we can use them to improve communication. Engaging with patients on the internet and answering their questions without any personal connection or knowledge of their specific circumstances can be challenging for human physicians. However, the use of chatbots, powered by AI, can provide valuable support and augment the care provided by healthcare professionals.
In the final talk, Nishith Khandwala’s focus was on the process of developing AI algorithms and bringing them into clinical practice, specifically in the context of the United States. Dr. Khandwala highlighted the regulatory aspect of AI algorithms, mentioning that in the US these algorithms are generally considered Class II (moderate-risk) medical devices by the FDA. Therefore, they must go through the 510(k) process, which involves submitting the device for FDA clearance, a process that can take more than 90 days, although it can be expedited if there are no refusals or holds along the way. Doctors should be involved in this process because considerable medical knowledge is required when submitting AI software as a medical device for FDA clearance.
Round table on the discussed topics
The discussion focused on the major obstacles to the broader adoption of AI in healthcare. The participants emphasised the need to convince clinicians of the usefulness and value of AI in their workflow. They highlighted the importance of involving healthcare professionals in the development process to address the right problems. The role of the business case and reimbursement in the adoption of AI algorithms was also discussed, with varying speed and rules across different regions. Obtaining reimbursement for AI algorithms was seen as a challenge, hindering their integration into clinical practice. The question of clinical validity was raised, with a debate on whether reimbursement would improve trust in AI or just generate more revenue to improve the tools. A comprehensive approach that combines regulatory frameworks, proof of benefit, and reimbursement for proven AI devices was advocated. Reducing the initial investment in AI and creating incentives for adoption were proposed. Trust emerged as a recurring theme to be explored further in subsequent sessions.
Highlights from the second webinar: “AI Applications in Diagnosis”
The highlights from the second webinar were presented by Prof. Viktor Kölzer, Attending Pathologist and Assistant Professor, Institute of Pathology and Molecular Pathology, University Hospital and University of Zurich. This webinar brought together experts from various fields related to diagnosis in medicine and explored the potential of AI support in these areas. The seminar highlighted that the field of medical diagnosis and diagnostic technologies is currently at a critical turning point: this pivotal moment is characterised by the convergence of advancements in sensor development across various fields, such as wider sensor arrays and imaging devices, alongside breakthroughs in visualisation, computation, and high-performance computing, specifically catering to the demands of precision medicine.
To effectively summarise the different aspects of AI application and their timelines, we can divide them into distinct phases: the near, the intermediate, and the long-term future. The initial diagnostic applications that have been on the market and utilised in recent years have primarily focused on improving efficiency within existing workflows. These technologies aim to enhance the effectiveness of current practices and processes. However, analogue workflows, particularly in pathology, have limitations, such as delays in obtaining second opinions and reliance on physical slides for reporting. The transition to freely shareable and structured diagnostic data has the potential to transform workflows, enabling seamless collaboration and integration across disciplines. Studies demonstrate the benefits of AI in diagnostic workflows. The presentation by Prof. Andrew Janowczyk (Assistant Professor, Biomedical Engineering Department, Emory University, Georgia, US, and Data Analyst at Geneva University Hospital, Geneva, Switzerland) showed advanced applications of AI to further improve diagnostic workflows. One critical point is quality control: image-based analytics and AI can monitor the production of diagnostic slides, and tools like HistoQC enable real-time assessment of quality metrics. Another important point is the challenge of translating expert knowledge into a resource for designing AI algorithms that can replicate known factors in disease detection, such as tumour detection, mitosis detection, or other features that are informative for the disease. One approach would be to use smaller AI models that are pre-trained on specific parts of a given cohort or feature space. These models can provide initial detections of objects, which can then be confirmed or rejected by experts. This iterative process allows for the creation of well-annotated and extensive datasets that can be used to train larger AI models.
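The iterative, expert-in-the-loop annotation strategy described above can be sketched in a few lines: a small model trained on a labelled seed set proposes candidate detections, the least confident ones are sent for expert confirmation, and the confirmed labels grow the training set for the next round. The data and the expert_review step below are simulated placeholders, not any specific annotation platform.

```python
# Sketch of model-assisted iterative annotation: a small pre-trained model
# proposes detections, an expert confirms or rejects them, and confirmed
# labels enlarge the training set. Data and the expert step are simulated.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y_true = make_classification(n_samples=1000, n_features=16, random_state=0)
labelled = np.zeros(len(X), dtype=bool)
labelled[:50] = True                      # small expert-annotated seed set
y_known = np.where(labelled, y_true, -1)  # -1 marks unannotated examples

def expert_review(indices):
    """Placeholder for the human step: here we simply reveal the true label."""
    return y_true[indices]

for round_ in range(3):
    model = LogisticRegression(max_iter=1000).fit(X[labelled], y_known[labelled])
    proba = model.predict_proba(X[~labelled])[:, 1]
    pool_idx = np.where(~labelled)[0]
    # Surface the least-confident proposals (closest to 0.5) for expert review.
    query = pool_idx[np.argsort(np.abs(proba - 0.5))[:25]]
    y_known[query] = expert_review(query)
    labelled[query] = True
    print(f"round {round_}: {labelled.sum()} annotated examples")
```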
In the intermediate timeframe, there is a focus on developing tools that intelligently support medical experts in clinical tasks, surpassing current capabilities and standards. These tools are expected to integrate data at a detailed level and leverage multi-modal data analytics to enhance and support prognostic or predictive decisions. Radiology presents a notable example of this advancement, as was highlighted by Prof. Bettina Baeßler, Professor and Head of Cardiovascular Imaging and Artificial Intelligence at University Hospital, Würzburg, Germany. The field of radiomics represents a shift from purely visual or qualitative interpretation of radiology imaging, such as CT, PET, or MRI, to a quantitative approach. This quantitative approach utilises advanced mathematical analysis and AI methods to enhance existing datasets. By leveraging AI and quantitative tools, the workflow transitions from a simple image description to an integrated approach that supports interpretation through features derived from signal intensity distributions, pixel interrelationships, or spatial distributions within the image. The application of informed algorithms in pathology, as pointed out by Prof. Janowczyk in one of his works (Janowczyk A, et al. JCO Clin Cancer Inform. 2019), presents exciting prospects similar to radiomics in enhancing clinical decision-making. These algorithms enable the analysis of spatial interrelationships among cells, scaling this analysis across thousands of events on a single slide, which would be impractical for a human expert. These pathomics approaches hold great promise in the intermediate timeframe to enhance clinical decision making in the future.
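As a toy illustration of what quantitative features derived from signal intensity distributions can look like in practice, the sketch below computes a handful of first-order, radiomics-style features from a region of interest. The image is random noise and the feature list is deliberately minimal; real pipelines (for example the open-source pyradiomics library) compute far richer, standardised feature sets.

```python
# Minimal sketch of first-order "radiomics-style" features computed from the
# intensity distribution of a region of interest. The ROI here is random noise,
# used only to show the shape of the computation.
import numpy as np
from scipy import stats

roi = np.random.default_rng(0).normal(loc=100, scale=15, size=(64, 64))  # fake ROI intensities

features = {
    "mean": roi.mean(),
    "std": roi.std(),
    "skewness": stats.skew(roi.ravel()),
    "kurtosis": stats.kurtosis(roi.ravel()),
    "entropy": stats.entropy(np.histogram(roi, bins=32)[0] + 1e-9),
    "p10_p90_range": np.percentile(roi, 90) - np.percentile(roi, 10),
}
print(features)
```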
How do these technologies integrate with existing drivers of precision medicine? Molecular profiling has been the main driver of precision medicine and treatment development over the past decade. But of course, comprehensive molecular profiling is still a very expensive and resource-intensive technology, in particular if the purpose is to develop multi-omics data from tissue samples. Molecular profiling is also tissue destructive, as acquiring DNA or RNA-based information often requires sample dissolution, which also entails specific infrastructure and expertise. This raises an important question, especially in pathology: Can we directly utilise images to inform precision treatment by employing deep neural networks to derive predictors for genetic or molecular information from pathology cases? This could hold significant advantages due to the widespread generation, standardised nature, and cost-effectiveness of working with standard diagnostic slides. By directly using image data instead of relying on sequencing, valuable information could be accessed rapidly that can inform the diagnostic workflow in terms of pre-selection of patients for molecular testing or screening for hereditary cancer syndromes.
Image-based analysis could also contribute to clinical stratification and treatment decisions in challenging cases, adding additional information to our expert assessment of a given case. In her presentation, Dr. Maria Rodriguez, Staff Member at IBM Research, Zurich, Switzerland, discussed novel methodologies for interpretable modelling in predicting gene expression profiling. Specifically, she explored the use of histology images as input to enable RNA expression profiling. A seminal study published in 2022 at MICCAI demonstrates the potential for pathology to evolve into a morpho-molecular field by leveraging advanced AI and machine learning techniques. It highlights the ability to connect genotype and phenotype by analysing the microscopic appearance of cancer tissue and its correlation with gene expression profiling or the presence of oncogenic driver mutations. This breakthrough opens up possibilities for predicting not only the molecular subtype but also survival outcomes and treatment responses directly from histology images (Graziani, M, et al. 5th International Workshop, iMIMIC 2022, Held in Conjunction with MICCAI 2022, Proceedings, 2022).
These exciting prospects in the pathology space pave the way for a revolutionary transformation in how different data types can be integrated using deep learning methodologies. Instead of relying on the broad categories used in current clinical practice, clinical information, staging information, image information, and morphology can be combined, helping to move towards a more personalised approach to treatment and monitoring in clinical practice.
In the realm of future possibilities for AI in medical diagnostics, the concept of digital twins holds great promise. It was explored by David Iqbal (Postdoctoral Fellow at the Digital Society Initiative of the University of Zurich, Switzerland) in his presentation, and also in his recent paper in the Journal of Law, Medicine & Ethics (Iqbal, J., Krauthammer, M., & Biller-Andorno, N.; Journal of Law, Medicine & Ethics, 2022). The concept revolves around harnessing data from various sensors and devices, such as smartwatches and mobile phones, to create a comprehensive digital representation of an individual. By continuously monitoring a person’s physical activity and other relevant factors through advanced sensors, it becomes possible to simulate and model various aspects of their physiology, including organ functions and even neurocognitive processes. A digital twin is essentially an in-silico representation of a person. This virtual counterpart holds potential, for example, for early disease prediction: it becomes possible to anticipate the development of diseases before they manifest in the physical twin, enabling timely actions to mitigate potential health risks. Additionally, the digital twin could serve as a testing ground for evaluating the effects of different treatments. Through simulations within this virtual model, optimal treatment schedules can be explored and tailored to the specific individual, thereby informing decisions regarding the actual person’s treatment plan.
Prof. Kölzer closed his presentation with a takeaway message: to realise the benefits of AI in medical diagnosis, it is crucial to acknowledge a fundamental shift in perception. While the term “artificial intelligence” may generate hype, the reality in the next 5 to 10 years will likely involve the development of smart tools that augment the existing intelligence of medical experts. It is important to distinguish between hype and reality and recognise that we are still far from replacing doctors with high-level AI. Instead, we currently have trained models that excel at specific tasks and outperform human experts in those particular areas. However, they do not possess the comprehensive cognitive abilities of a human mind. In order to effectively utilise these AI tools and comprehend their limitations, there is a pressing need to enhance technical literacy in computational fields, especially within the field of medicine. This can be achieved through education at professional degree level and the establishment of specialised post-graduate programs that provide the necessary knowledge and skills. By increasing technical literacy, medical professionals can effectively navigate and leverage the potential of AI in diagnosis, while understanding its boundaries and ensuring responsible use.
Round table on the discussed topics
The need for attention to endotyping in addition to phenotyping and genotyping was raised by Dr. Kalogeropoulos. Limited data availability for endotyping experiments and the expansion beyond siloed electronic health record (EHR) data were discussed. Prof. Kölzer emphasised the importance of individuals regaining control and ownership over their data and proposed a new approach where individuals can decide to participate in studies or exchange their data in a controlled manner. The timeline for advanced AI applications and a holistic representation of data was discussed, with the possibility of more advanced applications emerging within 5 to 10 years. The concept of a twin database for testing new drugs without human clinical trials was also mentioned, highlighting the potential of virtual or in silico trials based on reliable and unbiased data. The challenges and limitations of fully understanding biological processes and predicting outcomes were touched upon.
Highlights from the third webinar “AI in Genomics and Reporting for Clinical Practice”
Tim Chen, Scientist at Foundation Medicine, Boston, US, presented the highlights of the third webinar, starting with a brief introduction to NGS and CGP, two methods for obtaining genomic data. NGS, or Next-Generation Sequencing, is a powerful technique that allows for massively parallel sequencing. It offers various approaches, such as single marker testing or broader panel tests like the hotspot NGS test. These methods can identify specific mutation regions within limited areas of a gene. On the other hand, CGP, or Comprehensive Genomic Profiling, is a more comprehensive test performed using a hybrid capture-based platform, as utilised by Foundation Medicine. This platform enables the simultaneous testing of over 300 genes, helps preserve tissue, and can detect all four main types of DNA alterations. Additionally, due to its broader scope, CGP assays can identify complex biomarkers like TMB (Tumour Mutational Burden) and MSI (Microsatellite Instability), which are crucial for predicting the response to immuno-oncology (I-O) treatment.
When considering the application of AI to clinical genomic data, one of its most valuable aspects is its ability to identify hidden patterns within large and complex genomic datasets. Dr Chen provided two real-world examples of applications in genomic analysis. The first was the oncogenic variant classifier. Identifying oncogenic mutations is essential, as it enables the understanding of cancer genomics and enhances the discovery of potential treatment targets. At Foundation Medicine, they have a vast collection of 600,000 cancer genomes available for analysis. A team of highly skilled scientists is responsible for conducting literature reviews and performing analyses to make the oncogenic status decision. When AI is incorporated into this process, it involves utilising a machine learning classifier, specifically a decision tree classifier. The classifier employs around a thousand features, which encompass genomic alterations, complex signatures, patient characteristics, and tumour type information. By inputting these features into the classifier, it generates predictions and ranks the variants based on their likelihood of being oncogenic. At this point the scientists can prioritise the top-ranked variants, review the literature, and run analyses to make the oncogenic status decision.
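A minimal sketch of that ranking step is shown below: a tree-based classifier scores candidate variants by their predicted probability of being oncogenic so that curators can review the top-ranked ones first. The features, data, and model settings are hypothetical stand-ins and do not reflect Foundation Medicine’s actual classifier.

```python
# Sketch of ranking variants by predicted oncogenic likelihood with a decision
# tree classifier. Feature names and data are hypothetical placeholders.
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=5000, n_features=30, n_informative=10, random_state=0)
cols = [f"feature_{i}" for i in range(X.shape[1])]   # stand-ins for genomic/clinical features
X = pd.DataFrame(X, columns=cols)

X_train, X_new, y_train, _ = train_test_split(X, y, test_size=0.2, random_state=0)
clf = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X_train, y_train)

# Score previously unreviewed variants and rank them for manual curation.
scores = clf.predict_proba(X_new)[:, 1]
ranked = X_new.assign(oncogenic_score=scores).sort_values("oncogenic_score", ascending=False)
print(ranked.head(10)[["oncogenic_score"]])
```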
As a motivating example, Dr Chen chose a mutation in the EGFR gene with a specific protein effect. Through literature research, his team found this mutation mentioned only once in all medical literature. However, within their database, they identified it in seven cases, and its frequency aligns with its oncogenicity. This mutation is associated with lung cancer, specifically in cases with low TMB, occurring in a hotspot region of the EGFR gene, and it is mutually exclusive with other drivers.
The second example involved building a biomarker using big data and AI, which also has the potential to aid in clinical decision-making. Traditionally, biomarker development begins with research and development to define the question and biomarker of interest, followed by data collection. This process typically involves a small sample size, such as a few hundred or thousand cases. The development then progresses to Investigational Use Only (IUO) and clinical trials, with the hope of obtaining positive results. However, the integration of AI and big data in biomarker development offers new possibilities. The process begins by defining the question and then leveraging a larger dataset, often consisting of thousands or tens of thousands of cases. Feature generation techniques are applied, and suitable AI algorithms are chosen to define the biomarker and move to the IUO and clinical studies. It is worth mentioning that in some cases, biomarker development based on big data and AI can be achieved without relying on clinical outcomes.
In the example given by Dr. Chen the aim is to build a biomarker for predicting the HRD (homologous recombination deficiency) status in patients. HRD is a phenotype characterised by the cell’s inability to effectively repair double-strand DNA breaks using the Homologous Recombination Repair (HRR) pathway. Studies have demonstrated that patients with HRD+ status derive benefits from PARP inhibitors in various tumour types, including breast, ovarian, prostate, and pancreatic cancers. However, there is currently no consistent approach to accurately define HRD, as it can result from genomic alterations in the BRCA1/2 genes or epigenetic changes like BRCA1 promoter methylation. HRD can be identified by the presence of a scarring pattern in the copy number profile of patients. To automate the recognition of this pattern, Foundation Medicine developed a machine learning algorithm called HRDsig. The performance of the HRDsig model was evaluated and compared with other algorithms, such as gLOH and GIS, which are alternative methods for calling HRD status. The HRDsig model demonstrated superior performance compared to these algorithms, indicating its effectiveness in identifying HRD status. Furthermore, the biomarker developed using HRDsig was found to be highly consistent across different tumour types. This was demonstrated by observing the significant overlap of the biomarker among patients with biallelic BRCA mutations, who are expected to be HRD+. The study also aimed to demonstrate that the scar-based biomarker can effectively detect HRD beyond BRCA mutations, in BRCA wild-type patients. The biomarker showed enrichment for other HRD-associated genes such as PALB2, BARD1, and RAD51. Interestingly, even though the algorithm was trained without any clinical data, it showed correlation with clinical outcomes.
To validate the clinical relevance of the biomarker, the study utilised a real-world clinical genomic database that included both clinical outcome and genomic data. Looking at ovarian and prostate cancer, the results showed that patients with HRDsig+ status had better survival outcomes compared to those who were HRDsig negative, even after adjusting for relevant clinical characteristics. Additionally, the team investigated the use of platinum-based chemotherapy in pancreatic cancer. The findings revealed that patients with HRDsig+ status also had improved survival compared to those who were HRDsig negative.
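The kind of adjusted real-world survival comparison described here is typically done with a Cox proportional hazards model. The sketch below, using the open-source lifelines package, simulated data, and illustrative variable names (hrd_positive, age, stage), shows the general form of such an analysis rather than the study’s actual methodology.

```python
# Sketch of a Cox model estimating the association between a binary biomarker
# and survival while adjusting for clinical covariates. Data are simulated and
# the variable names are illustrative assumptions.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "hrd_positive": rng.integers(0, 2, n),
    "age": rng.normal(65, 8, n),
    "stage": rng.integers(1, 5, n),
})
# Simulate longer survival for biomarker-positive patients.
baseline = rng.exponential(12, n)
df["months"] = baseline * np.where(df["hrd_positive"] == 1, 1.5, 1.0)
df["event"] = rng.integers(0, 2, n)  # 1 = death observed, 0 = censored

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="event")
cph.print_summary()  # hazard ratio for hrd_positive, adjusted for age and stage
```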
In the webinar the importance of real-world data was also discussed, and how AI can be a valued component in this area. Traditional drug and medical device development often rely on prospective clinical trials, which aim to control variables by implementing inclusion-exclusion criteria and studying specific populations. In this scenario, we can evaluate the experimental treatment against the standard treatment and draw conclusions. However, there are important considerations when applying findings from clinical trials to a broader, unfiltered population. There may be a gap between what is observed in controlled clinical trial settings and the real-world treatment outcomes. This is where the analysis of real-world data becomes crucial, and AI algorithms excel in handling large, high-dimensional databases. By leveraging real-world data and AI, we can enhance our understanding and application of treatments in a wider patient population.
The webinar also covered the key elements required for precision oncology. These include clinical data, patient-reported outcomes, genomic data, as well as other omics data such as RNA and protein-level data. Additionally, digital pathology and radiomic data play significant roles. Having a reference dataset that defines what is considered normal and abnormal is also crucial. Bringing together this comprehensive information, the Molecular Tumour Board, consisting of clinicians, pathologists, and scientists, can make treatment decisions based on all of these data. To complete the loop, the patient’s treatment and outcome are returned to the database, ensuring a continuous feedback loop of data integration. In his presentation, Prof. Michielin introduced The Swiss Personalized Oncology (SPO) project, which aims to connect hospitals through a shared data infrastructure.
As to challenges and limitations of AI, interpretability remains an open question in this field, emphasising the need for clear and understandable explanations of AI-driven decisions. Furthermore, larger and higher-quality cohorts are necessary to train more robust algorithms. Biases, particularly under-representation in training datasets, can introduce sub-structures that may impact the relationship between risk factors and health outcomes. External factors such as socio-economic status can also influence these biases, requiring careful handling and mitigation. Another intriguing observation, brought up by the Stanford Institute for Human-Centered AI, is the environmental impact of AI technology, specifically its carbon footprint. The resource-intensive nature of its models raises concerns about energy consumption and sustainability, highlighting the need to address this issue.
In summary, AI has matched or surpassed state-of-the-art performance in various clinical diagnostic applications, several of which have gained FDA approval in the US. Big data facilitates cancer research and, importantly, improves patient care. It is noteworthy that in some cases robust biomarkers can be developed using big data and AI even without clinical outcome data.
Round table on the discussed topics
During the discussion, several aspects of digitisation in healthcare and the integration of artificial intelligence were addressed. One important aspect mentioned was the need for standardisation and consistency in datasets. It was noted that different organisations may have varying standard operating procedures (SOPs), leading to missing data and potential biases in analysis. Improving the infrastructure and databases within academic centres was emphasised as a prerequisite to fully deploy AI in healthcare workflows.
The panel also discussed the importance of comparing and contrasting outputs derived from different data sources, such as tissue-based analysis and liquid biopsies. The potential of liquid tests, such as the detection of CHIP (clonal haematopoiesis of indeterminate potential) variants, was highlighted as a powerful complement to tissue-based analysis. The consistency between different tests and the need for clinical trials to validate biomarkers developed using AI algorithms were also discussed. The topic of developing biomarkers without measuring them against clinical outcomes was also explored. During AI algorithm development, explained Dr Chen, they can use genomic data alone, without considering patient treatment and outcomes. However, clinical trials would be necessary to prove the reliability of these biomarkers.
The issue of bias in data and potential social disparities arising from genomic data access was raised. To address this, the panel suggested making tests more accessible and affordable, particularly for liquid-based tests. Analytical methods, such as propensity score matching, were mentioned as a way to mitigate bias in the analysis of underrepresented populations.
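A bare-bones sketch of propensity score matching is shown below: a logistic regression estimates each subject’s probability of belonging to the under-represented group from observed covariates, and subjects are then matched one-to-one across groups on that score. The data and covariates are synthetic placeholders.

```python
# Sketch of propensity score matching: estimate group membership probability
# from covariates, then match subjects across groups on that score.
# Data and covariate meanings are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 1000
covariates = rng.normal(size=(n, 4))  # e.g. age, stage, comorbidity, income proxy
group = (covariates[:, 0] + rng.normal(size=n) > 0.5).astype(int)  # 1 = under-represented group

# Step 1: propensity scores from a logistic regression on the covariates.
propensity = LogisticRegression().fit(covariates, group).predict_proba(covariates)[:, 1]

# Step 2: 1-to-1 nearest-neighbour matching on the propensity score.
treated = np.where(group == 1)[0]
control = np.where(group == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(propensity[control].reshape(-1, 1))
_, idx = nn.kneighbors(propensity[treated].reshape(-1, 1))
matched_controls = control[idx.ravel()]
print(f"{len(treated)} subjects matched to {len(set(matched_controls))} unique controls")
```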
The discussion highlighted the significance of timing in the implementation of AI in healthcare, particularly in the field of oncology. Prof. Luchini emphasised the need for research and knowledge growth in the era of AI, focussing on the differences between liquid and solid biopsies and genomics. The integration of genomics and transcriptomics, along with the emergence of proteogenomics, was also mentioned as an area where AI can play a crucial role. The importance of considering both common and rare malignancies was emphasised. While rapid progress is expected in the management of common cancers through AI, rare cancers should not be neglected. The panel suggested the collection and analysis of data on rare tumours using AI systems, starting with small series, to build foundational knowledge and understand deviations and responses in rare diseases.
The panellists also touched on the need to establish robust databases and share data across institutions to facilitate research, clinical trials, and the development of new biomarkers and drugs. Overcoming the challenges of sharing and accessing databases was acknowledged as an important step towards advancing AI in healthcare.
Highlights from the fourth webinar “AI in Surgery”
The highlights of the fourth webinar were presented by Pietro Valdastri, Professor and Chair in Robotics and Autonomous Systems at the University of Leeds, UK. AI in the field of surgery primarily emphasises image analysis; therefore, most of the fourth webinar centred around how AI can assist surgeons, endoscopists, and other operators by performing real-time image analysis and providing valuable support during medical procedures. The topic was also expanded to include flexible endoscopy: specifically, how AI can assist in upper endoscopy, lower endoscopy, and various forms of enteroscopy, including standard flexible endoscopes and wireless capsule endoscopes. Furthermore, the panel delved into the application of AI in robotic surgery, which is Prof. Valdastri’s main field of research.
There are numerous platforms available for surgical procedures, many of which are already in use in hospitals. One notable platform is the Da Vinci, from Intuitive Surgical, which has been present in hospitals since 2000, marking 23 years of robotic surgical technology utilisation. The adoption of robots in surgery has shown significant progress, as reflected by the numbers. For instance, in 2020, approximately 15% of all general surgery procedures were conducted using robotic assistance. Robots in surgical settings offer improved dexterity and precision at the same level of invasiveness. However, it is important to note that current robots operate based on the surgeon’s instructions, lacking true autonomy or independent intelligence. This raises an intriguing debate in the field of surgical robotics regarding the extent to which computer assistance and artificial intelligence should be incorporated to support and enhance surgical procedures performed by operators or surgeons.
In robotic surgery, a classification exists for different levels of autonomy. The first level is zero autonomy, referred to as transparent teleoperation. In this mode, the robot replicates the exact actions of the surgeon, translating their movements from a joystick or controller to the patient. Currently, the platforms used in hospitals operate based on this model. While the robot can prevent movements that could pose a risk to sensitive areas of the anatomy, it does not possess independent decision-making capabilities. Moving up the autonomy scale, robots can perform specific tasks autonomously, such as suturing, while still operating under the supervision and guidance of the surgeon. The ultimate goal is to automate entire sections or even the entirety of a surgical procedure. However, it is important to note that the current role of AI in robotic surgery is primarily focused on assisting the surgeon rather than replacing their expertise. There is immense potential for further development in this field, but it is a long journey to reach the point where a robot can independently execute an entire surgical strategy.
So, in what ways can AI contribute to this field? The panel approached AI implementation from the perspective of image analysis. A notable example is the application of AI in vehicles, as demonstrated by Tesla. By interpreting camera images using artificial intelligence, the car system can understand and perceive its surroundings. This concept can be applied to flexible endoscopy and surgery as well.
Currently, the primary use of AI in these procedures revolves around image analysis. In flexible endoscopy, AI algorithms can assist in localising lesions, such as polyps or bleedings. Additionally, in robotic surgery, AI can be employed to select and highlight specific features in real-time image streams, informing the operator about delicate anatomical structures that should be preserved.
Colonoscopy has seen significant advancements in AI integration. Decision support modules utilising AI can help gastroenterologists identify the type of lesion they are observing and assess its severity. AI can also be used for rating the quality of the examination and be employed for teaching gastroenterologists. The FDA has approved several real-time polyp detection systems for colonoscopy, which highlight potential lesions for removal by the doctor. This feature proves invaluable in situations where an operator may miss certain lesions. Similar to the example of colonoscopy, in robotic abdominal surgery, AI can be utilised to classify the quality of procedures and provide feedback to aid in the training process of surgeons. These advancements are already making significant strides within hospital settings. In the context of robotic surgery, AI can be employed to highlight crucial anatomical features, such as tissue and organ tracking. This capability proves highly beneficial in preventing unintended complications, such as inadvertently cutting a vessel that should be preserved. By leveraging AI algorithms, surgeons can improve precision and safety during robotic surgical procedures.
At the webinar the relevance of AI in clinical management was also discussed. AI can provide a quantitative analysis of procedure quality, including documentation of various metrics. An FDA-approved system already available on the market utilises artificial intelligence to record colonoscopy procedures, capturing the time required for completion, speed during different phases, and other relevant data. This system can generate quality indicators, which are valuable for mentoring purposes and practicing colonoscopies.
The need for improved integration was also discussed. Currently, there is a limitation in terms of having only a single centre, single machine, and single surgical team utilising the technology. The ideal future would involve a more comprehensive integration, where continuous bidirectional feedback exists between machine learning, development, and the actual surgical operation. This integration would encompass various elements, including sensors, imaging, health records, and the robotic systems within the entire ecosystem. Unfortunately, this level of integration has not been fully realised yet. One major obstacle is the scattered nature of training datasets, which are often poorly curated. Annotating and curating datasets of endoscopic images for procedures like colonoscopy or abdominal surgery is a time-consuming task. Additionally, those who curate these datasets may be reluctant to share them freely, posing a challenge for widespread access and collaboration.
What is currently lacking in the field of AI in surgery and endoscopy? The main roadblock is the restriction to a single surgical centre and a single surgeon, with limited or non-existent expert annotation. Furthermore, there is a need for more diverse datasets that encompass a wider range of cases, as the current datasets are often derived from small and homogeneous groups. The European Commission has taken steps to address this issue, developing guidelines for trustworthy artificial intelligence that are being applied to surgical and endoscopic datasets. However, this is still an emerging area, and further efforts are needed to fully realise the potential of AI in surgery.
Round table on the discussed topics
The use of AI in preoperative imaging was highlighted as a way to identify potential problems and complications during surgery, reducing strain on health systems and enhancing patient safety. The example of neurosurgery and the challenges of maintaining registration between preoperative imaging and the surgical field were discussed. AI applied to endoscopic imaging was seen as a promising tool to assist operators, improving surgical performance and potentially democratising surgery.
The application of AI in colonoscopy to detect abnormalities in real-time was mentioned, with studies showing improved procedure quality. However, there was a concern that over-reliance on AI may diminish the expertise of skilled gastroenterologists, emphasising the need for further clinical studies to optimise the benefits of AI while maintaining surgical skills.
The integration of robotic surgery with AI for reliable tissue sample collection and genomic analysis was discussed, particularly in developing endoscopic capsules for non-invasive sampling of the small intestine. While surgical robotics platforms are already in use, the complexity of microbiome sampling presents additional challenges.
The limits of AI in relation to imaging were acknowledged, particularly in procedures like colonoscopy where there may be blind spots and uncertainty about the extent of visualisation. The potential for a robotic solution providing feedback on visualisation was suggested, but it is still in the experimental phase.
Highlights from the fifth webinar “AI for patient-centric personalisation of medicine”
A slide presentation of this webinar’s highlights was provided by Dorothy Keefe, CEO of Cancer Australia.
The European Patients’ Forum (EPF) recently published a survey on AI from the patient perspective, indicating that patients generally feel hopeful about AI’s potential but lack a strong understanding of how it works. Most participants recognised its positive impacts, such as: enhancing healthcare professionals’ ability to provide personalised care; improving and facilitating self-care and access to information; enhancing the quality and efficiency of diagnoses; supporting efficient delivery of care.
They also raised a number of concerns, such as: insufficient involvement of patients in co-designing AI solutions; lack of transparency regarding the functioning of AI systems; potential unauthorised use of AI training data or outputs; risks of incorrect diagnoses, errors, and ambiguity regarding liability in case of errors; and potential reduction in human interaction with healthcare professionals. The World Health Organization (WHO) has outlined six guiding principles to limit risks and maximise opportunities in AI application: protecting human autonomy; promoting human well-being, safety, and the public interest; ensuring transparency, explainability, and intelligibility; fostering responsibility and accountability; ensuring inclusiveness and equity; and promoting AI that is responsive and sustainable. Regional and local health authorities should play an important role in addressing these issues.
Michele Calabrò presented a best practice example from the Italian region of Puglia, highlighting the use of AI and machine learning in breath analysis for diagnosing oncologic diseases. Puglia recognised the potential of non-invasive and repeatable breath analysis as opposed to traditional diagnostic methods, along with the growing interest from the scientific community. They developed a novel approach to breath analysis by introducing an automatic sampler called the ‘Mistral’ and initiated research projects such as BALP (Breath Analysis in Lung and Pleural neoplasm) to create predictive models for lung cancer classification and to monitor patients with asbestos exposure and malignant pleural mesothelioma (MPM). Another project aimed to identify a distinct volatile organic compound (VOC) profile for MPM patients compared to healthy controls. The research utilised chromatographic peak areas and a machine learning framework with three classifiers to discriminate between MPM patients and healthy individuals. The results showed promising agreement between the model output and standard diagnostic exams. This case exemplifies how regional efforts across Europe are increasingly exploring machine learning applications in healthcare to enhance patient solutions.
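To illustrate the general shape of such a framework (not the published BALP pipeline), the sketch below treats chromatographic peak areas as features and compares three common classifiers by cross-validation to separate MPM samples from healthy controls; the data are synthetic and the classifier choice is an assumption.

```python
# Sketch of a breath-analysis classification framework: peak areas as features,
# three classifiers compared by cross-validation. Synthetic data, illustrative
# classifier choice -- not the published BALP pipeline.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Rows = breath samples, columns = peak areas of candidate VOCs.
X, y = make_classification(n_samples=120, n_features=25, n_informative=8, random_state=0)

classifiers = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "support_vector_machine": SVC(),
    "random_forest": RandomForestClassifier(n_estimators=300, random_state=0),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC = {scores.mean():.2f}")
```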
A report titled ‘The socio-economic impact of AI in healthcare,’ published in 2020 by Deloitte and MedTech Europe, highlights the various applications where AI can significantly benefit healthcare. Wearables, such as smartwatches, have the greatest impact, as they can now detect arrhythmia and assist in monitoring health, alerting users to possible anomalies and advising them to seek medical attention. AI also plays a crucial role in glucose meters, enabling patients to manage their blood sugar levels and improve their quality of life without frequent doctor visits. Imaging is another significant area where AI excels, as advanced software can interpret medical images and detect cancer at an early stage, rivalling the accuracy of trained radiologists. AI applications in labs accelerate sample delivery, analysis, and provide insights that aid in finding cures. Additionally, AI contributes to research and innovation by analysing extensive datasets and identifying anomalies. By removing barriers and implementing AI technologies in healthcare, the report suggests potential annual savings of 400,000 lives, equivalent to a city’s population, along with €200 billion, over 10% of European healthcare expenditure. Furthermore, it would free up 1.8 billion hours of healthcare professionals’ time, allowing them to provide essential human touch when needed. The goal is to improve healthcare access, accuracy, and efficiency while addressing the existing healthcare workforce crisis.
Legal and ethical challenges from the patient’s perspective include data ownership, both at the individual and at the government level, and reimbursement schemes. Patients and healthcare professionals should be involved in AI initiatives, and governments and hospitals should share data to advance AI innovation. Governments and institutions need to work together to update existing procedures and systems to make data collection and sharing easier. The European Health Data Space aims to promote responsible use of data for apps, researchers, and innovation. Results obtained from patients’ data are relevant to innovation and to better care.
Healthcare professionals should address concerns about data sharing, as it could be used to discriminate against patients. Data sharing should comply with the General Data Protection Regulation (GDPR), and companies licensed to operate in the healthcare field will comply with relevant privacy laws. Healthcare professionals can build trust and convince patients to share their data, but they must be given the right skills and competencies to understand the benefits and potential risks. Raising awareness about AI in medicine is crucial to address patient concerns. The ethical and legal implications of misdiagnosis using AI are difficult to answer, but transparency in the decision-making process is important.
AI will not replace healthcare professionals but assist them in verifying diagnoses and findings. The fear of AI becoming self-improving and out of control is far-fetched, and medical device makers can be held liable if they knowingly bring defective products to market.
Highlights from the sixth webinar “Artificial Intelligence for Specific Tumours”
Giancarlo Pruneri, IRCCS National Cancer Institute, Milan, Italy, presented the highlights from this webinar. The panel was made up of expert pathologists. Each presentation focussed on different aspects, but there were also some commonalities. One of the main focal points highlighted was the significant challenge of establishing the required infrastructure for the implementation of digital pathology. The adoption of digital pathology varies considerably among different countries and centres. One success story, presented by Prof. Eloy, comes from a centre in Sicily, the Gravina Hospital in Caltagirone: a small hospital that achieved complete digitisation of its department, despite various hurdles such as infrastructure, costs, and the acquisition of expertise.
Maintaining quality control in pathology laboratory processes is crucial for accurate patient diagnoses. While the adoption of a quality management system is not universally mandated for transitioning to a digital pathology workflow, laboratories with robust quality management systems may find the transition easier as they already have established critical checkpoints in their analogue workflow. To assist laboratories unfamiliar with quality control programs, the European Society of Digital and Integrative Pathology provides suggested checkpoints. It is essential to develop a well-structured workflow that promotes collaboration among lab technicians, pathologists, and the entire hospital. Many software options are available to pathologists, pointed out Prof. Kather in his presentation, providing them with the opportunity to obtain more dependable information from slides. However, implementing the use of such software presents its own challenges. While the software can be acquired from companies, integrating it effectively into the existing workflow becomes crucial for result validation within the specific laboratory environment. Thus, the process involves not only obtaining the software but also validating its performance within the laboratory’s unique workflow.
Another common topic of the webinar was the intersection of artificial intelligence (AI) and digital pathology, which serves as a valuable tool for sharing slides among pathologists. Slide sharing is a common practice in daily operations and helps save valuable diagnostic time, particularly in a world facing economic constraints and personnel limitations. The shortage of medical professionals affects various specialisations, but pathologists are particularly affected. Therefore, having a system that enables time-saving and remote slide sharing becomes exceedingly important in addressing these challenges. The Covid-19 pandemic highlighted the significance of utilising digital images to bridge the gap caused by physical distance and maintain diagnostic quality when accessing hospitals was challenging. While that situation was unique, the need for remote slide sharing is pertinent to all pathologists.
The ability to share slides remotely enables discussions and facilitates second opinions on specific cases. It is crucial to acknowledge that artificial intelligence plays a valuable role in assisting pathologists with their tasks. AI and digital pathology can sometimes even outperform pathologists in those activities which are related to screening. For instance, in the case of lymph node screening, particularly in breast cancer, where accurate scoring for the presence of metastatic lymph nodes, including isolated tumour cells, is crucial, the utilisation of software capable of providing more reliable results proves highly beneficial. Prof. Eloy presented a paper on this topic (https://doi.org/10.1093/ajcp/aqaa151), which focussed on the application of artificial intelligence to enhance the accuracy of histological classification in breast lesions. The study specifically examined small biopsies and employed two distinct approaches, resulting in a classification system consisting of four classes: normal, benign, carcinoma in situ, and invasive carcinoma. Notably, the results demonstrated that the AI approach was comparable, if not superior, to pathologists in classifying these tumours accurately.
Professor Laurinavicius presented research highlighting the potential of artificial intelligence in capturing intratumoral heterogeneity, particularly in the context of the Ki67 biomarker. Ki67 is a biomarker that has been the subject of much debate among pathologists due to its heterogeneous distribution within tissue slides, leading to low concordance in its interpretation. In his study, Professor Laurinavicius demonstrated that leveraging artificial intelligence can enable a more reliable assessment of the heterogeneous distribution of Ki67. This is of significant importance because Ki67 serves as a clinically relevant biomarker, as evidenced by its application in conditions like aggressive ER+ breast cancer, where it influences the administration of CDK4/6 inhibitors.
The utilisation of AI for predicting tumour biology from H&E slides has garnered significant attention and has been published in major journals. The most critical aspect of these AI-based approaches is their ability to surpass the limitations of our current tools. An extremely interesting aspect is that AI can infer gene alterations and other genetic characteristics of a cancer based only on H&E slides, without DNA analysis. Another notable example is the use of pan-cancer analysis with artificial intelligence to obtain a more dependable assessment of intratumoral heterogeneity. It is well-established that intratumoral heterogeneity is associated with treatment resistance, particularly in personalised therapies. Therefore, gaining insights into a tumour’s propensity for intratumoral heterogeneity can hold significant clinical utility.
Artificial intelligence tools offer the potential to predict the PD-L1 status in breast cancer by analysing H&E-stained histopathological slides. Furthermore, the use of AI can also be applied to predict the EGFR status in breast cancer, as highlighted in two separate papers discussed by Prof. Eloy during the webinar. Notably, an influential paper by Prof. Kather and his group, demonstrated the efficacy of deep learning in H&E slides to directly predict microsatellite instability in gastrointestinal cancer (https://doi.org/10.1038/s41591-019-0462-y). This ground-breaking research showcased the ability to extract valuable genetic information solely from histological analysis. Additionally, Prof. Kather delved into the extension of these capabilities to other genetic alterations, referencing a paper he and his team published in Nature Cancer (https://doi.org/10.1038/s43018-020-0087-6). This study illustrated how artificial intelligence reliably predicted the occurrence of complex genomic alterations.
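The tile-based deep learning approach that underlies this kind of work can be sketched as follows: a standard convolutional network is fine-tuned on image tiles that inherit a slide-level label (for example MSI-high versus MSS), and tile predictions are aggregated per slide. The example below uses PyTorch with random tensors in place of real H&E tiles and is a generic illustration, not the published method.

```python
# Generic sketch of tile-level classification for H&E-based genotype prediction.
# Random tensors stand in for real tiles; torch and torchvision are assumed
# installed. This is an illustration, not the published approach.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=None)          # in practice, pretrained weights are used
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes, e.g. MSI-high vs MSS

optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

tiles = torch.randn(8, 3, 224, 224)    # a mini-batch of H&E tiles (placeholder data)
labels = torch.randint(0, 2, (8,))     # tile labels inherited from the slide-level status

model.train()
for _ in range(3):                     # a few illustrative optimisation steps
    optimiser.zero_grad()
    loss = criterion(model(tiles), labels)
    loss.backward()
    optimiser.step()

# At inference, tile-level predictions are typically aggregated (e.g. averaged) per slide.
```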
To conclude, Dr Pruneri emphasised that while the discussed approaches utilising artificial intelligence in histopathological analysis show promise, he does not perceive them as replacements for genomic or other comprehensive analyses in the very near future. The sensitivity and specificity of these methods are not yet at a perfect 100%. However, there is potential for their utilisation in low-income countries to aid in determining which patients should undergo genomic analysis. The IRCCS INT National Cancer Institute in Milan is currently undertaking a study using AI on H&E-stained breast cancer samples to predict risk scores, assisting in identifying patients who would benefit from Oncotype testing. This approach holds significance as Oncotype testing incurs costs, and by fine-tuning the population through AI-based predictions, we can better define individuals who require specific genomic studies.
Could artificial intelligence push the boundaries of what we can achieve in our daily clinical work? The key lies in finding the most effective way to leverage this technology. A possible avenue is to integrate various data sources such as proteomics, genomics, clinical metadata, and spatial transcriptomics. In prognostication, for example, combining multiple data points can provide a more comprehensive understanding of a patient’s prognosis. With a robust prognostic model in place, we can use this information to guide treatment decisions, such as de-escalating chemotherapy in specific scenarios. However, to fully unlock the potential of these approaches, it is essential to build robust databases and foster data sharing to advance this collaborative approach further.
Round table on the discussed topics
The discussion revolved around the application of AI in tumour analysis, with a particular focus on its potential in genomics, exceptional responders, and digital tumour boards. One notable aspect discussed was the use of AI in genomics to enhance diagnosis. The example of a patient misdiagnosed with neuroendocrine carcinoma, whose comprehensive genomic analysis revealed breast cancer, highlighted how AI could aid in refining diagnoses by identifying specific mutational profiles. This demonstrates the value of AI in leveraging genomics to improve accuracy and potentially guide treatment decisions.
Another important topic addressed was exceptional responders, individuals who exhibit a remarkably positive response to treatment. The potential of AI in identifying the factors contributing to exceptional response and predicting such outcomes was emphasised. By collecting comprehensive data and utilising AI algorithms, researchers can gain insights into the characteristics of exceptional responders, enabling the development of more tailored treatment strategies.
The benefits of digital tumour boards were also discussed, highlighting their role in facilitating virtual collaboration and multidisciplinary discussions among healthcare professionals. The establishment of virtual molecular tumour boards was cited as an example of how these platforms can enhance collective decision-making, knowledge sharing, and remote consultations. Such digital platforms enable efficient information exchange, promote collaboration among experts from different locations, and contribute to improved patient care and the advancement of telemedicine.
Highlights from the seventh webinar “AI in Managing Clinical Data and Trials”
The last speaker, who summarised the seventh webinar, was Prof. Claudio Luchini, Associate Professor of Pathology and Surgical Pathologist at the University and Hospital Trust of Verona, Italy. This webinar featured a diverse panel of experts spanning various fields related to oncology, including pathology, surgery, and clinical perspectives. This diverse representation provided a broad and comprehensive outlook on the subject matter.
One key concept that emerged in relation to artificial intelligence and the management of clinical data in trials is “acceleration.” AI has the potential to accelerate the development and discovery of actionable alterations in cancer, as well as the development of tailored drugs. It plays a crucial role in identifying potential targets for drugs and aiding in the prediction of drug efficacy and resistance. By harnessing the power of AI, researchers and clinicians can expedite the identification of relevant alterations in cancer genomes, leading to the development of targeted therapies that are more effective and precise. Additionally, AI algorithms can analyse vast amounts of data to predict a drug’s potential effectiveness and identify patients who are more likely to respond positively or develop resistance to specific treatments. AI has the potential to assist in various aspects of cancer research and management, particularly when it comes to image models and conventional tumour types. While it may be premature to discuss AI models specifically for rare diseases or rare cancers at this stage, progress has been made with in silico models, as well as other experimental models like mice and organoids. By applying AI-based systems, we can accelerate our comprehension of complex molecular interactions within tumours, the tumour microenvironment, and the interplay between tumours and surrounding cells. AI can help unravel complex molecular and metabolic interactions and predict protein expression and structures. Furthermore, AI enables the integration of data from genomics, transcriptomics, proteomics, metabolomics, etc., which represents the next level of biological analysis. This integration of multi-omics data can expedite the advancement of personalised medicine, tailoring treatment strategies based on individual patient profiles.
In terms of approved artificial intelligence-based devices in oncology and related fields, the current landscape primarily focuses on cancer radiology and pathology. These branches have seen significant advancements and benefits from AI-based devices, particularly in common tumour types such as breast and lung cancer. However, the scope of AI applications should not be limited to these, and efforts should be made to include rare tumours as well. Radiology and pathology play pivotal roles in this domain due to their close association with imaging. Images can be segmented, allowing for translation into numerical data. AI-based devices can effectively manage and analyse this data.
Undoubtedly, AI is revolutionising medical imaging, as highlighted by Prof. Fishman during the webinar. Deep learning and radiomics are two prominent methodologies currently employed in radiology to leverage the power of AI. While deep learning employs a layered set of self-correcting algorithms to create a mathematical model that best fits the data, radiomics converts imaging data into mineable features, including signal intensity, shape, texture, and higher-order (grey-level) features. Both methodologies hold promising potential not only for improving disease detection but also for tumour characterisation and prognostication.
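To make the radiomics idea concrete, the sketch below computes a handful of first-order intensity features over a tumour region of interest. The image and mask are synthetic stand-ins, and real radiomics pipelines extract far richer shape and texture descriptors; this is only a rough illustration of turning pixels into mineable numbers.

```python
# Minimal sketch of hand-crafted ("radiomic") first-order features from a region of interest.
# The array below is random and stands in for a real CT/MRI slice.
import numpy as np
from scipy import stats

def first_order_features(image: np.ndarray, mask: np.ndarray) -> dict:
    """Compute simple intensity statistics over the masked tumour region."""
    voxels = image[mask > 0].astype(float)
    hist, _ = np.histogram(voxels, bins=32)
    p = hist[hist > 0] / hist.sum()                      # grey-level histogram probabilities
    return {
        "mean": voxels.mean(),
        "std": voxels.std(),
        "skewness": stats.skew(voxels),
        "kurtosis": stats.kurtosis(voxels),
        "entropy": float(-(p * np.log2(p)).sum()),       # histogram entropy
    }

# Illustrative usage with a synthetic image and a circular mask
rng = np.random.default_rng(0)
image = rng.normal(100, 20, size=(128, 128))
yy, xx = np.mgrid[:128, :128]
mask = ((yy - 64) ** 2 + (xx - 64) ** 2) < 30 ** 2
print(first_order_features(image, mask))
```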
As a pathologist, Prof. Luchini recognises the significance of starting a diagnostic workflow with haematoxylin-eosin slides, observed under a light microscope. However, with the adoption of AI-based systems and digital pathology, we can extract a wealth of information from a simple slide. These systems enable us to analyse numerous features, including cell count, cell morphology, tumour cell count, stromal cell count, and even the identification of different cell types, such as immune cells and tumour cells. Furthermore, AI facilitates the study of interactions between various cell populations within a selected area. By incorporating AI-based systems into our approach, we can benefit greatly from the wealth of information derived from pathology slides. When discussing pathology, it is important to consider not only haematoxylin-eosin staining but also other techniques like immunohistochemistry and immunofluorescence. AI-based systems have the ability to integrate information from these various analyses onto a single slide. By extracting and translating numerical data from these slides, we can obtain reliable insights.
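As a simplified illustration of the kind of quantitative read-out such systems can produce, the sketch below uses classical image processing (not a trained AI model) to count nucleus-like objects in a synthetic greyscale tile and report their mean area; thresholds, sizes, and the synthetic tile itself are illustrative assumptions.

```python
# Minimal sketch of extracting simple cell-level measurements from a digitised tile.
# A synthetic image with dark blobs stands in for a real H&E slide tile.
import numpy as np
from skimage import filters, measure, morphology

def count_nuclei(gray_tile: np.ndarray) -> tuple:
    """Count dark nucleus-like blobs and report their mean area in pixels."""
    threshold = filters.threshold_otsu(gray_tile)
    nuclei_mask = gray_tile < threshold                          # nuclei are darker than stroma
    nuclei_mask = morphology.remove_small_objects(nuclei_mask, min_size=20)
    regions = measure.regionprops(measure.label(nuclei_mask))
    areas = [r.area for r in regions]
    return len(regions), float(np.mean(areas)) if areas else 0.0

# Illustrative usage on a synthetic tile: dark blobs on a bright background
rng = np.random.default_rng(1)
tile = np.full((256, 256), 0.9) + rng.normal(0, 0.02, (256, 256))
yy, xx = np.mgrid[:256, :256]
for cy, cx in rng.integers(20, 236, size=(40, 2)):
    tile[((yy - cy) ** 2 + (xx - cx) ** 2) < 25] = 0.3           # draw a "nucleus"
n, mean_area = count_nuclei(tile)
print(f"Nuclei detected: {n}, mean area: {mean_area:.1f} px")
```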
This integration of data from different techniques opens up new possibilities for research, diagnostic workflows, and clinical trials. In the realm of clinical trials, AI can greatly enhance every phase, starting from patient selection. The use of AI allows for modelling and conducting in silico trials, which have proven highly valuable. Longitudinal monitoring over different time periods becomes feasible, enabling the adaptation of trial designs based on patient responses. Real-time data extraction facilitates predictive modelling, presenting an exciting opportunity in this landscape.
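As a toy example of the in silico, adaptive idea, the sketch below simulates a two-arm trial in which allocation adapts to observed responses via Beta posteriors (Thompson sampling). The “true” response rates are invented solely for the simulation and are not drawn from any trial discussed in the webinar.

```python
# Minimal in silico sketch of an adaptive two-arm trial: allocation probabilities
# are updated from observed responses using Beta posteriors (Thompson sampling).
import numpy as np

rng = np.random.default_rng(7)
true_rates = {"control": 0.30, "experimental": 0.45}  # assumed, for simulation only
successes = {arm: 1 for arm in true_rates}            # Beta(1, 1) priors
failures = {arm: 1 for arm in true_rates}
allocations = {arm: 0 for arm in true_rates}

for _ in range(300):                                  # 300 virtual patients
    # Sample a plausible response rate for each arm, then assign the patient to the best draw
    samples = {arm: rng.beta(successes[arm], failures[arm]) for arm in true_rates}
    arm = max(samples, key=samples.get)
    allocations[arm] += 1
    if rng.random() < true_rates[arm]:                # simulate the patient's response
        successes[arm] += 1
    else:
        failures[arm] += 1

print("Final allocation:", allocations)
print("Posterior mean response:",
      {a: round(successes[a] / (successes[a] + failures[a]), 2) for a in true_rates})
```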
AI can be very effective in devising trials and defining the different phases. In Phase I, AI-based systems can assist in tasks such as dose estimation, identification of side effects, and identification of biomarkers. In Phase II, AI can aid in patient stratification, refining selection criteria, and identifying trends and patterns. In Phase III, AI can help predict the success of trials by analysing longitudinal patient data and integrating it with statistical analysis. In Phase IV, the response of patients can be comprehensively analysed.
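As one concrete, hypothetical illustration of the Phase II stratification task, the sketch below clusters patients on synthetic baseline biomarker profiles and compares response rates across the resulting strata; the biomarker panel, cluster count, and response labels are all assumptions for the example.

```python
# Minimal sketch of patient stratification: cluster patients on baseline biomarker
# profiles and inspect the observed response rate within each stratum.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
biomarkers = rng.normal(size=(150, 10))     # baseline biomarker panel per patient (synthetic)
response = rng.integers(0, 2, size=150)     # observed response yes/no (synthetic)

features = StandardScaler().fit_transform(biomarkers)
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

for c in range(3):
    rate = response[clusters == c].mean()
    print(f"Stratum {c}: n={np.sum(clusters == c)}, response rate={rate:.2f}")
```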
The challenges of AI in oncology encompass various disciplines within the field and represent the future prospects of AI in this domain. The key concepts at the forefront are integration, standardisation, and personalisation. It is crucial to undertake certain steps, some of which have already been initiated, to pave the way for advancements. These steps include: establishing multidisciplinary platforms; studying all neoplasms, including rare tumours; providing continuous support to ensure the growth of AI; and fostering the development of future diagnostic, prognostic, and predictive biomarkers with novel machine learning approaches, such as those based on the tumour microenvironment and spatial phenotypes.
Round table on the discussed topics
In the discussion, the experts highlighted key points regarding the implementation of AI in oncology. The first step discussed was the practical aspect of digitalising pathology workflows, such as acquiring new scanners and addressing cost-related issues associated with large datasets and storage. They emphasised the need to overcome these challenges in order to progress further in the field. The complexities of personalised monitoring and adaptive treatment were also acknowledged, as these require specific authorisation for each drug and patient type, involving health and law regulators. The panel recognised the intricate nature of this issue and the need for further exploration and discussion. The panellists concluded with a reflection on the fascinating nature of the topic and the potential for future seminars to delve deeper into the subject. The importance of humanising the AI space and recognising it as augmented human intelligence was emphasised, with the hope that this mindset would facilitate the integration and adoption of AI in oncology, leading to its full range of benefits.
Closing remarks
Dr. Kalogeropoulos summarised the key points discussed during the symposium. The recent advances in biotechnology and science, coupled with new care models, present revolutionary opportunities in healthcare. AI offers incredible opportunities and new roads to expand access to care. This goal must be pursued without compromising patient safety, while seizing the opportunity to fulfil the promise of better patient outcomes with equity and inclusion. AI can transform pathology laboratories, enhance patient-centric approaches through telehealth and tumour boards, and enable the development of new treatments and insights into disease processes. To this end, we need more and better evidence. But AI can help with that too: it can accelerate processes and reduce workload in terms of hours and documentation. This is why studying AI uses and results is of paramount importance, in cancer and other fields, to encourage adoption and to explore new modalities that better interface with care processes.
Dr. Kalogeropoulos then expressed gratitude to the SPCC and all participants. The SPCC Online Symposium on Artificial Intelligence in Cancer Care set out to highlight the importance of continued engagement and collaboration, and it has done a magnificent job of exploring advances and setting the tone for future contributions.
Artificial Intelligence in Cancer Care Educational Project
The Symposium consisted of a virtual round table with take-home-message presentations discussed by faculty and participants. It summarised the main topics of the webinars (one for each area) with a special focus on multidisciplinary interaction.
Available on demand via: https://www.oncocorner.net/online-events/52.
Click here to read the report of the first webinar.
Click here to read the report of the second webinar.
Click here to read the report of the third webinar.
Click here to read the report of the fourth webinar.
Click here to read the report of the fifth webinar.