As the Austrian writer Karl Kraus observed at the turn of the twentieth century, the development of technology would leave only one problem: the infirmity of human nature. That ‘remaining problem’ was the focus of an article published last December, in which pioneering AI scientist Fei-Fei Li was asked about the significance of the year just ended: “I would call this an inflexion moment,” she said. “2023 is, in history, hopefully going to be remembered for the profound changes of the technology as well as the public awakening.” It was, she added, a year for people to figure out “what this is, how to use it, what’s the impact – all the good, the bad and the ugly.”
Now that the initial excitement has subsided, society’s approach to AI seems to be developing in line with ‘Amara’s Law’, coined by the futurist Roy Amara to describe human behaviour towards emerging technologies: we overestimate their short-term impact and underestimate their long-term effects.
The dawn of artificial intelligence
AI has been studied for decades and remains one of the most perplexing subjects in computer science. The concept of using computers to simulate intelligent behaviour and critical thinking was first described by Alan Turing in 1950: “Can machines think?” is the opening question of ‘Computing Machinery and Intelligence’, the article he wrote that year for the journal Mind. In it, he described a simple test, which later became known as the ‘Turing test’, to determine whether computers were capable of human-like intelligence.
In 1955, John McCarthy – then assistant professor of mathematics at Dartmouth College – first used the term ‘artificial intelligence’ to describe the field of research focused on “the science and engineering of making intelligent machines”. The following year, he co-hosted a summer research project on AI at Dartmouth College in New Hampshire.
Several limitations in early models prevented widespread acceptance and application of this new science to medicine. This early period, however, was an important time for digitising data, which later served as the foundation for future growth and utilisation of AI. In the early 2000s, many of these limitations were overcome by the development of deep learning.
The principles of AI
The field broadly known as AI comprises various technologies such as deep learning, machine learning, natural language processing, computer vision, neural networks, and rule-based systems.
- Machine learning: a subfield of AI that uses algorithms trained on data sets to create self-learning models capable of predicting outcomes and classifying information without human intervention (a brief code sketch follows this list)
- Deep learning: a subset of machine learning composed of multi-layer neural networks, which enable machines to learn and make decisions on their own
- Natural language processing: enables computers to extract data from human language and make decisions based on that information
- Computer vision: enables a computer to gain information and understanding from a series of images or videos
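To make these concepts concrete, here is a minimal sketch of the machine-learning workflow in its simplest form: a model learns from labelled examples and is then used to classify unseen cases. The choice of library (the open-source scikit-learn), dataset and model is ours, for illustration only, and does not correspond to any clinical tool discussed in this article.

```python
# Minimal sketch: train a classifier to label tumours as malignant or benign.
# Uses scikit-learn's bundled Wisconsin breast-cancer dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Load 569 tumour samples, each described by 30 numeric features
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=42)

# 'Self-learning': the model infers decision rules from labelled examples
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# Predict labels for unseen cases and measure agreement with ground truth
print(f"Held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```

The same learn-from-examples pattern, at far larger scale and with far richer data, underlies the diagnostic and screening applications described below.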
Current applications of AI-based devices in cancer
As of May 13, 2024, the US regulatory body, the FDA, had authorised 882 AI/ML-enabled medical devices, the majority of them in the radiology space.
AI is revolutionising the field of cancer care, quickly becoming an indispensable tool in areas such as cancer screening and detection, prediction of treatment outcomes, drug discovery, and personalised medicine.
In the diagnostic space, AI has a role in the detection, characterisation and staging of cancers, and in interpreting information from different types of data, for example by linking radiomic information, extracted from imaging, to data from other omics fields. Radiomics, which can capture the phenotypic information of a tumour, has shown promise in differentiating benign from malignant tumours and in predicting treatment responses. AI algorithms can improve the accuracy of the predictive models used for diagnosis and for the evaluation of treatment responses. Radiomics based on the extraction of CT features has been shown, for instance, to predict response to immunotherapy in solid tumours, including non-small-cell lung cancer and melanoma, offering a potentially more specific and less invasive alternative to PD-L1 as a predictive biomarker.
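As a schematic illustration of how radiomic features can feed a predictive model, the sketch below computes a few simple first-order features from synthetic image patches, standing in for segmented CT tumour regions, and trains a classifier to predict response. All data, features and labels here are invented for illustration; real pipelines use dedicated feature-extraction software (e.g. the open-source pyradiomics package) and clinically validated endpoints.

```python
# Schematic radiomics workflow: image region -> numeric features -> prediction.
# Synthetic data stands in for segmented CT tumour regions; illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def radiomic_features(region):
    """Reduce a 2D intensity patch to simple first-order radiomic features."""
    return [region.mean(),                            # average attenuation
            region.std(),                             # intensity heterogeneity
            np.percentile(region, 90),                # high-intensity tail
            np.abs(np.diff(region, axis=0)).mean()]   # crude texture measure

# Simulate 100 tumour patches; 'responders' are drawn slightly more homogeneous
patches = [rng.normal(0, 1.0 if i % 2 else 1.5, (32, 32)) for i in range(100)]
labels = np.array([i % 2 for i in range(100)])  # 1 = responder (hypothetical)

X = np.array([radiomic_features(p) for p in patches])
clf = LogisticRegression().fit(X[:80], labels[:80])
print("Held-out accuracy:", clf.score(X[80:], labels[80:]))
```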
Computational pathology, which applies AI to the evaluation of digital whole-slide images, makes it possible to identify features that are difficult to detect by eye alone, and is now the state of the art in digital pathology.
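Whole-slide images are far too large to analyse in one pass, so computational pathology pipelines typically tile the slide and score each tile with a trained model. The sketch below shows that tiling-and-scoring pattern with an untrained toy network; it is a structural illustration only, assuming PyTorch, and does not represent any specific clinical system.

```python
# Schematic computational-pathology step: tile a whole-slide image and
# score each tile with a small CNN. Real pipelines use trained models and
# dedicated WSI readers (e.g. OpenSlide); this toy version is illustrative.
import torch
import torch.nn as nn

TILE = 256  # pixels per tile edge

# A toy tile classifier; a real model would be trained on annotated slides
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 1), nn.Sigmoid(),  # probability the tile looks suspicious
)

# Stand-in for one RGB slide region; real WSIs are orders of magnitude larger
slide = torch.rand(3, 1024, 1024)

scores = []
for y in range(0, slide.shape[1], TILE):
    for x in range(0, slide.shape[2], TILE):
        tile = slide[:, y:y+TILE, x:x+TILE].unsqueeze(0)  # add batch dimension
        with torch.no_grad():
            scores.append(model(tile).item())

# Tiles flagged for review, e.g. for features hard to detect by eye alone
print(f"{sum(s > 0.5 for s in scores)} of {len(scores)} tiles flagged")
```

Real systems train such models on thousands of annotated slides and aggregate tile scores into slide-level findings.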
In clinical decision-making, AI can enhance human capabilities by incorporating increasingly complex knowledge into the tailoring of personalised therapies, accelerating both the interpretation of diverse and complex data and its application to individual treatment. For example, the predictive and prognostic ArteraAI Prostate Test has been included in the US National Comprehensive Cancer Network’s Clinical Practice Guidelines for Prostate Cancer, making it the first AI-enabled tool of its kind to be recommended for localised prostate cancer.
In the screening setting, AI has a role to play in interpreting images obtained for early detection of colon, breast and lung cancer.
In breast cancer, a study published by Rodriguez-Ruiz et al. found that the performance of an AI system for cancer detection in digital mammograms and digital breast tomosynthesis was statistically non-inferior to the average of 101 radiologists. However, there is limited evidence on how AI performs in real-world screening settings.
In colorectal cancer, AI has been shown not only to help detect polyps and precancerous lesions, but also to distinguish these from early-stage or advanced cancers. In 2021, the FDA authorised marketing of the GI Genius tool – the first device that uses AI to aid the detection of lesions during colonoscopy.
In lung cancer, Google AI researchers presented results obtained using a deep learning algorithm which detected and predicted malignancies in 8,000 cases, outperforming non-thoracic radiologists.
Promises and pitfalls of AI
Concerns and challenges relating to the development of AI were addressed at the 2024 World Economic Forum in Davos. These included the potential for AI algorithms to exacerbate biases in healthcare. This can happen when the data used to train algorithms reflects and amplifies the inequalities existing in healthcare and society today, such as disparities in treatment based on race, gender or socioeconomic status. Concerns were also raised over the protection of patient data.
A major limitation to the broad application of AI is the requirement for diverse and inclusive data sets for training. To prevent or minimise bias from being introduced into AI algorithms, training data sets and clinical endpoints must be inclusive of the underrepresented cohorts and healthcare settings they are intended to serve.
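One practical consequence is that a model’s performance should be audited separately for each cohort the tool is meant to serve. The hypothetical sketch below shows the basic pattern: compare accuracy across subgroups and treat a large gap as a warning sign. The data and subgroup labels are invented for illustration, and pandas is assumed.

```python
# Schematic subgroup audit: compare model accuracy across patient cohorts.
# Synthetic predictions; column names and labels are illustrative assumptions.
import pandas as pd

results = pd.DataFrame({
    "subgroup":  ["A", "A", "A", "B", "B", "B"],  # e.g. demographic cohorts
    "true":      [1, 0, 1, 1, 0, 1],              # ground-truth outcomes
    "predicted": [1, 0, 1, 0, 1, 1],              # model outputs
})

# A large accuracy gap between cohorts signals potential algorithmic bias
correct = results["true"] == results["predicted"]
print(correct.groupby(results["subgroup"]).mean())
```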
Stanford University has reported a massive increase in the number of countries with laws containing the term ‘AI’, growing from 25 in 2022 to 127 in 2023. While these individual jurisdictions advance with their own frameworks and approaches, multilateral efforts to coordinate them are also intensifying, be it through uptake of the AI principles of the Organisation for Economic Co-operation and Development or discussions in the United Nations and G7.
On March 13, 2024, the European Parliament adopted the Artificial Intelligence Act (AI Act). It is considered to be the world’s first comprehensive horizontal legal framework for AI.
A recent cross-sectional survey of US oncologists highlighted a number of ethical concerns about the use of AI in clinical practice. The results showed that, while few of the respondents believed that patients need to understand AI models, most agreed that patients should be asked to give consent to the use of AI in decisions relating to their care, and many said patients should be able to choose between physician- and AI-recommended treatment regimens.
These findings suggest that the implementation of AI in oncology must include rigorous assessments of its effect on care decisions as well as decisional responsibility.
In his message for the 57th World Day of Peace, 1 January 2024, Pope Francis stressed: “We have a duty to broaden our gaze and to direct techno-scientific research towards the pursuit of peace and the common good. Algorithms must not be allowed to determine how we understand human rights, to set aside the essential human values of compassion, mercy, and forgiveness.”
In a blog post on ‘The Age of AI’, Microsoft founder Bill Gates emphasised the need for AI algorithms to be tailored to the unique healthcare challenges faced by poorer countries. These models must address diseases that are more prevalent in these regions and accommodate different languages and logistical hurdles, such as patients living far from medical facilities. “People will need to see evidence that health AIs are beneficial overall,” he said, adding that “[they] have to be tested very carefully and properly regulated, which means it will take longer for them to be adopted than in other areas.”
Right now, we are at a defining moment: the challenge is to safely leverage the full potential of AI for the benefit of humanity.
*Mirjana Pavlova Bojadžiski, Ljubljana, Slovenia. Article selected as part of the ESCO Voices Contest, in collaboration with Cancerworld.