Despite the prominent global debate on the ethics of technology and the increasing need for technology assessment before approval, anticipatory approaches, especially in oncology, are still in their infancy.
In the age of artificial intelligence (AI), big data and machine learning, technology assessment is becoming a prerequisite for the approval of new technologies and devices by regulatory agencies. The explosion of applications based on emerging technologies in health and medicine, together with evidence of how biased these systems can be and of other potential negative impacts in many fields (including oncology), has led to a global debate on the ethics of technology.
How to develop, deploy, and implement new devices and applications in a fair, equitable and accountable way is a question that has yet to be answered. Many efforts are nevertheless underway; for instance, in April 2019 the European Commission (EC) published its Ethics Guidelines for Trustworthy AI. As this document establishes, the first component of trustworthy AI is lawful AI, which complies both with horizontally applicable rules and with domain-specific rules that apply to particular AI applications (for instance, the Medical Device Regulation in the healthcare sector).
The ethical assessment list provided in the guidelines includes ensuring that the AI system’s entire life cycle meets seven key requirements (human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability), setting realistic expectations, and involving stakeholders throughout the AI system’s life cycle, among other measures.
Prior to the recent EC guidelines, Philip A.E. Brey, Professor of Philosophy of Technology at the University of Twente (the Netherlands), had presented a new approach to the ethical study of emerging technologies, called anticipatory technology ethics (ATE). In an essay published in NanoEthics in 2012, Brey describes ATE as “the study of ethical issues at the R&D and introduction stage of technology development, through anticipation of possible future devices, applications, and social consequences”.
As the researcher describes, ATE analyses consist of three phases: (1) foresight analysis; (2) identification of ethical issues; and (3) evaluation of ethical issues. After these three stages, there are optional stages for further action (a design feedback stage, a responsibility assignment stage, a governance stage, and so on). “However, the ethics of emerging technology is in many ways still in its infancy, and more work is needed to develop more sophisticated approaches and to assess which approaches deliver the best results,” Brey wrote in the book The Ethics of Technology: Methods and Approaches (2017), edited by Sven Ove Hansson.
Indeed, while Brey’s ATE approach and the EC guidelines offer guidance for AI applications in general (and are also appropriate for other emerging technologies), they constitute just a first step, a foundation, and different situations raise different challenges. The EC document states that “It should therefore be explored whether, in addition to this horizontal framework, a sectorial approach is needed, given the context-specificity of AI systems”. That is the case for health and medicine, and more specifically for cancer.
Tech assessment in cancer
What are the consequences of introducing emerging technologies in oncology without a proper anticipatory assessment? How will the ethical evaluation of the future impact of technologies in cancer be managed? Ajay Aggarwal, a Consultant Clinical Oncologist at Guy’s & St Thomas’ NHS Foundation Trust and Honorary Senior Lecturer at King’s College London, answers the first question in a study published in Lancet Oncology in 2017.
Using prostate cancer surgery as a model, Aggarwal and his colleagues investigated the effect of patient mobility and hospital competition on service configuration and technology adoption in the National Health Service (NHS) in England. They assessed the reasons why men would choose to have prostate cancer surgery at a centre other than their nearest. “We found that in England men were attracted to centres that carried out robotic surgery and employed surgeons with a national reputation,” the researchers wrote.
Of the 65 radical prostatectomy centres open at the start of the study period (2010), 40% had a statistically significant net gain of patients and were established robotic centres. None of the 10 centres that closed their radical prostatectomy service during the study period was a robotic surgery centre, irrespective of their net losses of patients. The number of centres performing robotic surgery increased from 12 (18%) of the 65 centres at the beginning of 2010 to 39 (71%) of the 55 centres open at the end of 2014.
The researchers interpret these results as a consequence of “large-scale investment in equipment for robotic surgery without evidence of superior outcomes”. This investment was encouraged by competitive factors, policies advocating centralisation, and the requirement to perform a minimum number of surgical procedures, all of which have contributed to the closure of cancer surgery units. The study warns that “If quality performance and outcome indicators are not available to guide patient choice, these policies could threaten health services’ ability to deliver equitable and affordable cancer care”.
In the same vein, Alan Fraser – chair of the Regulatory Affairs and Medical Devices Task Force of the Biomedical Alliance in Europe – asserts that there are huge variations in the performance of medical devices across studies. “A small software change can make a huge variation and have huge clinical implications,” the Cardiff University professor said at the EC JRC Humaint Workshop on the Impact of AI in Healthcare.
For that reason, Europe will apply new rules and regulatory requirements that treat software as a medical device under the EU Medical Devices Regulation (EU MDR), with a mandatory compliance date of 26 May 2020. The EU MDR is aligned with the International Medical Device Regulators Forum (IMDRF) 2019 recommendations. Recently, the FDA also released a Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD). Fraser synthesises the FDA’s suggested pathway: responsible data selection and management; clinical testing for safety and effectiveness; and – the key point in the expert’s view – real-world testing in a separate population to evaluate clinical performance. “There are human lives at stake if the algorithm goes wrong. The manufacturer has the responsibility to monitor the software and also to re-evaluate it if it changes,” he asserts.
Fraser points out, however, that AI may help with clinical decision-making and diagnostic support, may provide objective, consistent, reliable and validated outputs, and may serve as a research tool, among other uses. He also notes the importance of considering the limits and risks of AI-related technologies, such as the risk of finding chance associations in large databases, an increasing number of false positives, or interpretability issues.
The limits of regulation
One of the least understood challenges in reaping the benefits of AI within health and medicine is that its most effective methods are anathema to medical regulation. The FDA, EMA, and other regulators rightly require that medicines and devices produce consistent results, given the same set of circumstances, and that any deviation from these results can be explained and understood. However, this is impossible to achieve with the most cutting-edge machine learning techniques, such as deep learning. Here, the algorithm is essentially a black box: no human is able to explain why it gives any particular result, or why its results might differ from one set of circumstances to another.
Because this lack of explainability is unacceptable to regulators, clinicians, and patients, the consequence is that medicine will not be able to benefit fully from AI. Fortunately, efforts are already underway to address this by creating methods that explain deep learning algorithms. These efforts are in their relative infancy, and their development should involve medical practitioners to ensure the appropriate degree of explainability. It should also be encouraged and supported by regulators and patients, as part of a proper ATE.
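One family of such methods is model-agnostic: rather than opening the black box, it probes the model from the outside by perturbing inputs and watching the outputs. A minimal sketch of one such technique, permutation importance, is below; the "model" and its features are invented for illustration and stand in for any opaque predictor.

```python
import numpy as np

def predict(X):
    # Stand-in for an opaque model: the caller only sees inputs and outputs.
    # This fictitious "risk score" truly depends on features 0 and 1 only.
    return 3.0 * X[:, 0] + 1.0 * X[:, 1]

def permutation_importance(predict_fn, X, y, n_repeats=20, seed=0):
    """Error increase when each feature column is shuffled:
    a bigger increase means the model relies more on that feature."""
    rng = np.random.default_rng(seed)
    base_error = np.mean((predict_fn(X) - y) ** 2)
    importances = []
    for j in range(X.shape[1]):
        errors = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the link between feature j and the target
            errors.append(np.mean((predict_fn(Xp) - y) ** 2))
        importances.append(np.mean(errors) - base_error)
    return np.array(importances)

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))   # three features; the third is pure noise
y = predict(X)                  # labels taken from the "black box" itself
imp = permutation_importance(predict, X, y)
```

Here `imp` ranks the three features without ever inspecting the model's internals: the noise feature scores near zero. Whether this coarse, post-hoc notion of explanation is enough for a clinical setting is exactly the question regulators and practitioners still need to settle.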
Another big challenge, which has become particularly evident in oncology, relates to what the EC document calls “enabling a realistic expectation setting”. An example of how badly this can go is the IBM Watson for Oncology tool, as explained in the article How IBM Watson Overpromised and Underdelivered on AI Health Care, published in IEEE Spectrum. The article mentions an excoriating report on how Watson for Oncology provided useless and sometimes dangerous recommendations (IBM contests these allegations). It also shows how the MD Anderson project failed dramatically: a 2016 audit by the University of Texas found that the cancer centre had spent $62 million on the project before cancelling it.
“A deeper look at these two projects reveals a fundamental mismatch between the promise of machine learning and the reality of medical care—between ‘real AI’ and the requirements of a functional product for today’s doctors,” the article says. Was it worth trying? In the opinion of Naila Siddiqi Kamal, a senior NHS clinician working in gynaecological cancer, medical education, patient safety and healthcare informatics, we need to give emerging technologies a chance while remaining vigilant to pick up any concerns. “This is a similar situation as when a clinician in an emergency has to make a clinical decision when the patient is unable to give consent. If we wait, we lose an opportunity to break through real health challenges. If we plough through, we risk leaving a legacy of unforeseen challenges. Hence it has to be a balance: a careful and cautious approach, with a regulatory and safety body evaluating as we make our way forward,” she argues.
How should this tech assessment be managed? In Siddiqi Kamal’s view, there are two aspects: one comes from the engineering and information and communication technology (ICT) disciplines, regarding quality assurance; the other comes from the technology’s unique use in healthcare and its implications for clinical outcomes. “We need regulations from both disciplines under the umbrella of Health Informatics: a regulatory body which structures a process for applications for the safe use of any tech device or solution, looks at its technical QA, then its clinical use, its reliability and validity in predicting outcomes, the ethics of its implications, etc.,” the expert says.
For Enrique Terol – a Doctor of Medicine in charge of the implementation of the European Reference Networks (ERNs) – ethical considerations and approaches are fundamental in cancer, where the right diagnosis and treatment decision can make a huge difference to outcomes. The ERNs’ goal is to address the need to deliver highly specialised care for complex or rare diseases and conditions – like rare cancers – that require very specific treatment and a concentration of knowledge and resources, following the motto ‘Let the knowledge travel rather than the patient’.
Terol says that the ERNs are enthusiastic about developments in the field of AI and about their applicability to rare diseases. However, he warns that emerging technologies must be evidence-based, safe, effective, efficient, and sustainable before they can be incorporated into the clinical process. “ERNs are looking forward to including these tools, but only once tested by the scientific community and approved by the regulatory authorities,” he says. He also thinks that the final clinical decision should always be human, “as only human beings have this capacity”.
The HTA approach
Is technology assessment in oncology being managed according to experts’ recommendations? What are patients’ needs and demands? The European Cancer Patient Coalition (ECPC) has been advocating for a harmonised Health Technology Assessment in the EU since 2014. When the EC published its proposal for future EU cooperation on Health Technology Assessment (HTA) in January 2018, the ECPC urged the European Parliament and the Member States to adopt it, in order to harmonise patient organisations’ involvement in HTA.
“Under the proposal, Europeans with cancer will benefit from increased cooperation, reduced delays, added transparency and better patient involvement in the decision-making process. By avoiding unnecessary duplication of efforts, mandatory Joint Clinical Assessments will remove the risk of diverging results and outcomes of national assessments, thus minimising the delays in access to new treatments,” the ECPC says in a statement.
“Cancer patients across Europe are currently faced with disparities and delays in access to innovative treatments, largely due to non-existent or fragmented HTA. The proposed Joint Scientific Consultations would ensure that all the necessary data and evidence are generated ahead of the HTA submission, contributing to quicker availability of innovative cancer therapies,” the statement adds. Last February, the ECPC launched a module to involve cancer patients in HTA processes, aiming to equip patients with the knowledge to participate in the HTA of cancer treatments and to advocate for patient involvement in countries and regions where it is lacking.
What exactly is the goal of health technology assessment? According to the EC definition, HTA “is a multidisciplinary process that summarises information in a systematic, unbiased and robust manner about the medical, economic, organisational, social and ethical issues related to the use of health technology”. The goal of HTA is to support decision-makers at national, regional or local level in their efforts to ensure that patients receive the best available treatment while keeping health budgets under control.
HTA measures the added value of a new health technology in comparison with existing technologies and the current standard of care. How? That is one of the core issues, as national HTA methodologies for clinical and economic assessments differ.
On the clinical side, stakeholders consulted for the EC proposal note that there are different data requirements for carrying out the assessment, as well as different clinical practice approaches – including the choice and acceptance of comparators and the selection and acceptance of endpoints – which may ultimately explain the different outcomes of national HTA clinical reports. In addition, there are different ways of expressing the added therapeutic value, which in some countries may be linked to the subsequent appraisal process. The level and ways of participation of interested stakeholders (patients, professionals, industry) in the clinical assessment process may also vary.
On the economic side, respondents underlined that the differences between countries are even more pronounced. The importance of the national, regional or local socio-economic context and the need for contextual adaptation, the use of indicators such as cost per QALY or the incremental cost-effectiveness ratio (ICER), and the differences in evaluation perspective (payer or societal) are cited among the major issues that would make the formulation and acceptance of an EU joint economic assessment difficult. Patients’ and consumer organisations, as well as providers and academia, emphasised the lack of transparency in some countries, which makes it hard for patient representatives to contribute adequately to the HTA process.
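The ICER mentioned above has a simple definition: the extra cost of the new technology divided by the extra health it delivers, the latter usually measured in quality-adjusted life years (QALYs). A minimal sketch, with invented numbers used purely for illustration:

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    delta_cost = cost_new - cost_old
    delta_effect = qaly_new - qaly_old
    if delta_effect == 0:
        raise ValueError("no incremental effect: the ICER is undefined")
    return delta_cost / delta_effect

# Hypothetical example: a new therapy costs 48,000 vs 30,000 for standard care
# and yields 2.1 vs 1.5 QALYs, so the extra 18,000 buys 0.6 extra QALYs,
# i.e. roughly 30,000 per QALY gained.
value = icer(48_000, 30_000, 2.1, 1.5)
```

Part of the cross-country variation described above comes not from this arithmetic, which is trivial, but from what feeds it: which costs count (payer or societal perspective) and which threshold per QALY a country considers acceptable.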
To address this issue in the case of cancer, researchers at the Portuguese Institute of Oncology proposed in 2016 a new methodology called CAVALA (Cancer Value Label), described in Ecancermedicalscience. CAVALA starts from the assumption that the value of a treatment is measured by the relationship between its results and the amount of resources it requires. To operationalise this concept of value, they propose a decision grid that compares a candidate technology with an incumbent or another comparator by analysing two parameters: the expected outcomes and the expected costs.
The model is intended for the evaluation of new technologies. If the ‘entrant’ is a generic drug, a cost-minimisation analysis could be used instead. “The great asset of the CAVALA is that it allows the addition of other variables, such as overall survival, progression-free survival, and duration of treatment. For treatments that cover multiple cancers, other outcome measures, like DALYs, can always be used,” the researchers say. According to them, it is crucial that the method is used as an instrument, not an instruction. “Recommendations must be established on scientific findings about effectiveness, both clinical and cost-wise, but must also take into account social preferences as expressed by social value conclusions,” they say.
The researchers claim that CAVALA escapes a common conceptual and communicational trap: the apparent trade-off between money and patients’ time of life. “We discussed value and carried the discussion to the point where it becomes relevant to patients and providers. This is possibly a good step to bring them closer to the HTA process, improving participation – for example in deciding which outcomes to measure – and in turn making economic analysis of health technologies a well-accepted, mainstream function within drug development,” the paper concludes.
Apart from the different methodologies for clinical and economic assessments, some respondents to the EC proposal mention differences in the assessment of other dimensions, such as organisational impact, ethical and legal impact, or social impact, which are also relevant to the HTA process. “That translates into diverging outcomes of HTA reports, which may affect patients’ access to new technologies (e.g. delays, restricted access), duplication of work and high costs for both HTA bodies and industry, a decrease in business predictability, and may even affect innovation in a negative way,” the document says.
Respondents also shared their opinions about the limitations of the current type of cooperation: for example, the lack of flexibility of the framework for EU-funded projects, or discussions that may have excluded expertise from stakeholders other than HTA bodies (e.g. clinicians for assessing methodology and data, public procurement representatives, hospital-based HTA representatives, hospital-based pharmacists, payers). Patient organisations note that voluntary participation by both Member State authorities and industry may create a vicious circle, with industry hesitating to take part and HTA bodies hesitating to contribute as authors or to use joint work. They also note that the Joint Actions did not sufficiently address the issue of patient involvement in HTA.
A large majority of the answers (87%), however, consider that EU cooperation on HTA should continue beyond 2020. Respondents underlined that EU cooperation is needed to ensure a constant exchange of information and knowledge between HTA institutions in Europe, to increase synergies between Member States, to streamline HTA methodologies, and to increase transparency, evidence-based decision-making and business predictability. It could also enhance timely access to added-value, affordable technologies and, in the long run, lead to savings, improving resilience and contributing to the sustainability of health systems.
Horizon scanning performed at EU level is seen as a joint activity that could support national healthcare systems in allocating resources better and ensuring sustainability. There are also voices in the proposal that advocate a legal framework for EU cooperation on HTA to streamline the interoperability of national HTA systems, thus improving the internal market for health technologies. Finally, several stakeholders note that significant public resources have been invested in EU cooperation on HTA, and that all the results achieved so far should be capitalised on to support sustainable healthcare systems and guarantee equitable access to technologies with added value for all patients in Europe.