UK patients, the healthcare system and the life sciences sector are set to benefit from a new scheme that will see the time taken by the Medicines and Healthcare products Regulatory Agency (MHRA) to approve the lowest-risk clinical trials reduced by more than 50%.
The scheme is based on the proposal outlined in the MHRA’s clinical trials consultation, which was endorsed by 74% of respondents. It forms a significant part of the regulator’s overhaul of clinical trials regulation, supporting the government’s ambition for the UK to be one of the best countries in the world for patients and researchers to conduct clinical research.
For details see: https://www.gov.uk/government/news/new-streamlined-notification-scheme-for-lowest-risk-clinical-trials-marks-start-of-mhra-overhaul-of-regulation
The European Medicines Agency (EMA) has released a discussion document titled “Reflection paper on the use of artificial intelligence (AI) in the medicinal product lifecycle”.
This reflection paper provides considerations on the use of AI and machine learning (ML) across the lifecycle of medicinal products, including medicinal product development, authorisation, and the post-authorisation phase.
Given the rapid development in this field, the aim of this paper is to reflect on the scientific principles that are relevant for regulatory evaluation when these emerging technologies are applied to support safe and effective development and use of medicines.
Data is increasingly generated and used across sectors, including those related to the lifecycle of medicines. In the healthcare sector, data is routinely captured in electronic format. Part of the resulting digital transformation is the utilisation of artificial intelligence (AI) systems, which display intelligent behaviour by analysing data and taking actions, with some degree of autonomy, to achieve specific goals; this enables the increased use of data for analysis and decision-making. Such systems are often developed through machine learning (ML), in which models are trained on data rather than explicitly programmed.
However, because these technologies often rely on very large numbers of trainable parameters arranged in non-transparent model architectures, they introduce new risks that need to be mitigated to ensure patient safety and the integrity of clinical study results. In addition, as the overarching approach is inherently data-driven, active measures must be taken to avoid integrating bias into AI/ML applications and to promote AI trustworthiness.
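As a purely illustrative aside (not drawn from the EMA paper): the minimal Python sketch below, which assumes the scikit-learn library and a synthetic dataset chosen only for brevity, shows what "trained on data rather than explicitly programmed" means in practice, and why the number of fitted parameters is relevant to the transparency concerns mentioned above.

```python
# Minimal sketch (illustrative only): a model "learns" a decision rule from
# labelled data rather than being explicitly programmed with that rule.
# The synthetic dataset and logistic-regression model are arbitrary choices.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic labelled data standing in for routinely captured electronic records
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Training: the model's coefficients are fitted from the data, not written by hand
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The fitted coefficients are the "trainable parameters"; here there are only 21,
# whereas deep-learning models may have millions arranged in architectures that
# are far harder to inspect, which is where the transparency concerns arise.
n_params = model.coef_.size + model.intercept_.size
print(f"trainable parameters: {n_params}")
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```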
Hence the need for guidance. For details see: https://www.ema.europa.eu/en/news/reflection-paper-use-artificial-intelligence-lifecycle-medicines