Governance of artificial intelligence interim report published
27/09/2023
A REPORT examining the governance of artificial intelligence (AI) has been published.
The House of Commons Science, Innovation and Technology Committee launched a consultation on the governance of AI in October last year and has now published its interim report.
IPEM submitted a response to the consultation; the response was led by IPEM’s Clinical and Scientific Computing Special Interest Group (CSC SIG) and the AI Working Party.
Dr Robert Ross, Chair of the CSC SIG, said the interim report provided a good summary of the challenges of AI and the absence of specific rules around it.
‘However, it does not address the environmental impact of AI, which uses huge quantities of energy and hardware, both in training AIs and in applying AI to everyday use,’ said Dr Ross.
‘The carbon cost and electronic waste associated with this must be considered at a governmental level to ensure that AI is not working against our climate and ecological goals, and for those of us in healthcare, we need to consider the impact of these on our organisation’s green goals.’
Transforming healthcare
The report states that AI models and tools can transform healthcare provision by assisting with diagnostics and, perhaps more significantly, by automating routine processes to free up time for the judgement of medical professionals, a view the SIG members agreed with.
They added, though, that medical staff need to trust the systems they use and that, as medical physics and clinical engineering staff, they could influence their colleagues on AI matters.
The report said the inquiry heard how AI models and tools can help deliver breakthroughs in medical research. The SIG members said that, as a discipline, AI should follow rigorous research best practice, respect patient privacy expectations around data, and avoid projects that are likely to fail or whose potential benefit does not justify the effort invested.
Language models and assessment
Members of the SIG said language models, such as ChatGPT, have been used by students on the Scientist Training Programme to write evidence for competencies. They said assessors need to ensure such assessments genuinely reflect the work done by the trainee, and should engage in oral assessment, even informally. They added, however, that students should be helped to acquire the skills to use these tools proficiently, responsibly, ethically, transparently and with a critical mindset.