Idea

‘Care needs to be taken not to become over-reliant on opaque AI-based research’

On 6 June, UNESCO and the Royal Society in the UK are hosting an event at UNESCO headquarters in Paris which will explore the relationship between AI and open science. In the lead-up to this event, Professor Alison Noble, who chairs the Royal Society’s working group on science in the era of AI, shares her insights.
Portrait of Alison Noble.


The event will be an opportunity to discuss the findings of a new report by the Royal Society entitled Science in the Age of AI: How Artificial Intelligence is Changing the Nature and Method of Scientific Research.

What inspired the Royal Society’s new Science in the Age of AI report?

Our starting point was wanting to understand better which emerging technologies are disrupting scientific research. During the project scoping stage, we put this question to scientists from a wide array of scientific disciplines, from biology to astronomy – and the overwhelming response was artificial intelligence (AI). 

Why? On account of the rapid pace at which AI technologies like deep learning and large language models have developed in the last decade or so and the excitement about how vast swathes of scientific data might be analysed to recognise patterns and build models in a way that has not been possible before. 

We decided to delve deeper and go beyond producing a set of case studies on how AI was being used, in order to consider how AI is transforming the nature and methods of scientific research. The report explores everything from how scientists work together (interdisciplinarity) to how scientists define hypotheses and test them assisted by AI technology. 

Our approach was very much from the bottom up – engaging directly with over 100 members of the scientific community to hear their experiences and concerns firsthand through interviews, roundtables and workshops. This methodology ensured that our findings and recommendations are truly reflective of the needs and challenges faced by scientists today.

What is the report’s key message?

Our Science in the Age of AI report is quite wide-ranging, spanning reproducibility, data bias, interdisciplinarity, the current role of the private sector, and ethics.

One of the report’s key messages is that care needs to be taken not to become over-reliant on opaque AI-based research, as this could undermine the reliability of scientific findings and their practical application. This, in turn, would undermine trust in science. 

Although AI is already providing valuable scientific insights, the complexity and ‘black box’ nature of advanced deep learning-based models means that it is challenging to explain how a model makes a decision. This matters because, like humans, AI makes errors. The issue is compounded further if the training data are proprietary or inaccessible for research purposes due to commercial interests or the sensitivity of the data, such as in healthcare research. 

The report recommends a range of measures to address these challenges and to maximise the benefits that AI can bring to research and the wider public. These measures include establishing open science, environmental and ethical frameworks for AI-based research that can help ensure that findings are accurate, reproducible and support the public good. 

In addition, the report recommends promoting AI literacy among researchers and collaboration with AI developers to ensure that AI tools are accessible and usable. 

It also recommends investing in fostering interdisciplinary AI research, which brings together scientists working on the application of AI with those at the cutting edge of AI development, to ensure that the benefits of AI are realised across scientific fields.

How can AI reinforce the principles of open science? 

One way in which AI can advance the principles of open science is through privacy-enhancing technologies like federated learning. This is the subject of another recent Royal Society report. Briefly, federated learning builds a model from decentralised data: partial models are trained at each client and then combined centrally, so sensitive data are never shared directly. 
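The idea can be sketched in a few lines. The toy example below (not from the report) uses federated averaging on a trivially simple linear model, y = w·x: each client runs gradient-descent steps on its own private data, and a server combines the resulting parameters, weighted by dataset size. Only the parameters travel over the network.

```python
# Minimal sketch of federated averaging (FedAvg) on a toy linear model
# y = w * x. Each client's data stays local; only the learned parameter
# w is shared with the central server.

def local_update(w, data, lr=0.01, epochs=20):
    """Run local gradient-descent steps on one client's private data."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_round(w_global, clients):
    """One round: broadcast global w, then aggregate client updates
    weighted by each client's dataset size (the FedAvg rule)."""
    total = sum(len(d) for d in clients)
    updates = [(local_update(w_global, d), len(d)) for d in clients]
    return sum(w * n for w, n in updates) / total

# Two clients whose private data follow the same relationship y = 3x.
clients = [
    [(1.0, 3.0), (2.0, 6.0)],
    [(3.0, 9.0), (4.0, 12.0), (5.0, 15.0)],
]

w = 0.0
for _ in range(50):
    w = federated_round(w, clients)
print(round(w, 2))  # converges towards the true slope, 3.0
```

Real systems add secure aggregation and differential privacy on top of this basic loop, since model parameters alone can still leak information about the underlying data.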

AI-generated synthetic data can also be very useful for scientific modelling and analysis. By using techniques of this kind, AI can facilitate collaboration and openness within the scientific process.
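As an illustration of the synthetic-data idea (my own minimal example, not the report's): a generative model is fitted to private records, and samples drawn from the fitted model are released in place of the records themselves. Here a simple Gaussian fit stands in for a learned generative model.

```python
# Minimal sketch of releasing synthetic data instead of raw records.
# A parametric generator (a Gaussian fitted to the private data) stands
# in for a learned generative model such as a GAN or diffusion model.
import random
import statistics

random.seed(0)  # reproducible sketch

# Private measurements that cannot be released directly.
private = [4.9, 5.1, 5.0, 5.3, 4.8, 5.2, 5.0, 4.7]

# "Train" the generator: estimate the distribution's parameters.
mu = statistics.mean(private)
sigma = statistics.stdev(private)

# Release synthetic records drawn from the fitted model instead.
synthetic = [random.gauss(mu, sigma) for _ in range(1000)]

# The synthetic sample preserves aggregate statistics of the original.
print(round(statistics.mean(synthetic), 1))
```

The synthetic sample reproduces aggregate properties useful for modelling while containing no actual private record, though care is still needed: a poorly designed generator can memorise and leak its training data.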

Why is it necessary for AI tools developed for scientific research to adhere to the principles of open science? 

The principles of open science can play an important role in ensuring that AI methodologies are reproducible and trustworthy. A holistic approach to open science in AI-based research involves making all aspects of the scientific workflow as open as possible. This includes releasing methods, code, data, computational environments and publications, with sufficient documentation describing performance and applications. 

Towards this goal, many research areas now define guidelines on reporting on AI-based studies. Some fields hold “data challenges” to benchmark AI methods or expect scientists to report methods via open datasets. 

All these initiatives aim to foster greater trust and collaboration within the scientific community, encourage wider participation by underrepresented and under-resourced scholars, and build greater trust among data owners and non-scientist publics.

What do you envision for the future of AI in science? 

In the report, we offer specific recommendations to ensure that AI in science can help us solve society’s biggest challenges. 

One important area I would highlight is the environmental cost of AI: working towards more sustainable AI systems is crucial to minimising the environmental impact of AI-based scientific research. 

In addition, providing researchers with access to the appropriate AI infrastructure is essential. Furthermore, establishing AI and data literacy will help researchers understand the opportunities, limitations and adequacy of AI-based tools within their specific research context. 

Lastly, fostering interdisciplinary AI collaboration is vital. This involves computer scientists working closely with domain experts – a process that, while challenging, is necessary for AI to be integrated effectively across all scientific fields. We are already seeing early examples of successful interdisciplinary AI projects but scaling this across all areas of science will require significant effort and time. 

There is an exciting decade ahead, full of opportunities and challenges, in which we will strive to leverage AI's full potential to drive scientific discovery while ensuring the integrity of scientific research. 

Interview by Ruhi Chitre