April 27, 2024

The challenges and opportunities for research in the age of automation

Author: nathalie.carter@jisc.ac.uk

Powerful new technologies are changing how research is carried out. What are the challenges and opportunities for research now? We asked an early career researcher, a senior researcher and a research journal editor how they see Research 4.0 panning out and which key issues they believe need addressing.


Research is changing, as shown in Demos’s recent report, Research 4.0: Research in the age of automation. The convergence of advanced technologies such as artificial intelligence (AI), machine learning, robotics and the internet of things (IoT) is significantly impacting research practices. A vast increase in the amount of data available for researchers to analyse, alongside powerful new analytical tools to do so, has driven further breakthroughs and discoveries.

This is AI’s ‘double dividend’: it allows researchers to ask questions that would have been impossible a decade ago. But with these opportunities come new challenges, in areas as diverse as ethics, data skills and research data management.

Unintended consequences

Louise Dennis

The consequences – particularly unforeseen consequences – of working with AI loom large.

“In the ethics of AI and data science, there’s clearly a big problem with practitioners who aren’t used to thinking about the consequences of their systems,”

says Dr Louise Dennis, senior lecturer at the University of Manchester. She researches autonomous reasoning and the verification of artificial intelligence systems in the areas of nuclear, space and autonomous systems where, clearly, actions can have very serious consequences.

“In the worst case, people die,”

she says, starkly. But Louise also points to the wider societal impact of AI research, especially in the area of facial recognition technologies.

“If you are not very careful about how you’ve created the data, your facial recognition technology will have biases in the way it categorises faces. Joy Buolamwini, who pioneered this area, showed that if you didn’t have enough black faces, particularly female black faces, in your learning dataset for facial recognition technology, it could see 20 white women and would know they were all different women but give it 20 black women and it would think they are all the same person. That’s a big problem, particularly if you’re deploying this technology for law enforcement.”
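Buolamwini’s finding can be checked with a simple per-group evaluation. The sketch below is a hypothetical illustration (the ‘embeddings’, group sizes and threshold are all invented): it mimics a matcher trained on an unbalanced dataset and measures, per group, how often two different people are wrongly declared the same person.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_embeddings(n_people, inter_person_spread, n_images=5, noise=0.3, dim=16):
    """Toy 'face embeddings': each person gets a centroid, and each image of
    that person is a noisy copy. A model trained on too few examples of a
    group tends to map that group's faces close together, which we mimic
    with a smaller spread between person centroids."""
    centroids = rng.normal(0.0, inter_person_spread, (n_people, dim))
    return centroids[:, None, :] + rng.normal(0.0, noise, (n_people, n_images, dim))

def false_match_rate(embeddings, threshold=3.0, n_pairs=5000):
    """How often does the matcher call two *different* people the same person?"""
    n_people, n_images, _ = embeddings.shape
    hits = 0
    for _ in range(n_pairs):
        a, b = rng.choice(n_people, size=2, replace=False)
        img_a, img_b = rng.integers(n_images, size=2)
        if np.linalg.norm(embeddings[a, img_a] - embeddings[b, img_b]) < threshold:
            hits += 1
    return hits / n_pairs

# A group the model saw plenty of, versus a group it barely saw in training.
well_represented = simulate_embeddings(n_people=50, inter_person_spread=1.0)
under_represented = simulate_embeddings(n_people=50, inter_person_spread=0.3)

print(f"well represented:  false match rate = {false_match_rate(well_represented):.1%}")
print(f"under-represented: false match rate = {false_match_rate(under_represented):.1%}")
```

The under-represented group’s false match rate comes out far higher, which is exactly the ‘20 black women look like one person’ failure Louise describes.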

Samantha Kanza

Dr Samantha Kanza takes up the story. She’s an early career researcher – an enterprise fellow at the University of Southampton and coordinator of the AI 4 Scientific Discovery Network, with research interests in semantic web technologies, digitising scientific research, IoT and AI. Or, as she puts it,

“I’m trying to drag laboratories and scientific research kicking and screaming into the 21st century!”

Samantha’s concern is unintended consequences.

“IoT is great. You can have services, control smart things and gather a lot more data for research. But then you also suddenly have a lot more data that could be subverted to ill intent, or where additional information could be inferred. Every time you add a new technology, you add a whole new set of unforeseen consequences.”

She points to one of her own projects, using IoT sensors in the lab to collect data about the equipment.

“We’ve got a lot of good reasons to have sensors in the lab. I want to know if something’s going wrong with the lasers, so I can go back in and restart my experiment. I want to know if someone else is in the lab, particularly with COVID, because then I can wait before I go back in. We’ve got camera feeds, so you can look at one bit of equipment and line it up with another bit of equipment: you couldn’t normally do that unless you had another person in there.

“But you could also probably build up a picture of researchers’ working patterns in the lab for monitoring purposes. You could maybe figure out when it’s going to be empty if you wanted to steal something, for example. And those consequences would never have been in the intentions of the lab system.”
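That inference is worryingly easy. The sketch below is a hypothetical illustration (the sensor log is invented): the same event stream collected for legitimate equipment monitoring also yields, in a few lines, a profile of when the lab is predictably empty.

```python
from collections import Counter
from datetime import datetime

# Hypothetical lab sensor log: events recorded for legitimate reasons
# (equipment monitoring, COVID-safe occupancy checks).
events = [
    ("2020-11-02 09:14", "motion"), ("2020-11-02 11:02", "door"),
    ("2020-11-02 17:45", "motion"), ("2020-11-03 09:30", "motion"),
    ("2020-11-03 16:10", "door"),   ("2020-11-04 09:05", "motion"),
]

# The unintended secondary use: profile when the lab is occupied --
# and therefore when it is predictably empty.
activity_by_hour = Counter(
    datetime.strptime(ts, "%Y-%m-%d %H:%M").hour for ts, _ in events
)

quiet_hours = [h for h in range(24) if activity_by_hour[h] == 0]
print("Hours with recorded activity:", dict(activity_by_hour))
print("Hours the lab is predictably empty:", quiet_hours)
```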

The reproducibility issue

If the issue is misuse or misinterpretation of underlying data, then not storing the raw data is definitely not the answer.

“You need that underlying data,”

says Samantha.

“I would be willing to bet that a large percentage of the dataset records that have gone into systems don’t have enough data in them to reproduce the experiment. And reproducibility is such a huge issue.”

She points to chemistry where, she says, a lot of papers lack sufficient data for anyone to replicate the experiments.

“Which is a major problem, because then how can you ever know if that’s been done properly?”
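One practical response is to make the completeness of a dataset record checkable. The sketch below is a minimal illustration; the required fields are invented, since a real checklist would come from a community standard for the discipline.

```python
# Invented checklist of metadata a record would need before an experiment
# could plausibly be reproduced; a real one would be discipline-specific.
REQUIRED_FIELDS = {
    "instrument", "instrument_settings", "sample_preparation",
    "raw_data_location", "processing_steps", "contact",
}

def reproducibility_gaps(record: dict) -> set:
    """Return the required metadata fields missing from a dataset record."""
    return {field for field in REQUIRED_FIELDS if not record.get(field)}

# A typical sparse record (all values are placeholders).
record = {
    "instrument": "NMR spectrometer",
    "raw_data_location": "doi:10.xxxx/example",
    "contact": "pi@example.ac.uk",
}

missing = reproducibility_gaps(record)
if missing:
    print("Cannot reproduce this experiment; missing:", sorted(missing))
```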

But it may be that COVID, and the inability to share physical lab books, is changing things, with a rise in electronic notebooks using tools such as OneNote.

“There’s been a push from the digital research community as a whole to say, ‘We should be digitising. We should be recording. We should be using these kinds of technologies.’ However, a lot of people haven’t wanted to do it, and there’s been no real need to do it, until now.”

Remote lab working is an area that Jisc is looking at as part of its response to the Demos report. Jisc is also actively exploring rapid responses and technical development in pressing areas of research culture: for example, investigating how machine learning can support research assessment, and asking how the use of metrics can further enhance research excellence.

Victoria Moody, Jisc

“We already know that researchers are clear in their understanding of the importance of maintaining responsible, trustworthy, reproducible research that is secure and transparent,”

says Victoria Moody, Jisc’s research strategy lead.

“Members are a key group from which we derive strategic direction, so understanding their potential concerns is key. During the Demos interviews, researchers highlighted that they are not always supported to be fully aware of the ethical risks associated with their research.”

The sheer complexity of this new world of research is something that Dr Sarah Callaghan is very aware of. She’s editor-in-chief of Patterns, a new journal of data science from Cell Press, following a 20-year career creating, managing and analysing scientific data.

“As a concept, I think Research 4.0 is absolutely right in that it’s acknowledging the fact that research is moving into far more complex and data-driven ways of doing things,”

she says. Sarah argues that we’re moving away from the popular notion of the ‘solo genius scribbling equations down on a blackboard’ and towards the industrialisation of scientific research and an acceptance that the sheer quantity of data means it cannot be treated as it was in the past.

What will help?

“Interdisciplinarity is going to be absolutely vital,”

says Sarah.

“The science problems that we’re trying to address are far, far bigger than any one person can solve. These need to be group efforts. It’s not enough to be the solo genius anymore. It’s just not possible. Not for the really big challenges we’re facing like climate change or coronavirus.”

Those interdisciplinary teams might not be the kind we commonly think of – such as a physicist and chemist working at the same lab bench. For Dr Callaghan, there are three groups that need to be brought together more effectively.

Firstly, data scientists and computer scientists:

“the people who are building the really cool new algorithms and technologies”

but who don’t necessarily have the datasets to test them on, or the scientific and real-world problems to apply them to. Then there are the researchers in the domain sciences, who usually do have the real-world problems and the large, well-defined datasets,

“and they’re trying to figure out how to use these cool new technologies like AI or machine learning. But they don’t go to computer science conferences.”

Finally, there are the data stewards and engineers who are building the infrastructures, developing the software and standards and the policies that enable researchers to share data and implement those cool algorithms.

Digital research communities

[#insertinlinedriver event#]

But supporting this essential interdisciplinarity requires a shift in how researchers communicate and how research itself is communicated.

“If we want to be truly interoperable and we want to be truly interdisciplinary, then we need to have common standards and common vocabularies and common ways of talking about things. We can’t work together if we can’t communicate with each other,”

adds Dr Callaghan. That’s where Jisc’s new digital research community comes in. Continuing the conversations around the Research 4.0: Research in the Age of Automation report, Jisc is working closely with the research community, bringing innovative ideas together to form a new digital research community group. 
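Dr Callaghan’s point about common vocabularies can be made concrete with a small sketch. The terms and mappings below are invented for illustration: two labs that label the same quantity differently cannot pool their data until their local names are mapped onto one shared vocabulary.

```python
# Invented shared vocabulary: each agreed term lists the local aliases
# different groups use for the same quantity.
SHARED_VOCABULARY = {
    "air_temperature": {"temp", "t_air", "airtemp_degC"},
    "relative_humidity": {"rh", "humidity", "rel_hum"},
}

def to_shared_terms(columns):
    """Translate a dataset's local column names into shared vocabulary terms."""
    lookup = {alias: term
              for term, aliases in SHARED_VOCABULARY.items()
              for alias in aliases}
    unmapped = [c for c in columns if c not in lookup]
    if unmapped:
        raise ValueError(f"No shared term for: {unmapped}")
    return [lookup[c] for c in columns]

# Two labs, two naming conventions, one shared meaning.
print(to_shared_terms(["t_air", "rh"]))               # lab A
print(to_shared_terms(["airtemp_degC", "humidity"]))  # lab B
```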

Samantha is enthusiastic about the potential for these communities to help with some of the challenges, from finding collaborators and benefiting from research experience to sharing issues and knowledge:

“If we could pool that knowledge together and actually come up with a viable set of best practice guidelines and frameworks for conducting ethical research in the Research 4.0 era, that would be really, really good.”

Augmented intelligence

Sarah Callaghan

Despite all the benefits of AI and associated technologies, Samantha is keen that researchers do not lose sight of the humans – and their unique skills – at the heart of research.

“I think AI is great, but I don’t think we should ever be looking to write humans out of the loop. There’s lots of things that technologies can automate, but we should be looking to automate where it hits the technology skillsets. And we should be looking at keeping humans in the loop where it hits their skillsets. We need augmented intelligence.

“‘Would you rather have an AI system determine your health treatment, or a highly trained doctor, or both?’ I think both is always the answer, because machines are going to do things that humans can’t do, and humans are going to do things machines can’t do.”

Sarah agrees.

“If we’re essentially handing over to computers and algorithms the difficult job of making decisions, we need to be aware of how those algorithms are making those decisions. It’s very easy for researchers to think, ah, they’re just numbers, it’s just a model. But we need to remember that these numbers and models have real impacts on real people,”

she concludes.
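The ‘augmented intelligence’ Samantha describes can be expressed as a simple routing rule: automate what the machine is good at, and send anything uncertain or high-stakes to a human. The sketch below is hypothetical; the confidence threshold and the cases are invented.

```python
# A hypothetical sketch of the augmented-intelligence pattern: the machine
# handles routine, confident calls; humans review anything else.
def triage(prediction: str, confidence: float, high_stakes: bool) -> str:
    """Decide whether a model output can be used directly or needs review."""
    if high_stakes or confidence < 0.9:
        return f"human review (model suggests: {prediction})"
    return f"automated: {prediction}"

print(triage("benign", confidence=0.97, high_stakes=False))
print(triage("treatment plan B", confidence=0.97, high_stakes=True))
print(triage("benign", confidence=0.62, high_stakes=False))
```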
