With the advent of AI, technology is presenting humanity with new ethical questions. Ouida Taaffe talks to Professor Nigel Shadbolt about regulation, artificial intelligence and what it means to be human in the 21st century.
The World Wide Web did not start out as a way to access millions of cat videos. It had a much more thoughtful aim – “universal access to a large universe of documents”.
On 6 August 1991, thanks to the ideas of Tim Berners-Lee, the first website was put online at CERN.
The world’s first webcam came in 1993. Code was written to broadcast shots of a coffee pot via a browser so that the researchers did not have to make a potentially fruitless trip upstairs. The grainy images were a viral hit.
Now, there’s an ocean of all sorts of media online, but that can give us a false impression of what is accessible on the web.
Media are not necessarily presented in a way that is machine-readable, nor in a way that supports creative cross-fertilisation. That has led to many information islands – which is at odds with the aims of the web.
The semantic web
The so-called semantic web, another innovation from Tim Berners-Lee, has helped to overcome this. However, problems remain – perhaps not least in what can be done with financial data.
“The semantic web made a huge difference to the machine aggregation of data. All the search engines use variants of it,” says Professor Nigel Shadbolt, Principal and Professorial Research Fellow in Computer Science at Jesus College, Oxford.
“Point applications like open banking are a massive short-circuit.”
Making innovative use of data from lots of sectors is not easy. Professor Shadbolt says that “the correct representation of core data is vital – this applies even in deep neural networks”.
The prosaic job of labelling comes before any clever analytics. He points to the efforts that scientists in fields including genomics and structural chemistry have put into representing part of their domain.
“This is not perfect everywhere, but it has yielded interesting results.”
Examples include the experiments that ‘cheminformatics’ can carry out using virtual molecules.
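Shadbolt’s point about machine-readable representation can be illustrated with a small sketch. The snippet below builds a JSON-LD document, one of the formats search engines read when aggregating semantic markup, using the schema.org vocabulary. The dataset name, date and organisation are invented for illustration, not taken from the interview.

```python
import json

# A minimal JSON-LD document: the "@context" maps plain keys to the
# schema.org vocabulary, so any consumer that understands schema.org
# can aggregate this record alongside data published elsewhere.
record = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Webcam coffee-pot frames",            # hypothetical values
    "dateCreated": "1993-11-22",
    "creator": {"@type": "Organization", "name": "Example Research Lab"},
}

# Serialise as it would appear inside a page's
# <script type="application/ld+json"> block.
print(json.dumps(record, indent=2))
```

The same labelled structure is what lets machines, rather than people, do the cross-site aggregation the article describes.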
Financial services are still some distance from having the sort of shared definitions that are helping natural scientists make novel use of information.
“The industry could really benefit from not having a Babel of standards,” says Shadbolt. However, financial services are not alone in this.
Shadbolt points out, for example, that there are no universal standards in the ‘internet of things’. But without them, there will be no full interoperability.
“But tech activism by regulators,” says Shadbolt – referring to where they start to push for the implementation of technology to achieve certain outcomes – “will be a delicate balancing act.”
Ethics and AI
What those outcomes should be – and how to get there – will also require thought.
“There has to be a recognition that algorithms are not value-neutral. Sampling methods can lead to egregious mistakes. And you have to be clear about the underlying basis of the data and research,” says Shadbolt.
He says that there is a range of core ethical questions, including whether the data is accountable, fair, accessible to all and used for public or private purposes.
“These questions are not always best addressed by technologists and engineers,” says Shadbolt.
“For example, 40 years ago in medical ethics, equity of access to data and informed consent were not always welcomed by doctors.”
He says that the ethical problems the technology industry faces have equivalents in a lot of fundamental scientific and technological transformations. For example, they include choosing who has access to medical treatment and who should be held to account for mistakes.
“Ethical frameworks sit in the value framework that a society promotes,” says Shadbolt. That means, he adds, that we have often made important assumptions without being aware of them.
“For example, do we assume a Benthamite, utilitarian model, or a Kantian one, or do we look to a model from the Far East that does not aim for individual autonomy?” says Shadbolt.
Getting beyond vested interests
However, even when we know what we value and why, we may still be stumped.
“Until recently, the patient had no role in decisions. Now they have an absolute right to know possible complications and alternative treatments,” says Shadbolt. “But people feel they cannot make sense of the information. There are a lot of parallels with AI ethics.”
One of the thornier problems is the question of where liability for any harm sits.
“Medicine is licensed and software engineering is very resistant to that,” says Shadbolt. “How do you get beyond vested interests?”
To help do that, he suggests, it will be necessary to get people to think critically about technology and what it does. That, he says, might be difficult.
“When people were building assisted teaching systems in the 1980s, for example, they found that if there was an error in the formulation of a problem… many could not believe that the machine was wrong.
“That meant that revising the model got stuck. We have a strange mental model that whatever is on a screen is right.”
Shadbolt says that critical thinking is about imagining that you, or the data, could be wrong. There could, for example, be sampling bias. Or what you think you are looking at might not be what you are looking at at all.
Shadbolt’s current role at Oxford in computer science was deliberately placed within a humanities department. He also collaborates with Tim Berners-Lee at the Open Data Institute.
“Part of the question we pose is about being human in the 21st century. How will we be human and express ourselves?” he says. “Ethics in AI is a big part of this.”
No sector will remain untouched by the questions that have to be answered.
“Banks are not just holding pens for assets,” says Shadbolt. “If nothing else, people want wider purpose. It is not just about sweating value.”