There is a lot of data in the world. According to the Cisco Global Cloud Index (Cisco is a leading US-based network equipment provider), the world will be generating 847 zettabytes of data by 2021. A zettabyte is a sextillion bytes (10 to the power of 21). For comparison, one plain-text email is about 2 KB.
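To make that scale concrete, a rough back-of-the-envelope calculation (using the 2 KB email size above as the unit):

```python
# Rough scale comparison: how many 2 KB emails fit in 847 zettabytes?
ZETTABYTE = 10 ** 21        # bytes in one zettabyte
EMAIL_SIZE = 2 * 1024       # one plain-text email, roughly 2 KB

total_bytes = 847 * ZETTABYTE
emails = total_bytes // EMAIL_SIZE
print(f"{emails:.3e} emails")  # on the order of 4 x 10**20 emails
```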
As that suggests, a zettabyte is vast oceans of data – which is both good and bad. It is bad because it is far more than individuals or firms can begin to manage, and good because artificial intelligence benefits from having plenty of data – from different sources – to train its systems.
The need to have ways of automating analysis of vast quantities of data – and the possibility of finding new insights through fresh combinations of previously siloed data – make artificial intelligence hugely interesting both to financial services firms and to investors.
As McKinsey points out in its April 2018 paper, ‘Notes from the AI frontier: applications and value of deep learning’, the idea of using computers to simulate human analysis is not new. What has changed is its feasibility. Access to cheap, high-powered processing and storage – as well as to lots of data – is making it possible to train artificial intelligence systems.
We spoke to the CEO of one AI start-up – Thoughtonomy’s Terry Walby – about what it is doing.
AI is supposed to require a lot of data. How do you tackle that?
The foundation of intelligence is data, which is needed to make assessments and probabilistic judgements. That means most AI systems require a huge data lake. We do that differently. A new data lake is a new database, and that brings potential problems, such as privacy issues. What we do is leave all data in the client’s systems and access it only as it is needed.
What about siloed data, which is said to be a particular problem in financial services?
People tend to think about legacy data sources only being an issue in older companies that have, say, gone through acquisitions or undertaken outsourcing. In fact, every business we have ever worked with has disparate systems. No company has one system underpinning all its applications. Even we have different systems in place to do functionally specific tasks, such as CRM, HR, or customer support. That leads to inefficiency. One answer is to say: let’s have one huge enterprise resource planning system. But that doesn’t actually happen, doesn’t work and would cost huge gobs of cash. The inefficiency that comes from having people working on legacy systems is the reason for Thoughtonomy. You can approach automation and digitisation by replicating what a human does, by wrapping technology around what exists today. It is not perfect, but perfection is the enemy of progress.
What about fear of redundancies – won’t people block change?
I was worried that we would meet a lot of organisational resistance. A key nuance here is that we are automating work, not workers. For example, a well-trained finance person will find that they can hand over tasks that waste their skills and talents, and put those talents to better use. Automating certain processes can free up capacity to do much more, to learn more, to have happier employees who spend less time on day-to-day drudgery. When they see that, people move from resistance to ‘I love this idea’. At Rentokil, for example, they call the virtual workers ‘Trevor’. Now, when they are fed up with a routine task, they ask ‘could Trevor do that?’.
There is a lot of debate about low productivity in the UK economy. Can technology help with that?
People don’t work like machines. They need sustenance and rest, and they make mistakes. Finding cheaper people to be unproductive (aka off-shoring) is not a solution. Using legacy applications with people as the glue between different processes can be a lot of the problem. Also, the nature of work across time is rarely linear; there are peaks and troughs. HR, for example, is busy at year-end and may then have a lull. Businesses need to be staffed to cope with those peaks, but that leads to inefficiency because workers are siloed. Quite often, when people approach technological transformation they buy tools designed for siloed functions. That has benefits because they are niche approaches, but it also replicates inefficiencies. Technology should be a resource – function and task agnostic. A digital worker should be a worker across all domains – whether HR, finance or IT – and should be able to decide which task is most urgent and should be done first. That can boost productivity by 20 per cent to 30 per cent.
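The ‘one task-agnostic worker across all domains’ idea can be sketched as a single priority queue shared across functions. The task names and urgency scores below are illustrative, not Thoughtonomy’s actual scheduler:

```python
import heapq

# Illustrative sketch: one shared queue spanning HR, finance and IT.
# A "digital worker" pops whichever task is most urgent next, rather
# than sitting idle inside a single functional silo.
tasks = []  # min-heap ordered by priority (lower number = more urgent)

def submit(priority, domain, description):
    heapq.heappush(tasks, (priority, domain, description))

def next_task():
    return heapq.heappop(tasks)

submit(2, "HR", "year-end payroll reconciliation")
submit(1, "finance", "flag overdue invoices")
submit(3, "IT", "reset locked accounts")

print(next_task())  # (1, 'finance', 'flag overdue invoices')
```

Because the queue is domain-agnostic, a year-end spike in HR work simply raises HR tasks’ urgency; no dedicated idle capacity is needed in each silo.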
Can AI ‘clean’ data?
What I would mean by cleaning data is removing spurious data and duplications that could lead to the wrong conclusions. A customer record held across a series of systems will almost certainly vary, and that lack of consistency makes it very hard to treat all those records as the same person. So, yes, you can teach a machine to clean data records and come up with a holistic outcome. We don’t do that for companies ourselves; we give them the ability to do it.
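The kind of record cleaning described above can be illustrated with a toy sketch (this is not Thoughtonomy’s method): normalise customer fields from different systems so the same person can be matched across silos.

```python
# Toy sketch of record cleaning: normalise name/email fields from
# different systems so duplicate customer records can be matched.
def normalise(record):
    return (
        record["name"].strip().lower(),
        record["email"].strip().lower(),
    )

records = [
    {"name": "Jane Smith ", "email": "JANE@EXAMPLE.COM"},  # from CRM
    {"name": "jane smith",  "email": "jane@example.com"},  # from billing
    {"name": "John Doe",    "email": "john@example.com"},  # from support
]

# Deduplicate: keep one record per normalised key.
unique = {normalise(r): r for r in records}
print(len(unique))  # 2 distinct customers
```

Real systems would need fuzzier matching (typos, name variants), but the principle is the same: a consistent key lets disparate records be treated as one person.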