The need to ensure that the employees of banks really reflect the society they serve will be made more urgent by the increasing use of automation and artificial intelligence in recruitment.
The importance of a diverse workforce
Much is written about the benefits of diversity and inclusion, and we know from experience that it is those with a fresh perspective who help to innovate, bring new ideas and prevent groupthink.
But how can firms hire a more diverse workforce? And is there a danger that the use of AI will actually work against diversity objectives?
Some institutions simply look for candidates who have performed particularly well at a top university. The problem, though, is that education is no guarantee of capability in the working world. Technical knowledge is necessary, but not sufficient.
Here’s an example of why academic achievement and theoretical knowledge should never be the sole basis for a hiring policy. A UK lecturer in Russian politics likes to open the first postgraduate seminar of each year by describing a remarkable student who knew so much about Soviet propaganda that he was hired straight out of university into an important government job. The room falls silent while everyone considers how marvellous that is and how they might get there themselves. Only rarely does someone laugh out loud and ask: ‘what was his name?’. His name, of course, was Kim Philby – the infamous Soviet double agent.
The dangers for banks and society of programming bias
Financial services firms do not generally face hiring decisions that could endanger national security – but they do make many decisions with real social and economic impact. And, just like MI6, they need to guard against groupthink – particularly as that danger is arguably being turbocharged by the latest technology used in the recruitment process.
That turbocharging comes from the assumption that computers make perfectly objective and fair ‘decisions’. But that is only true in a world where the data computers use, and the operations they carry out, do not reflect the unconscious assumptions, or even outright bias, of their programmers.
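To make the point concrete, here is a deliberately simplified, hypothetical sketch (all figures and names are invented for illustration) of how a screening model that merely ‘learns’ from past hiring decisions ends up reproducing whatever skew those decisions contained – even though every calculation it performs is perfectly objective:

```python
from collections import Counter

# Invented historical data: past hires skewed heavily towards one
# university. The skew reflects past human decisions, not candidate
# quality - but the model below has no way of knowing that.
historical_hires = (
    ["University A"] * 90 +  # 90% of past hires
    ["University B"] * 10    # 10% of past hires
)

def learned_score(university: str) -> float:
    """'Learn' a score for a candidate's university: the share of past
    hires who came from it. Every step here is mechanical and 'fair',
    yet the output simply mirrors the historical bias."""
    counts = Counter(historical_hires)
    total = sum(counts.values())
    return counts[university] / total

print(learned_score("University A"))  # 0.9
print(learned_score("University B"))  # 0.1
```

The arithmetic is impeccable; the unfairness lives entirely in the training data the model was handed.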
The industry is concerned about financial inclusion and fairness. In early November, for example, the Association of Corporate Treasurers (ACT) launched a set of principles intended to “promote and improve diversity and inclusion across all participants in financial markets”.
The ACT said that its corporate members “attend hundreds of bank meetings every year and they observe a wide disconnect between the diversity aspirations of banks and the reality of the coverage teams that represent them”. Put plainly: bank hiring is not as diverse as it should be.
Can financial services firms be more inclusive?
So, as tech-driven automation accelerates, what can financial services firms do to try to ensure that they hire human beings who can, at the right time, put the right amount of sand in the wheels of easy assumptions?
It boils down to bringing in as wide a range of backgrounds and experiences as possible, and giving everyone a voice. That will sometimes be unsettling and difficult to manage. It might even feel as though firms are coming to the wrong conclusions.
Technology like AI can massively streamline the recruitment process and even help to sift through initial candidates with virtual interviews and algorithms that match a candidate’s skills against a particular job profile. The challenge for hiring managers, however, is to ensure that any systems they use are not exhibiting any in-built prejudices and that the overall recruitment process recognises the importance of a diverse workforce.
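The skills-matching step described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor’s actual algorithm: it scores candidates by the overlap between their listed skills and a job profile, and the function and skill names are invented for the example.

```python
def match_score(candidate_skills: set[str], job_profile: set[str]) -> float:
    """Jaccard similarity between a candidate's skills and the role's
    required skills: 1.0 means identical sets, 0.0 means no overlap."""
    if not candidate_skills and not job_profile:
        return 0.0
    overlap = candidate_skills & job_profile
    union = candidate_skills | job_profile
    return len(overlap) / len(union)

# Invented job profile and candidates:
profile = {"python", "credit risk", "stakeholder management"}
print(match_score({"python", "credit risk"}, profile))  # ~0.67
print(match_score({"sql", "treasury"}, profile))        # 0.0
```

The design choice worth noticing is where bias can hide: if the profile’s keywords were drawn from how a historically homogeneous workforce described itself, candidates with equivalent but differently worded experience score zero, and the ‘objective’ matcher quietly filters them out.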
It’s important to persevere, however. Financial inclusion is not a statutory obligation for the FCA, but a speech by its chief executive, Nikhil Rathi, stressed that it “matters deeply” to the regulator. It should also matter deeply to financial services firms.
Read more about our digital banking qualifications
Read more about our online degrees