AI does not belong solely in the realm of science fiction, nor is it the harbinger of doom it is sometimes portrayed as in popular culture. Here on Planet Earth, our most intimate interactions with AI are likely to be with virtual personal assistants such as Siri, Alexa and Google Assistant. AI is also gaining momentum in business, from providing insights to increasing operational efficiency and improving customer service and experience.
XPS: What colour jacket are you looking for?
Me: Black.
XPS: Good call. Black matches everything. Do you want a hood?
Me: Yes.
XPS: Okay, I got it. Some jackets can zip or snap together for layering options. Is this important to you?
Me: Yes.
XPS: I agree. It’s the easiest way to layer for unpredictable conditions. Pit zips allow for extra ventilation in the right places. Do you want that feature?
Me: Yes.
XPS: We’re done! Please view my recommended jackets.
The above is a snippet of a conversation I had with the AI-powered personal shopper of outdoor clothing and equipment purveyor The North Face. With a few initial questions – where and when the jacket is to be used, the gender of the user, the type of activity the jacket is meant for, and the style preferred – the personal shopper, called XPS, presented its recommendations. And they were choices that I really liked.
Indeed, the whole experience was akin to interacting with a very clued-in sales assistant, without having to sift through the varieties of jackets available online on my own (and possibly being overwhelmed by the choices). The North Face’s AI is powered by IBM’s Watson cognitive computing technology, and it’s just one example of the technology being deployed in the retail scene.
AI itself is not new. Computer scientist John McCarthy – regarded as the father of AI – coined the term in 1956 when he organised the first Dartmouth Conference on Artificial Intelligence. Sometimes called machine intelligence, AI refers to devices or applications that can undertake specific tasks in human ways – mimicking cognitive functions such as learning, reasoning, problem-solving and language understanding. It is these qualities that have fuelled the promise (and fears) of AI in popular culture.
Over the years, we’ve seen writers and filmmakers confronting the concept of AI, and often their vision is of machines intent on terminating the human race – look no further than the cyborg assassin in The Terminator and HAL 9000 from 2001: A Space Odyssey. Away from novels and movies, even luminaries in the world of science such as the late Professor Stephen Hawking have warned us of the existential threat posed by AI. Nobel laureate and social entrepreneur Muhammad Yunus also recently said that AI has the potential to transform “self-sustaining human beings into beggars.”
But there are many who are far less gloomy about the prospects of AI. In the realm of business in particular, early adopters of AI are shining a light on how it can make a discernible and positive impact on an organisation’s operations, whether it’s to improve customer service and the customer experience, optimise logistics, predict customer behaviour or improve marketing and advertising.
FORCE OF DISRUPTION
Prof Sander Klous, Data and Analytics Leader for KPMG in the Netherlands, says AI is a development that cannot be ignored and has the potential to disrupt businesses by significantly reducing costs, addressing risks more efficiently (by optimising maintenance, for instance) and achieving additional growth (through better understanding and fulfilling the needs of clients).
In Malaysia, KPMG Head of IT-Enabled Transformation Alvin Gan says automation and AI are already reshaping how financial institutions deliver services and run their internal operations. Hong Leong Bank, for example, is said to be looking to improve internal operational efficiency by 60% with its chatbot HALI – driven by the reduction in repetitive, manual operations such as answering emails and internal telephone enquiries.
Hyperlab, a Malaysian start-up that specialises in conversational AI, is the company that created HALI. It counts companies in the financial services, telecommunications and automotive sectors in Malaysia, Singapore, Taiwan, the Philippines and the US as clients – among them Cerebos, Maxis, Proton and Ford. Collectively, Hyperlab’s chatbots (which operate in English, Bahasa Malaysia, Manglish, Singlish and Mandarin) have had conversations with more than a million users around the world.
So what spurs companies to look to AI? Hyperlab co-founder and CEO Vic Sithasanan says those who embark on AI specifically for customer experience, for example, understand that building a brand requires building a great experience. “Gone are the days of million-dollar TV advertising with a film that people fall in love with. Today everything is down to how to give better service and experience.
“The best approach would be to have a vast number of customer experience agents to do one-to-one engagements with your customers. But that’s not going to be possible. So the only scalable and sustainable way is to put some kind of automation that can be personalised, has a brand voice, and can interact with your customers and give them what they want very quickly,” he adds. This form of automation is the chatbot, which blends machine learning, AI and natural language processing.
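To make the idea concrete, here is a minimal sketch of the kind of intent matching that sits at the heart of such a chatbot. Production systems like Hyperlab’s use trained language models rather than keyword lists; the intent names and phrases below are invented purely for illustration.

```python
# A toy intent matcher: score each intent by keyword overlap with the
# user's message, and hand over to a human when nothing matches.
INTENTS = {
    "track_order": {"where", "order", "delivery", "shipped", "tracking"},
    "returns": {"return", "refund", "exchange", "faulty"},
    "product_query": {"jacket", "size", "colour", "stock", "price"},
}

def classify(message: str) -> str:
    """Return the intent whose keyword set best overlaps the message."""
    words = set(message.lower().split())
    scores = {name: len(words & keywords) for name, keywords in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "handover_to_human"
```

In a real deployment the keyword sets would be replaced by a trained classifier, and a dialogue manager would decide what to ask next – much as XPS does when it narrows down jacket features question by question.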
Beyond the customer experience, AI is also deployed behind the scenes. Amazon, for example, has been using robots to help humans move merchandise around its warehouses and it was recently reported that the company has also shifted tasks like forecasting demand, ordering inventory and negotiating prices to algorithms.
Gan adds that KPMG has also worked with the National Basketball Association (NBA) to use data and analytics in its game scheduling. The league – where the average team is reported to be worth US$1.65 billion – has to schedule its 30 teams to play in 1,230 games over a 6-month period.
“Using AI, we established a cloud-based optimisation system, creating more than 32 trillion possible schedules. Each schedule was evaluated by the system, which dramatically improved the key performance objectives. Fan experiences saw substantial improvements as high-quality matchups were scheduled at the most optimal times. We also saw overall schedule improvements as logistics issues were addressed, especially for teams that shared arenas. Player health metrics also dramatically improved as the number of back-to-back games was reduced, with more rest for teams in between games,” says Gan.
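At its core, the optimisation Gan describes means scoring each candidate schedule against objectives such as player fatigue. The toy sketch below illustrates one such objective – counting back-to-back games – using invented team names, dates and weights; it is not KPMG’s or the NBA’s actual system.

```python
# Toy schedule evaluation: a candidate schedule is a list of (team, date)
# games, and its score penalises games played on consecutive days.
from datetime import date

def back_to_backs(games: list[tuple[str, date]]) -> int:
    """Count how often any team plays on consecutive days - a fatigue metric."""
    by_team: dict[str, list[date]] = {}
    for team, day in games:
        by_team.setdefault(team, []).append(day)
    count = 0
    for days in by_team.values():
        days.sort()
        count += sum(1 for a, b in zip(days, days[1:]) if (b - a).days == 1)
    return count

def schedule_score(games: list[tuple[str, date]], weight: int = 10) -> int:
    """Lower is better: each back-to-back game adds a fixed penalty."""
    return weight * back_to_backs(games)
```

A real optimiser would combine many such weighted objectives (matchup quality, arena conflicts, travel distance) and search the space of candidate schedules for the lowest total score.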
READY FOR AI?
Sithasanan says companies that “have got their websites onto platforms, are active on social media, have good process and systems set up would be the ones ready to take on the next step.” He describes these organisations as “visionary enterprises”. Hyperlab co-founder and CMO Chris Greenough says its clients typically comprise organisations that see innovation as a point of differentiation. “And they want to invest in learning about the technology,” he says.
Klous, who is also a Professor in Big Data Ecosystems for Business and Society at the University of Amsterdam, believes that companies delving into AI need to take an agile approach. “Due to the complexity of AI and big data, it is difficult to get things right from the get-go. Moving AI into production is not just a technology challenge. It also requires changes, for example, in processes and capabilities. This makes the introduction of AI into a business a complex activity. The agile approach allows the flexibility to adjust goals and approach based on experiences gained during the project.”
Although it is generally the larger organisations that are seen as the early adopters of AI, Klous says smaller organisations needn’t be left out. “Smaller companies can look at solutions that are available and adapt these for their needs. For example, some companies might be able to use the business equivalent of Siri in their call centres, which would avoid the need to implement a more bespoke system,” he says. Once organisations see how AI benefits the business, they are also more likely to explore its use in other areas of the company’s operations.
In the case of Hong Leong Bank, for example, Hyperlab is working on ways HALI can be used to shortlist job candidates. To apply for a job at the bank, prospective candidates are asked to submit answers to scenarios predetermined by the bank. “They can respond either via video, audio or text, and HALI will assess that reply,” says Sithasanan, adding that parameters used include richness of the candidate’s vocabulary, coherence, positive outlook and psychology, to name a few.
“To hire 80 people, the bank may have some 13,000 CVs to go through, so this will take away some of the heavy lifting,” he adds. Such an approach also removes bias from the equation, says Greenough. “Previously there may have been a tendency to pick top students, but they are not necessarily the best candidates. This allows you to sieve for candidates that have the qualities you are looking for, for example digital literacy,” he adds.
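One of the parameters Sithasanan mentions, vocabulary richness, can be approximated with a classic measure called the type-token ratio. The sketch below illustrates the idea only – it is not Hyperlab’s method, and the threshold and function names are invented.

```python
# Illustrative candidate screening: score each text reply by its
# type-token ratio (distinct words / total words) and keep those
# above a threshold.

def vocabulary_richness(text: str) -> float:
    """Type-token ratio in the range 0..1; higher means more varied wording."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    words = [w for w in words if w]
    return len(set(words)) / len(words) if words else 0.0

def shortlist(responses: dict[str, str], threshold: float = 0.6) -> list[str]:
    """Return the candidates whose replies clear the richness threshold."""
    return [name for name, text in responses.items()
            if vocabulary_richness(text) >= threshold]
```

A production system would combine many such signals – the article also mentions coherence and positive outlook – and weight them, rather than rely on any single lexical statistic.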
A QUESTION OF TRUST
The spread of AI, machine learning and related technologies through business and many aspects of life raises an important issue – trust. Pointing to a scenario where AI makes the decision on school allocations, Klous asks if parents would trust an algorithm to have their child’s best interests at heart. It’s an issue confronting parents in a major city in the Netherlands following the school system’s move to embrace AI to create a more equitable and evenly distributed student allocation system.
“The algorithm has been designed to prevent the oversubscription of popular schools while providing the overall best result for all children. But how can you prove it works accurately and ethically?” he asks. This was the challenge the team at KPMG faced when asked to create a model of assurance that would give administrators and parents alike peace of mind that the algorithm was functioning fairly, so all affected could have faith in the system.
“Of course trust has long been a defining factor in an organisation’s success or failure – underpinning reputation, customer satisfaction, loyalty and shareholder value. Increasingly though, with the widespread adoption of data analysis in general and more specifically of AI throughout business, machines and algorithms have become a significant part of the trust equation,” says Klous.
Interestingly, KPMG’s survey of 2,200 global information technology and business decision makers found that only 35% had a high level of trust in their organisation’s analytics. The recent Facebook privacy breach involving Cambridge Analytica has also brought issues of trust and AI to the fore. The scandal exposed not only the gap between the rapid rise in the use of AI and its regulation, but also that existential threats can arise from human beings’ unethical exploitation of AI. Many countries, from the UK to Singapore, are calling for AI regulations or some sort of governance code for the ethical use of AI and personal data.
Klous points out that one of the elements often overlooked during the development of AI is risk management. “Ideally, internal audit should be involved in these projects from the start. Not as a gatekeeper, but to develop the solutions they need to be able to manage risks without hampering innovation,” he says, adding that AI risk management should be similar to financial risk management.
“Good practices, like three lines of defence – materiality (or relevance), validation of design, existence and effectiveness – are all just as applicable to AI risks and perfectly useful to address issues like privacy risks,” he says.
Where the application of AI has societal implications – as with autonomous or self-driving cars, more so given accidents involving these vehicles – a more robust risk management framework needs to be in place. Germany last year paved the way for testing self-driving cars with a law that requires a driver to be sitting behind the wheel at all times, ready to take back control if prompted to do so by the vehicle.
It also drew up the first ethics guidelines, including a requirement for the software that controls self-driving cars to be programmed to avoid injury or death of people at all costs. Meanwhile, local authorities and municipalities such as the city of Amsterdam are seeking assurance statements from AI providers in cases where algorithms impact society. “And other cities are also moving in that direction,” says Klous.
But risk management is just one part of the public trust puzzle. The other is expectation management. “A common misunderstanding is that transparency solves expectation gaps, but expectations and trust are based on softer aspects like brand reputation and experience. Again, we see similarities with financial audits, where the brand reputation of audit firms is key to societal trust in the financial system. In recent years, the general public has experienced a gap between what these audit firms deliver and what they are expected to deliver, leading to trust issues. One of the challenges for AI assurance will therefore be to establish a similar reputation-based trust model and to make sure the expectation gap is avoided,” he adds.
TAKING AWAY HUMAN JOBS?
One of the biggest fears is that AI will take away human jobs. Such fears aren’t new. Program Director of Games and Technology at Laureate Australia and New Zealand, Himanshu Khanna, points to fears of mass unemployment following the advent of machine tools during the Industrial Revolution in the 19th century. This saw English textile workers and weavers – Luddites – destroying weaving machinery as a form of protest.
Most economists say such fears of unemployment are exaggerated. “Some jobs will become obsolete but new jobs will definitely be created – ones that didn’t exist previously. Gartner predicts that, as early as 2020, AI will generate 2.3 million jobs, exceeding the 1.8 million to be wiped out. Also, according to the McKinsey Global Institute, technology adoption historically can and often does lead to short-term job loss. However, it creates a multitude of new jobs in the long run – more than offsetting the number of jobs it destroys,” says Khanna.
HSBC in a recent report predicts six new types of jobs – such as algorithm mechanic, conversational interface designer and digital process engineer – as a result of AI and the digital revolution. Khanna says that although the knowledge and skills required to work alongside AI will be specific to the role and the industry, organisations and individuals can follow three main steps to ease the transition. “Although roles are accurately defined in organisations, a comprehensive set of tasks performed within those roles is rarely documented. Listing those tasks combined with a mapping of the skills required should be the first step,” he says.
Next, given that AI will take over most of the repetitive, mundane work, the tasks-skills matrix should be classified accordingly. “This will now begin to point to the new roles required in the organisation and the emerging skill gap. The final step is to analyse the existing skill gap for opportunities for new roles, especially those with complementary skills. Redefine the existing roles to the closest possible new ones. Bridge the skill gap for a redefined role through training programmes, and recruit for the newly created roles.” Adaptability, he says, is the key to navigating the future.
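Khanna’s steps amount to building a tasks-skills matrix: list the tasks, tag each with its required skills and whether it is repetitive (and hence a candidate for automation), then surface the skills attached to the remaining human-led work. The sketch below illustrates that idea; all task names and tags are invented for illustration.

```python
# A toy tasks-skills matrix: each entry records a task, the skills it
# needs, and whether it is repetitive enough to be automated away.
TASKS = [
    {"task": "answer routine email enquiries", "skills": {"writing"}, "repetitive": True},
    {"task": "resolve escalated complaints", "skills": {"empathy", "judgement"}, "repetitive": False},
    {"task": "reconcile daily transactions", "skills": {"accounting"}, "repetitive": True},
    {"task": "design new service journeys", "skills": {"creativity", "digital literacy"}, "repetitive": False},
]

def skill_gap(tasks: list[dict]) -> list[str]:
    """Skills tied to non-repetitive tasks - where retraining should focus."""
    needed: set[str] = set()
    for t in tasks:
        if not t["repetitive"]:
            needed |= t["skills"]
    return sorted(needed)
```

Even in this toy form, the matrix makes the redefined roles visible: the repetitive rows point to automation candidates, while the remaining skills suggest the training programmes and new hires an organisation should plan for.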
“The employee of the future will need to be a creative problem-solver and a life-long learner,” he adds. Training providers are already responding to the spread of AI by offering courses on the subject. It was recently reported that an AI online course, jointly organised by the Department of Computer Science of the University of Helsinki and the technology company Reaktor, saw more than 80,000 students signing up. Khanna, who manages all games and technology qualifications under Torrens University in Australia and Media Design School in New Zealand, says that apart from an expansion in its games programmes, there are also plans to launch future-forward technology programmes such as AI and Cloud Computing.
AI – HERE TO STAY?
Klous believes that AI will eventually become a standard operational activity, just like finance or IT. “A compliance framework will be put in place, standardised solutions will find their way to small- and medium-sized enterprises, and organisations that are able to adopt AI will outperform the ones that didn’t. The ones that didn’t will eventually disappear, just like there are no organisations today that don’t use computers. Society will find a way to make sure AI is applied in ways that fit our norms and values, probably in a similar way as has been done for our financial system,” he adds.
And what of humans? Would we be rendered obsolete? Khanna says that although AI will continue to automate and even solve a much broader set of tasks and problems than previously imagined, it isn’t on track to replace humans for the foreseeable future. “The underlying technology just isn’t there to compete with us. We are complex beings possessing a higher general intelligence, which so far isn’t replicable. But no one knows what the future holds; one scientific breakthrough accelerated the field of AI, and another could accelerate it further.”
Crystal ball gazing aside, perhaps the biggest lesson about humans and AI was succinctly put by Arthur C. Clarke when he said: “Whether we are based on carbon or on silicon makes no fundamental difference; we should each be treated with appropriate respect.”