Trust and Ethics: Building Transparency and Customer Confidence in AI
Are our bank accounts secure? Are our homes secure? Are our phones secure? These are questions we ask ourselves daily. Yet despite widespread wariness about the safety of technology, when we need help with a delivery or a service, we hand over personal details to chatbots without much question.
Chatbots are designed to make our lives a little easier: with a few simple verification questions, they can answer common customer service inquiries without making us sit on hold waiting for an agent. But with the rise of GDPR, it is important for organisations to communicate to customers how the data we provide to Artificial Intelligence (AI) driven chatbots is used and stored. In this new era of chatbot technology and data regulation, businesses need to put themselves under the same scrutiny that customers and regulators will.
Transparency establishes trust
As businesses continue to discover new uses for AI-based technology, the topic of ethics and transparency is attracting growing attention. Most organisations use the technology to improve the user experience, but for every ten examples of tech for good there will always be someone looking to exploit it, for example by using automated chatbots to gather personal customer data and then putting it to purposes for which it was never intended. With this in mind, it's important that those using the technology for the right reasons communicate their ethical use of it effectively.
In a recent survey by the Capgemini Research Institute, approximately three in five consumers who perceived their AI interactions with a company to be ethical said that they placed higher trust in that company, were more likely to tell others about the positive experience, and were more loyal to it. By empowering customers and giving them the control and ability to seek recourse through the use of AI and chatbot technology, businesses are more likely to see that trust returned.
It is equally important that businesses communicate with employees about their use of AI and chatbot technology. When the Capgemini Research Institute surveyed employees, it found that 44% had raised concerns about the potentially harmful use of AI systems and 42% had objected to the misuse of personal information by AI systems. To tackle and prevent such issues, CIOs and IT decision makers should ensure they have a clear code of conduct for ethical AI and that they are transparent about the use of AI in the business. At the end of the day, this technology is designed to make employees' lives easier!
Is AI bias getting the better of us?
The growing use of AI in sensitive areas, including hiring and healthcare, has stirred great debate around bias and fairness. Yet human decision making in these and other domains can also be flawed, shaped by individual and societal biases that are often unconscious. Although there are many cases where AI can reduce humans' subjective interpretation of data, we must remember that chatbot algorithms learn from public data, including news articles and social media. By using public data without checks and balances, AI can absorb the worst traits of humanity and end up embodying the prejudices of society.
Several approaches to enforcing fairness constraints on AI models have emerged, but we still have a long way to go. Some of this work has focused on processes and methods, such as "datasheets for datasets" and "model cards for model reporting", which create more transparency about the construction, testing, and intended uses of data sets and AI models. Other efforts have focused on assessments and audits that check for fairness before systems are deployed and review them on an ongoing basis. All of these efforts should be accompanied by ongoing education campaigns to foster a better understanding of the legal frameworks that govern the use of AI and the growing availability of tools to improve fairness.
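To make these ideas concrete, here is a minimal sketch of what a model-card record and a simple ongoing fairness check might look like. The field names, model name, figures, and threshold are all illustrative assumptions, not a standard schema or a real audit procedure; the fairness metric shown is demographic parity, one common check of the kind such audits run.

```python
# Illustrative model-card record in the spirit of "model cards for model
# reporting". Every field and value below is a hypothetical placeholder.
model_card = {
    "model_name": "support-intent-classifier",  # hypothetical chatbot model
    "intended_use": "Routing customer-service chat messages to topics",
    "out_of_scope": ["hiring decisions", "credit scoring"],
    "training_data": "Anonymised customer chat transcripts",
    "known_limitations": "May reflect biases present in public text data",
}

def demographic_parity_gap(preds_group_a, preds_group_b):
    """Absolute difference in positive-prediction rates between two groups
    of users (predictions given as 0/1 labels). A simple fairness metric
    an ongoing audit might track."""
    rate_a = sum(preds_group_a) / len(preds_group_a)
    rate_b = sum(preds_group_b) / len(preds_group_b)
    return abs(rate_a - rate_b)

# Example audit step: flag the model if the gap exceeds a chosen threshold.
gap = demographic_parity_gap([1, 0, 1, 1], [1, 0, 0, 0])
needs_review = gap > 0.2  # threshold is an arbitrary illustrative choice
print(f"demographic parity gap: {gap:.2f}, needs review: {needs_review}")
```

In a real deployment the predictions would come from the live system and the thresholds from the organisation's own code of conduct; the point is simply that both the documentation and the audit can be routine, repeatable artefacts rather than one-off exercises.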
Again, transparency is essential when it comes to the use of AI and chatbots in customer-facing services. It's vital that businesses understand the limits of AI and can communicate, in simple terms, that AI is not here to fully replace humans. Regardless of future developments, AI will always lack the uniquely human trait of emotional intelligence. Therefore, for every use of AI there will always be a human operating the controls, and customers should feel confident that this will not change.
Be deliberate with technology
As AI and chatbot technology continues to evolve, industries should prepare for the new challenges and criticisms that will inevitably come with it. Businesses shouldn't be scared of this technology; they should embrace the innovative and exciting opportunities it presents. AI is developing fast, and it won't wait for indecisiveness. Businesses need to ensure that they are deliberate in how they use it and transparent about how they use it. If they can do this, customers will place their trust in both them and their chatbots!
Ryan Lester is Senior Director, Customer Experience Technologies, at LogMeIn
For additional information on LogMeIn, visit their website.