Breaking through the hype: Making Generative AI and ChatGPT safe for business

By Richard Farrell, CIO, and Richard Higginbotham, Product Marketing Manager, Intelligent Automation, at Netcall

Generative Artificial Intelligence (GenAI) and applications such as ChatGPT have become the latest tech buzzwords – not just among technology enthusiasts, but among businesses and everyday users too. If you aren’t already using it in some capacity, you’ve certainly heard of it. Following its release to the general public in November 2022, ChatGPT attracted over a million users in just five days, as people across the globe eagerly put it to the test. Echoing its popularity further, Gartner predicts GenAI will have a profound impact on business and society, positioning it at the Peak of Inflated Expectations in its Hype Cycle for Emerging Technologies, 2023, and projecting that it will deliver transformational benefit within two to five years.

Despite being met with excitement and anticipation by many, advances in AI as a whole are also being viewed with caution and concern. In fact, AI anxiety is spreading far and wide as people consider the impact it could have on future jobs, not to mention the potential security implications of the wealth of data absorbed by these tools. You don’t have to look far through today’s headlines to find news about AI and, in particular, how the Government plans to tackle the safety issues surrounding it.

In June, Prime Minister Rishi Sunak announced the UK’s plans to host the first major global summit on AI safety in autumn this year, in order to mitigate the risks associated with mass AI implementation. Meanwhile, the EU Data Protection Supervisor has created a task force to assess GenAI systems, and other European regulators have already published plans explaining how they will tackle AI in the future.

With security and data sovereignty remaining key concerns, and many experts starting to dampen the hype, the question remains: what is the reality of this cutting-edge technology in today’s business landscape?

GenAI in business

Recognised for its ability to create highly tailored content at speed, GenAI is prompting businesses to consider the benefits such applications could have within their own organisations – and in particular for customer experience. With the right know-how, GenAI promises to create conversational interfaces that engage customers in a more personalised and natural way, leading to improved customer satisfaction and loyalty. In the future it could also, when combined with other technologies and processes, empower organisations to provide round-the-clock support, increased efficiency and data-driven insights that help improve future products and services.

The truth is that the concept of AI systems generating content predates the specific terminology used to describe it, and related terms and ideas have been present in AI research for decades. Organisations have been reaping the benefits of natural language models within the contact centre for some time now, optimising customer experience whilst boosting productivity. And while tools such as ChatGPT certainly represent a step forward, chatbots as a whole have existed for many years, allowing customers to self-serve and find answers to common questions through AI-powered self-service portals.

The reality is that although GenAI tools such as these can be used for diverse purposes – including customer support, lead generation and natural language processing – the context in which they are used must be considered carefully. For example, using AI to personalise content for customers at scale can be achieved with minimal consequences. On the other hand, for any high-risk organisation – an NHS trust dealing with a patient, say, or a local council dealing with a social care requirement – relying on an AI decision without human intervention could lead to disaster.

To reap the benefits on offer, it is important to acknowledge that a) a one-size-fits-all approach to GenAI simply won’t cut it, and b) humans need to be kept in the loop for it to be safe and effective. As IBM’s Ginni Rometty put it, augmented intelligence is a much more accurate term: rather than replacing people, AI is a technology that augments human intelligence and ushers in a new era of partnership between people and machines, blending digital automation with human interaction.
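
To make the human-in-the-loop principle concrete, here is a minimal sketch of how an AI-generated reply might be gated behind a human review step for high-risk cases. It is illustrative only: the keyword risk rules, the generate_draft_reply stub and the queue_for_human_review function are hypothetical placeholders, not a description of any vendor’s actual implementation.

```python
# Minimal human-in-the-loop sketch (illustrative only).
# generate_draft_reply() stands in for any call to a generative model,
# and the keyword-based risk rules are placeholders that a real
# organisation would replace with its own policy and richer signals.

HIGH_RISK_TOPICS = {"medical", "social care", "safeguarding", "benefits claim"}

def generate_draft_reply(query: str) -> str:
    """Placeholder for a call to a generative model."""
    return f"Draft answer to: {query}"

def classify_risk(query: str) -> str:
    """Crude keyword triage; returns 'high' or 'low'."""
    lowered = query.lower()
    return "high" if any(topic in lowered for topic in HIGH_RISK_TOPICS) else "low"

def queue_for_human_review(query: str, draft: str) -> str:
    """Hand the draft to an agent rather than sending it automatically."""
    print(f"[REVIEW QUEUE] query={query!r} draft={draft!r}")
    return "A member of our team will review this and get back to you."

def respond(query: str) -> str:
    draft = generate_draft_reply(query)
    if classify_risk(query) == "high":
        # High-risk decisions are never sent without human sign-off.
        return queue_for_human_review(query, draft)
    return draft

if __name__ == "__main__":
    print(respond("What are your opening hours?"))            # low risk: automated
    print(respond("I need help with a social care request"))  # high risk: human review
```

The point is not the specific rules but the shape of the flow: the model drafts, a policy decides, and a person signs off wherever the stakes are high.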

Be aware of the risks…

A lack of human involvement also poses wider concerns around the viability of tools such as ChatGPT. AI models can inherit biases present in their training data, which can lead to biased or discriminatory responses to user inputs. Misinformation is another key concern when a model is not carefully supervised or fine-tuned: the model may not fully understand the context of a question, resulting in an inappropriate or inaccurate response being delivered.

The biggest risk for organisations, however, is the impact GenAI could have on our data. As the hype around AI continues to grow, so does the apprehension surrounding security and privacy. Ultimately, there is a risk that AI-powered chatbots may inadvertently expose sensitive customer or user data if not properly secured. Businesses need to be cautious about the data they feed into AI models, and about who is processing it and where (i.e. whether it is a controlled environment), ensuring compliance with relevant data protection laws and regulations to protect customer privacy.
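
As a simple illustration of being cautious about the data fed into an AI model, the sketch below strips obvious personal identifiers from a prompt before it leaves a controlled environment. The regular expressions and the send_to_model stub are assumptions made for the example; real compliance work involves far more than pattern matching (contracts, data residency, retention policies and so on).

```python
import re

# Illustrative redaction pass run before any text is sent to an external
# GenAI service. The patterns are simplistic examples, not an exhaustive
# or compliant PII detector.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "UK_PHONE": re.compile(r"\b0\d{3}[ -]?\d{3}[ -]?\d{4}\b"),
    "NI_NUMBER": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a known pattern with a labelled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

def send_to_model(prompt: str) -> str:
    """Placeholder for the call to whichever GenAI service is in use."""
    return f"(model response to: {prompt})"

if __name__ == "__main__":
    raw = "John Smith (john.smith@example.com, 0161 496 0000) asked about his claim."
    safe_prompt = redact(raw)
    print(safe_prompt)                 # identifiers replaced before leaving the organisation
    print(send_to_model(safe_prompt))
```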

Whilst existing regulations such as GDPR already apply to GenAI systems, additional regulation surrounding AI will be critical to securing its future. Establishing guidelines and a set of standards for protecting public data will facilitate responsible AI development and implementation within the enterprise, and will be crucial to its success.

Making AI safe and applicable

Whilst some of the hype surrounding GenAI can be justified, it is also useful for two reasons. Firstly, the exposure has shone a spotlight on the data sovereignty and privacy concerns that come with adopting new tools and technologies. Secondly, it has the potential to inspire businesses to think about their own AI usage and to consider the areas in which this technology could transform processes for the better.

To automate intelligently, however, businesses need to think hard about what they want to achieve and the best way to go about it, before getting swept up in the hype. Making sure AI usage is safe and applicable must remain at the forefront of any transformation journey. Platform-as-a-Service tools such as low-code platforms, designed to be open and interoperable, can support the implementation of ChatGPT and other GenAI applications by providing pre-built AI components, integration capabilities and an easy-to-use visual interface that lets developers and non-technical users design and deploy AI-powered applications.
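
As a rough illustration of the kind of integration point such a platform could call, the sketch below wraps a guarded GenAI step in a small HTTP service. The choice of Flask, the /assist route and the placeholder model call are all assumptions made for the example rather than a description of any particular product’s API.

```python
# Illustrative HTTP wrapper around a GenAI step, of the sort a low-code
# application might call through its integration layer. Flask, the route
# and the field names are assumptions for this sketch.
from flask import Flask, jsonify, request

app = Flask(__name__)

def generate_reply(question: str) -> str:
    """Placeholder for a call to whichever generative model is in use."""
    return f"(draft answer to: {question})"

@app.post("/assist")
def assist():
    payload = request.get_json(silent=True) or {}
    question = str(payload.get("question", "")).strip()
    if not question:
        return jsonify({"error": "question is required"}), 400

    draft = generate_reply(question)
    # Return the answer as a draft with a review flag, so the calling
    # application decides whether a human signs it off before it is sent.
    return jsonify({"draft": draft, "requires_human_review": True})

if __name__ == "__main__":
    app.run(port=5000)
```

Keeping the AI behind a single, well-defined endpoint like this also makes it easier to log, audit and swap out the underlying model without touching the applications built on top.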

For enterprises that can implement AI in this manner, there are certainly benefits to be had. Whilst the reality may be smaller than expected (at this stage) – the impact on customer and employee experience can still be significant if implemented effectively. By adopting platforms that empower the simple and secure integration of AI, businesses can start experimenting with GenAI, taking small but safe steps towards their transformation goals.

Netcall is a leading provider of AI-powered automation and customer engagement solutions, and a UK company quoted on the AIM market of the London Stock Exchange. By enabling customer-facing and IT talent to collaborate, Netcall takes the pain out of big change projects, helping businesses dramatically improve the customer experience while lowering costs.

Over 600 organisations in financial services, insurance, local government, and healthcare use the Netcall Liberty platform to make life easier for the people they serve. Netcall aims to help organisations radically improve the customer experience.

For additional information on Netcall, view their Company Profile.
