How to use AI safely in business

By Mike Banbrook, October 15, 2024

“By far, the greatest danger of artificial intelligence (AI) is that people conclude too early that they understand it.” — Eliezer Yudkowsky, American AI researcher.

AI safety research grew 315% in just five years [1]. But don't be fooled: despite this rapid growth, AI safety research is estimated to comprise only 2% of all research into AI [1]. This disparity underscores a critical need: while AI has revolutionised industries by automating tasks and providing deep insights, ensuring its safe and ethical use is essential to harnessing its full potential while mitigating the associated risks.

What is AI safety? Setting the stage for responsible AI use

At its core, AI safety is about ensuring that artificial intelligence systems operate reliably, predictably and in alignment with human values and intentions. As AI becomes more integrated into our business processes, the stakes get higher.

Before you even think about implementing AI in your organisation, you need to lay the groundwork. This means establishing a clear AI policy and standards. Think of it as creating a playbook for your team. Without a policy and set of standards, you're essentially flying blind, and that's a risk no business can afford to take.

One of the most effective ways to manage AI within your organisation is to develop an ‘AI Code of Conduct’. This isn't just a document that gathers dust on a shelf; it's a living framework that guides how your company interacts with and leverages AI technologies. It should cover everything from data usage and privacy concerns to decision-making processes and ethical considerations.

Now, let's talk about some of the pitfalls you need to watch out for. Generative AI, as powerful as it is, can sometimes be a double-edged sword. It has a knack for creating information that simply isn't correct – a phenomenon often called ‘hallucination’. Imagine taking two pieces of accurate information and combining them in a way that results in a completely false conclusion. It's like taking 2 + 2 and somehow ending up with 22. This isn't just a theoretical concern; it can lead to real-world problems if left unchecked.

Another issue that often flies under the radar is unintended bias. AI systems learn from the data we feed them. If that data isn't diverse or is skewed in any way, guess what? Your AI will inherit those biases. For instance, if all your training data comes from a specific demographic, your AI might struggle to provide fair and balanced outputs for a broader audience.

Let’s not forget about third-party AI models. It's tempting to plug in a pre-trained model and call it a day, but that's a risky move. You need to ensure that these models have been ethically trained. This means doing your due diligence. Ask questions about the data sources, the training methodologies and the steps taken to mitigate bias. Remember, when you use a third-party model, you're essentially bringing their ethics into your organisation.

By addressing these aspects of AI safety head-on, you're not just mitigating risks; you're setting the stage for AI to become a powerful, trustworthy tool in your business arsenal. It's about being proactive rather than reactive.

How to use AI safely: scenarios and best practices

AI as a helper: low-risk applications

Now, let's talk about the less risky way to dip your toes into the AI waters. We're talking about using AI as a helper, a sort of digital assistant that can streamline your processes without diving into sensitive information. This is your entry-level, low-risk scenario that every business should consider as their starting point.

So, what does this look like in practice? Imagine you're setting up a new system, maybe a customer service platform. Instead of starting from scratch, you can leverage AI to generate ideas and content without feeding it any of your company's private data. It's like having a brainstorming session with a tireless, incredibly knowledgeable colleague who doesn't need coffee breaks.

Let me give you a real-world example. At Convai, we use generative AI to help administrators generate potential customer queries for contact centres. We simply ask our general-purpose AI engine, “tell me some real-world reasons why people might call a contact centre”. We can then follow up with, “give me 10 different ways someone might ask for X”. This approach jumpstarts the process of building an IVR (interactive voice response) solution without exposing any sensitive company or customer data.
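
For illustration, here's a minimal sketch of this kind of prompting, assuming an OpenAI-style chat API. The model name, the follow-up topic and the exact wording are placeholders, and the key safety property is visible in the code itself: none of the prompts contain any company or customer data.

```python
# Minimal brainstorming sketch using a general-purpose model via an
# OpenAI-style chat API (openai>=1.0). Model name and prompt wording
# are illustrative only; no company or customer data is sent.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def brainstorm(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any general-purpose model works
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

reasons = brainstorm("Tell me some real-world reasons why people might call a contact centre.")
variants = brainstorm("Give me 10 different ways someone might ask to check their account balance.")
```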

Now, here's why this approach ticks all the boxes for safe AI use:

  • No sensitive data exposure: you're not feeding the AI any private or personal information. You're asking general questions that don't require access to your internal data.
  • Complete control: you're in the driver's seat. The AI generates ideas, but you decide which ones to use. It's like having a suggestion box that's always full of fresh ideas.
  • Human oversight: every piece of AI-generated content goes through human review before it's implemented. This ensures the final product aligns with your company's standards and objectives.
  • Low risk of misinformation: since you're not using AI to provide direct answers to customers, there's minimal risk of spreading incorrect information.

This approach is what we in the industry often call a ‘copilot’ mode. The AI is there to assist and augment human capabilities, not to replace them. It's a collaborative process where AI provides the raw material, and human expertise shapes it into something truly valuable.

AI for answer generation: moderate-risk applications

We're moving into what we call the moderate-risk territory: using AI to generate answers to specific queries. This is where things start to get interesting – and a bit more complex.

Imagine this scenario: a customer calls in with a question that isn't covered in your standard FAQ. Maybe they're asking about the best mobile plan for their needs, or they want to know the current interest rate on a specific product. This is where AI can shine, potentially providing quick, accurate responses to these ad hoc queries. But here's the rub – and it's a big one. When you start using AI to generate answers in real time, you're walking a tightrope between efficiency and risk.

Let's break down why:

  • Information leakage: the moment you start feeding customer queries into an AI system, you're potentially exposing sensitive information. Even if you think the query is innocuous, it might contain details that, when combined with other data, could compromise privacy.
  • Unverified external sources: if you're using a general-purpose AI that pulls information from external sources, you're essentially trusting those sources with your customer interactions. That's a leap of faith that might not always pay off.

So, how do we mitigate these risks? This is where the concept of a ‘walled garden’ comes into play. Instead of relying on external AI models, you build a controlled model within your own technological environment. Here's what that may look like, with a brief code sketch after the list:

  • Curated knowledge base: you create an information source the AI can draw from, containing only verified, approved information relevant to your business.
  • Controlled environment: the entire model runs within your own tech infrastructure. This means you're not sending data out into the wild; it's all contained within your secure environment.
  • Limited scope: you define exactly what kind of information the AI can access and what kind of answers it can provide. This prevents it from venturing into areas where it might give incorrect or inappropriate responses.
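
In its simplest form, the walled-garden idea can be sketched as follows: the assistant answers only from a curated, approved knowledge base and hands everything else to a human rather than improvising. All topics, wording and names below are invented for illustration; a production system would use proper retrieval rather than keyword matching.

```python
# Minimal walled-garden sketch: answers come only from a curated,
# approved knowledge base; out-of-scope queries are never improvised.
# All entries and names here are invented for illustration.

APPROVED_KB = {
    "opening hours": "Our contact centre is open 8am to 8pm, Monday to Friday.",
    "change plan": "You can change your plan at any time via the My Account portal.",
}

def answer(query: str) -> str:
    q = query.lower()
    for topic, approved_text in APPROVED_KB.items():
        if topic in q:
            return approved_text  # verified, pre-approved content only
    # Limited scope: anything outside the knowledge base goes to a human agent.
    return "I can't answer that one, so let me connect you with a human agent."
```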

Now, even with these safety measures in place, we're not out of the woods entirely. There's still a risk of the AI misinterpreting information or combining facts in ways that lead to incorrect conclusions. Remember our earlier example of 2+2=22? That's still a possibility, albeit a reduced one.

This is why human oversight remains crucial. We usually work with a two-pronged approach, sketched in code after the list:

  1. Real-time flagging: set up your system to flag any responses it's not 100% confident about for human review before they reach the customer.
  2. Continuous learning: regularly review a sample of AI-generated responses to ensure they meet your standards. Use these reviews to refine and improve your AI model.
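
As a sketch, both prongs can start as simple routing logic: a confidence gate plus random sampling for audit. The threshold, sample rate and routing labels below are illustrative; in practice, the confidence value would come from the model itself or from a separate verifier.

```python
import random

REVIEW_THRESHOLD = 0.95   # illustrative; tune to your risk appetite
AUDIT_SAMPLE_RATE = 0.05  # share of confident answers still sampled for review

def route_response(text: str, confidence: float) -> tuple[str, str]:
    # Prong 1: real-time flagging - low-confidence answers go to a human first.
    if confidence < REVIEW_THRESHOLD:
        return ("human_review", text)
    # Prong 2: continuous learning - sample confident answers for later audit.
    if random.random() < AUDIT_SAMPLE_RATE:
        return ("send_and_audit", text)
    return ("send", text)
```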

By implementing these measures, you're striking a balance between leveraging AI's power to handle complex queries and maintaining control over the information being disseminated. It's not foolproof, but it's a significant step up in terms of capability while still maintaining a strong safety net.

AI as an actor in conversations: higher-risk applications

Picture this: instead of AI working behind the scenes, it's now front and centre, directly engaging with your customers. It's answering queries, providing information and even guiding conversations in real time. Sounds like the future, right? Well, it is – but it's a future that comes with its fair share of challenges.

This scenario represents the highest risk level we've discussed so far. Why? Because now we're not just using AI to support human interactions; we're letting it take the wheel. And as impressive as AI has become, it's not infallible.

Here's what we're up against:

  • Erroneous information: bringing back our 2+2=22 scenario. In this context, that kind of mistake could happen in a live conversation with a customer. The stakes are significantly higher.
  • Misinterpretation of context: AI, for all its sophistication, can still struggle with nuance and context. A misunderstood query could lead to an entirely off-base response.
  • Lack of emotional intelligence: while AI can simulate empathy, it doesn't truly understand human emotions. This can lead to tone-deaf responses in sensitive situations.
  • Potential for misuse: if not properly controlled, an AI actor could be manipulated into saying things that don't align with your brand or values.

So, why would anyone consider this high-wire act? Because when it works, it can be transformative. It can provide 24/7 customer service, handle a massive volume of enquiries simultaneously and offer consistent information across all interactions. But – and this is a big 'but' – safety measures are absolutely crucial.

Here's how we recommend approaching this:

  • Transparency is key: always inform users that they're interacting with an AI. This sets the right expectations and helps users approach the interaction with an appropriate mindset.
  • Clear limitations: be upfront about what the AI can and cannot do. If it's designed to handle basic enquiries but not complex problem-solving, make that clear.
  • Easy escalation: ensure there's a quick and simple way for users to switch to a human operator if they're unsatisfied or if the query is too complex for the AI.
  • Constant monitoring: implement real-time monitoring systems that can flag potentially problematic conversations for immediate human review.
  • Scenario-specific deployment: start by using AI actors in low-stakes scenarios. Maybe it handles initial greetings or basic information queries before handing them to a human for more complex issues.
  • Regular audits: consistently review a sample of AI-led conversations to identify areas for improvement and potential risks.
  • Continuous refinement: use insights from these audits to constantly refine and improve your AI model.

Here's a real-world example of how this might work: Let's say you're a telecom company. You might use an AI actor to handle initial enquiries about plan options or basic troubleshooting. But for anything involving account changes, billing disputes or complex technical issues, the AI would seamlessly hand off to a human agent.
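
A hypothetical sketch of that handoff logic is below. The intent labels are invented; in a real deployment, they would come from your conversational AI's intent-recognition layer.

```python
# Hypothetical handoff rules for an AI actor. Intent labels are invented;
# in practice they come from the intent-recognition layer.
HUMAN_ONLY_INTENTS = {"account_change", "billing_dispute", "complex_technical"}

def route_turn(intent: str, customer_asked_for_human: bool) -> str:
    # Easy escalation: the customer can always opt out of the AI.
    if customer_asked_for_human or intent in HUMAN_ONLY_INTENTS:
        return "human_agent"
    # Low-stakes scenarios stay with the AI actor.
    return "ai_actor"  # greetings, plan options, basic troubleshooting
```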

Advanced AI usage: data and insights extraction

Imagine being able to analyse every customer interaction – not just for content, but for sentiment, empathy levels and overall satisfaction. That's the promise of AI in this context. For instance, you could ask your AI system, "on a scale of 1-10, how empathetic was our agent during this call?" or "did the customer seem satisfied by the end of the interaction?" This isn't just about collecting data; it's about understanding the nuances of human communication at scale.

This is advanced territory, and it comes with its own set of challenges and safety considerations:

  • Protected environment: when you're feeding entire customer interactions into an AI system, you need ironclad security. This means running your AI within a tightly controlled, in-house environment to prevent any data leakage.
  • Metadata usage safeguards: remember, the insights generated by AI aren't infallible. You need clear guidelines on how this metadata can be used. For example, using it to determine an agent's bonus? That's a no-go. Using it to identify areas for additional training? Now we're talking.
  • Statistical, not individual: the real power here lies in analysing trends over time, not scrutinising individual interactions. Look at how empathy scores change over months for a team, rather than fixating on a single agent's score from one call (a brief sketch of this follows).
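
For example, that statistical framing might look like the sketch below, where AI-generated empathy scores are only ever aggregated by team and month. The field names and data shape are invented for illustration.

```python
from collections import defaultdict
from statistics import mean

def team_monthly_empathy(calls):
    """Aggregate AI-generated empathy scores by (team, month).

    `calls` is an iterable of dicts such as
    {"team": "retail", "month": "2024-09", "empathy": 7}.
    No single agent's single-call score is ever surfaced.
    """
    buckets = defaultdict(list)
    for call in calls:
        buckets[(call["team"], call["month"])].append(call["empathy"])
    return {key: round(mean(scores), 2) for key, scores in buckets.items()}
```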

This approach to AI usage allows you to gain insights that would be impossible to achieve manually, all while maintaining ethical standards and data privacy. It's about seeing the forest, not just the trees, and using that bird's-eye view to drive meaningful improvements in your customer service strategy.

Where AI usage needs to be restricted or monitored

As we've explored the potential of AI, it's crucial to shine a light on areas where caution isn't just advisable – it's imperative. Let's talk about the zones where AI poses higher risks and why keeping a tight rein is non-negotiable.

First up, any application involving sensitive personal data should set off alarm bells. We're talking financial information, health records or anything that could compromise individual privacy if mishandled. The consequences of a misstep here aren't just bad PR – they can lead to serious legal and ethical ramifications.

Another high-risk area? Using AI for critical decision-making processes. Think loan approvals, hiring decisions or medical diagnoses. The potential for bias or errors in these scenarios can have life-altering consequences for individuals. It's one thing to have AI suggest a movie; it's quite another to have it determine someone's creditworthiness.

Let's not forget about AI in content creation and curation. Unrestricted AI in this space can lead to the spread of misinformation, copyright infringement or the generation of inappropriate content. The internet's already a wild west of information; we don't need AI making it wilder.

So, what happens if we throw caution to the wind? The consequences of unrestricted AI use can be severe:

  • Privacy breaches: mishandling of personal data can lead to identity theft or unauthorised profiling.
  • Amplified bias: unchecked AI can perpetuate or even exacerbate existing societal biases.
  • Erosion of trust: if customers lose faith in your AI systems, regaining that trust can be an uphill battle.
  • Legal quagmires: non-compliance with data protection regulations can result in hefty fines and legal challenges.

AI should be a tool that enhances human decision-making, not one that replaces it entirely. In high-stakes scenarios, having a human in the loop provides a crucial layer of judgment, empathy and accountability that AI simply can't replicate.

As AI continues to integrate into various aspects of business, prioritising safe and responsible AI practices is not just important; it’s essential. By understanding and implementing solid AI safety measures, organisations can navigate the complexities of AI technology, ensuring it serves as a powerful, reliable tool that aligns with human values and intentions.

At Convai, we are dedicated to safe AI practices, offering contact centre call routing solutions that prioritise security and ethical use. As a proud member of the Probe Group, we strictly adhere to the Probe Group Responsible AI Policy, ensuring that all our AI applications are designed, developed and deployed with the highest standards of responsibility and transparency. 

This commitment not only enhances customer experience but also significantly benefits employees. By leveraging AI tools that are both effective and ethical, we improve the employee experience (EX), making their work more efficient and rewarding. Our dedication to responsible AI practices underscores our mission to create a safe, supportive and productive environment for both customers and employees.

To delve deeper into how AI can transform the workplace, particularly in enhancing EX, check out our blog on ‘How Conversational AI makes life easier for employees’.

References

[1] AI Safety Statistics
