
The Customer Service Team of the (Very Near) Future

2023 has been a tough economic year for most, so welcome, budget pressure! ChatGPT is out (you’ve heard about it, right?). Customer expectations for CX also keep getting higher. If your customer service team lets quality slide, you are opening the door for your competitor.

So, where is this all going? What will a customer service team and customer service organization look like in the near future?

This is the change that is coming:

              Today   Near Future
Agents          200            20
Team Leads       15             2
QA Analysts       5             1

But how?

First, let’s start with the channel. Email sucks – no further discussion required. Phone is too expensive (concurrency = 1). Shifting as much contact as possible to chat balances customer preferences against costs to the business.

Now that we are on chat, is it going to be all chatbots? ChatGPT must be able to do it all, right? No human agents, then? Why are we saying 200→20, and not 200→0?

ChatGPT is amazing. Ask it a question and it provides a credible answer. So, why not just use it to automate all conversations? Well, it does not understand what is true and what is not. It does not understand the details of your company’s business. Sometimes, it may sound extremely insensitive. And it is doing all of it with the highest confidence possible, whether it is right or wrong.

Customer conversations that are hard, whether emotionally or in the complexity of the topic, are still best left to a human. Our relationship with ChatGPT should be less of an autopilot and more of a capable apprentice. You can ask it to draft some text or code, but you would not blindly trust its output. Occasionally you will have to correct or guide it, and even with all of its imperfection, it may still save you a lot of time, giving you an unfair advantage over those who do not use it.

Technology does not have to be perfect to be useful. Good products know the limits of the technology they are using.

What Comes Next

We will continue deploying chatbots, and they may even handle more conversations than they do today. Chatbots give your customers the option of self-service. They are a conversational interface for interacting with your knowledge base – like a more natural search engine that remembers the context and history of your search session or conversation.

Similar to the self-checkout counters in a supermarket, a chatbot gives customers a way to get their question answered without talking to anyone – just get it done and be out the door.

Supermarkets have employees who stand on the side, observing the self-checkout counters. If they spot a confused or frustrated customer, they are there to help. It is good for these employees to be visible to the customers. It is the same with a chatbot: I feel comfortable knowing that a human will step in to help me when the technology fails.

You need to guide your customers to self-service whenever possible – when the questions are relatively simple and emotionally neutral. Give your customers a possible solution and an easy way to break through to a human when needed.

Humans Still Matter

Human agents are not going to disappear. A significant (although smaller than today) share of contacts will require them.

To make the “200→20” transition possible, each agent will have to be an order of magnitude more productive than today. ChatGPT-like technology will not stop at chatbots. It is the key to creating this radical jump in agent productivity. “AI will not replace you, but a person using AI will.”

Agent productivity can be measured as throughput – the number of conversations an agent can complete in an hour. There are two inputs that drive this number – AHT (average handling time) and concurrency (the number of conversations the agent can have in parallel).

If an average conversation is 15 minutes long and an agent can handle one conversation at a time, the agent’s throughput is four conversations an hour. If the same agent can handle two conversations in parallel, the agent’s throughput doubles to eight conversations an hour.
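To make the relationship concrete, here is a minimal Python sketch of that arithmetic; the numbers are the illustrative ones from the example above, not benchmarks.

```python
def throughput_per_hour(aht_minutes: float, concurrency: int) -> float:
    """Conversations an agent completes per hour.

    aht_minutes: average handling time of one conversation, in minutes.
    concurrency: number of conversations handled in parallel.
    """
    return (60.0 / aht_minutes) * concurrency

# Illustrative numbers from the example above.
print(throughput_per_hour(aht_minutes=15, concurrency=1))  # 4.0 conversations/hour
print(throughput_per_hour(aht_minutes=15, concurrency=2))  # 8.0 conversations/hour
```

The same formula shows where the order-of-magnitude gain has to come from: halving AHT at best doubles throughput, while raising concurrency from 1 to 10 multiplies it by ten.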

Agents are not in complete control of AHT, since it includes both the time it takes for the customer to respond to the agent and the time it takes for the agent to respond to the customer. They can still drive AHT down in two ways: first, by responding faster to every customer message; and second, by finding the optimal solution in every response. If you have a great plan for a conversation, you will handle it more effectively and complete it in less time. An “AI apprentice” can help the agent with both, potentially cutting AHT in half – but it is unlikely to shrink it by an order of magnitude.

Concurrency has much more headroom for improvement than conversation duration. Until now, organizations have run into a pretty low ceiling trying to realize this improvement: live chat agents can usually get to a concurrency of 2, maybe 3, or at most 4. Requiring higher concurrency causes overload and rushed responses.

Imagine that a conversation window you last visited 10 minutes ago is flashing, alerting you to a new message from the customer. You click on the conversation window. In order to respond, you first need to “reorient” yourself – recall the context and history of the conversation. Then you need to read and understand the new message. Finally, you need to compose your response. This “reorientation” time is a major challenge – context switching is tough for us humans.

An AI co-pilot or apprentice can summarize the conversation, remind the agent of the current context and draft the next action. This will make context switching easier, raising concurrency.

Let’s not stop there. The agent does not have to personally tend to every customer message in a conversation. Support conversations have predictable sequences and patterns. Instead of suggesting the next message, the AI co-pilot can suggest the course of action for the next few steps. The agent can approve the suggestion, instructing the AI to alert them when the sequence ends or the customer’s reaction differs from what was predicted. For example, when a customer contacts the company about a previously placed order, the AI can handle collecting the necessary details from the customer: “I will ask the customer ‘Could you please provide the order number?’. If they give me a string that does not look like an order number, I will say ‘It is the 10-digit number in the top right corner of your receipt’, and when they give me the number, I will thank them: ‘Thank you! Please allow me 1-2 minutes to look your order up. I really appreciate your patience.’”
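To make the idea concrete, here is a minimal sketch of what such a pre-approved sequence could look like. This is an illustration, not a real product API: the step structure, the 10-digit regular expression and the send/receive/escalate_to_agent callbacks are all assumptions.

```python
import re

# Hypothetical pre-approved sequence for "customer asks about an existing order".
# The agent approves it once; the co-pilot runs it and alerts the agent when the
# sequence ends or the customer's reply does not match what was predicted.
ORDER_LOOKUP_SEQUENCE = [
    {
        "say": "Could you please provide the order number?",
        "expect": re.compile(r"\b\d{10}\b"),  # a 10-digit order number
        "on_mismatch": "It is the 10-digit number in the top right corner of your receipt.",
        "max_retries": 1,
    },
    {
        "say": "Thank you! Please allow me 1-2 minutes to look your order up. "
               "I really appreciate your patience.",
        "expect": None,  # no reply needed; hand the conversation back to the agent
    },
]

def run_sequence(sequence, send, receive, escalate_to_agent):
    """Walk the sequence; escalate to a human when the prediction breaks down."""
    for step in sequence:
        send(step["say"])
        if step["expect"] is None:
            continue
        retries = step.get("max_retries", 0)
        while True:
            reply = receive()
            if step["expect"].search(reply):
                break  # predicted reply received, move to the next step
            if retries == 0:
                escalate_to_agent(reason="customer reply did not match prediction")
                return
            send(step["on_mismatch"])
            retries -= 1
    escalate_to_agent(reason="sequence finished, order details collected")
```

The important design choice is the escape hatch: the moment the customer deviates from the predicted path, the conversation goes back to the human.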

The role of the agent will change to that of a machine supervisor who must intervene occasionally and apply human common sense.

Human X Machine

This introduces a new model of handling interactions, where the chatbot and the human work together. A similar division of labor is widely accepted in many professions: lawyers and their paralegals, doctors and their medical assistants, professors and their TAs. As we said, the more complex and emotionally charged interactions are best left to a human, but not all parts of the same conversation are of the same degree of complexity. Understanding this is the key to high concurrency and to the order-of-magnitude jump in agent productivity.

Today, customer service teams have QA managers and analysts who sample a small (very small) percentage of conversations for every agent, compare the agent’s performance against a scorecard and assign the agent a grade. The process is manual, subjective and expensive, and it has not changed in years. QA tools that replaced spreadsheets added convenience: they streamlined workflows and improved reporting and the grader’s user experience. However, the process remained highly manual. For the first time in years, Natural Language AI – and Large Language Models in particular – opens the door to scalable, high-visibility quality management. Products that are coming online can automatically check whether agents manage all conversations the way their managers expect them to. The size and structure of internal QA teams are going to change.
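As a sketch of what those automatic scorecard checks could look like, here is a hypothetical example that asks an LLM to grade a single conversation against a rubric. The scorecard questions are made up, and call_llm is a placeholder for whichever model provider you use – none of this is a specific vendor’s API.

```python
import json

# Illustrative scorecard items; a real team would use its own rubric.
SCORECARD = [
    "Did the agent greet the customer and confirm the issue?",
    "Did the agent offer a correct resolution or a clear next step?",
    "Was the tone empathetic and professional throughout?",
]

def call_llm(prompt: str) -> str:
    """Placeholder for your LLM provider's completion call."""
    raise NotImplementedError("wire this to the model of your choice")

def grade_conversation(transcript: str) -> dict:
    """Ask the model to grade one transcript against every scorecard item."""
    prompt = (
        "You are a customer service QA analyst. Grade the conversation below.\n"
        "For each question, answer yes or no and quote the supporting evidence.\n"
        "Return JSON: {\"scores\": [{\"question\": ..., \"pass\": true/false, \"evidence\": ...}]}\n\n"
        "Questions:\n" + "\n".join(f"- {q}" for q in SCORECARD) +
        "\n\nConversation:\n" + transcript
    )
    return json.loads(call_llm(prompt))
```

Because a check like this is cheap to run, it can cover every conversation instead of a tiny sample – which is exactly what changes the size and structure of the QA team.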

Similar transformations are coming to other CX business processes that have not gone through any significant change in the last 10 years or so: tagging tickets for contact drivers, analytics, content optimization, etc. QA Managers, CX Operations, CX Managers, Knowledge Management Specialists – no role will remain as it is. Each one will become AI-enabled and scalable.

This is how a customer service team of 200 agents, 15 team leads and five QA analysts becomes one of 20 agents, two team leads and one QA analyst.

How long will this take? We do not know, but over the last few years, the developments in AI’s ability to handle language have been coming in non-linear leaps, usually much faster than initially predicted.

Change is coming, and it is coming fast. Forward-looking CX leaders must follow its developments, educate themselves and prepare for it.