Go slow and know when not to use generative AI

(© LeoWolfert – Canva.com)

It’s been over a year since ServiceNow launched its Now Assist for Virtual Agent, a conversational AI tool that can summarize information, perform tasks, help build low-code apps, and automate workflows for business units like IT, HR, and customer service. Now Assist essentially aims to make agents more productive when handling requests by automating responses for requesters, providing summarized notes to agents, and/or automating request handling.

The key point about Now Assist is that it embraces ServiceNow’s overarching strategy of being a “platform of platforms,” meaning it theoretically has access to enterprise-wide data through its CMDB. diginomica has spoken to customers who are already using it in this way – however, some knowledge-based changes were necessary to surface the right information.

With this in mind, we spoke with Gretchen Alarcon, VP and GM of Employee Workflows at ServiceNow, to understand how Now Assist and the broader use of generative AI are being used by customers. Alarcon said that the hype around generative AI that was seen in the first few months after ChatGPT was launched has died down a bit and that companies are now shifting their focus to practical aspects:

I think we’re really at the point now where organizations are starting to see where they think they can find value and where they see particular needs. They’re looking very specifically at productivity and engagement measures.

Mid-sized companies, Alarcon continues, are primarily examining how they can use generative AI in their existing provider mix, as they often do not have the capital or skills to develop their own models. Larger companies, however, are experimenting with their own models where they see a need:

Usually it’s something that’s very specific to their business, right? It’s not like, “I want to see how I can use a large language model for service components.” It’s something that’s very specific to the business challenges that they’re facing, and they’re trying to align it with their business strategy. So again, it’s a question of what use case really fits best.

A measured approach

Speaking to Alarcon, it’s clear that in her experience, buyers are taking a practical and considered approach to generative AI. While vendors often give the impression that generative AI is the key to solving all of a company’s problems, in reality there are applications where it is useful and situations where it is not. Companies will likely start small, test employee readiness, and consider where they see value. Notably, in many ways this can be seen as an extension of the AI/ML work that companies have already implemented. As Alarcon noted:

If you think about where generative AI comes from, some organizations have already gone through the machine learning process. They’ve already looked at some kind of AI. While this is a new kind of AI, it’s an evolution. So it’s not necessarily going to be a brand-new, super-disruptive technology that leads to a whole different area – disruptive, sure, but in terms of how we work – and maybe it’ll prove some of the value people were hoping for.

Likewise, the press is full of stories about how AI will take over our jobs (especially those of knowledge workers) and how we will all be unemployed in the near future because the bots can do all the heavy lifting. However, Alarcon takes a balanced view here too, saying that jobs will change in the medium term, but not as drastically as we might expect. Rather, the types of work that certain roles perform may need to be reassessed:

I think that “the jobs” themselves are not going to change dramatically in the short to medium term. It’s not going to be that we’re not going to hire agents of a certain type anymore. But the way we do the work is going to change pretty dramatically, right? I think that as companies see the productivity gains, we’re going to see new benchmarks and new expectations about what’s an appropriate workload for an agent.

If you could increase your agent productivity by 30%, what would you do in terms of agent work, long term? I think we’re going to see organizations start to ask questions: is Now Assist taking over that role at level zero, or even level one, in most cases, and shifting the agent work to these more complex cases?

We don’t have the data yet to say whether we’re there yet, but I think that’s definitely the trend we’re heading in.

Adding to this insight, Alarcon says that organizations don’t take a “big bang” approach to projects – mainly because they’re testing the reaction of the employees who will be using the tools on a daily basis. They want to see how users respond and then build from there:

(Organizations say), “This is an option for you, you don’t have to use it, but we think it will increase your productivity.” And I think that at some level, making a change like that is really a better way to encourage adoption.

When not to call in the bot

Alarcon specifically noted that while leaders are thinking about the long-term benefits of generative AI for their organizations, it’s important to consider the impact on employees. And buyers, she said, are very outspoken about when not to deploy generative AI bots. It’s important for companies to bring the conversational interfaces to where employees are working to ensure engagement is happening in the right place, but also maintain a balance of providing human support when needed.

Alarcon provided the example of how automated customer service tools on the phone can be frustrating for customers, who are often presented with a myriad of options and have now learned that if they don’t press any numbers or just press “zero,” they’ll likely be transferred to a live agent. Generative AI conversational bots can be useful, but companies need to think carefully about when and why to use them so that employees actually want to participate in the process:

Generative AI is a big topic in the context of the employee. One of the topics that has come up that is very specific to the employee and that we’re really focused on is: When should a bot not intervene?

I think a lot of thought has gone into what systems we need to connect to. Things like that. But the other side is when an employee starts asking a question, or has multiple questions, and you realize it’s about sensitive topics – it’s related to health, it’s related to misconduct, it’s related to interacting with a manager – then that’s not great. How quickly can we recognize that a human is needed here, without the bot continuing to try to deflect? In terms of that sensitivity, we’re just releasing the first features.

ServiceNow has started working on this and offers its customers a recommended framework, focusing on the interactions that companies should pay attention to in both generative AI conversations and human interactions:

What you want to avoid is going through multiple rounds and ending up with a very frustrated employee – so get to understanding quickly, with the right guardrails in place. You don’t want employees to use this as an “I don’t want to go through your process” escape.

But that was actually a very compelling conversation, especially for organizations that already have very strict AI guardrails or that are very afraid of losing the human element of employee support – to be able to say: we want you to be able to tune this at the level that you’re comfortable with. I think it helps when an organization says, “This is a tool and I have some control over it to really provide a better experience for my employees.”

And of central importance is building trust in the generative AI tools themselves:

I think that’s the most important thing: the bot will be trusted when people realize that it’s not just a deflection mechanism.

My opinion

Alarcon suggests a very measured and responsible approach. She recognizes that companies that see AI as a panacea and don’t think about bringing their employees along on the journey are likely to fail. Start small, test, identify the value, and understand when it’s better to hand a problem over to a human. Given the hype, more of this kind of thinking is absolutely necessary.

By Jasper
