Depending on who you talk to, AI is an opportunity or a threat. Which of those is true in your workplace will depend on how you manage your AI transformation.

Conversations around AI are so often limited to what the technology does (or doesn’t do). However, framing AI purely in functional terms inevitably leads to discussions about what the technology replaces.

This is understandable. The history of technology is characterised by redundancy and replacement. The car replaced the horse and cart. The word processor replaced the typewriter. Both those technological revolutions still required an operator. The task simply became much more efficient. 

Sometimes, instead of providing a new way to complete an old task, a technology completely transforms the nature of the task itself. Workflows change, processes are reassessed, resources are revalued. And, eventually, entire business models start to shift. 

So it’s no surprise that employees in many industries are uncertain and wary of AI. We’re still at the stage where everything is very much in flux. Change is happening. But how much change, and in which areas of the business, will take a while yet to play out.

And managing that change is not just a technology problem. Arguably, resolving the various technology questions is the easy bit. Just as with the arrival of the motorcar and the word processor, the key to a successful transformation is the user. 

That’s why any plan to introduce AI into the workplace has to address the cultural impact – on stakeholders, on employees, on customers.

To do this effectively requires you to consider and document three key questions – providing everyone involved with clarity, alignment and, most of all, reassurance. 

1. What is our use case for implementing AI?

Whatever form of AI you hope to use – generative AI, workflow management, voice recognition, etc. – decide on your use case up front with a very clear proposition.

It’s no good implementing AI tools in the hope that some magical benefits will materialise. Have a clear understanding of the value you hope to achieve and in which areas of the business. Also be clear about how the change might impact other areas of the business, or even the customer experience itself.

After all, we’ve all encountered automated processes that failed to live up to the promise. 

What’s the point of implementing an AI chatbot, for example, that ultimately frustrates customers and degrades the experience? What your business gains by freeing up staff from handling routine queries might be more than lost in escalated complaints and higher customer churn.

So, saving time and/or money isn’t really a use case by itself – particularly when it risks coming at the expense of those your business relies on to keep the doors open.

2. How will we use AI responsibly? 

You can’t absolve your organisation of any consideration of ethics or fairness or correct usage by putting the burden of responsibility onto the end user. It’s not up to your end users to determine the right and wrong ways to use your AI tools. And this is true whether we’re talking about your customers or people working within your organisation. 

We’re all familiar with the lengthy terms and conditions that come with most technologies these days. And we know they’re likely full of caveats that seek to limit the provider’s liability should someone misuse the tool – unwittingly or otherwise. I say “likely” because most of us don’t actually read these documents. Who has the time? Click to the end and tick Accept.

And internal staff policies – full of rules about what not to do and backed up with threats of dismissal – don’t always equip users to understand why these things matter, let alone how to avoid making an innocent mistake.

Most people aren’t experts on the often complex and nuanced issues surrounding modern technology. For example, I’ve met many business clients over the years who needed to be told they can’t use an image downloaded from Google Image Search in their marketing campaigns or on their websites. 

Your users can’t take responsibility for what they don’t fully understand. Any new AI technology you choose to implement needs to be backed up with education: not only training but also detailed documentation, along with the time needed to read and absorb it.

3. How will we use AI safely?

Data is the lifeblood of AI. Data is not only one of the most valuable commodities out there; it’s also one of the most heavily regulated.

In certain highly regulated industries, AI has the potential to create some real cybersecurity headaches. In fact, any organisation that captures and stores sensitive data such as private and personal information needs to think very carefully about how any AI tool might access and process that data. 

Depending on your use case, it might make sense for your AI to access customer records, for example, to provide more targeted and relevant responses. But any access to commercially sensitive or private data doesn’t come without risk.

And then there’s safety in the sense of providing a safe workplace. How might your AI changes impact workplace culture? 

Technology can be a bully. Technology can make people feel insecure in their roles. Changes to workflows and processes can create additional stress, so ask yourself: do your employees feel safe and supported in their work? Even if your use case for AI is not about replacing people at all, do they feel their jobs are secure?

Whether those fears are real or imagined, they still need to be addressed.

The Pounce approach to AI in the workplace

At Pounce, we’re not using AI to replace jobs. That would be neither responsible nor safe. 

Instead, we’re continually testing use cases with various AI tools to find new ways to increase our operational efficiencies – and our net profit – by deploying our resources where they can generate the maximum impact.

For example, a new AI-powered workflow might make one of our SaaS tools redundant, allowing us to cancel that monthly subscription and redirect the budget elsewhere. 

Similarly, if we use AI to streamline a bunch of time-consuming routine tasks that previously required a person, it doesn’t mean that person is redundant. Instead, their time and expertise can be redirected elsewhere – ideally to tasks that generate more value for the business. If retraining is necessary, then great! We’re investing in making a valued team member even more valuable to the business.

The goal is to equip our creatives, project managers and so on with the additional skills and tools to be even more productive and efficient – without compromising quality or job security.

AI is capable of many things, and may evolve to be capable of many more in years to come. But AI can never replace our most valuable asset: our people.