Replacing agents is a trap. Success requires granular clarity and career paths that turn resistance into bottom-up adoption.

Vitor Diogo Pinho

Technical Support Engineer

What Vitor Diogo Pinho has to say about:

How Should Leaders Adopt AI in Customer Support?

Leaders must ground their methodology in three pillars.

The first is Clarity: clarity around AI customer support goals at every layer, covering what they want from support teams as a group, from human agents (down to individual assessments where possible), and from AI agents and AI tools. When everyone knows what is expected of them in their role toward the company's adoption goals, everything becomes easier to qualify, quantify, improve, document, and recruit for. One of the main issues is that the value of AI is generally acknowledged but never mapped onto any team's goals and objectives. This missing granularity creates confusion and added pressure: teams feel they should be using AI in their tasks to align with the corporate vision of being an AI-first company, but don't know how to translate that into their day-to-day work.

The second is an AI career path structure. Customer support agents know and see the potential to be replaced, because that narrative is amplified by mainstream media and presents a pressure point for companies: replacement is the easiest way to generate ROI from a low-skill-necessity framing. In my opinion, this is a huge trap. Humans are very much needed to shape AI agent and customer interactions through agent prompt engineering and company voice alignment, to create new customer support revenue streams (for example, customer-typology-based support versus today's generalised support), to document novel support material in the knowledge base as new product features are released, and to handle AI agent management and orchestration, AI compliance, and more. I could continue for a while, but the point is this: when people fear that AI will replace them, they will resist adoption to delay that event.
When people see a career path they feel motivated to pursue, they will run towards it. They will take personal time to invest in learning new skills, create bottom-up adoption, and shine as AI Champions.

The third is Optimization. Each company will have its own advantages and its own limitations in producing quality AI-generated results, whether those are performance improvements, cost reductions, quality gains, or new capabilities. If you don't measure, you don't see. If you don't see, you don't know where to improve. So when an AI gives you a wrong reply, you don't know whether the problem lies with the outdated knowledge base article it consulted, with a system prompt that steers the output towards lengthy, verbose replies, or with the ambiguity of the question that prompted the AI agent (where, for example, pre-guiding the user on the topic could eliminate the case entirely). By not knowing when and where to optimize, you pay a cost in money and time, and you limit your employees' capacity to grow.

For me, these are the main things leaders should consider in their AI adoption roadmap. The AI tools and AI services companies are important, but having them without the above will probably get you nowhere. AI is the worst it will ever be, but people can regress if not led properly.


Get Started with Fini.
