A Support Team's AI Journey: From 'Please Hold' to 'Already Solved'
Thomas manages customer support for a German fintech. Six agents. About 400 tickets a day. Growing 15% month over month. The math was simple: at this rate, he'd need to hire three more people by Q3.
Or, as his CFO so delicately put it: "What if we tried AI first?"
Thomas was not excited. His exact words to me were: "If you put a chatbot that says 'I'm sorry, I don't understand' between my customers and my team, I will personally come to your office."
Fair. We didn't do that.
What We Actually Did
We didn't put AI in front of customers. We put it behind agents.
When a ticket comes in, AI does three things before a human even sees it:
1. Categorizes it. Billing issue, technical bug, feature request, general question. Sounds simple, but agents were spending 30 seconds per ticket just reading and categorizing. At 400 tickets/day, that's 3+ hours of brain power just deciding what something is.
2. Pulls context. Customer's plan, recent interactions, known issues with their account, relevant knowledge base articles. The agent opens the ticket and sees everything they need. No more "let me pull up your account" while the customer waits.
3. Drafts a response. Not sends—drafts. Based on similar tickets and knowledge base content. The agent can use it, edit it, or throw it away. About 60% of the time, they use it with minor edits. 30% they rewrite significantly. 10% they start from scratch.
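The three steps above can be sketched as a single triage function. This is a minimal illustration, not the team's actual stack: the keyword classifier, the stubbed CRM lookup, and the template draft are all invented stand-ins for whatever the real pipeline uses.

```python
from dataclasses import dataclass

# Hypothetical keyword lists; a real deployment would use a trained classifier.
KEYWORDS = {
    "billing": ["invoice", "charge", "refund", "payment"],
    "technical": ["error", "bug", "crash", "failed"],
    "feature_request": ["feature", "suggestion", "would be nice"],
}

@dataclass
class TriageResult:
    category: str
    context: dict
    draft: str  # for the agent to use, edit, or discard; never auto-sent

def classify(text: str) -> str:
    """Step 1: categorize. Falls back to 'general' when nothing matches."""
    lowered = text.lower()
    for category, words in KEYWORDS.items():
        if any(word in lowered for word in words):
            return category
    return "general"

def pull_context(customer_id: str) -> dict:
    """Step 2: pull context. Stubbed; real versions query the CRM and KB."""
    return {"plan": "pro", "recent_tickets": [], "kb_articles": []}

def draft_reply(text: str, context: dict) -> str:
    """Step 3: draft a reply from templates/KB content (stubbed here)."""
    return f"Hi! Thanks for reaching out. (Plan: {context['plan']})"

def triage(text: str, customer_id: str) -> TriageResult:
    category = classify(text)
    context = pull_context(customer_id)
    return TriageResult(category, context, draft_reply(text, context))
```

The point of the structure: every step produces material the agent sees, and nothing goes to the customer without a human in the loop.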
The Numbers
Average response time: dropped from 4 hours to 47 minutes. Not because agents work faster—because the queue doesn't pile up.
First-contact resolution: improved from 62% to 78%. Better context means fewer "I need to check and get back to you" responses.
Customer satisfaction (CSAT): went from 3.8/5 to 4.4/5. Turns out, fast and accurate beats slow and personal.
Agent satisfaction: also up. This surprised us most. Agents said the AI drafts eliminated the "template fatigue" of answering the same password reset email 40 times a day. They could focus on the interesting, complex issues.
What Almost Went Wrong
Week three. A customer wrote in, upset about a failed transaction. AI drafted a response that included: "I understand this can be frustrating." The customer had lost €12,000 due to a system error. "Frustrating" was not the word.
The agent caught it and wrote a proper empathetic response. But it highlighted a critical gap: AI doesn't do emotional calibration well. For high-stakes or emotionally charged tickets, the AI draft was sometimes tone-deaf.
We added a "sensitivity flag." When AI detects high emotion or large monetary amounts, it still provides context and knowledge base links, but it doesn't draft a response. Instead, it shows: "High sensitivity detected. Manual response recommended."
There was also the privacy scare. We realized the AI was surfacing internal notes meant for other departments in its context summaries. Nothing reached customers, but agents were seeing HR-flagged account notes they shouldn't have. Fixed in week four with strict data scoping.
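Strict data scoping of that kind usually amounts to whitelisting which note types the context builder may read, rather than passing everything through. A hypothetical sketch; the `scope` field and its values are assumptions, not the team's actual schema.

```python
# Hypothetical note schema: each internal note carries a "scope" field.
# Only scopes on this whitelist may enter the AI's context summary.
SUPPORT_VISIBLE_SCOPES = {"support", "public"}

def scoped_notes(notes: list[dict]) -> list[dict]:
    """Drop notes (e.g. HR-flagged ones) that support agents shouldn't see."""
    return [n for n in notes if n.get("scope") in SUPPORT_VISIBLE_SCOPES]
```

A whitelist is the safer default here: a new, unrecognized note type stays hidden until someone explicitly decides support should see it.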
What Thomas Says Now
"I don't need three new hires. I might need one, eventually, as we scale. But the six people I have are doing better work, not just more work. That matters."
The unsexy truth: AI in support isn't about removing humans. It's about removing the parts of the job that make humans slow, bored, and error-prone—so they can do the parts that actually need a human.
Nobody writes "AI helps me answer emails faster" on LinkedIn. But for Thomas's team, that mundane improvement changed everything.