What we learned about AI from automating the "little things" in PSA

Jan 26, 2026

Death By A Thousand Clicks

There's a particular kind of frustration that every technician knows: the spinning wheel. The loading bar. The three-click journey to copy one field from one system to another. None of these tasks are hard. That's what makes them so annoying. They're just... slow. Mechanical.

So we decided to get AI's help. Not for the big, impressive tasks — not autonomous incident response or self-healing infrastructure. Just the little things: the death-by-a-thousand-clicks work that quietly drains hours from every technician's week.

We time-boxed 8 weeks to test AI automation across five use cases in our PSA environment. Some of it worked beautifully. Some of it failed in ways that taught us more than the successes. Here's what we learned.

The Wins: Where AI Actually Helped

Migrating Data Between Disconnected Systems

Our sales team uses HubSpot as its CRM, while the service team works in the PSA. Today, a human manually creates duplicate entries in both systems, costing about two minutes per record. We had one goal: automatically move records from HubSpot to our PSA. Classic data migration: tedious, error-prone, and exactly the kind of work that makes you question your career choices.

The AI handled it remarkably well. Not because it's good at copying data — any script can do that. It's good at mapping data. When HubSpot calls something "Company" and the PSA calls it "Organization," the LLM understands they're the same thing. When a phone number is formatted differently in each system, it figures it out. 

The catch: Edge cases. When a required field was missing, the agent would sometimes hallucinate a value rather than skip the record. It would attempt the migration multiple times, occasionally inventing data to fill the gap. To fix this, we added explicit instructions to use "exact match" for certain fields, along with clear rules to skip records rather than guess.
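To make that fix concrete, here's a minimal sketch of how those rules can sit around the model call. The field names and the map_record_with_llm helper are hypothetical stand-ins, not our actual schema or tooling; the point is that the skip-or-guess decision and the exact-match fields live in plain code, outside the LLM's reach.

```python
def map_record_with_llm(fields: dict) -> dict:
    # Placeholder for the real LLM mapping call; here it just passes fields through.
    return dict(fields)

REQUIRED_FIELDS = ["company_name", "primary_email"]     # never guessed, never invented
EXACT_MATCH_FIELDS = {"company_name": "Organization"}   # HubSpot field -> PSA field, copied verbatim

def migrate_record(hubspot_record: dict) -> dict | None:
    # Rule 1: skip the record rather than guess when a required field is missing.
    for field in REQUIRED_FIELDS:
        if not hubspot_record.get(field):
            print(f"Skipping {hubspot_record.get('id', '?')}: missing {field}")
            return None

    # Rule 2: exact-match fields bypass the model entirely.
    psa_record = {psa_name: hubspot_record[hs_name]
                  for hs_name, psa_name in EXACT_MATCH_FIELDS.items()}

    # Rule 3: the LLM only handles semantic mapping of the remaining fields
    # (renamed fields, reformatted phone numbers), with instructions to leave blanks blank.
    remaining = {k: v for k, v in hubspot_record.items() if k not in EXACT_MATCH_FIELDS}
    psa_record.update(map_record_with_llm(remaining))
    return psa_record
```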

Takeaway: LLMs excel at semantic mapping between systems. They struggle to know when to give up.

License Renewal Reminders

Many MSPs send renewal reminders to clients ahead of annual commitments. This is a templated email assembled from data in three different systems: 1) the PSA for customer and contact info, 2) Pax8 or Sherweb for license details, and 3) Microsoft for license assignees. In our heads, this was a "three-tab shuffle" — jumping between systems, cross-referencing manually, assembling notifications by hand.

We reframed this as a data enrichment problem. The agent pulls the main email template first, then queries the three sources to enrich it, and finally publishes the draft in the PSA or mail tool. The output is an email draft with every detail pre-filled. This use case worked beautifully despite the large amount of text processed.
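Here's a rough, runnable sketch of that flow under the same assumptions. Every fetch_* function below is a stand-in returning canned data; in practice each one would call the PSA, Pax8/Sherweb, or Microsoft, and fill_template_with_llm would be an actual model call rather than string formatting.

```python
def fetch_email_template(name: str) -> str:
    return "Hi {contact},\n\nThe following licenses renew soon:\n{renewals}\n"

def fetch_psa_contact(customer_id: str) -> dict:                   # 1) PSA: customer and contact info
    return {"name": "Jane Doe", "email": "jane@example.com"}

def fetch_distributor_licenses(customer_id: str) -> list[dict]:    # 2) Pax8 / Sherweb: license details
    return [{"sku": "M365 Business Premium", "renews": "2026-03-01"}]

def fetch_license_assignees(customer_id: str) -> dict:             # 3) Microsoft: who holds each license
    return {"M365 Business Premium": ["jane@example.com"]}

def fill_template_with_llm(template: str, contact: dict, licenses: list, assignees: dict) -> str:
    # Stand-in for the LLM merge step; the model restructures the body when the data
    # doesn't fit the template neatly (e.g. renewals split across two months).
    renewals = "\n".join(
        f"- {lic['sku']} (renews {lic['renews']}, assigned to {', '.join(assignees.get(lic['sku'], []))})"
        for lic in licenses
    )
    return template.format(contact=contact["name"], renewals=renewals)

def build_renewal_draft(customer_id: str) -> str:
    template = fetch_email_template("license-renewal")   # the template comes first: it drives the shape
    contact = fetch_psa_contact(customer_id)
    licenses = fetch_distributor_licenses(customer_id)
    assignees = fetch_license_assignees(customer_id)
    return fill_template_with_llm(template, contact, licenses, assignees)

print(build_renewal_draft("ACME-001"))
```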

Takeaway: LLMs are excellent at marshaling unstructured data. For example, we hit an edge case where a customer had three renewals next month and one more the month after. Traditional RPA struggles to fill an email template from data shaped like that, but the LLM understood the requirement perfectly and restructured the email into something entirely reasonable.

Ticket & Labor Log Summary

These are bread-and-butter LLM use cases. Feed it context, get a summary. Nothing revolutionary, but the time savings compound. Ticket summarization worked out of the box — a standard text summarization problem that plays to LLM strengths.

Labor logs required more tuning. We don't want a paragraph; we want a single sentence that's self-contained enough for clients to understand. We iterated on prompts until the output matched what a human would actually write: shorter, more precise, less verbose.
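For a sense of what that tuning converged on, here's an illustrative version of the constraints. This is not our production prompt; the wording and the {notes} placeholder are assumptions, but the shape is the point: one sentence, client-readable, no internal jargon.

```python
# The constraints matter more than the model: one sentence, client-readable, no internal jargon.
LABOR_LOG_PROMPT = """You summarize a technician's labor notes for the client.
Write exactly ONE sentence that stands on its own: what was done and the outcome.
Do not mention internal ticket numbers, tools, or steps the client wouldn't recognize.
If the notes describe multiple actions, lead with the one the client cares about most.

Labor notes:
{notes}
"""

def build_labor_summary_prompt(notes: str) -> str:
    return LABOR_LOG_PROMPT.format(notes=notes)

print(build_labor_summary_prompt("Reset MFA for user, cleared stale tokens, verified login works."))
```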

Takeaway: Useful summarization requires understanding your audience's expectations for length and tone. Convenience is key - it needs to live exactly where technicians already work. One extra hop is “too much effort”.

The Failures: Where We Wasted Our Time

Detecting False Security Alarms

Impossible travel alerts are a constant nuisance. A user logs in from New York, then from Dubai an hour later. Is it a breach, or did they just turn on a VPN? The answer usually exists somewhere — buried in an email thread, a vacation notice, a ticket from last week. We thought AI could find it.

It couldn't. Not reliably.

The problem wasn't intelligence. The problem was scale. Our PSA doesn't have robust semantic search indexing. Neither does our email. Asking the agent to find relevant context meant asking it to sift through massive amounts of unstructured data, hoping to surface the one detail that explains the alert.

It's like finding a needle on a sandy beach. The AI sometimes hallucinated; other times, it surfaced information that looked relevant but wasn't. The return on investment wasn't worth the token cost.

Takeaway: AI agents don't solve data retrieval problems. If your systems lack semantic search infrastructure, adding a smarter agent doesn't compensate.

Automating PowerShell Remediation

This seemed like the holy grail: many Microsoft actions can be performed directly through PowerShell, which means you could run security incident response at scale.

We explored it. We decided not to pursue it.

The scripts themselves weren't the problem. PowerShell is powerful — which means PowerShell can break things in powerful ways. The agent could easily cause adverse side effects, and there was no testing environment available to simulate real security incidents safely.

Under a high-stress security event, you need reliability. The amount of work required to make this bulletproof outweighed the benefit.

Takeaway: Some automations fail the risk-reward calculation regardless of technical feasibility. The question isn't "can we automate this?" It's "do we have the guardrails?"

The Pattern That Emerged

Looking across our successes and failures, a pattern became clear:

  • What worked: bounded context, specific inputs, low data volume, a clear "done" state

  • What failed: unbounded data to search, high failure cost, no safe test environment

The underlying insight: Agents perform well when context is contained. They struggle when asked to find needles in haystacks or when mistakes carry significant consequences.

We're continuing to explore the boundary between "too small to bother" and "too complex to trust." The middle ground — the genuinely useful automations — is narrower than we expected, but the wins there are real. If every small workflow saves a human a few minutes, over time they can add up to days.

A New Mental Framework

This changed how we evaluate automation opportunities. We used to ask "what's annoying?" Now we ask: Is the data bounded enough to execute the task? Is failure reversible or costly? The tasks that pass both filters are rarely the exciting ones — they're the mundane data syncs and log entries that nobody brags about automating. But that's where the ROI actually lives.

"We don't have documented processes" is something we hear a lot. If you're not sure where to start, look at your email templates - any template is a semi-documented process. It's frequent enough to bother, predictable enough to standardize, and has defined inputs and outputs. That's an automation candidate hiding in plain sight.

The technician's day still has spinning wheels. Just fewer of them.