
We catch the problems before you do

Most integration tools ship and move on. A connector goes live, the tickets start flowing, and from that point the software is largely on its own. If something breaks, the customer finds out first, usually when a sync stops or an error surfaces in their PSA. That's the normal pattern, and most MSPs have come to expect it.

Support Fusion works differently. Every live integration is actively monitored, and when something looks off, we investigate before a support ticket ever becomes necessary. This update from Steve, our CTO, walks through a real example of that in practice - a Freshservice connector showing rate limit errors during bulk ticket updates, and what we did about it.

Watch the walkthrough

 

What was happening

A customer was running a Freshservice integration through Support Fusion. Periodically, they would perform bulk updates on a set of tickets - selecting a group and recategorising them all at once, for example. That's a normal operational action, and it should work without any friction.

What we noticed in our monitoring was a pattern of Error 429 - an API rate limit error - appearing on that customer's integration. Error 429 is an HTTP status code that means "too many requests." In plain terms, the integration was making API calls faster than Freshservice's rate limits would allow. Under that condition, some of those calls would fail, and a sync that should be routine could start producing inconsistent results.
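A common way for an integration to cope with 429 responses is to honour the server's Retry-After header when it's present and fall back to exponential backoff with jitter otherwise. The sketch below is a generic illustration of that pattern, not Support Fusion's actual code - the function name and defaults are ours.

```python
import random

def backoff_delay(attempt, retry_after=None, base=1.0, cap=60.0):
    """Seconds to wait before retrying a request that got a 429.

    If the server sent a Retry-After header, honour it. Otherwise use
    exponential backoff (base * 2^attempt, capped) with jitter so many
    clients don't all retry at the same instant.
    """
    if retry_after is not None:
        return float(retry_after)
    delay = min(cap, base * (2 ** attempt))
    return delay * (0.5 + random.random() / 2)  # jitter: 50-100% of delay
```

Retrying like this smooths over occasional 429s, but it only masks the symptom - if an integration routinely exceeds the limit, the call volume itself needs reducing, which is what happened here.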

The customer hadn't raised anything. From their perspective, everything looked fine. But our monitoring flagged the pattern, and we started looking into it.

What we found and what we fixed

When our team dug into the code, they found areas where the integration was making more API calls than necessary. The logic was correct in terms of what data was being exchanged, but the way calls were being batched and sequenced was less efficient than it could be.

After the investigation, we made changes that reduced the total number of API calls by around 50%. That's a meaningful reduction - it means the integration handles the same volume of work with roughly half the API footprint, which keeps it well within rate limits even during heavy bulk update operations.
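The general technique behind a reduction like this is batching: instead of one API call per ticket, group tickets so each request covers many. The sketch below is illustrative only - it assumes a hypothetical bulk endpoint that accepts up to `batch_size` tickets per call, which is not a statement about Freshservice's API or about the actual fix.

```python
def plan_update_calls(ticket_ids, batch_size=50):
    """Group ticket IDs into batches, one API call per batch.

    With per-ticket calls, updating N tickets costs N requests; with a
    (hypothetical) bulk endpoint taking batch_size tickets per request,
    the same work costs ceil(N / batch_size) requests.
    """
    return [ticket_ids[i:i + batch_size]
            for i in range(0, len(ticket_ids), batch_size)]
```

For a bulk recategorisation of 500 tickets, this plan yields 10 requests instead of 500 - the kind of reduction that keeps an integration comfortably inside a per-minute rate limit.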

The fix shipped before the customer experienced any actual disruption. The rate limit errors were appearing in our monitoring, but they weren't yet causing visible failures on the customer's side. Because we caught it early, the improvement landed quietly - the customer's integration just kept working, slightly better than it had before.

Why this matters for MSPs running live integrations

The reason this example is worth sharing is that it illustrates something that doesn't get talked about much when evaluating integration tools: what happens after go-live.

Building a connector between two platforms is one problem. Keeping it running reliably at scale, across different customer configurations and usage patterns, is a different problem. Bulk updates are a good example of the kind of real-world usage pattern that doesn't always show up in a demo or a test environment but becomes relevant as soon as a customer starts using the integration in anger.

For MSPs, the alternative to a managed platform is either a DIY build or a tool that gets set up and left. Both put the maintenance burden squarely on your team. When the PSA vendor changes their API, or when a usage pattern surfaces an edge case, someone on your side has to find it and fix it. With Support Fusion, that's on us.

This is also why monitoring matters. Catching an Error 429 pattern before it causes failures is only possible if someone is watching. The fix we shipped in this case wasn't reactive - it was proactive, based on data we were already collecting on the health of live integrations.
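Spotting a pattern like this comes down to counting error responses per integration over a window and flagging anything above a threshold. The sketch below shows that idea in its simplest form - the data shape and threshold are our own assumptions, not a description of Support Fusion's monitoring pipeline.

```python
from collections import Counter

def flag_rate_limited(events, threshold=5):
    """Flag integrations whose recent request logs show repeated 429s.

    `events` is an iterable of (integration_id, status_code) pairs;
    any integration with `threshold` or more 429 responses is flagged
    for investigation.
    """
    counts = Counter(iid for iid, status in events if status == 429)
    return sorted(iid for iid, n in counts.items() if n >= threshold)
```

The point of a check like this is that it runs against data already being collected - nobody has to wait for a failed sync or a customer ticket before someone starts looking.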

What to expect from ongoing connector health

This isn't a one-off. Every Support Fusion integration is monitored in production, and we track error patterns, sync latency, and API behaviour across the customer base. When we see something that warrants attention, we investigate. Where there's an improvement to be made, we ship it.

Customers don't need to log a ticket or report a problem for this to happen. In many cases, they won't even know an improvement has been made - they'll just notice that the integration keeps working reliably.

If you're evaluating whether a managed sync platform is worth the cost over a DIY alternative, that ongoing operational care is a significant part of the answer. The initial build is only part of the picture.

See it in action

If you're running Freshservice alongside a PSA like ConnectWise, Autotask, or HaloPSA and want to understand what a managed integration looks like in practice, we're happy to walk through it.

Grab a demo with the team - in 30 minutes, you'll have seen everything.