"Drop-in" implies shallow. Bolted on. Good enough for demos, not for production.
That skepticism is earned. Most drop-in tools deserve it.
But it misses something important.
The hardest analytics problems shouldn't be solved by every team, separately, from scratch. They should be solved once. Centrally. And inherited.
Custom instrumentation feels like ownership. But it's actually debt.
You write the tracking code. Then someone changes the page structure. Then the events drift. Then a new engineer inherits logic no one documented. Then the data stops being trustworthy — quietly, gradually, without warning.
Bespoke analytics rots. Not because teams are careless. But because accuracy requires ongoing maintenance that never makes it onto the roadmap.
The edge cases alone are brutal:
- Single-page visits that look like bounces
- Engagement that happens before any interaction fires
- Ad performance that disappears into a separate dashboard
- Visibility data that assumes tabs stay open
Handling these correctly isn't interesting work. It's invisible work. And invisible work doesn't get prioritized.
So it doesn't get done.
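To make the last of those edge cases concrete: "handling visibility correctly" means crediting engaged time only for intervals when the tab was actually visible, not the full wall-clock span of the session. The types and function below are an illustrative sketch, not any real tracker's API.

```typescript
// Hypothetical model of Page Visibility transitions. In a browser you'd
// record these from the `visibilitychange` event; here they're plain data.
type VisibilityEvent = { at: number; state: "visible" | "hidden" };

// Sum only the intervals during which the tab was visible.
// An unclosed visible interval is closed at sessionEnd.
function engagedMs(events: VisibilityEvent[], sessionEnd: number): number {
  let total = 0;
  let visibleSince: number | null = null;
  for (const e of events) {
    if (e.state === "visible" && visibleSince === null) {
      visibleSince = e.at;
    } else if (e.state === "hidden" && visibleSince !== null) {
      total += e.at - visibleSince;
      visibleSince = null;
    }
  }
  if (visibleSince !== null) total += sessionEnd - visibleSince;
  return total;
}

// A tab opened at t=0ms, backgrounded at t=5000, re-focused at t=20000,
// with the session ending at t=25000: naive "time on page" reports 25s,
// while engaged time is 10s.
const events: VisibilityEvent[] = [
  { at: 0, state: "visible" },
  { at: 5000, state: "hidden" },
  { at: 20000, state: "visible" },
];
console.log(engagedMs(events, 25000)); // 10000
```

This is exactly the kind of logic that's trivial to get roughly right and tedious to get exactly right (backgrounded tabs, suspended timers, sessions that never fire a final event), which is why it tends not to get done per-team.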
This is why "layering on top of" matters more than "replacing."
If better data flows into GA, Adobe, or Amplitude — tools the organization already trusts — adoption isn't a change management problem. It's just better signal in familiar places.
Good analytics infrastructure doesn't ask teams to start over. It handles what teams consistently get wrong. Silently. Reliably. Without adding to the roadmap.
Drop-in isn't the problem. Shallow is the problem.
Those aren't the same thing.