How MCP Tools are Revolutionizing Software Observability

May 2, 2025

Software development is evolving, and the surge of generative AI is speeding up this transformation. After 25 years in the field, I’ve seen many shifts, but few are as significant as the rethinking of how we generate code and monitor our applications. Traditional tools – dashboards and manual alerts – are showing their age. Today’s Application Performance Monitoring (APM) platforms rely on metrics, logs, and traces, but they must now adjust to a world where intelligent agents perform much of the work.

Enter the Model Context Protocol (MCP). Launched by Anthropic, MCP functions as a communication bridge between AI agents and applications. It provides agents with extra data sources and the ability to take action, shifting the focus from human-centred interfaces to an agent-centric model. Think of it like upgrading from a static dashboard to a dynamic ecosystem where every application shares its insights seamlessly.
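
To make that concrete, here is a minimal sketch of what an application-side MCP server might look like, assuming the official MCP Python SDK and its FastMCP helper; the server name, tool names, and returned numbers are invented for illustration, not part of any real product.

```python
# Minimal sketch of an MCP server exposing an application's own insights.
# Assumes the official MCP Python SDK (`pip install mcp`); the data returned
# here is hard-coded placeholder material standing in for real telemetry.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("checkout-service-observability")

@mcp.tool()
def recent_error_rate(service: str, minutes: int = 15) -> dict:
    """Return the error rate for a service over the last N minutes."""
    # A real implementation would query your metrics backend here.
    return {"service": service, "window_minutes": minutes, "error_rate": 0.012}

@mcp.tool()
def slowest_endpoints(limit: int = 5) -> list[dict]:
    """Return the slowest endpoints seen in recent traces."""
    # Placeholder result standing in for a trace-store query.
    return [{"endpoint": "/api/cart", "p95_ms": 840}][:limit]

if __name__ == "__main__":
    mcp.run()  # serves the tools over stdio to any MCP-compatible agent
```

Any compatible agent can then discover and call these tools directly, rather than scraping a dashboard built for human eyes.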

Traditionally, products were designed around what product managers assumed users might need, often leading to interfaces that offer only a partial picture. If you’ve ever wrestled with multiple tools to complete a code review, you know how frustrating it can be to piece together different data points. With MCP, applications share their expertise directly with AI agents. This means agents can pull in data from various domains, streamline analysis, and deliver a full picture straight to you.

Consider a code review scenario: an AI agent can access runtime data, analytics, commit histories, and more. It can then correlate this information, flag issues, and even suggest fixes – all without you having to connect disparate tools. The output might include a summary, key metrics, links, and visualisations, making your job easier while automating routine tasks. You can also set your own rules so that the process meets your standards consistently.
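
As a hypothetical illustration of that correlation step, the sketch below cross-references a diff with runtime errors and commit history an agent could have fetched over MCP; the data shapes, the hotspot threshold, and the file names are all assumptions, not anything defined by the protocol.

```python
# Hypothetical agent-side review pass: combine a diff with runtime errors and
# commit-frequency data (both assumed to have been fetched via MCP tools).
from dataclasses import dataclass

@dataclass
class Finding:
    file: str
    message: str
    severity: str

def review(changed_files: list[str],
           runtime_errors: list[dict],
           commit_touch_counts: dict[str, int]) -> list[Finding]:
    """Cross-reference changed files with production errors and churn."""
    findings = []
    for path in changed_files:
        # Rule 1: flag files that already throw errors in production.
        for err in runtime_errors:
            if err.get("file") == path:
                findings.append(
                    Finding(path, f"Existing runtime error: {err['message']}", "high"))
        # Rule 2: flag frequently changed hotspots as higher-risk edits.
        if commit_touch_counts.get(path, 0) > 20:
            findings.append(
                Finding(path, "Hotspot file; request an extra reviewer", "medium"))
    return findings

# Placeholder inputs standing in for data pulled from MCP servers.
print(review(["billing/invoice.py"],
             [{"file": "billing/invoice.py", "message": "KeyError: 'tax_rate'"}],
             {"billing/invoice.py": 34}))
```

The thresholds above are exactly the kind of "own rules" you could encode so the review behaves consistently across every pull request.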

Beyond code reviews, MCP-exposed data can power automated test generation, pinpoint performance bottlenecks, prevent breaking changes, and track down unused code. However, it’s not enough to simply bolt an MCP adapter onto an existing APM. Effective observability means preparing your data meticulously – structuring, preprocessing, contextualising, and linking it – so the AI can deliver a coherent narrative.
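
The sketch below shows one way that preparation might look, turning a raw log line into a linked, self-describing record before an agent ever sees it; the field names, trace index, and repository URL are assumptions about a hypothetical telemetry setup rather than anything MCP prescribes.

```python
# Hypothetical enrichment step: attach trace and source-control context to a
# raw log record so an agent receives linked data instead of an isolated string.
def contextualise(log_record: dict, trace_index: dict, repo_url: str) -> dict:
    trace = trace_index.get(log_record.get("trace_id"), {})
    return {
        "message": log_record["message"],
        "service": log_record.get("service", "unknown"),
        "trace_id": log_record.get("trace_id"),
        "endpoint": trace.get("endpoint"),        # links the log to its request
        "duration_ms": trace.get("duration_ms"),
        # Deep link back to the source line that emitted the log.
        "source": f"{repo_url}/blob/main/{log_record['file']}#L{log_record['line']}",
    }

record = {"message": "payment declined", "service": "checkout",
          "trace_id": "abc123", "file": "checkout/pay.py", "line": 88}
traces = {"abc123": {"endpoint": "/api/pay", "duration_ms": 412}}
print(contextualise(record, traces, "https://example.com/org/repo"))
```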

This agent-friendly approach reshapes our understanding of software monitoring. As generative AI begins to run as background processes and integrates into continuous integration, the tools that embrace an agent-centric model will lead the way. Meanwhile, those clinging to direct-to-human models may find themselves outpaced in this new landscape.
