New architectures, tools, networks, and clouds require us to rethink what it is we’re monitoring, how we collect data, and what to do with it once we have it. For instance, monitoring mainframes in private datacenters is a very different challenge from monitoring microservices running on containers in a public cloud.
The thesis underlying many of Adrian’s key points was that monitoring must be cheaper than the thing being monitored. The implications go beyond the economics of monitoring tools; they extend to monitoring teams and processes. Even as apps grow more complicated, as a community we must strive to keep monitoring simple.
Adrian summarized the state of our industry with his wonderfully cynical “Tragic Quadrant” of vendors attempting to scale both horizontally (as the number of nodes to monitor increases) and vertically (as the pace of change within those nodes increases). He left us begging for more with a demonstration of what’s happening at Cockcroft Labs: new approaches to simulating (and monitoring) microservices and serverless transactions.
In August we discussed the theory of monitoring. We’ll get very practical this month with a panel discussion featuring in-the-trenches stories from monitoring experts at Facebook, LinkedIn, and Slack about how they monitor at petabyte scale… and what they learned along the way.
See you September 28 at BigPanda!