The first step in incident detection is configuring inbound monitoring and observability integrations with BigPanda to receive events. BigPanda includes about 50 standard integrations with popular tools, and customers can also create custom integrations. This data feeds into the BigPanda IT Knowledge Graph.
This section reviews the number and types of inbound integrations, how many were open-source or proprietary, which monitoring and observability tool vendors and solutions were used the most and generated the most events, their effectiveness, and the top four monitoring and observability tool trends.
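To make the mechanics of an inbound integration concrete, the sketch below shows what pushing a single event into BigPanda might look like from a custom integration. It assumes BigPanda's public Alerts REST API (a POST to https://api.bigpanda.io/data/v2/alerts with a bearer token and a per-integration app key); the token, app key, host, and check values are placeholders, and most of the integrations analyzed in this report are configured inside the monitoring and observability tools themselves rather than written by hand.

```python
# Minimal, illustrative sketch of sending one alert event to a BigPanda
# inbound integration via the Alerts REST API. Credentials and values below
# are placeholders, not real configuration.
import time
import requests  # third-party HTTP client, assumed available

BIGPANDA_ALERTS_API = "https://api.bigpanda.io/data/v2/alerts"
BEARER_TOKEN = "<org-bearer-token>"   # placeholder credential
APP_KEY = "<integration-app-key>"     # placeholder per-integration key

def send_alert(host: str, check: str, status: str, description: str) -> None:
    """Push a single alert event into a BigPanda inbound integration."""
    payload = {
        "app_key": APP_KEY,
        "status": status,            # e.g. "critical", "warning", "ok"
        "host": host,                # primary property for correlation
        "check": check,              # what was checked on that host
        "description": description,
        "timestamp": int(time.time()),
    }
    resp = requests.post(
        BIGPANDA_ALERTS_API,
        json=payload,
        headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    send_alert(
        host="prod-db-01",
        check="CPU utilization",
        status="critical",
        description="CPU above 95% for 5 minutes",
    )
```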
Key monitoring and observability tool integration highlights:
of all known inbound observability platform integrations were with Amazon CloudWatch
of all known inbound purpose-built monitoring integrations were with SolarWinds
“Observability is a journey. BigPanda AIOps is a key part of this journey for us. As we scale and grow the business, it’s integral for us to bring in automation and integration with other tools and technologies. Don’t wait to start your AIOps journey once you are overwhelmed with alerts. Start early to get a single pane of glass to understand which monitoring tools you really need.”
–Vice President of Information Technology, Manufacturing Enterprise
The number of inbound integrations per organization ranged from one to 198, with a median of 20.
of organizations had 10+ inbound integrations
Number of inbound integrations with BigPanda
This section reviews the known inbound integrations by category and license type.
Each known inbound integration vendor or solution was grouped into one of three categories: observability platforms, purpose-built monitoring tools, or non-monitoring tools.
61% of the known inbound integrations were with observability platforms
Percentage of inbound integrations by inbound integration category
Percentage of vendors or solutions by inbound integration category
Percentage of organizations by inbound integration category
Percentage of events by inbound integration category
The following subsections review the most-integrated vendors and solutions, as well as the effectiveness of each monitoring and observability vendor or solution.
Nearly three-quarters (73%) of the known integrations were with proprietary vendors, such as Cisco AppDynamics, Datadog, LogicMonitor, New Relic, and VMware vRealize Operations (vROps).
The remaining 27% were with open-source solutions, such as the ELK Stack (Elasticsearch, Logstash, and Kibana), Grafana, Jenkins, Prometheus, and Sensu.
27% of the known inbound integrations were with open-source solutions
Percentage of inbound integrations with BigPanda that were proprietary or open-source
The following table compares the percentage of proprietary versus open-source inbound integrations by vendor or solution type.
Altogether, 104 known vendors and solutions had inbound integrations with the BigPanda platform.
However, event volume varied widely across them: some categories and tools with relatively few integrations generated an outsized share of events, so the number of integrations didn’t necessarily correlate with the number of events generated per vendor or solution.
Percentage of integrations and events for each vendor or solution (top 10 by number of integrations)
Roughly three-fifths (61%) of the known inbound integrations were between BigPanda and 20 observability platforms, which were responsible for 22% of the events.
of all known inbound observability platform integrations were with Amazon CloudWatch
Percentage of integrations and events generated for each observability platform vendor or solution (by number of integrations)
A third (33%) of the known inbound integrations were between BigPanda and 72 purpose-built monitoring tools, which were responsible for 50% of the events.
of all known inbound purpose-built monitoring integrations were with SolarWinds
Percentage of integrations and events for each purpose-built monitoring tool vendor or solution (top 10 by number of integrations)
The effectiveness of popular purpose-built monitoring tools is examined in the tool-effectiveness matrix later in this section.
Just 6% of the known inbound integrations were between BigPanda and 12 non-monitoring tools, which were responsible for 27% of the events.
of all known inbound non-monitoring integrations were with Cribl
Percentage of integrations and events for each non-monitoring tool vendor or solution (by number of integrations)
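To illustrate why integration counts and event volume diverge, the short sketch below derives an events-per-integration-share ratio from the category-level percentages reported in this section. The input numbers come straight from the figures above; the ratio itself is only an illustrative derived metric, not one published in the report.

```python
# Illustrative only: compare each category's share of events to its share of
# integrations, using the percentages reported in this section. A ratio above
# 1.0 means the category generated disproportionately many events per
# integration.
categories = {
    # category: (share of integrations %, share of events %)
    "Observability platforms": (61, 22),
    "Purpose-built monitoring tools": (33, 50),
    "Non-monitoring tools": (6, 27),
}

for name, (integration_share, event_share) in categories.items():
    ratio = event_share / integration_share
    print(f"{name}: {ratio:.2f}x events per unit of integration share")

# Output:
# Observability platforms: 0.36x events per unit of integration share
# Purpose-built monitoring tools: 1.52x events per unit of integration share
# Non-monitoring tools: 4.50x events per unit of integration share
```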
This section compares the quality (actionability rate) and coverage (percentage of actioned incidents) of the incidents generated by each monitoring and observability vendor or solution to identify high-quality tools and noisy tools that need improvement. It includes a matrix with four quadrants:
1. High-quality, high-coverage: These signal-rich, low-noise tools in the upper-right quadrant are widely deployed and consistently deliver actionable incidents. They balance signal volume and strength, making them key assets in effective observability strategies.
2. High-quality, low-coverage: These optimized, high-performance tools in the upper-left quadrant generate fewer incidents but maintain a high rate of actionable incidents. Ideal for targeted use cases, they deliver substantial value when deployed and may be candidates for broader adoption.
3. Low-quality, low-coverage: These underutilized tools in the bottom-left quadrant are less prevalent and show lower signal quality, demonstrating opportunities to evolve through better integration, improved configuration, or rationalization. They may be in early adoption phases or used for narrower scopes.
4. Low-quality, high-coverage: These scalable but noisy tools in the bottom-right quadrant contribute significantly to incident volume with fewer actionable insights. While widely used, they may benefit from tuning or configuration improvements to reduce noise and increase operational value.
Monitoring and observability tool effectiveness matrix (bubble size increases with customer usage)
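As a rough sketch of the classification behind such a matrix, the code below assigns each tool to one of the four quadrants from two per-tool metrics: quality (actionability rate) and coverage (share of actioned incidents). The cut-off values and sample tools are hypothetical; the report does not disclose the thresholds it used.

```python
# Rough sketch of the quadrant classification behind the effectiveness matrix.
# Quality = actionability rate; coverage = share of all actioned incidents.
# The cut-offs and sample numbers are hypothetical, chosen only to show the
# mechanics of the four quadrants described above.
from dataclasses import dataclass

@dataclass
class Tool:
    name: str
    quality: float   # actionability rate, 0.0 - 1.0
    coverage: float  # share of actioned incidents, 0.0 - 1.0

def quadrant(tool: Tool, quality_cutoff: float, coverage_cutoff: float) -> str:
    high_quality = tool.quality >= quality_cutoff
    high_coverage = tool.coverage >= coverage_cutoff
    if high_quality and high_coverage:
        return "1. High-quality, high-coverage"
    if high_quality:
        return "2. High-quality, low-coverage"
    if not high_coverage:
        return "3. Low-quality, low-coverage"
    return "4. Low-quality, high-coverage"

# Hypothetical sample tools for illustration only.
tools = [
    Tool("Tool A", quality=0.72, coverage=0.30),
    Tool("Tool B", quality=0.65, coverage=0.05),
    Tool("Tool C", quality=0.20, coverage=0.02),
    Tool("Tool D", quality=0.15, coverage=0.40),
]

for t in tools:
    print(t.name, "->", quadrant(t, quality_cutoff=0.5, coverage_cutoff=0.1))
```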
The data points to four key trends for purpose-built monitoring tools and observability platforms:
No tool wholly owns the top-right quadrant—the observability landscape remains fragmented with no clear leader.
Open-source tools remain low-impact with limited adoption.
Some high-coverage tools fall short on signal quality.
Purpose-built monitoring tools tend to align as either specialists or stragglers.
For organizations investing in observability, the challenge is identifying which tools deserve broader deployment and which require refinement or reevaluation.