A freshly deployed Wazuh SIEM with 30-50 agents will comfortably generate 200,000 or more alerts per day. At that volume, your security analysts are not monitoring — they are drowning. Alert fatigue is not just an inconvenience; it is an operational security risk. When everything is an alert, nothing is.
This guide documents a systematic approach to reducing Wazuh alert volume by over 99% without losing meaningful security signal. The techniques are drawn from a production environment managing a mixed fleet of Linux servers, LXC containers, Windows workstations, and network appliances.
Step 1: Identify Your Top Noise Sources
Before writing a single suppression rule, you need data. Query your Wazuh indexer (Elasticsearch or OpenSearch) for the top rule IDs by volume over the past 7 days:
GET wazuh-alerts-*/_search
{
  "size": 0,
  "query": { "range": { "timestamp": { "gte": "now-7d" } } },
  "aggs": {
    "top_rules": {
      "terms": { "field": "rule.id", "size": 50, "order": { "_count": "desc" } }
    }
  }
}
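If you want to script this triage, a minimal Python sketch of the response handling is below. The response shape is the standard terms-aggregation format; the sample data and the 1,000-event threshold are illustrative assumptions, not values from any real cluster.

```python
def top_noisy_rules(agg_response, min_count=1000):
    """Extract (rule_id, doc_count) pairs from the terms-aggregation
    response, keeping only rules above a noise threshold."""
    buckets = agg_response["aggregations"]["top_rules"]["buckets"]
    return [(b["key"], b["doc_count"]) for b in buckets if b["doc_count"] >= min_count]

# Illustrative response shape for the query above (doc_counts are made up):
sample = {
    "aggregations": {
        "top_rules": {
            "buckets": [
                {"key": "5715", "doc_count": 22000},
                {"key": "18152", "doc_count": 12000},
                {"key": "31100", "doc_count": 400},
            ]
        }
    }
}
print(top_noisy_rules(sample))  # → [('5715', 22000), ('18152', 12000)]
```

Anything that survives the threshold goes on the triage list for Step 2.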
In a typical mixed environment, you will find the same usual suspects dominating the list:
| Rank | Rule ID | Description | Daily Volume |
|------|---------|-------------|--------------|
| 1 | 80710-80799 | SELinux AVC denials | ~85,000 |
| 2 | 5104 | Promiscuous mode changes | ~42,000 |
| 3 | 9044 | Agent queue flooding warnings | ~35,000 |
| 4 | 18152-18154 | Windows Firewall drops (internal) | ~28,000 |
| 5 | 5715 | SSH authentication success (internal) | ~22,000 |
| 6 | 80700-80705 | Puppet agent config runs | ~15,000 |
| 7 | 5702 | Reverse DNS lookup failures | ~12,000 |
These seven categories alone can account for 95% of total alert volume. The remaining 5% contains the actual security-relevant events you care about.
Step 2: Understand Why These Fire
SELinux AVC spam: Containers and virtualized workloads generate massive AVC denial volumes. SELinux policy on the host denies operations that are perfectly normal inside a container — accessing /proc/ pseudo-files, binding to network ports, reading shared libraries. These are policy tuning issues, not security events.
Promiscuous mode changes: Bridge interfaces on hypervisors and container hosts toggle promiscuous mode constantly as VMs and containers start, stop, and migrate. Each toggle fires a Wazuh alert. On a busy host running 20 containers, this can generate thousands of alerts per day.
Agent queue flooding: When the Wazuh manager cannot process events fast enough, agents log queue warnings. These are operational health metrics, not security alerts. They belong in your infrastructure monitoring (Prometheus/Grafana), not your SIEM alert stream.
Windows Firewall drops from internal hosts: Default Windows Firewall policy drops unsolicited inbound traffic and logs it. On an internal network with broadcast traffic, mDNS, LLMNR, and NetBIOS, this generates a firehose of drops that are architecturally normal.
SSH auth success from internal hosts: Successful SSH logins from your management network are expected operational activity. Configuration management tools (Puppet, Ansible, Salt) authenticate via SSH dozens of times per hour. These are not security events — SSH failures from external IPs are.
Step 3: Write Suppression Rules
Wazuh custom rules go in /var/ossec/etc/rules/local_rules.xml on the manager. The suppression technique is to create a child rule that matches the noisy parent rule with additional context and sets level="0", which effectively silences it.
Rule ID allocation strategy: Reserve a dedicated block for your organization. The range 100000-109999 is available for custom rules. Establish a convention — for example, 100100-100199 for suppression rules, 100200-100299 for custom detection rules, 100300-100399 for correlation rules. Document this allocation in your team runbook to prevent collisions when multiple engineers write rules.
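The convention above is easy to enforce mechanically. Here is a sketch of a pre-commit check; the allocated ranges mirror the example blocks and should be adjusted to your own runbook. Note that Wazuh rules files can contain several top-level <group> elements, so the content is wrapped in a dummy root before parsing.

```python
import xml.etree.ElementTree as ET

# Allocation convention from the runbook (example blocks; adjust to yours)
ALLOCATED = set(range(100100, 100400))  # suppression + detection + correlation

def check_rule_ids(rules_xml):
    """Return (out_of_range_ids, duplicate_ids) for a rules file body."""
    root = ET.fromstring("<root>" + rules_xml + "</root>")
    ids = [int(rule.get("id")) for rule in root.iter("rule")]
    out_of_range = [i for i in ids if i not in ALLOCATED]
    duplicates = [i for i in set(ids) if ids.count(i) > 1]
    return out_of_range, duplicates

sample = """
<group name="local,noise_suppression">
  <rule id="100100" level="0"><description>ok</description></rule>
  <rule id="100100" level="0"><description>duplicate id</description></rule>
  <rule id="999999" level="0"><description>outside allocation</description></rule>
</group>
"""
print(check_rule_ids(sample))  # → ([999999], [100100])
```

Running this in CI catches the two most common mistakes: two engineers grabbing the same ID, and rules landing outside the reserved block.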
Here are the suppression rules for the top noise sources:
<group name="local,noise_suppression">

  <!-- Suppress SELinux AVC denials from container hosts -->
  <rule id="100100" level="0">
    <if_group>audit</if_group>
    <match>type=AVC</match>
    <description>Suppressed: SELinux AVC (container host noise)</description>
  </rule>

  <!-- Suppress promiscuous mode toggles on bridge/veth interfaces -->
  <rule id="100101" level="0">
    <if_sid>5104</if_sid>
    <match>device entered promiscuous mode|device left promiscuous mode</match>
    <description>Suppressed: Promiscuous mode toggle on virtual interface</description>
  </rule>

  <!-- Suppress agent event queue warnings -->
  <rule id="100102" level="0">
    <if_sid>9044</if_sid>
    <description>Suppressed: Agent queue flooding (operational metric)</description>
  </rule>

  <!-- Suppress Windows Firewall drops from internal subnets.
       Multiple srcip entries in one rule are OR'd together. -->
  <rule id="100103" level="0">
    <if_sid>18152,18153,18154</if_sid>
    <srcip>10.0.0.0/8</srcip>
    <srcip>172.16.0.0/12</srcip>
    <srcip>192.168.0.0/16</srcip>
    <description>Suppressed: Windows FW drop from internal network</description>
  </rule>

  <!-- Suppress SSH auth success from management network -->
  <rule id="100104" level="0">
    <if_sid>5715</if_sid>
    <srcip>10.0.1.0/24</srcip>
    <description>Suppressed: SSH login from management subnet</description>
  </rule>

  <!-- Suppress Puppet agent routine operations -->
  <rule id="100105" level="0">
    <if_group>syslog</if_group>
    <program_name>puppet-agent|puppet</program_name>
    <match>Applied catalog|Finished catalog run|Caching catalog</match>
    <description>Suppressed: Puppet routine catalog application</description>
  </rule>

  <!-- Suppress reverse DNS lookup failures -->
  <rule id="100106" level="0">
    <if_sid>5702</if_sid>
    <description>Suppressed: Reverse DNS lookup failure</description>
  </rule>

</group>
After editing, validate and restart:
/var/ossec/bin/wazuh-logtest # Test rules interactively
systemctl restart wazuh-manager
Step 4: Handle LXC Container Audit Incompatibilities
LXC containers deserve special attention. Unprivileged containers share the host kernel’s audit subsystem but cannot configure it. This creates a specific class of noise:
- Audit rules configured on the host fire for events inside containers, generating alerts with container PIDs and paths that make no sense from the host's perspective
- Containers that run auditd internally generate duplicate events: one from the host kernel, one from the container's audit daemon
- SELinux/AppArmor policy violations inside containers appear as host-level security events
The cleanest solution is to add container hosts to a dedicated Wazuh agent group with a tailored agent.conf that excludes audit log monitoring or applies container-specific decoders. Alternatively, suppress by matching the containers' hostname pattern in your custom rules:
<rule id="100107" level="0">
  <if_group>audit</if_group>
  <hostname>^ct-</hostname>
  <description>Suppressed: Audit event from LXC container</description>
</rule>
Step 5: Preserve Critical Signal
Suppression must be surgical. For every rule you silence, verify that you are not accidentally suppressing a variant that carries security value:
- Suppress SSH success from 10.0.1.0/24 (management network) but keep SSH success from any other source
- Suppress Windows FW drops from RFC 1918 ranges but keep drops from public IPs (which would indicate unexpected external traffic reaching internal hosts)
- Suppress SELinux AVCs broadly only if you have a separate SELinux audit pipeline; otherwise, suppress only the specific AVC types you have validated as benign
Create a verification checklist: after deploying suppression rules, manually inject a test event for each suppressed category from an out-of-scope source (e.g., an SSH login from a non-management IP) and confirm it still generates an alert.
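Before injecting live events, it helps to sanity-check the intended scope of the srcip-based suppressions. The sketch below models that logic with Python's ipaddress module; it mirrors, rather than executes, the Wazuh rules, and the test IPs are illustrative.

```python
import ipaddress

MGMT_NET = ipaddress.ip_network("10.0.1.0/24")  # scope of rule 100104
RFC1918 = [ipaddress.ip_network(n) for n in
           ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]  # scope of rule 100103

def ssh_success_suppressed(srcip):
    """Models rule 100104: SSH auth success is silenced only for the management subnet."""
    return ipaddress.ip_address(srcip) in MGMT_NET

def win_fw_drop_suppressed(srcip):
    """Models rule 100103: Windows FW drops are silenced only for RFC 1918 sources."""
    ip = ipaddress.ip_address(srcip)
    return any(ip in net for net in RFC1918)

# In-scope sources are silenced; out-of-scope sources must still alert:
print(ssh_success_suppressed("10.0.1.42"))    # → True  (management subnet)
print(ssh_success_suppressed("10.0.2.42"))    # → False (internal, but not management)
print(win_fw_drop_suppressed("192.168.5.9"))  # → True  (RFC 1918)
print(win_fw_drop_suppressed("203.0.113.7"))  # → False (public source: investigate)
```

Any boundary case that surprises you in the model is exactly the one to replay through wazuh-logtest before trusting the suppression in production.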
Step 6: Tune Puppet and Configuration Management Noise
Configuration management tools are particularly noisy in Wazuh because they operate on a regular schedule, modifying files, restarting services, and writing to syslog. Every Puppet catalog application generates a cluster of file integrity monitoring (FIM) alerts for files that Puppet manages.
The right approach is to exclude Puppet-managed file paths from Wazuh FIM in ossec.conf:
<syscheck>
  <ignore>/etc/puppet/ssl</ignore>
  <ignore>/var/lib/puppet</ignore>
  <ignore>/etc/cron.d/puppet</ignore>
  <!-- Add other Puppet-managed paths as identified -->
</syscheck>
This is preferable to suppressing FIM alerts globally, which would blind you to unauthorized file modifications.
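Plain <ignore> entries match by path prefix, so it is worth checking a proposed exclusion list against a sample of recent FIM alert paths before deploying it. A minimal sketch (the sample paths are illustrative):

```python
# Paths Wazuh syscheck should skip; plain <ignore> entries match by prefix.
IGNORES = ["/etc/puppet/ssl", "/var/lib/puppet", "/etc/cron.d/puppet"]

def fim_ignored(path):
    """True if a FIM alert for this path would be suppressed by the ignore list."""
    return any(path.startswith(prefix) for prefix in IGNORES)

print(fim_ignored("/var/lib/puppet/state/last_run_summary.yaml"))  # → True
print(fim_ignored("/etc/passwd"))                                  # → False
```

If a path you expect to stay monitored comes back True, the exclusion is too broad and should be tightened before it ships.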
Before and After
After deploying these rules and FIM exclusions across a 40-agent environment:
| Metric | Before | After | Reduction |
|--------|--------|-------|-----------|
| Total daily alerts | 240,000 | 1,200 | 99.5% |
| Level 7+ alerts/day | 18,000 | 850 | 95.3% |
| Level 10+ alerts/day | 340 | 310 | 8.8% |
| Analyst review time | ~4 hrs | ~25 min | 89.6% |
Note the level 10+ reduction is minimal — this confirms the suppression is targeting noise, not signal. High-severity alerts (brute force attempts, rootkit detection, critical file modifications) pass through unaffected.
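The percentages in the table follow directly from the before/after counts:

```python
def reduction(before, after):
    """Percent reduction, rounded to one decimal place as in the table."""
    return round((before - after) / before * 100, 1)

print(reduction(240_000, 1_200))  # total daily alerts -> 99.5
print(reduction(18_000, 850))     # level 7+ alerts    -> 95.3
print(reduction(340, 310))        # level 10+ alerts   -> 8.8
print(reduction(240, 25))         # review minutes     -> 89.6
```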
Ongoing Maintenance
Alert tuning is not a one-time project. Schedule a monthly review:
- Re-run the top-rules-by-volume query. New noise sources will emerge as your environment changes.
- Review suppressed rule hit counts. If a suppression rule is firing 50,000 times daily, investigate whether the underlying issue can be fixed at the source rather than just silenced at the SIEM.
- Audit your custom rules file in version control. Every suppression rule should have a comment explaining the justification and a date of last review.
- Test detection coverage. Run a periodic purple-team exercise to confirm your remaining alerts catch the attack patterns you care about.
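The review-date convention from the checklist can also be enforced with a small scanner. The sketch below assumes a per-rule comment of the form <!-- ... reviewed: YYYY-MM-DD --> directly above each rule and a 90-day review window; both are conventions to adapt to your own runbook, not Wazuh features.

```python
import re
from datetime import date

def stale_suppressions(rules_xml, max_age_days=90, today=None):
    """Flag rules whose 'reviewed: YYYY-MM-DD' comment is missing or too old.
    Assumes one comment directly above each rule (runbook convention)."""
    today = today or date.today()
    stale = []
    pattern = r'(?:<!--[^>]*?reviewed:\s*(\d{4}-\d{2}-\d{2})[^>]*?-->\s*)?<rule id="(\d+)"'
    for reviewed, rule_id in re.findall(pattern, rules_xml):
        if not reviewed:
            stale.append((rule_id, "no review date"))
        elif (today - date.fromisoformat(reviewed)).days > max_age_days:
            stale.append((rule_id, "last reviewed " + reviewed))
    return stale

sample = '''
<!-- Suppress promiscuous mode toggles. reviewed: 2024-01-15 -->
<rule id="100101" level="0"><description>x</description></rule>
<rule id="100102" level="0"><description>y</description></rule>
'''
print(stale_suppressions(sample, today=date(2024, 6, 1)))
# → [('100101', 'last reviewed 2024-01-15'), ('100102', 'no review date')]
```

Wiring this into the same CI job that checks rule ID allocation keeps the monthly review honest.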
The goal is a SIEM where every alert is worth reading. At 1,200 alerts per day across a 40-agent fleet, an analyst can meaningfully review every single one — and that is when a SIEM transitions from a compliance checkbox to an actual security tool.
