Building a Wazuh-Powered SOC on a Budget: From Log Collection to Automated Response

Standing up a Security Operations Center does not require a six-figure SIEM license. Wazuh — the open-source fork of OSSEC — provides log collection, file integrity monitoring, vulnerability detection, and active response across every major operating system. Paired with TheHive for case management, OpenCTI for threat intelligence, and Matrix for real-time alerting, you can build a fully operational SOC on commodity hardware.

This guide covers the end-to-end deployment: agent rollout, custom rule development, alert tuning, active response, and integration with the broader security toolchain.

Architecture Overview

The Wazuh stack consists of three components:

  • Wazuh Manager: Receives and analyzes events from agents. Runs decoders, rules, and active response.
  • Wazuh Indexer: Stores alerts in a search-optimized index (OpenSearch-based).
  • Wazuh Dashboard: Web interface for alert triage and visualization.

For a budget deployment, all three can run on a single server with 8 GB RAM and 4 cores. In production, separate the indexer onto its own machine once you exceed 50 agents or 5,000 events per second.

# docker-compose.yml (simplified)
services:
  wazuh-manager:
    image: wazuh/wazuh-manager:4.9.0
    ports:
      - "1514:1514/tcp"   # Agent communication (events)
      - "1515:1515/tcp"   # Agent enrollment
      - "55000:55000/tcp" # Wazuh API
    volumes:
      - wazuh_etc:/var/ossec/etc
      - wazuh_logs:/var/ossec/logs
      - wazuh_rules:/var/ossec/etc/rules
      - wazuh_decoders:/var/ossec/etc/decoders

  wazuh-indexer:
    image: wazuh/wazuh-indexer:4.9.0
    environment:
      - "OPENSEARCH_JAVA_OPTS=-Xms2g -Xmx2g"
    volumes:
      - indexer_data:/var/lib/wazuh-indexer

  wazuh-dashboard:
    image: wazuh/wazuh-dashboard:4.9.0
    ports:
      - "443:5601"
    depends_on:
      - wazuh-indexer

Agent Deployment Across Platforms

Linux (Debian/Ubuntu)

curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | gpg --dearmor -o /usr/share/keyrings/wazuh.gpg
echo "deb [signed-by=/usr/share/keyrings/wazuh.gpg] https://packages.wazuh.com/4.x/apt/ stable main" > /etc/apt/sources.list.d/wazuh.list
apt-get update && apt-get install -y wazuh-agent

# Configure manager address
sed -i 's/MANAGER_IP/siem.internal.example.com/' /var/ossec/etc/ossec.conf

systemctl enable wazuh-agent && systemctl start wazuh-agent

Windows

Download the MSI from the Wazuh repository and deploy via Group Policy or SCCM:

wazuh-agent-4.9.0-1.msi /q WAZUH_MANAGER="siem.internal.example.com" WAZUH_AGENT_GROUP="windows-servers"

FreeBSD

pkg install wazuh-agent
# Edit /var/ossec/etc/ossec.conf with manager address
sysrc wazuh_agent_enable=YES
service wazuh-agent start

Agent Groups

Groups control which configuration and rules apply to each agent. Structure them by OS and role:

<!-- /var/ossec/etc/shared/linux-webservers/agent.conf -->
<agent_config os="linux">
  <localfile>
    <log_format>apache</log_format>
    <location>/var/log/apache2/access.log</location>
  </localfile>
  <localfile>
    <log_format>apache</log_format>
    <location>/var/log/apache2/error.log</location>
  </localfile>
  <syscheck>
    <directories check_all="yes" realtime="yes">/var/www</directories>
    <directories check_all="yes" realtime="yes">/etc/apache2</directories>
  </syscheck>
</agent_config>

Assign agents to groups during registration or via the API:

/var/ossec/bin/agent_groups -a -i 003 -g linux-webservers

Custom Decoder and Rule Development

Wazuh processes logs in two phases: decoders extract structured fields from raw log lines, and rules evaluate those fields to generate alerts.

Writing a Custom Decoder

Suppose your application logs authentication events in this format:

2026-03-15T10:22:41Z AUTH service=api action=login user=jsmith result=failure ip=198.51.100.45

Create a decoder to extract these fields:

<!-- /var/ossec/etc/decoders/local_decoder.xml -->
<decoder name="custom-auth">
  <prematch type="pcre2">^\d{4}-\d{2}-\d{2}T\S+ AUTH </prematch>
  <regex type="pcre2">service=(\S+) action=(\S+) user=(\S+) result=(\S+) ip=(\S+)</regex>
  <order>service_name, auth_action, srcuser, auth_result, srcip</order>
</decoder>

The type="pcre2" attributes matter: Wazuh's default OS_Regex engine does not support {n} quantifiers, so a pattern like \d{4} silently fails to match without them.

Writing Custom Rules

Rules reference decoded fields to generate alerts at specific severity levels (0-15):

<!-- /var/ossec/etc/rules/local_rules.xml -->

<!-- Baseline: single failed login (low severity) -->
<rule id="100100" level="5">
  <decoded_as>custom-auth</decoded_as>
  <field name="auth_result">^failure$</field>
  <description>Authentication failure: $(srcuser) from $(srcip)</description>
  <group>authentication_failed,</group>
</rule>

<!-- Escalation: 5 failures from same IP in 2 minutes -->
<rule id="100101" level="10" frequency="5" timeframe="120">
  <if_matched_sid>100100</if_matched_sid>
  <same_source_ip/>
  <description>Brute force attempt: 5+ failures from $(srcip)</description>
  <group>authentication_failures,brute_force,</group>
</rule>

<!-- Critical: 10 failures across different users (credential stuffing) -->
<rule id="100102" level="13" frequency="10" timeframe="300">
  <if_matched_sid>100100</if_matched_sid>
  <same_source_ip/>
  <different_srcuser/>
  <description>Credential stuffing: $(srcip) targeting multiple accounts</description>
  <group>credential_stuffing,</group>
</rule>

<!-- Privilege escalation: sudo to root by non-admin -->
<rule id="100110" level="12">
  <if_sid>5402</if_sid>
  <user type="pcre2">^(?!admin1$|admin2$)</user>
  <match>COMMAND=/bin/bash</match>
  <description>Non-admin user $(srcuser) escalated to root shell</description>
  <group>privilege_escalation,</group>
</rule>

Rule ID ranges: Wazuh reserves IDs 0-99999 for its own ruleset; custom rules belong in 100000-120000. Establish a numbering convention: 100100-100199 for authentication, 100200-100299 for file integrity, 100300-100399 for network events, and so on.
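A small check in CI keeps the convention honest by flagging any custom rule whose ID strays into Wazuh's reserved range. A minimal sketch in Python (point it at whatever file holds your local rules):

```python
import re

def reserved_ids(xml_text):
    """Return rule IDs that collide with Wazuh's reserved range (< 100000)."""
    ids = [int(m) for m in re.findall(r'<rule\s+id="(\d+)"', xml_text)]
    return sorted(i for i in ids if i < 100000)

# Example: 5402 would be flagged as reserved, 100100 would not.
sample = '<rule id="100100" level="5"></rule>\n<rule id="5402" level="3"></rule>'
print(reserved_ids(sample))  # -> [5402]
```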

Alert Tuning: Separating Signal from Noise

A fresh Wazuh deployment generates thousands of alerts per day, most of them noise. Tuning is the single most important post-deployment activity.

The Tuning Methodology

  1. Baseline for one week without suppression. Let alerts accumulate.
  2. Identify the top 20 rules by volume. These are your noise candidates.
  3. Categorize each: true positive (keep), false positive (suppress), or informational (reduce level).
  4. Write suppression rules for false positives.
  5. Repeat weekly as new log sources are added.
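Step 2 can be scripted against the manager's JSON alert log. A minimal sketch, assuming the stock alerts.json location and alert schema (one JSON object per line, with rule.id and rule.description fields):

```python
import json
from collections import Counter

def top_rules(path, n=20):
    """Count alert volume per rule in a Wazuh alerts.json file."""
    counts = Counter()
    with open(path) as fh:
        for line in fh:
            try:
                alert = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip partial or corrupt lines
            rule = alert.get("rule", {})
            counts[(rule.get("id"), rule.get("description"))] += 1
    return counts.most_common(n)

# Usage on the manager:
#   for (rid, desc), count in top_rules("/var/ossec/logs/alerts/alerts.json"):
#       print(f"{count:8d}  {rid}  {desc}")
```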

Suppression Rules

A rule with level="0" effectively suppresses its parent. This is cleaner than modifying stock rules, which would be overwritten on upgrades:

<!-- Suppress known scanner health checks -->
<rule id="100900" level="0">
  <if_sid>31101</if_sid>
  <url>^/health$|^/readyz$|^/livez$</url>
  <description>Suppressed: health check endpoint access</description>
</rule>

<!-- Suppress SSH key renegotiation from known jump host -->
<rule id="100901" level="0">
  <if_sid>5706</if_sid>
  <srcip>10.0.0.5</srcip>
  <description>Suppressed: SSH rekey from bastion host</description>
</rule>

<!-- Reduce cron noise from level 7 to level 3 -->
<rule id="100902" level="3">
  <if_sid>5901</if_sid>
  <match>^/usr/bin/certbot</match>
  <description>Expected cron: certificate renewal</description>
</rule>

The goal is not zero alerts. The goal is that every alert at level 10 or above represents something a human should investigate.

File Integrity Monitoring (FIM)

Wazuh’s syscheck module detects unauthorized changes to critical system files:

<!-- ossec.conf syscheck section -->
<syscheck>
  <frequency>3600</frequency>
  <scan_on_start>yes</scan_on_start>

  <!-- Critical system files -->
  <directories check_all="yes" realtime="yes">/etc/passwd,/etc/shadow,/etc/sudoers</directories>
  <directories check_all="yes" realtime="yes">/etc/ssh/sshd_config</directories>

  <!-- Web application files -->
  <directories check_all="yes" report_changes="yes">/var/www/app</directories>

  <!-- Ignore expected changes -->
  <ignore>/var/www/app/storage/logs</ignore>
  <ignore>/var/www/app/cache</ignore>
  <ignore type="sregex">.swp$|.tmp$</ignore>
</syscheck>

The report_changes="yes" directive stores diffs of changed files — invaluable for incident response when you need to see exactly what an attacker modified.

Active Response: Automated Blocking

Active response lets Wazuh take defensive action when rules fire. Use it carefully — a false positive triggering an IP block can cause an outage.

<!-- ossec.conf active response section -->
<active-response>
  <command>firewall-drop</command>
  <location>local</location>
  <rules_id>100101,100102</rules_id>
  <timeout>3600</timeout> <!-- seconds -->
  <repeated_offenders>30,60,1440</repeated_offenders> <!-- minutes, not seconds -->
</active-response>

<!-- Custom response: disable compromised account -->
<command>
  <name>disable-user</name>
  <executable>disable-user.sh</executable>
  <timeout_allowed>yes</timeout_allowed>
</command>

<active-response>
  <command>disable-user</command>
  <location>local</location>
  <rules_id>100102</rules_id>
  <timeout>0</timeout> <!-- No auto-revert: manual unlock required -->
</active-response>

The disable-user.sh script:

#!/bin/bash
# /var/ossec/active-response/bin/disable-user.sh
# Wazuh 4.x delivers active response data as JSON on stdin rather than
# positional arguments. Requires jq on the endpoint.

read -r INPUT_JSON
COMMAND=$(echo "$INPUT_JSON" | jq -r '.command')
USER=$(echo "$INPUT_JSON" | jq -r '.parameters.alert.data.srcuser')

if [ "$COMMAND" = "add" ] && [ -n "$USER" ]; then
    usermod -L "$USER" 2>/dev/null
    logger -t wazuh-ar "Account locked: $USER (credential stuffing response)"
fi
# No "delete" action -- manual unlock required after investigation

Safety guardrails: Always whitelist critical infrastructure IPs from firewall-drop rules. Always require manual intervention for account lockouts (no auto-unlock timer). Test active response rules in a staging environment first.
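For the IP whitelist, Wazuh has a built-in mechanism: addresses listed under white_list in the manager's global section are exempt from active response blocking. A minimal example (the addresses are placeholders for your own infrastructure):

```xml
<!-- ossec.conf global section -->
<global>
  <white_list>10.0.0.5</white_list>       <!-- bastion host -->
  <white_list>10.0.1.0/24</white_list>    <!-- management network -->
</global>
```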

Integration: TheHive, OpenCTI, and Matrix

TheHive Integration (Case Management)

When Wazuh generates a high-severity alert, automatically create a case in TheHive:

#!/usr/bin/env python3
# /var/ossec/integrations/custom-thehive.py

import json
import sys
import requests

THEHIVE_URL = "https://thehive.internal.example.com"
THEHIVE_API_KEY = "YOUR_API_KEY_HERE"

def create_case(alert):
    case = {
        "title": f"[Wazuh] {alert['rule']['description']}",
        "description": json.dumps(alert, indent=2),
        "severity": min(alert['rule']['level'] // 4 + 1, 4),
        "tlp": 2,
        "tags": alert['rule'].get('groups', []),
        "source": "wazuh",
        "sourceRef": alert['id']
    }

    response = requests.post(
        f"{THEHIVE_URL}/api/v1/case",
        headers={"Authorization": f"Bearer {THEHIVE_API_KEY}"},
        json=case,
        verify=True
    )
    return response.status_code

# Wazuh passes the alert file path as the first argument
with open(sys.argv[1]) as fh:
    alert = json.load(fh)
create_case(alert)

Enable the integration in ossec.conf:

<integration>
  <name>custom-thehive.py</name>
  <level>10</level>
  <alert_format>json</alert_format>
</integration>

OpenCTI Integration (Threat Intelligence)

Feed Indicators of Compromise from OpenCTI into Wazuh’s CDB lists for real-time matching:

#!/bin/bash
# Runs hourly via cron to pull IOCs from OpenCTI
OPENCTI_URL="https://cti.internal.example.com"
OPENCTI_TOKEN="YOUR_TOKEN_HERE"

# Pull malicious IPs (note the escaped quotes inside the GraphQL payload)
curl -s -H "Authorization: Bearer $OPENCTI_TOKEN" \
  -H "Content-Type: application/json" \
  "$OPENCTI_URL/graphql" \
  -d '{"query":"{ indicators(filters:{key:\"pattern_type\",values:[\"ipv4-addr\"]}) { edges { node { observable_value } } } }"}' \
  | jq -r '.data.indicators.edges[].node.observable_value' \
  | awk '{print $1":"}' > /var/ossec/etc/lists/threat-intel-ips

# CDB lists are compiled at startup, so restart the manager to load changes
/var/ossec/bin/wazuh-control restart

Then create a rule that matches connections to threat-intel IPs:

<rule id="100200" level="14">
  <if_sid>5710</if_sid>
  <list field="srcip" lookup="address_match_key">etc/lists/threat-intel-ips</list>
  <description>Connection from threat-intel flagged IP: $(srcip)</description>
  <group>threat_intel,ioc_match,</group>
</rule>
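One easy-to-miss step: the manager only loads CDB lists that are declared in the ruleset section of ossec.conf, so register the file there before any rule references it:

```xml
<!-- ossec.conf ruleset section -->
<ruleset>
  <list>etc/lists/threat-intel-ips</list>
</ruleset>
```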

Matrix Alerting

For real-time notification, pipe critical alerts to a Matrix room:

#!/usr/bin/env python3
# /var/ossec/integrations/custom-matrix.py

import json
import sys
import requests

MATRIX_URL = "https://matrix.internal.example.com"
ROOM_ID = "!your_room_id:example.com"
ACCESS_TOKEN = "YOUR_BOT_TOKEN"

def send_alert(alert):
    severity = alert['rule']['level']
    icon = "\u26a0\ufe0f" if severity < 12 else "\U0001f6a8"

    message = (
        f"{icon} Wazuh Alert (Level {severity})\n\n"
        f"Rule: {alert['rule']['description']}\n"
        f"Agent: {alert.get('agent', {}).get('name', 'N/A')}\n"
        f"Source: {alert.get('data', {}).get('srcip', 'N/A')}\n"
        f"Time: {alert['timestamp']}"
    )

    requests.put(
        f"{MATRIX_URL}/_matrix/client/v3/rooms/{ROOM_ID}/send/m.room.message/{alert['id']}",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json={"msgtype": "m.text", "body": message, "format": "org.matrix.custom.html",
              "formatted_body": message.replace("\n", "<br>")}
    )

with open(sys.argv[1]) as fh:
    alert = json.load(fh)
send_alert(alert)

Scaling Considerations

As your SOC grows beyond 100 agents, consider these adjustments:

  • Separate the indexer onto dedicated hardware with SSD storage. Indexing is the first bottleneck.
  • Use agent groups aggressively to limit which rules evaluate which logs. A database server does not need web application rules.
  • Archive old indices to cold storage after 90 days. Most investigations focus on the last 30 days.
  • Monitor the manager’s queue. If /var/ossec/var/run/wazuh-analysisd.state shows events_dropped > 0, you need more CPU or to split into a cluster.
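That last check is easy to automate. A minimal sketch, assuming the stock key='value' line format of the analysisd state file:

```python
def parse_state(text):
    """Parse wazuh-analysisd.state lines of the form key='value'."""
    stats = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip comments and blanks
        key, _, value = line.partition("=")
        stats[key.strip()] = value.strip().strip("'")
    return stats

# Usage on the manager:
#   state = parse_state(open("/var/ossec/var/run/wazuh-analysisd.state").read())
#   if int(state.get("events_dropped", 0)) > 0:
#       print("analysisd is dropping events -- add CPU or split into a cluster")
```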

The beauty of this stack is that every component is open source and every integration point is a documented API. Start small, tune aggressively, and scale horizontally when the data demands it.
