Introducing Wazuh MCP Server: Bridging SIEM and AI for Smarter Security Operations

SOCFortress


Jul 12, 2025

https://github.com/socfortress/wazuh-mcp-server

Why We Built This

Security teams deal with an overwhelming amount of data daily. SIEMs like Wazuh are powerful, but getting insights out of them often requires writing custom scripts, complex API calls, or deep knowledge of the platform.

At the same time, Large Language Models (LLMs) like GPT-4 have shown that natural language can be a powerful way to interact with data. The challenge has been connecting these two worlds in a secure, reliable, and production-ready way.

That’s why we built the Wazuh MCP Server.

What Is the Wazuh MCP Server?

The Wazuh MCP Server is a Model Context Protocol (MCP) server that acts as a secure bridge between Wazuh and LLMs. It exposes Wazuh’s API capabilities as “tools” that an AI model can call, enabling natural language interactions with security data.

Instead of writing raw API requests, you can now ask questions like:

“Show me all running processes on agent 000.”

…and get structured data back from Wazuh, ready for analysis, reporting, or automated decision-making.

How It Works

Under the hood, the Wazuh MCP Server provides:

  • Production-ready deployment: pip-installable and simple to run
  • LLM integration: works with frameworks like LangChain
  • Tools for common Wazuh operations, including:
      • Listing agents
      • Getting running processes
      • Retrieving network ports
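Under the hood, each of these tools is invoked through the Model Context Protocol, which frames tool calls as JSON-RPC 2.0 requests with the `tools/call` method. Here is a minimal sketch of building such a request in Python; the tool name `list_agents` and its arguments are illustrative assumptions, not necessarily the names this server uses (check the project README for the real tool list):

```python
import json

def build_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request, the message shape
    defined by the Model Context Protocol specification."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical tool name and argument for illustration only.
message = build_tool_call("list_agents", {"status": "active"})
print(message)
```

An MCP client (or an LLM framework on its behalf) sends messages like this to the server, which translates them into the corresponding Wazuh API calls.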

Example Use Case

Imagine a SOC analyst working with GPT-4 integrated via LangChain. Instead of writing Python code to fetch agents from Wazuh, the analyst could simply say:

“List all active agents and their IP addresses.”

The LLM calls the MCP tool behind the scenes, hits the Wazuh API, and returns the data — all without the analyst ever leaving a natural language interface.
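The dispatch pattern behind this flow can be sketched in plain Python. This is a simplified stand-in for what frameworks like LangChain do, with the transport to the MCP server stubbed out and the tool names invented for illustration:

```python
from typing import Callable, Dict

def call_mcp_tool(name: str, arguments: dict) -> dict:
    # In a real integration this would POST a JSON-RPC request to
    # the Wazuh MCP Server; stubbed here for illustration.
    return {"tool": name, "arguments": arguments, "result": "stubbed"}

# Registry mapping tool names to handlers; names are hypothetical.
TOOLS: Dict[str, Callable[..., dict]] = {
    "list_agents": lambda **kw: call_mcp_tool("list_agents", kw),
    "get_processes": lambda **kw: call_mcp_tool("get_processes", kw),
}

def dispatch(tool_name: str, **kwargs) -> dict:
    """Route a model-selected tool call to its handler."""
    if tool_name not in TOOLS:
        raise KeyError(f"unknown tool: {tool_name}")
    return TOOLS[tool_name](**kwargs)

# The model reads "List all active agents and their IP addresses."
# and decides to call `list_agents` with a status filter.
response = dispatch("list_agents", status="active")
print(response["tool"])
```

The key design point is that the analyst never sees this layer: the model chooses the tool and arguments from the natural-language request, and the framework handles the round trip.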


This significantly reduces friction for threat hunting, investigations, and reporting.

Quick Start

You can install the server directly from GitHub:

python -m venv .venv && source .venv/bin/activate
pip install git+https://github.com/socfortress/wazuh-mcp-server.git

Configure your environment:

WAZUH_PROD_URL=https://your-wazuh-manager:55000
WAZUH_PROD_USERNAME=your-username
WAZUH_PROD_PASSWORD=your-password
WAZUH_PROD_SSL_VERIFY=false

Note: WAZUH_PROD_SSL_VERIFY=false disables TLS certificate verification. Use it only for testing against self-signed certificates, and set it to true in production.

Then run the server:

python -m wazuh_mcp_server

The server listens on http://127.0.0.1:8000 by default and is ready to receive requests from your LLM integration.
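To sanity-check the running server, you can build a request against it from Python. The sketch below constructs (but does not send) an HTTP POST; the `/mcp` endpoint path and the `tools/list` payload are assumptions for illustration, so consult the project README for the server's actual API:

```python
import json
import urllib.request

# JSON-RPC 2.0 request asking the server which tools it exposes.
payload = json.dumps({
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
    "params": {},
}).encode()

req = urllib.request.Request(
    "http://127.0.0.1:8000/mcp",  # assumed endpoint path
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(req.full_url, req.get_method())
```

Sending the request with `urllib.request.urlopen(req)` against a live server would return the tool catalog that your LLM integration consumes.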

Benefits for Security Teams

Here’s why this matters:

Natural Language Operations

Reduce barriers for analysts to extract insights from Wazuh.

Faster Investigations

Automate complex queries that used to require coding or manual steps.

AI-Driven Security

Integrate LLMs safely into your SOC without compromising security or access controls.

Open Source and Extensible

Freely available, with room for contributions and custom integrations.

Get Started

If you’re curious to try it out or explore how AI can enhance your security operations:

🔗 Check out the project here: https://github.com/socfortress/wazuh-mcp-server

Need Help?

The functionality discussed in this post, and much more, is available via the SOCFortress platform. Let SOCFortress help you and your team keep your infrastructure secure.

Website: https://www.socfortress.co/

Contact Us: https://www.socfortress.co/contact_form.html
