ELK Stack Tutorial: Build a Security Monitoring System with Docker

Build an ELK stack security monitoring system with Docker in 45 minutes. Learn to deploy Elasticsearch, Logstash, and Kibana for real-time threat detection, tracking authentication failures, network activity, and suspicious behaviour.

Cybersecurity Monitoring Stack with ELK

Quick Overview

What You'll Learn: How to set up an ELK stack for security monitoring using Docker
Time Required: 30-45 minutes
Skill Level: Intermediate
Key Outcomes: Real-time security dashboard tracking authentication failures, network activity, and system threats

Quick Start: https://github.com/cyberdesserts/elk_stack

Perfect for: Security engineers, DevSecOps professionals, and anyone building a home security lab


What is the ELK Stack for Security Monitoring?

Setting up an ELK stack for cybersecurity monitoring gives you enterprise-grade threat detection without the enterprise price tag. ELK (Elasticsearch, Logstash, and Kibana) is a powerful open-source stack used for real-time security monitoring, log analysis, and threat detection.

I've used the ELK stack extensively for everything from honeypot monitoring to production security operations. This tutorial shows you how to set up the ELK stack using Docker in under 45 minutes, complete with security telemetry scripts that track authentication failures, network threats, and suspicious file activity.

The ELK stack shares similarities with Splunk, which tends to be used in larger enterprise-grade environments for added support, ease of use, and features. However, ELK's open-source nature and flexibility make it ideal for learning, development, and even production deployments when properly configured and hardened.

The motivation behind this guide was creating something easy to follow that I can refer back to and develop over time. Having built ELK stack security monitoring solutions on everything from virtual machines and Raspberry Pis to EC2 instances, I've found it invaluable for capturing data such as syslog, especially while working on proofs of concept. There's much more you can do to harden the system before moving to production, but this provides a solid foundation to learn the basics and start experimenting.

In future articles, I'll explore more advanced data collection and analysis techniques. If you find this tutorial useful, please comment, like, and share. I'd love to hear your ideas and how you are using the ELK stack.

What You'll Build: A Security Monitoring Dashboard

We'll build a practical cybersecurity monitoring solution using the ELK stack running on Docker, with custom telemetry collection scripts for macOS (Windows version also available in the Git repo). By the end, you'll have a functioning security monitoring dashboard that collects real-time data about:

  • Authentication failures and login attempts
  • Network activity and connections
  • System load and resource usage
  • File system changes in sensitive directories
  • Process monitoring for suspicious activity

Core Components of Your ELK Stack Security Monitoring System

  • Elasticsearch: Stores and indexes your security telemetry data
  • Logstash: Processes and parses syslog data from your security scripts
  • Kibana: Visualises security metrics and creates interactive dashboards
  • Custom telemetry scripts: Collect macOS security indicators in real time

The entire ELK stack runs locally using Docker Compose, making it perfect for development, testing, or small-scale deployments. You can use Docker Desktop on Windows or Mac to get started. I was using a Mac, so you may need to adjust some commands; I've tried to account for platform differences where possible.

Prerequisites for Setting Up ELK Stack

Before we begin this ELK stack tutorial, make sure you have:

  • Docker Desktop installed and running on your macOS, Windows, or Linux machine
  • Basic command line knowledge
  • curl and netcat (nc) available (standard on macOS)
  • Administrator access for privileged port binding
  • Visual Studio Code or your preferred editor (optional but recommended)
  • AI coding assistant like Claude Code (helpful for fleshing out the project and troubleshooting)

How to Set Up the ELK Stack with Docker (Step-by-Step)

Quick Start

You can access all project files and documentation on GitHub and save yourself a bunch of time. For completeness, I've documented the code here and added detailed instructions to the GitHub project.

Docker Compose Configuration for ELK Stack

First, let's create our ELK stack using Docker Compose. This configuration sets up all three components with proper networking and persistence. Save this as docker-compose.yml in your project's root folder:

version: "3.8"

services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch-wolfi:9.1.3
    container_name: es01
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
    ports:
      - "9200:9200"
    volumes:
      - esdata:/usr/share/elasticsearch/data
    networks:
      - elastic-net

  kibana01:
    image: docker.elastic.co/kibana/kibana-wolfi:9.1.3
    container_name: kibana01
    ports:
      - "5601:5601"
    environment:
      - ELASTICSEARCH_HOSTS=http://es01:9200
    depends_on:
      - es01
    networks:
      - elastic-net

  logstash01:
    image: docker.elastic.co/logstash/logstash-wolfi:9.1.3
    container_name: logstash01
    ports:
      - "514:514/tcp"
      - "514:514/udp"
    volumes:
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    environment:
      - "LS_JAVA_OPTS=-Xms512m -Xmx512m"
    depends_on:
      - es01
    networks:
      - elastic-net

volumes:
  esdata:
    driver: local

networks:
  elastic-net:
    driver: bridge

What this configuration does:

  • Sets up Elasticsearch on port 9200 for data storage
  • Configures Kibana on port 5601 for visualisation
  • Establishes Logstash on port 514 for syslog data collection
  • Creates persistent storage so your security logs survive container restarts
  • Disables security features for learning (you'll enable these in production)
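
Before starting anything, it's worth sanity-checking the file: docker compose config parses and resolves the configuration, which catches YAML indentation mistakes and typos early.

# Validate docker-compose.yml without starting any containers
docker compose config --quiet && echo "docker-compose.yml is valid"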

Configuring Logstash for Security Logging

Create the Logstash pipeline configuration at ./logstash/pipeline/logstash.conf:

input {
  syslog {
    port => 514
    host => "0.0.0.0"
    type => "syslog"
  }
}

filter {
  grok {
    match => {
      "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\\[%{POSINT:syslog_pid}\\])?: %{GREEDYDATA:syslog_message}"
    }
  }
  
  date {
    match => [ "syslog_timestamp", "MMM dd HH:mm:ss", "MMM d HH:mm:ss" ]
    target => "@timestamp"
  }
}

output {
  elasticsearch {
    hosts => ["http://es01:9200"]
    index => "syslog-%{+YYYY.MM.dd}"
  }
  
  stdout {
    codec => rubydebug
  }
}

This Logstash configuration:

  • Accepts syslog data on port 514 (standard syslog port)
  • Parses syslog messages using grok patterns
  • Extracts timestamp, hostname, program name, and message content
  • Sends parsed data to Elasticsearch with daily indices
  • Outputs to console for debugging
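
To make the grok pattern concrete, here is how it maps a syslog-style line onto the named fields. The sample values are invented for illustration, and note that the syslog input plugin may already populate similar fields of its own before this filter runs:

# Raw line:
#   Oct 12 14:03:07 macbook auth-monitor: Failed login attempts last hour: 3
#
# Fields produced by the grok pattern:
#   syslog_timestamp => "Oct 12 14:03:07"
#   syslog_hostname  => "macbook"
#   syslog_program   => "auth-monitor"
#   syslog_message   => "Failed login attempts last hour: 3"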

Starting Your ELK Stack Security Monitoring System

# Create the logstash pipeline directory
mkdir -p ./logstash/pipeline

# Start the ELK stack (requires sudo for port 514)
sudo docker compose up -d

# Verify all containers are running
docker compose ps

# Check the logs to ensure everything started correctly
docker logs logstash01

Expected output: You should see all three containers (es01, kibana01, logstash01) in "Up" status.
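
Two quick checks (using the default ports from the compose file) confirm that Elasticsearch and Kibana are actually responding rather than just showing as "Up":

# Elasticsearch should report a cluster status of green or yellow
curl "localhost:9200/_cluster/health?pretty"

# Kibana answers on its status endpoint once it has finished starting (this can take a minute)
curl -s "localhost:5601/api/status" | head -c 300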

Creating Security Telemetry Collection Scripts

Now let's create a focused cybersecurity telemetry script that collects the most important security indicators from macOS systems. This is where the ELK stack for cybersecurity really shines, collecting real-time threat data.

The MVP Security Monitoring Script for macOS

Our script focuses on five key security metrics that matter most for threat detection:

  1. Authentication Failures: Failed login attempts (brute force detection)
  2. Network Activity: Active TCP connections (command & control detection)
  3. Process Monitoring: Network-active processes (malware detection)
  4. File System Changes: New files in staging areas (ransomware detection)
  5. System Load: CPU usage patterns (crypto mining detection)

Save this as cyber_security_mvp.sh:

#!/bin/bash
# Cybersecurity MVP Telemetry Script
# Focuses on key security indicators for Mac systems

HOST="localhost"
PORT="514"
HOSTNAME=$(hostname -s)

function send_syslog() {
    local program=$1
    local message=$2
    timestamp=$(date '+%b %d %H:%M:%S')
    
    # Using facility 13 (security), severity 6 (info)
    syslog_msg="<110>$timestamp $HOSTNAME $program: $message"
    echo "$syslog_msg" | nc -u -w 1 $HOST $PORT
    echo "Security telemetry: $program - $message"
}

echo "Collecting cybersecurity telemetry..."

# 1. Failed Login Attempts (recent)
failed_logins=$(log show --predicate 'eventMessage contains "authentication failure"' --last 1h 2>/dev/null | wc -l | xargs)
send_syslog "auth-monitor" "Failed login attempts last hour: $failed_logins"

# 2. New Network Connections (active TCP connections)
active_connections=$(netstat -an | grep ESTABLISHED | wc -l | xargs)
send_syslog "network-monitor" "Active TCP connections: $active_connections"

# 3. Suspicious Process Activity (processes with network connections)
network_processes=$(lsof -i -n | grep -v LISTEN | wc -l | xargs)
send_syslog "process-monitor" "Processes with network activity: $network_processes"

# 4. File System Changes (new files in /tmp in last hour)
tmp_files=$(find /tmp -type f -mtime -1h 2>/dev/null | wc -l | xargs)
send_syslog "file-monitor" "New files in /tmp last hour: $tmp_files"

# 5. System Load (potential indicator of crypto mining or DoS)
load_avg=$(uptime | awk -F'load averages:' '{print $2}' | awk '{print $1}' | xargs)
send_syslog "load-monitor" "1-minute load average: $load_avg"

echo "Cybersecurity telemetry sent to ELK stack"

Testing Your Security Telemetry Collection

# Make the script executable
chmod +x cyber_security_mvp.sh

# Run it once to test
./cyber_security_mvp.sh

# Check if data reached Elasticsearch
curl "localhost:9200/syslog-$(date '+%Y.%m.%d')/_search?pretty&size=5"

You should see JSON output showing your security telemetry data has been indexed in Elasticsearch. If you see errors, check the troubleshooting section below.

Note: The script runs once. Later, you'll want to set it up to collect data continuously via cron job or scheduled task.
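
If you want a quicker sanity check than reading raw search output, a couple of targeted queries work well. The second one searches across all fields, which avoids depending on exactly how the message was parsed (our grok filter names the field syslog_program, while the syslog input plugin may also set a program field):

# Count documents per daily index (each script run should add a handful)
curl "localhost:9200/_cat/count/syslog-*?v"

# Find one event from a specific monitor
curl -s "localhost:9200/syslog-*/_search?q=auth-monitor&size=1&pretty"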

How to Configure Kibana for Real-Time Threat Detection

Now that your ELK stack is collecting security data, let's configure Kibana to visualise and analyse it.

Creating Data Views in Kibana

  1. Access Kibana: Navigate to http://localhost:5601
  2. Go to Stack Management: Menu → Stack Management → Data Views
  3. Create Data View:
    • Name: Security Logs
    • Index pattern: syslog-*
    • Timestamp field: @timestamp
    • Click "Save data view to Kibana"

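If you prefer to script this step, Kibana also exposes a data views API; a minimal sketch, assuming the unauthenticated local setup from this tutorial:

curl -X POST "localhost:5601/api/data_views/data_view" \
  -H "kbn-xsrf: true" -H "Content-Type: application/json" \
  -d '{"data_view": {"title": "syslog-*", "name": "Security Logs", "timeFieldName": "@timestamp"}}'
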
Viewing Your Security Data

  1. Navigate to Discover: Menu → Analytics → Discover
  2. Select your data view: Choose "Security Logs" from the dropdown
  3. Add useful fields as columns:
    • syslog_hostname
    • syslog_program
    • syslog_message
    • @timestamp

You should now see your security telemetry flowing in real-time!

Security-Focused Queries for Threat Detection

Use these KQL (Kibana Query Language) queries in the Discover search bar to focus on specific security events:

# Authentication failures - potential brute force attacks
syslog_program:"auth-monitor"

# High network activity - potential data exfiltration
syslog_program:"network-monitor" AND message:>50

# Suspicious file activity - potential ransomware
syslog_program:"file-monitor" AND NOT message:"0"

# High system load - potential crypto mining
syslog_program:"load-monitor" AND message:>2.0

# Combine conditions - high network + high load
syslog_program:("network-monitor" OR "load-monitor") AND message:>50

Pro tip: Save these queries as "Saved Searches" in Kibana for quick access during incident response.

Automating Security Data Collection

For continuous security monitoring with your ELK stack, you need automated data collection. Here's how to set it up.

Setting Up Cron Jobs for macOS/Linux

For continuous monitoring, set up automated collection:

# Edit your crontab
crontab -e

# Add this line to run every 5 minutes
*/5 * * * * /path/to/your/cyber_security_mvp.sh >/dev/null 2>&1

# Or every 15 minutes for less frequent monitoring
*/15 * * * * /path/to/your/cyber_security_mvp.sh >/dev/null 2>&1

Replace /path/to/your/ with the actual path to your script.

Setting Up Scheduled Tasks for Windows

For Windows users, use Task Scheduler:

  1. Open Task Scheduler
  2. Create Basic Task
  3. Set trigger (e.g., every 5 minutes)
  4. Action: Start a program
  5. Program: powershell.exe
  6. Arguments: -File C:\path\to\cyber_security_mvp.ps1

(Windows PowerShell version available in the GitHub repository)
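
If you'd rather script this than click through the GUI, schtasks can create the equivalent task from a command prompt; a sketch with a placeholder path:

schtasks /Create /SC MINUTE /MO 5 /TN "ELK Security Telemetry" ^
  /TR "powershell.exe -ExecutionPolicy Bypass -File C:\path\to\cyber_security_mvp.ps1"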

Ensuring Data Persistence

Your security data persists across Docker restarts because we're using named volumes. However, Kibana configurations (dashboards, visualisations) will need to be recreated unless you add persistent storage for Kibana as well.

To verify data persistence:

# Stop the ELK stack
docker compose down

# Start it again
docker compose up -d

# Verify data is still there
curl "localhost:9200/_cat/indices/syslog-*?v"

You should see your indices with their document counts intact.

Key Security Metrics Every Cybersecurity Professional Should Track

Understanding what constitutes "normal" versus "suspicious" activity is critical for effective security monitoring with ELK stack. Here's what each metric reveals about your system's security posture.

What Each Metric Tells You

1. Failed Login Attempts

  • Normal baseline: 0-2 per hour
  • Suspicious threshold: >10 per hour
  • Threat indicator: Potential brute force attack or password spraying
  • Response: Investigate source IPs, consider temporary IP blocking

2. Active TCP Connections

  • Normal baseline: 10-50 for typical usage
  • Suspicious threshold: >100 connections
  • Threat indicator: Potential botnet activity, data exfiltration, or DDoS
  • Response: Review process list, check for unknown applications

3. Network-Active Processes

  • Normal baseline: 5-20 processes
  • Suspicious threshold: Sudden spikes or unknown process names
  • Threat indicator: Malware with command & control communication
  • Response: Identify unfamiliar processes, check against threat intelligence

4. New Files in /tmp Directory

  • Normal baseline: 0-5 new files per hour
  • Suspicious threshold: >20 files in short timespan
  • Threat indicator: Potential ransomware staging, dropper malware
  • Response: Examine file types and origins, scan with antivirus

5. System Load Average

  • Normal baseline: <1.0 on most systems
  • Suspicious threshold: >3.0 consistently over 15+ minutes
  • Threat indicator: Crypto mining malware or resource exhaustion attack
  • Response: Check top processes, investigate CPU-intensive applications

Establishing Your Security Baseline

The key to effective threat detection with ELK stack security monitoring is understanding your normal baseline:

  1. Run your telemetry for 7 days to establish patterns
  2. Document typical ranges for each metric during business hours vs. off-hours
  3. Identify anomalies - deviations of >50% from baseline warrant investigation
  4. Tune your thresholds based on your environment's unique characteristics

According to NIST's Computer Security Incident Handling Guide, establishing baselines is essential for effective incident detection and response.
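
One practical way to start that baseline is to let Elasticsearch aggregate the telemetry for you. The sketch below counts events per monitor per hour over the last 7 days; it assumes default dynamic mappings, which add a .keyword sub-field to syslog_program. For the actual metric values you would still need to parse the number out of syslog_message, but hourly event counts are a useful first check that collection is steady.

curl -s "localhost:9200/syslog-*/_search?size=0&pretty" \
  -H "Content-Type: application/json" -d '
{
  "query": { "range": { "@timestamp": { "gte": "now-7d" } } },
  "aggs": {
    "per_hour": {
      "date_histogram": { "field": "@timestamp", "fixed_interval": "1h" },
      "aggs": { "by_program": { "terms": { "field": "syslog_program.keyword" } } }
    }
  }
}'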

Creating Alerts and Advanced Detection

While beyond the scope of this ELK stack tutorial, you can extend this setup with advanced alerting and correlation:

Alerting Options

  • Elastic Stack Alerting (Watcher): Built-in alerting engine (requires license)
  • Custom scripts with thresholds: Simple bash scripts that check Elasticsearch queries (see the sketch after this list)
  • Integration with SIEM tools: Forward ELK data to dedicated SIEM platforms
  • Webhook notifications: Send alerts to Slack, Teams, PagerDuty, or email
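
The custom-script option above can be very small. Here's a minimal, cron-friendly sketch against the _count API, using the defaults from this tutorial; it's a heartbeat-style check that alerts when no telemetry has arrived recently, and you can adapt the same pattern to any threshold:

#!/bin/bash
# Heartbeat alert sketch: warn if no security telemetry was indexed recently

WINDOW="now-15m"
COUNT=$(curl -s "localhost:9200/syslog-*/_count" \
  -H "Content-Type: application/json" \
  -d "{\"query\":{\"range\":{\"@timestamp\":{\"gte\":\"$WINDOW\"}}}}" \
  | sed -E 's/.*"count":([0-9]+).*/\1/')

if [ "${COUNT:-0}" -eq 0 ]; then
  echo "ALERT: no security telemetry indexed since $WINDOW - check cron and Logstash"
fi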

Advanced Capabilities to Explore

  • Log correlation rules: Detect attack patterns across multiple log sources
  • Threat intelligence integration: Enrich logs with IOC (Indicators of Compromise) data
  • Machine learning anomaly detection: Use Elastic ML to identify unusual patterns
  • MITRE ATT&CK mapping: Tag events with MITRE ATT&CK techniques
  • Integration with other security tools: Connect firewalls, IDS/IPS, EDR solutions

Troubleshooting Common ELK Stack Issues

Logstash Not Starting

# Check for configuration syntax errors
docker logs logstash01

# Verify config file exists and is readable
docker exec logstash01 cat /usr/share/logstash/pipeline/logstash.conf

# Common issue: syntax errors in logstash.conf (unbalanced braces or quotes)
# Solution: the file uses Logstash's own config syntax, not YAML - check braces, quotes, and the pipeline volume path

No Data Appearing in Kibana

# Check if indices exist in Elasticsearch
curl "localhost:9200/_cat/indices?v"

# Test syslog connectivity directly
echo "test" | nc -u localhost 514

# Check Logstash is actually processing data
docker logs logstash01 --tail 20

# Verify your script is sending data
./cyber_security_mvp.sh
# You should see "Security telemetry: ..." messages

Permission Issues with Port 514

Port 514 is a privileged port (below 1024) and requires root access:

# Solution 1: Use sudo for Docker Compose
sudo docker compose up -d

# Solution 2: Modify docker-compose.yml to use higher ports
ports:
  - "1514:514/tcp"
  - "1514:514/udp"

# Then update your script to send to port 1514
PORT="1514"

Elasticsearch Container Crashes

# Check logs for memory issues
docker logs es01

# Common issue: Insufficient memory
# Solution: Increase Docker memory allocation in Docker Desktop preferences
# Or reduce heap size in docker-compose.yml:
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"

Data Not Persisting After Restart

# Verify named volumes are created
docker volume ls | grep esdata

# If missing, recreate with:
docker compose down
docker volume create elk-stack_esdata
docker compose up -d

Frequently Asked Questions About ELK Stack Security Monitoring

Is ELK stack good for cybersecurity?

Yes, the ELK stack is excellent for cybersecurity monitoring and log analysis. I've used it extensively for real-time threat detection, security incident response, and forensic investigations. It provides centralised logging and analysis capabilities similar to enterprise tools like Splunk but with open-source flexibility and no per-GB licensing costs. Many security teams use ELK as their primary SIEM (Security Information and Event Management) platform or as a complementary tool alongside commercial solutions.

The ELK stack excels at ingesting diverse log sources, from firewalls and intrusion detection systems to authentication logs and application data, making it invaluable for comprehensive security monitoring.

What's the difference between ELK stack and SIEM?

The ELK stack is a log management and analytics platform that can function as a SIEM when configured with proper security use cases, correlation rules, and alerting. Traditional SIEM solutions come with pre-built security correlation rules, compliance reporting, and threat intelligence integration out of the box.

ELK's advantage is flexibility: you customise detection logic for your specific environment rather than relying on vendor-defined rules. Think of ELK as a powerful foundation you build upon, while commercial SIEMs are more turnkey solutions. Many organisations use ELK for security monitoring because it allows them to:

  • Write custom detection rules tailored to their infrastructure
  • Integrate with any data source via Logstash
  • Visualise security data exactly how they need it
  • Avoid expensive per-GB licensing models

How much does it cost to run ELK stack for security monitoring?

The ELK stack itself is completely free and open-source, no licensing fees. Your costs come entirely from infrastructure:

  • Local/development: Free using Docker Desktop (this tutorial)
  • Cloud VM: $5-20/month for small-scale monitoring (AWS, DigitalOcean, Linode)
  • Production deployment: $100-1000+/month depending on data volume and retention requirements
  • Enterprise scale: Thousands per month for high-volume environments with redundancy

Compare this to commercial SIEM solutions that often charge $10-100+ per GB ingested or $50,000+ annually for enterprise licenses. Many organisations save 70-90% using ELK stack for security monitoring versus commercial alternatives.

Can I use ELK stack for compliance logging?

Absolutely. The ELK stack supports all the requirements for compliance logging across frameworks like GDPR, HIPAA, PCI-DSS, SOC 2, and ISO 27001. Key compliance capabilities include:

  • Audit trails: Immutable log storage with timestamp integrity
  • Retention policies: Configure index lifecycle management for required retention periods (see the example after this list)
  • Access controls: Enable X-Pack security features for role-based access
  • Encryption: TLS for data in transit, encryption at rest for sensitive data
  • Log integrity: Elasticsearch's distributed architecture prevents tampering
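
As a concrete example of the retention point above, a minimal index lifecycle policy might look like the sketch below; the policy name and the 90-day value are placeholders, so pick whatever your framework requires:

# Example only: delete syslog indices 90 days after creation
curl -X PUT "localhost:9200/_ilm/policy/syslog-retention" \
  -H "Content-Type: application/json" -d '
{
  "policy": {
    "phases": {
      "delete": { "min_age": "90d", "actions": { "delete": {} } }
    }
  }
}'

For the policy to take effect, it also needs to be referenced from an index template (index.lifecycle.name) so that new syslog-* indices pick it up.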

Important: This tutorial disables security features for learning purposes. For compliance use cases, you must enable authentication, authorisation, TLS encryption, and audit logging before production deployment. I'll cover hardening ELK stack for production in a future article.

How does ELK stack compare to Splunk?

Both are excellent platforms for security monitoring, but they serve different needs:

ELK Stack:

  • Open-source, free licensing
  • Highly customisable, requires more configuration
  • Best for: Organisations with technical teams who want flexibility
  • Lower total cost of ownership
  • Community-driven innovation

Splunk:

  • Commercial product with support
  • More out-of-box features and integrations
  • Best for: Enterprises wanting turnkey solution with vendor support
  • Higher costs but less initial setup time
  • Enterprise-grade support and training

In my experience, ELK stack is perfect for learning security monitoring concepts, building custom detection logic, and organisations that want to control costs while maintaining flexibility. Many security teams run both, using ELK for development and specific use cases, and Splunk for production SIEM.

What's Next: Expanding Your ELK Stack Security Monitoring

This ELK stack tutorial provides a solid foundation for cybersecurity monitoring. You've learned how to set up the ELK stack with Docker and collect real-time security telemetry. Here's how to expand your capabilities:

Additional Security Metrics to Collect

  • USB device insertions: Detect unauthorised hardware connections
  • Process creations: Monitor for suspicious process spawning
  • DNS queries: Identify command & control communication patterns
  • PowerShell execution: Detect fileless malware on Windows
  • SSH key changes: Monitor for credential manipulation
  • Privilege escalation attempts: Track sudo/admin command usage
  • Browser history: Identify phishing and malicious websites

Creating Custom Dashboards

Build specialised visualisations for different security scenarios:

  • Authentication dashboard: Login failures, successful logins by source IP, geographic distribution
  • Network monitoring dashboard: Connection timelines, top talkers, port usage patterns
  • Threat hunting dashboard: Anomaly detection, unusual process behaviour, file system changes
  • Executive dashboard: High-level security metrics, trend analysis, compliance reporting

Setting Up Automated Alerting

Move beyond passive monitoring to active response:

  1. Threshold-based alerts: Trigger when metrics exceed baselines
  2. Pattern detection: Alert on specific attack signatures
  3. Anomaly detection: Use machine learning to identify unusual behaviour
  4. Correlation rules: Detect multi-stage attacks across log sources

Integrating with Threat Intelligence

Enrich your security monitoring with external intelligence:

  • IOC feeds: Automatically check IPs/domains against threat databases
  • MITRE ATT&CK mapping: Tag events with attack techniques
  • CVE correlation: Link vulnerabilities to exploitation attempts
  • OSINT integration: Incorporate open-source intelligence

Hardening for Production

Before deploying to production environments:

  • Enable X-Pack security: Authentication, authorisation, encryption (see the sketch after this list)
  • Implement TLS: Encrypt all communications between ELK components
  • Configure backups: Automated snapshots of your security data
  • Set up high availability: Multi-node clusters for redundancy
  • Implement log retention: Automated index lifecycle management
  • Add audit logging: Track who accessed what security data
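
As a starting point for the first item, the compose file can be switched to authenticated mode. Here's a minimal sketch of the Elasticsearch side only; the password is obviously a placeholder, and Kibana and Logstash will then need their own credentials, which I'll cover in the hardening article:

  es01:
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=true
      - ELASTIC_PASSWORD=change_me_now    # password for the built-in elastic superuser
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"

Once security is enabled, curl requests need -u elastic:<password>, and Kibana must authenticate as its own service account (kibana_system) rather than the elastic superuser.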

I'll cover these advanced topics in future articles. Subscribe to receive updates when new tutorials are published.

Repository and Code

All the code and configurations used in this ELK stack tutorial are available in the GitHub repository:

GitHub Repository: elk-cybersecurity-monitoring

The repository includes:

  • Complete Docker Compose configuration
  • Logstash pipeline configurations with parsing rules
  • Security telemetry collection scripts for macOS and Windows
  • Additional monitoring scripts for advanced use cases
  • Platform-specific setup instructions and troubleshooting guides
  • Example Kibana dashboards (import ready)

⭐ Star the repository if you find it useful!

Conclusion

This hands-on ELK stack tutorial demonstrates the fundamentals of cybersecurity telemetry collection and analysis. You've learned how to set up the ELK stack using Docker, configure security logging pipelines, and collect real-time threat data from your systems. While designed for learning and experimentation, the concepts and techniques covered here form the foundation of enterprise-grade security monitoring solutions.

The practical approach of collecting real security metrics from your own system provides invaluable experience in understanding what normal system behaviour looks like, a critical skill for effective threat detection. As you experiment with different data sources and visualisation techniques, you'll develop the expertise needed to design and implement sophisticated security monitoring solutions.

According to the CIS Controls, continuous monitoring and log management are foundational security practices. The ELK stack gives you the tools to implement these controls effectively, whether you're protecting a home lab or enterprise infrastructure.

Remember: the best way to learn cybersecurity monitoring is by doing. Start with this simple ELK stack setup, explore the data it generates, and gradually add complexity as your understanding grows. The security landscape constantly evolves, and having a solid foundation in data collection and analysis will serve you well regardless of which specific tools or platforms you use in the future.

I continue to learn about the ELK stack and hope to learn from others who are far more knowledgeable. I appreciate feedback that improves the quality of the content I share with the community, and subscribing to the blog is a huge boost that helps me keep writing posts like this.

Have questions or suggestions about this ELK stack tutorial? Feel free to comment below or open an issue in the GitHub repository.

Like this article? Join the Beta Lab tutorial guide for future content: https://lab.cyberdesserts.com


Additional Resources

Tags: ELK stack tutorial, security monitoring, cybersecurity, Elasticsearch, Logstash, Kibana, Docker, threat detection, log analysis, SIEM

Last Updated: October 2025