OpenSearch

2026-03-13

From Elastic to OpenSearch: Connecting Filebeat and Logstash to OpenSearch

A quick, 4-minute guide for Elastic Stack users on routing Filebeat through Logstash to OpenSearch. Includes production-ready configs, Docker Compose, and verification steps.

If you’ve been running the Elastic Stack for years, OpenSearch will feel familiar: the same REST style, a similar query DSL, and largely compatible tooling. The main wrinkle is ingestion: recent Beats releases refuse to connect to non-Elasticsearch clusters, so they can’t ship directly to OpenSearch. The simplest, most robust path is to keep Filebeat and insert Logstash as the bridge to OpenSearch.
This post shows you exactly how to wire up Filebeat → Logstash → OpenSearch, with production-minded defaults and a quick Docker Compose to try it locally.

Why Filebeat → Logstash → OpenSearch?

  • Keep your existing shippers. No invasive changes to hosts already running Filebeat.
  • Modern & supported. Use current Filebeat versions while leveraging the OpenSearch-maintained Logstash output.
  • Flexibility. Parse, enrich, route, and secure traffic at the Logstash layer without touching your agents.

Step 1: Configure Filebeat

Point Filebeat at Logstash (not Elasticsearch). That’s it.
# filebeat.yml
filebeat.inputs:
  - type: filestream   # the older "log" input is deprecated in Filebeat 8.x
    id: var-log
    enabled: true
    paths:
      - /var/log/*.log

# Send to Logstash
output.logstash:
  hosts: ["logstash.example.com:5044"]

# (Optional) TLS for Beats → Logstash
# output.logstash.ssl.certificate_authorities: ["/etc/filebeat/ca.crt"]
# output.logstash.ssl.verification_mode: full
Tip: If you already run Filebeat modules (nginx, system, etc.), keep them—no changes needed beyond the output.
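Before restarting the service, you can sanity-check the config and the connection to Logstash with Filebeat’s built-in test subcommands (the config path is the usual package default; adjust for your install):

```shell
# Validate filebeat.yml syntax
filebeat test config -c /etc/filebeat/filebeat.yml

# Attempt a real connection to the configured Logstash output
filebeat test output -c /etc/filebeat/filebeat.yml
```

Both subcommands exit non-zero on failure, so they slot neatly into deploy scripts.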

Step 2: Configure Logstash

Create /etc/logstash/conf.d/beats-to-opensearch.conf:
input {
  beats {
    port => 5044
    # ssl => true
    # ssl_certificate => "/etc/logstash/certs/server.crt"
    # ssl_key         => "/etc/logstash/certs/server.key"
  }
}

filter {
  # Example: lightly parse syslog if using the system module
  if [fileset][module] == "system" {
    grok {
      match => { "message" => "%{SYSLOGBASE}" }
      tag_on_failure => ["_syslog_grok_fail"]
    }
    date {
      match => [ "timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
      target => "@timestamp"
    }
  }
}

output {
  opensearch {
    hosts => ["https://opensearch.example.com:9200"]
    user  => "admin"          # demo credentials; use a dedicated writer account in prod
    password => "admin"
    index => "logs-%{+YYYY.MM.dd}"
    ssl => true
    # Recommended in prod:
    # ssl_certificate_verification => true
    # cacert => "/etc/logstash/certs/root-ca.pem"

    # Optional: control action+pipeline
    # action => "index"
    # pipeline => "ingest-pipeline-name"
  }

  # Optional: keep a local backup on failure
  # if "_opensearch_failures" in [tags] {
  #   file { path => "/var/log/logstash/opensearch_failed.ndjson" codec => json_lines }
  # }
}
Install the OpenSearch output plugin on Logstash:
sudo /usr/share/logstash/bin/logstash-plugin install logstash-output-opensearch
Access control: Ensure the OpenSearch user has the create_index and write permissions on the target indices, plus cluster_manage_index_templates if you manage index templates from Logstash.
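As a sketch, a least-privilege role in the security plugin’s roles.yml could look like the following (the role name and index pattern are illustrative; adapt them to your cluster):

```yaml
# roles.yml (OpenSearch security plugin) — illustrative role for the Logstash user
logstash_writer:
  cluster_permissions:
    - cluster_monitor            # version/health checks on connect
    - cluster_composite_ops      # bulk requests
  index_permissions:
    - index_patterns:
        - "logs-*"
      allowed_actions:
        - create_index
        - write
```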

Step 3: Verify End-to-End

  1. Start Logstash, then Filebeat.
  2. Check Logstash logs for successful Beats and OpenSearch connections:
    tail -f /var/log/logstash/logstash-plain.log
    
  3. Confirm indices in OpenSearch:
    curl -u admin:admin "https://opensearch.example.com:9200/_cat/indices?v"
    
  4. In OpenSearch Dashboards, create an index pattern (e.g., logs-*) and explore your data.
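Because the output uses index => "logs-%{+YYYY.MM.dd}" (rendered in UTC), you can compute today’s expected index name in the shell and query it directly; the curl line is a sketch with placeholder host and credentials:

```shell
# Compute today's expected index name (UTC, matching Logstash's sprintf default)
TODAY_INDEX="logs-$(date -u +%Y.%m.%d)"
echo "$TODAY_INDEX"

# Then count its documents (placeholder host/credentials):
# curl -s -u admin:admin "https://opensearch.example.com:9200/${TODAY_INDEX}/_count?pretty"
```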

Optional: Docker Compose Quick Start

Use this minimal stack to test locally:
version: '3.8'
services:
  opensearch:
    image: opensearchproject/opensearch:2.15.0
    environment:
      - discovery.type=single-node
      - DISABLE_SECURITY_PLUGIN=true   # demo only: no auth, no TLS
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9200"]
      interval: 10s
      timeout: 5s
      retries: 50
    ports:
      - "9200:9200"

  logstash:
    image: opensearchproject/logstash-oss-with-opensearch-output-plugin:8.9.0  # bundles the opensearch output plugin
    depends_on:
      - opensearch
    volumes:
      - ./pipeline:/usr/share/logstash/pipeline
    ports:
      - "5044:5044"
    command: [ "logstash", "-f", "/usr/share/logstash/pipeline/beats-to-opensearch.conf" ]

  filebeat:
    image: docker.elastic.co/beats/filebeat:8.15.0
    user: root
    depends_on:
      - logstash
    volumes:
      - ./filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
      - /var/log:/var/log:ro
Bring it up:
docker compose up
Place the earlier filebeat.yml at ./filebeat.yml and the Logstash pipeline at ./pipeline/beats-to-opensearch.conf. Inside Compose, point Filebeat at the service name (output.logstash.hosts: ["logstash:5044"]), and since this cluster runs with security disabled, switch the opensearch output to http://opensearch:9200 without ssl, user, or password.
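For the security-disabled local stack, a trimmed-down output section might look like this (the hostname is the Compose service name):

```
output {
  opensearch {
    hosts => ["http://opensearch:9200"]   # plain HTTP; security plugin disabled in this demo
    index => "logs-%{+YYYY.MM.dd}"
  }
}
```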

Hardening & Scaling Tips

  • TLS everywhere. Use mutual TLS between Filebeat and Logstash; validate certificates end-to-end.
  • Back-pressure & resilience. Tune Logstash pipeline workers and persistent queues for bursty sources.
  • Index management. Standardize naming (e.g., logs-YYYY.MM.dd) and manage rollover and retention via OpenSearch ISM (Index State Management).
  • Parsing at the edge. Enable Filebeat modules for common formats; keep heavy transforms in Logstash.
  • Migration strategy. Run ES and OpenSearch in parallel for a cutover window by duplicating outputs in Logstash.
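For the parallel-run cutover in the last bullet, Logstash can fan each event out to both clusters from a single pipeline; a minimal sketch, with placeholder hostnames:

```
output {
  elasticsearch {
    hosts => ["https://es.example.com:9200"]
    index => "logs-%{+YYYY.MM.dd}"
  }
  opensearch {
    hosts => ["https://opensearch.example.com:9200"]
    index => "logs-%{+YYYY.MM.dd}"
  }
}
```

Once dashboards and alerting are validated against OpenSearch, delete the elasticsearch block to complete the cutover.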

Takeaway

You don’t need to rewrite your ingestion layer to adopt OpenSearch. Keep Filebeat on hosts, route to Logstash, and speak OpenSearch at the output. It’s familiar, flexible, and production-ready—perfect for Elastic veterans who want an open, future-proof stack.
Ready to get started?
Let's work together to navigate your OpenSearch journey. Send us a message and talk to the team today!
Get in touch