
Log Management Toolkit

$29

ELK/EFK stack configurations, log rotation scripts, structured logging templates, and log analysis queries.

📁 18 files  🏷 v1.0.0
Shell · JSON · Markdown · YAML · Config · Docker · Nginx

📁 File Structure 18 files

log-management-toolkit/
├── .env.example
├── LICENSE
├── README.md
├── docker-compose.yml
├── elasticsearch/
│   └── elasticsearch.yml
├── filebeat/
│   ├── filebeat.yml
│   └── modules.d/
│       ├── nginx.yml
│       └── system.yml
├── guides/
│   └── log-management-strategy.md
├── kibana/
│   └── export/
│       └── dashboards.ndjson
├── logstash/
│   ├── patterns/
│   │   └── custom-patterns
│   └── pipeline/
│       ├── app-json.conf
│       ├── main.conf
│       └── nginx-access.conf
└── scripts/
    ├── backup-kibana.sh
    ├── rotate-indices.sh
    └── setup.sh

📖 Documentation Preview README excerpt

Log Management Toolkit

Production-ready ELK stack deployment with structured logging pipelines and operational dashboards.

Complete Elasticsearch, Logstash, Kibana, and Filebeat configurations for centralized log management. Includes ready-to-use parsing pipelines for Nginx, application JSON logs, and system logs; operational scripts for index management and backups; and a comprehensive log strategy guide.

What You Get

  • Docker Compose stack — One-command ELK deployment with resource limits
  • 5 Logstash pipelines — Parse Nginx, JSON apps, syslog, and custom formats
  • Filebeat configuration — Ship logs from any server to your stack
  • Kibana dashboards — Pre-built visualizations for immediate insight
  • 3 operational scripts — Setup, index rotation, and Kibana backup
  • Strategy guide — Log levels, retention, alerting, and compliance
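To give a feel for the pipelines, here is an illustrative sketch of what an Nginx access-log pipeline in this style typically contains — this is a generic example using standard Logstash plugins and the built-in `COMBINEDAPACHELOG` grok pattern, not the shipped `nginx-access.conf`:

```
# Illustrative only — not the shipped nginx-access.conf
input {
  beats {
    port => 5044                       # receive events shipped by Filebeat
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }   # parse combined-format access lines
  }
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ] # use the log's own timestamp
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "nginx-access-%{+YYYY.MM.dd}"             # daily indices for easy rotation
  }
}
```

Daily index naming is what makes the index-rotation script's job straightforward: old days can be dropped wholesale.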

File Tree


log-management-toolkit/
├── README.md
├── LICENSE
├── manifest.json
├── .env.example
├── docker-compose.yml
├── elasticsearch/
│   └── elasticsearch.yml          # ES node configuration
├── logstash/
│   ├── pipeline/
│   │   ├── main.conf              # Main routing pipeline
│   │   ├── nginx-access.conf      # Nginx access log parser
│   │   └── app-json.conf          # JSON application log parser
│   └── patterns/
│       └── custom-patterns        # Custom grok patterns
├── filebeat/
│   ├── filebeat.yml               # Filebeat shipper config
│   └── modules.d/
│       ├── nginx.yml              # Nginx module config
│       └── system.yml             # System log module config
├── kibana/
│   └── export/
│       └── dashboards.ndjson      # Pre-built dashboards
├── scripts/
│   ├── setup.sh                   # Stack deployment script
│   ├── rotate-indices.sh          # Index lifecycle management
│   └── backup-kibana.sh           # Kibana saved objects backup
└── guides/
    └── log-management-strategy.md # Logging strategy & best practices

Getting Started

1. Configure environment


cp .env.example .env
# Edit .env with your settings (passwords, retention, memory limits)
vim .env
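The variable names below come from the defaults referenced in the shipped docker-compose.yml; the values are illustrative placeholders only — the bundled .env.example is authoritative:

```
# Example .env values (illustrative; see .env.example for the full list)
ELASTIC_VERSION=8.12.0
ES_CLUSTER_NAME=log-management
ELASTIC_PASSWORD=change-this-strong-password   # never ship the default
ES_JAVA_OPTS=-Xms1g -Xmx1g                     # heap: keep Xms == Xmx
BIND_ADDRESS=127.0.0.1                         # keep ES off public interfaces
```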

2. Deploy the stack
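The deployment commands match those documented in the docker-compose.yml header (a running Docker daemon is assumed):

```
docker compose up -d      # start Elasticsearch, Logstash, Kibana, Filebeat
docker compose ps         # confirm containers report (healthy)
docker compose logs -f    # tail logs while the stack initializes
```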

... continues with setup instructions, usage examples, and more.

📄 Code Sample .yml preview

docker-compose.yml

# =============================================================================
# docker-compose.yml — ELK Stack Deployment
# Part of: Log Management Toolkit by Datanest Digital
# License: MIT | https://datanest.dev
#
# Deploy:  docker compose up -d
# Status:  docker compose ps
# Logs:    docker compose logs -f
# Stop:    docker compose down
# Destroy: docker compose down -v  (WARNING: deletes all data)
# =============================================================================

services:
  # ---- Elasticsearch: Storage & Search Engine ----
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:${ELASTIC_VERSION:-8.12.0}
    container_name: elk-elasticsearch
    hostname: elasticsearch
    environment:
      - node.name=es-node-01
      - cluster.name=${ES_CLUSTER_NAME:-log-management}
      - discovery.type=single-node
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD:-changeme}
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=false
      - xpack.license.self_generated.type=basic
      - bootstrap.memory_lock=true
      - ES_JAVA_OPTS=${ES_JAVA_OPTS:--Xms1g -Xmx1g}
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - ./elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
      - elasticsearch-data:/usr/share/elasticsearch/data
    ports:
      - "${BIND_ADDRESS:-127.0.0.1}:9200:9200"
    networks:
      - elk-network
    healthcheck:
      test: ["CMD-SHELL", "curl -s -u elastic:${ELASTIC_PASSWORD:-changeme} http://localhost:9200/_cluster/health | grep -q '\"status\":\"green\"\\|\"status\":\"yellow\"'"]
      interval: 30s
      timeout: 10s
      retries: 10
      start_period: 60s
    restart: unless-stopped

# ... 91 more lines ...
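Note that the healthcheck accepts both green and yellow cluster status: a single-node cluster cannot assign replica shards, so it normally sits at yellow and treating only green as healthy would keep the container unhealthy forever. A minimal sketch of that grep logic, run against a canned `/_cluster/health`-style response (`sample_response` is a stand-in payload, not live API output):

```shell
# Simulate the compose healthcheck's status grep (the real check curls
# Elasticsearch; here a canned JSON response stands in for the API).
sample_response='{"cluster_name":"log-management","status":"yellow","number_of_nodes":1}'

# Same BRE alternation used in the compose healthcheck: green OR yellow passes.
if echo "$sample_response" | grep -q '"status":"green"\|"status":"yellow"'; then
  health=healthy
else
  health=unhealthy
fi
echo "$health"   # → healthy
```

Swapping `"status":"yellow"` for `"status":"red"` in `sample_response` flips the result to `unhealthy`, which is what makes Docker restart-and-retry until the cluster recovers.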