# Send alerts from your logs stack for common scenarios
Use this guide when you want practical ElastAlert 2 examples you can copy and adapt quickly.
For the UI flow, follow the same sequence as in the Overview: create or open a rule, run **Test only**, then choose **Update** to apply it.
## Scenario 1: Detect a sudden increase in matching events (spike)
Use this when a field value starts appearing much more often than normal.
```yaml
name: spike-detection-example
type: spike
index: "*-*"
timeframe:
  hours: 1
spike_height: 2
spike_type: up
filter:
- query:
    query_string:
      query: "myFieldName:true"
alert:
- email
email:
- "alerts@example.com"
```

### What to tune

- `spike_height` controls sensitivity; larger values reduce noise.
- `timeframe` controls how much recent data is compared against the previous window.
- `filter` should target only the data stream you care about.
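If the filtered field can take many values and you want a separate spike calculation per value, ElastAlert's `query_key` option buckets counts by field. A minimal sketch of that variant, where `service.name` is an assumed field name you should adjust to your schema; `threshold_cur` (a standard spike-rule option) suppresses alerts when the current window is too small to be meaningful:

```yaml
name: spike-per-service-example
type: spike
index: "*-*"
timeframe:
  hours: 1
spike_height: 2
spike_type: up
query_key: service.name   # assumed field name; one spike calculation per value
threshold_cur: 50         # skip alerting when the current window has under 50 events
filter:
- query:
    query_string:
      query: "myFieldName:true"
alert:
- email
email:
- "alerts@example.com"
```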
See also: spike rule type reference.
## Scenario 2: Alert on errors for one host or one application
Use this when you run many systems and need targeted alerts per source.
```yaml
name: host-specific-error-alert
type: any
index: "*-*"
filter:
- query:
    query_string:
      query: "agent.name:my-server-01 AND (log.level:error OR level:error)"
alert:
- slack
slack_webhook_url: "https://hooks.slack.com/services/REPLACE/REPLACE/REPLACE"
realert:
  minutes: 10
```

### What to tune

- Replace `agent.name` with the host or service field used in your data.
- Add `query_key` if you want bucketing per field value (see the sketch after this list).
- Increase `realert` to reduce duplicate notifications.
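The last two options combine well: when `query_key` is set, ElastAlert tracks `realert` separately for each value of the key, so one noisy host does not suppress alerts for the others. A minimal sketch that widens the rule above to cover every host, assuming `agent.name` is the right field in your data:

```yaml
name: per-host-error-alert
type: any
index: "*-*"
query_key: agent.name   # assumed host field; realert is tracked per value
filter:
- query:
    query_string:
      query: "log.level:error OR level:error"
alert:
- slack
slack_webhook_url: "https://hooks.slack.com/services/REPLACE/REPLACE/REPLACE"
realert:
  minutes: 10
```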
See also: any rule type reference.
## Scenario 3: Detect when expected logs stop arriving (flatline)
Use this to detect interruptions in heartbeats, web request traffic, or data pipelines.
```yaml
name: heartbeat-flatline-alert
type: flatline
index: "*-*"
threshold: 100
timeframe:
  minutes: 10
use_count_query: true
doc_type: _doc
filter:
- query:
    query_string:
      query: "message:heartbeat*"
alert:
- email
email:
- "alerts@example.com"
```

### What to tune

- `threshold` is the minimum expected event count in the window.
- `timeframe` should reflect your expected log cadence.
- For low-volume streams, reduce `threshold` to avoid false positives.
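To watch many heartbeat sources with one rule, the flatline type also accepts `query_key`, counting each value separately; `forget_keys` stops ElastAlert re-alerting every run for a source that has already gone quiet. A sketch, assuming heartbeats carry an `agent.name` field (`use_count_query` is omitted here because per-key counting needs the field values themselves):

```yaml
name: per-host-heartbeat-flatline
type: flatline
index: "*-*"
threshold: 10
timeframe:
  minutes: 10
query_key: agent.name   # assumed field; each host's heartbeat count is tracked separately
forget_keys: true       # alert once when a host goes quiet instead of on every run
filter:
- query:
    query_string:
      query: "message:heartbeat*"
alert:
- email
email:
- "alerts@example.com"
```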
See also: flatline rule type reference.
## Run and troubleshoot
- Use **Test only** to validate parsing and query matches.
- Use **Update** to apply the rule once results look correct.
- If behaviour is unexpected, check ElastAlert execution output in the `elastalert` index via stack Diagnostic logs (`/logs-settings/logfile`).
If you are not seeing expected alerts, start with the rule validation guide before changing large parts of your YAML.
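While validating, it also helps if the alert itself says what matched. ElastAlert's optional `alert_subject` and `alert_text` keys substitute fields from the matched document; a sketch you could append to any rule above, where `agent.name` and `message` are assumed field names carried over from the earlier examples:

```yaml
# Optional formatting keys; {0}, {1}, ... are filled from the args lists below.
alert_subject: "Alert for {0}"
alert_subject_args:
  - agent.name   # assumed host field from scenario 2
alert_text: "First matching message: {0}"
alert_text_args:
  - message      # assumed field carrying the log line
```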