Scenarios

Send alerts from your logs stack for common scenarios

Use this guide when you want practical ElastAlert 2 examples you can copy and adapt quickly.

For the UI flow, follow the same sequence as in the Overview guide: create or open a rule, run Test only, then choose Update to apply it.

Scenario 1: Detect a sudden increase in matching events (spike)

Use this when a field value starts appearing much more often than normal.

name: spike-detection-example
type: spike
index: "*-*"
timeframe:
  hours: 1
spike_height: 2
spike_type: up
filter:
  - query:
      query_string:
        query: "myFieldName:true"
alert:
  - email
email:
  - "alerts@example.com"

What to tune

  • spike_height controls sensitivity: larger values require a bigger jump before an alert fires (see the sketch after this list).
  • timeframe sets the window length; the most recent window is compared against the previous window of the same length.
  • filter should target only the data stream you care about.
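
If your baseline volume is bursty or very low, a doubling can be meaningless. The spike rule also supports threshold_ref, a minimum event count the reference window must contain before an alert can fire. A minimal sketch to reduce noise (the value 50 is an illustrative placeholder, not a recommendation):

# Require at least 50 events in the reference window before a spike
# can trigger, so near-empty baselines stay quiet.
threshold_ref: 50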

See also: spike rule type reference.

Scenario 2: Alert on errors for one host or one application

Use this when you run many systems and need targeted alerts per source.

name: host-specific-error-alert
type: any
index: "*-*"
filter:
  - query:
      query_string:
        query: "agent.name:my-server-01 AND (log.level:error OR level:error)"
alert:
  - slack
slack_webhook_url: "https://hooks.slack.com/services/REPLACE/REPLACE/REPLACE"
realert:
  minutes: 10

What to tune

  • Replace agent.name with the host or service field used in your data.
  • Add query_key to track matches and realert timers per field value (see the sketch after this list).
  • Increase realert to reduce duplicate notifications.
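
For example, to keep a separate match count and realert timer for each host instead of one for the whole rule, add a query_key. A minimal sketch, assuming agent.name is the host field in your data:

# Treat each distinct agent.name value independently, so one noisy
# host does not suppress alerts for the others.
query_key: agent.name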

See also: any rule type reference.

Scenario 3: Detect when expected logs stop arriving (flatline)

Use this for heartbeat, web request, or pipeline interruption detection.

name: heartbeat-flatline-alert
type: flatline
index: "*-*"
threshold: 100
timeframe:
  minutes: 10
use_count_query: true
filter:
  - query:
      query_string:
        query: "message:heartbeat*"
alert:
  - email
email:
  - "alerts@example.com"

What to tune

  • threshold is the minimum number of events expected within the window; a count below it triggers the alert (it can also be applied per source; see the sketch after this list).
  • timeframe should reflect your expected log cadence.
  • For low-volume streams, reduce threshold to avoid false positives.
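
If one rule watches several heartbeat sources, flatline also accepts query_key, which applies threshold separately to each distinct field value. A minimal sketch, assuming host.name identifies each source in your data:

# Alert individually for each host.name whose event count falls
# below threshold within the timeframe.
query_key: host.name
# Stop repeating the alert for a silent host until its events
# are seen again.
forget_keys: true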

See also: flatline rule type reference.

Run and troubleshoot

  1. Use Test only to validate parsing and query matches (see the command sketch after this list).
  2. Use Update to apply the rule once results look correct.
  3. If behaviour is unexpected, check the ElastAlert execution output stored in the elastalert index via your stack's Diagnostic logs (/logs-settings/logfile).
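
If you also run ElastAlert 2 outside the UI, its bundled elastalert-test-rule command can exercise a rule file against your cluster without sending alerts. A minimal sketch (both file names are placeholders for your own config and rule):

elastalert-test-rule --config config.yaml my-rule.yaml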

If you are not seeing expected alerts, start with the rule validation guide before changing large parts of your YAML.