health_monitor job from bosh/280.0.9

GitHub source: d9ffbae7c or master branch

Properties

director

address

Address of the Bosh Director to connect to

port

Port of the Bosh Director to connect to

Default
25555
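
For orientation, a minimal sketch of these properties in a deployment manifest; the address is illustrative, assuming the health monitor runs alongside the Director:

  properties:
    director:
      address: 127.0.0.1  # hypothetical; co-located Director
      port: 25555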

env

http_proxy

HTTP proxy that the health monitor should use

https_proxy

HTTPS proxy that the health monitor should use

no_proxy

List of comma-separated hosts that should skip connecting to the proxy in the health monitor
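
A sketch of the proxy settings; the proxy endpoint and host list are hypothetical:

  properties:
    env:
      http_proxy: http://proxy.internal:3128   # hypothetical proxy endpoint
      https_proxy: http://proxy.internal:3128
      no_proxy: localhost,127.0.0.1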

hm

consul_event_forwarder

events

Whether or not to use the Consul events API

Default
false
heartbeats_as_alerts

Whether to treat all heartbeats as alerts as well

Default
false
host

Location of the Consul cluster or agent

namespace

A namespace for handling multiple instances of the same release

params

Query params appended to the URL; can be used to pass an ACL token

port

Consul Port

Default
8500
protocol

http/https

Default
http
ttl

A TTL duration for TTL checks; if set, TTL checks will be used

ttl_note

A note for TTL checks

Default
Automatically Registered by Bosh-Monitor

consul_event_forwarder_enabled

Enable Consul Plugin

Default
false
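
A sketch of enabling the Consul plugin; the host, TTL value, and ACL token parameter are hypothetical, and all of these properties nest under hm:

  properties:
    hm:
      consul_event_forwarder_enabled: true
      consul_event_forwarder:
        host: consul.internal         # hypothetical Consul endpoint
        port: 8500
        protocol: http
        events: true
        ttl: 120s                     # assumed Consul-style duration
        params: token=SOME_ACL_TOKEN  # hypothetical ACL token parameter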

datadog

api_key

API Key for DataDog

application_key

Health Monitor Application Key for DataDog

custom_tags

Tags, as key/value pairs, to add to all metrics sent to DataDog. See https://docs.datadoghq.com/tagging/

Example
|+
  env: prod
  region: eu
pagerduty_service_name

Service name to alert in PagerDuty upon HM events

datadog_enabled

Enable DataDog plugin

Default
false
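
A sketch of enabling the DataDog plugin; the ((...)) names are hypothetical CredHub-style references:

  properties:
    hm:
      datadog_enabled: true
      datadog:
        api_key: ((datadog_api_key))                  # hypothetical variable
        application_key: ((datadog_application_key))  # hypothetical variable
        custom_tags:
          env: prod
          region: eu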

director_account

ca_cert

Certificate to verify UAA endpoint

Default
""
client_id

UAA client id to access Bosh Director

Default
""
client_secret

UAA client secret to access Bosh Director

Default
""
password

Password to access Bosh Director

Default
""
user

User to access Bosh Director

Default
""

em_threadpool_size

EventMachine (EM) thread pool size

Default
20

email_interval

Interval (in seconds) to deliver alerts by email (optional)

email_notifications

Enable email notifications plugin

Default
false

email_recipients

Email addresses of recipients (Array)
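
A sketch of enabling email notifications; the recipient address and interval are illustrative, and the hm.smtp settings further down must also be configured:

  properties:
    hm:
      email_notifications: true
      email_recipients:
        - [email protected]  # hypothetical recipient
      email_interval: 600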

graphite

address

Graphite address

max_retries

Max attempts to connect to the Graphite service; use -1 for infinite retries

Default
35
port

Graphite port

prefix

Prefix that will be added to all metrics sent to Graphite

graphite_enabled

Enable Graphite plugin

Default
false
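
A sketch of enabling the Graphite plugin; the address is hypothetical, and 2003 is the conventional Graphite plaintext port rather than a default of this job:

  properties:
    hm:
      graphite_enabled: true
      graphite:
        address: graphite.internal  # hypothetical endpoint
        port: 2003
        prefix: bosh.hm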

http

port

TCP port Health Monitor daemon listens on

Default
25923

intervals

agent_timeout

Interval (in seconds) after which an agent is considered to have timed out

Default
60
analyze_agents

Interval (in seconds) to analyze the status of agents

Default
60
analyze_instances

Interval (in seconds) to analyze the status of instances for missing VMs

Default
60
log_stats

Interval (in seconds) to log Health Monitor statistics

Default
60
poll_director

Interval (in seconds) to get the list of managed VMs from Bosh Director

Default
60
poll_grace_period

Interval (in seconds) between discovering managed VMs and analyzing their status

Default
30
prune_events

Interval (in seconds) to prune received events

Default
30
rogue_agent_alert

Interval (in seconds) to consider an agent as rogue (an agent that is not part of any deployment)

Default
120
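
A sketch restating the interval defaults; lowering agent_timeout flags unresponsive agents sooner, at the risk of false positives on slow networks:

  properties:
    hm:
      intervals:
        poll_director: 60
        poll_grace_period: 30
        agent_timeout: 60
        rogue_agent_alert: 120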

loglevel

Level of log messages (fatal, error, warn, info, debug)

Default
info
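
A sketch of the general daemon settings above (em_threadpool_size, http.port, loglevel), restating the defaults except for a more verbose log level:

  properties:
    hm:
      em_threadpool_size: 20
      http:
        port: 25923
      loglevel: debug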

pagerduty

http_proxy

HTTP proxy to connect to PagerDuty (optional)

service_key

PagerDuty service API key

pagerduty_enabled

Enable PagerDuty plugin

Default
false
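
A sketch of enabling the PagerDuty plugin; the service key reference and proxy endpoint are hypothetical:

  properties:
    hm:
      pagerduty_enabled: true
      pagerduty:
        service_key: ((pagerduty_service_key))  # hypothetical variable
        http_proxy: http://proxy.internal:3128  # hypothetical, optional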

resurrector

minimum_down_jobs

If the total number of down jobs in a deployment is below this threshold, the resurrector will always request a down job be recreated

Default
5
percent_threshold

Percentage of total jobs in a deployment that must be down for the resurrector to stop sending recreate-job requests. Used in ‘meltdown’ situations so the resurrector will not try to recreate the world.

Default
0.2
time_threshold

Time (in seconds) for which an alert in the resurrector is considered ‘current’; alerts older than this are ignored when deciding to recreate a job.

Default
600

resurrector_enabled

Enable VM resurrector plugin

Default
false
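
A sketch of enabling the resurrector with its documented defaults. As a worked example of the thresholds: in a 50-instance deployment, 4 VMs down stays below minimum_down_jobs, so recreation always proceeds; 12 VMs down is at least 5 jobs and 24% of the deployment, above the 20% percent_threshold, so the resurrector treats it as a meltdown and stops sending recreate requests:

  properties:
    hm:
      resurrector_enabled: true
      resurrector:
        minimum_down_jobs: 5
        percent_threshold: 0.2
        time_threshold: 600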

riemann

host

Riemann host

port

Riemann port

Default
5555

riemann_enabled

Enable Riemann plugin

Default
false
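
A sketch of enabling the Riemann plugin; the host is hypothetical:

  properties:
    hm:
      riemann_enabled: true
      riemann:
        host: riemann.internal  # hypothetical endpoint
        port: 5555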

smtp

auth

SMTP Authentication type (optional, only “plain” is supported)

domain

SMTP EHLO domain (typically the server’s fully qualified domain name, e.g. “bosh.cfapps.io”)

from

Email of sender, e.g. “[email protected]

host

Address of the SMTP server to connect to (e.g. “smtp.gmail.com”)

password

Password for SMTP Authentication (optional, use in conjunction with hm.smtp.auth)

port

Port of the SMTP server to connect to (e.g. 25, 465, or 587)

tls

Use STARTTLS (optional)

user

User for SMTP Authentication (optional, use in conjunction with hm.smtp.auth)
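
A sketch of the SMTP settings used by the email notifications plugin; the host, domain, sender, and credential references are hypothetical:

  properties:
    hm:
      smtp:
        host: smtp.example.com       # hypothetical server
        port: 587
        domain: bosh.example.com     # hypothetical EHLO domain
        from: [email protected]    # hypothetical sender
        auth: plain
        user: ((smtp_user))          # hypothetical variable
        password: ((smtp_password))  # hypothetical variable
        tls: true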

syslog_event_forwarder_enabled

Removed. Please co-locate the syslog-release instead to forward your logs.

tsdb

address

Address of TSDB to connect to

max_retries

Max attempts to connect to the TSDB service; use -1 for infinite retries

Default
35
port

Port of TSDB to connect to

tsdb_enabled

Enable TSDB plugin

Default
false
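
A sketch of enabling the TSDB plugin; the address is hypothetical, and 4242 is the conventional OpenTSDB port rather than a default of this job:

  properties:
    hm:
      tsdb_enabled: true
      tsdb:
        address: opentsdb.internal  # hypothetical endpoint
        port: 4242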

nats

address

Address of the NATS message bus to connect to

port

Port of the NATS message bus to connect to

Default
4222

tls

ca

CA cert to trust when communicating with NATS server

health_monitor

certificate

Certificate for establishing mutual TLS with NATS server. The Common-Name for the certificate should be “default.hm.bosh-internal”

private_key

Private Key for establishing mutual TLS with NATS
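
A sketch of the NATS settings; the ((...)) names are hypothetical CredHub-style references, and the client certificate must carry the Common Name “default.hm.bosh-internal” as noted above:

  properties:
    nats:
      address: 127.0.0.1  # hypothetical NATS endpoint
      port: 4222
      tls:
        ca: ((nats_ca.certificate))  # hypothetical variable
        health_monitor:
          certificate: ((nats_hm_client_tls.certificate))  # hypothetical variable
          private_key: ((nats_hm_client_tls.private_key))  # hypothetical variable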

Templates

Templates are rendered and placed onto corresponding instances during the deployment process. This job's templates will be placed into the /var/vcap/jobs/health_monitor/ directory.

  • bin/health_monitor (from health_monitor)
  • config/bpm.yml (from bpm.yml)
  • config/health_monitor.yml (from health_monitor.yml.erb)
  • config/nats_client_certificate.pem (from nats_client_certificate.pem.erb)
  • config/nats_client_private_key (from nats_client_private_key.erb)
  • config/nats_server_ca.pem (from nats_server_ca.pem.erb)
  • config/uaa.pem (from uaa.pem.erb)

Packages

Packages are compiled and placed onto corresponding instances during the deployment process. Packages will be placed into the /var/vcap/packages/ directory.