dd-agent job from datadog-agent/2.6.690
Datadog Agent
Github source: 1fb2d4b or master branch

Properties

dd
All of the properties below are nested under the dd key.

additional_api_key_1
The Datadog API key to use when submitting requests to additional endpoint URL 1.

additional_api_key_2
The Datadog API key to use when submitting requests to additional endpoint URL 2.

additional_api_key_3
The Datadog API key to use when submitting requests to additional endpoint URL 3.

additional_api_key_4
The Datadog API key to use when submitting requests to additional endpoint URL 4.

additional_api_key_5
The Datadog API key to use when submitting requests to additional endpoint URL 5.

additional_api_url_1
Additional endpoint URL 1 for Datadog.
- Default: https://app.datadoghq.com/api/v1/series

additional_api_url_2
Additional endpoint URL 2 for Datadog.
- Default: https://app.datadoghq.com/api/v1/series

additional_api_url_3
Additional endpoint URL 3 for Datadog.
- Default: https://app.datadoghq.com/api/v1/series

additional_api_url_4
Additional endpoint URL 4 for Datadog.
- Default: https://app.datadoghq.com/api/v1/series

additional_api_url_5
Additional endpoint URL 5 for Datadog.
- Default: https://app.datadoghq.com/api/v1/series

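As a rough sketch only, the additional endpoint properties above could be set in a deployment manifest like this; the key values are placeholders and only the first of the five slots is shown:

properties:
  dd:
    api_key: PRIMARY_KEY_PLACEHOLDER                 # placeholder, not a real key
    additional_api_url_1: https://app.datadoghq.com/api/v1/series
    additional_api_key_1: SECONDARY_KEY_PLACEHOLDER  # placeholder, not a real key
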
address_tag
Include the address tag.
- Default: true

agent_config
Add any additional agent config options here. (Warning: any options you add here will override the options set previously.)
- Default: {}

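For illustration, a sketch of passing extra options through agent_config; the option names below are examples of Agent configuration keys and are not validated by this job:

properties:
  dd:
    agent_config:
      logs_enabled: true        # example option; overrides anything the job set earlier
      forwarder_timeout: 20     # example option
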
api_key
Datadog API key.

bosh_tags
Enable autogenerated BOSH tags.
- Default: true

bosh_tags_prefix
Prefix for autogenerated BOSH tags.
- Default: bosh_

check_runners
The Agent runs workers in parallel to execute checks. By default the number of workers is set to 1. If set to 0, the Agent automatically determines the best number of runners based on the number of checks running; this optimizes check collection time but may produce CPU spikes.
- Default: 1

check_timings
The collector will capture a metric for check run times.
- Default: false

cmd_port
The port on which the IPC API listens. (Set to a different port if there is a collision.)
- Default: 15001

collect_ec2_tags
Collect AWS EC2 custom tags as Agent tags (requires an IAM role associated with the instance).
- Default: false

collect_security_groups
Include security groups in the tags collected from AWS EC2.
- Default: false

create_dd_check_tags
Add a dd_check:checkname tag per running check.
- Default: false

custom_emitters
List of emitters to be used in addition to the standard one.
- Example:
  - /usr/local/my-code/emitters/rabbitmq.py:RabbitMQEmitter

dogstatsd_port
StatsD listening port.
- Default: 18125

dogstatsd_target
By default DogStatsD posts aggregated metrics to the Agent, but you can define a different endpoint here.

dogstreams
List of logs to parse, each optionally followed by a custom parser to use.
- Example:
  - /path/to/log1:/path/to/my/parsers_module.py:custom_parser
  - /path/to/log2

enable_gohai
Enable gohai metadata collection.
- Default: true

enable_metadata_collection
Metadata collection should always be enabled, except when you run several Agent/DogStatsD instances per host; in that case, only one Agent should have it on. WARNING: disabling it on every Agent will lead to display and billing issues.
- Default: true

exclude_process_args
Remove the 'ww' flag from the ps invocation that captures process arguments.
- Default: false

expvar_port
The port over which the Agent reports expvar metrics. (Set to a different port if there is a collision.)
- Default: 15000

forwarder_num_workers
The number of workers used by the forwarder. Note that each worker opens an outbound HTTP connection towards Datadog's metrics intake at every flush.
- Default: 1

friendly_hostname
Use a friendly hostname. If this is enabled along with the UUID option, the UUID takes precedence.
- Default: true

ganglia_host
Ganglia host where gmetad is running.
- Default: 127.0.0.1

ganglia_port
Ganglia port where gmetad is running.
- Default: 8651

gce_updated_hostname
Use a unique hostname for GCE hosts; see http://dtdg.co/1eAynZk.
- Default: true

generate_disk_config
Generate the disk configuration, disk.yaml.
- Default: true

generate_disk_config_all_partitions
Include all partitions in the system.
- Default: true

generate_disk_config_tag_by_filesystem
Add tags to mountpoints by filesystem type.
- Default: true

generate_monit_processes
Add monit processes to the process check.
- Default: true

generate_network_config
Automatically generate the network monitoring integration, network.yaml.
- Default: true

generate_network_config_connection_state
Report metrics including the state of the NICs.
- Default: true

generate_network_config_excluded_interfaces
List of network interfaces to exclude from metrics reporting.
- Default:
  - lo
  - lo0

generate_ntp_config
Generate NTP monitoring, ntp.yaml.
- Default: true

generate_ntp_config_host
NTP host.
- Default: 0.ubuntu.pool.ntp.org

generate_ntp_config_min_collection_interval
Metrics collection interval.
- Default: 300

generate_ntp_config_offset_threshold
Maximum offset threshold in seconds.
- Default: 60

generate_processes
Automatically generate the process monitoring integration, process.yaml.
- Default: true

generate_system_processes
Add basic system processes to the process check.
- Default: true

graphite_listen_port
Start a Graphite listener on this port.
- Default: 17124

histogram_aggregates
List of histogram aggregate functions.
- Default:
  - max
  - median
  - avg
  - count

histogram_percentiles
List of histogram percentiles.
- Default:
  - "0.95"

hostname
Force the hostname to whatever you want. The default is autogenerated.

integrations
Agent integration configuration. Each key will have ".yaml" appended to it and its value dumped to a file.
- Default: {}

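As an illustrative sketch, each key under integrations becomes a "<key>.yaml" file containing that key's value. The check name and fields below follow the common init_config/instances layout but are placeholders, not a verified schema:

properties:
  dd:
    integrations:
      redisdb:                  # written out as redisdb.yaml
        init_config: {}
        instances:
          - host: 127.0.0.1     # placeholder instance settings
            port: 6379
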
ip_tag
Include the ip tag.
- Default: true

listen_port
Change the port the Agent listens on.
- Default: 17123

log_format_json
Set this option to "yes" to output logs in JSON format.
- Default: false

log_level
Logging level.
- Default: INFO

non_local_traffic
Allow non-local traffic to this Agent; required when using it as a proxy for other Agents.
- Default: false

process_agent_enabled
Enable the process agent.
- Default: false

proxy
Proxy settings to connect to the Internet.
- Example:
  host: proxy
  port: 8080
  user: user
  password: pass

http
The proxy for HTTP endpoints.

https
The proxy for HTTPS endpoints.

no_proxy
Domains that the agent proxy should skip.

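A sketch of the proxy settings, assuming the http, https, and no_proxy entries above are sub-keys of dd.proxy; the proxy URL and domain are placeholders, and the exact value format for no_proxy may differ from what is shown:

properties:
  dd:
    proxy:
      http: http://proxy.internal:3128     # placeholder proxy URL
      https: http://proxy.internal:3128
      no_proxy:
        - metadata.internal                # placeholder domain to bypass
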
site
The site of the Datadog intake to send Agent data to. Defaults to 'datadoghq.com'; set to 'datadoghq.eu' to send data to the EU site.

skip_ssl_validation
Skip SSL validation for the Datadog URL.
- Default: false

statsd_forward_host
Forward packets received by the DogStatsD server to another StatsD server.

statsd_forward_port
Port of the StatsD server to which packets received by the DogStatsD server are forwarded.
- Default: 8125

statsd_metric_namespace
Define a namespace for StatsD metrics; metric.name will instead become namespace.metric.name.

tags
List of tags which will be applied to the data sent from this Agent.
- Default: []

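For illustration, tags is a plain YAML list of key:value strings; the tag names below are made up:

properties:
  dd:
    tags:
      - env:staging       # made-up tag
      - team:platform     # made-up tag
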
trace_agent_enabled
Enable the trace agent.
- Default: false

url
The host of the Datadog intake server to send Agent data to.

use_dogstatsd
Should the DogStatsD agent be started for StatsD metrics collection.
- Default: false

use_ganglia
Enable Ganglia support for collecting metrics.
- Default: false

use_graphite
Enable a Graphite endpoint.
- Default: false

use_jmxfetch
Should the JMXFetch agent be started.
- Default: false

use_uuid_hostname
By default a friendly hostname is used; this may cause problems with some setups, so set this option to use the UUID instead. (This is necessary in an environment where there are multiple hosts of the same type on different deployments or in different foundries.) If dd.hostname is set, it takes precedence over the UUID hostname.

utf8_decoding
DogStatsD supports plain ASCII packets; this enables support for UTF-8 metric names.
- Default: false

Templates

Templates are rendered and placed onto corresponding instances during the deployment process. This job's templates will be placed into the /var/vcap/jobs/dd-agent/ directory.

- bin/agent_ctl (from bin/agent_ctl)
- bin/pre-start (from bin/pre-start)
- bin/process_agent_ctl (from bin/process_agent_ctl)
- bin/trace_agent_ctl (from bin/trace_agent_ctl)
- config/confd.sh (from config/confd.sh.erb)
- config/datadog.yaml (from config/datadog.yaml.erb)
- data/properties.sh (from data/properties.sh.erb)

Packages

Packages are compiled and placed onto corresponding instances during the deployment process. Packages will be placed into the /var/vcap/packages/ directory.