ingestor_syslog job from logsearch/211.0.3
This job runs a Logstash process that ingests data over the standard Syslog protocol, applies some processing, and then forwards it to an Elasticsearch cluster.
Github source: 03b1ebc or master branch
Properties
logstash.env
  A list of arbitrary key-value pairs to be passed on as process environment variables, e.g. FOO: 123.
  Default: []
logstash.heap_percentage
  The percentage value used in the calculation to set the heap size.
  Default: 46
logstash.heap_size
  Sets the JVM heap size.
logstash.jvm_options
  Additional JVM options.
  Default: []
logstash.log_level
  The default logging level (e.g. WARN, DEBUG, INFO).
  Default: info
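As a rough illustration, these logstash.* runtime settings would sit under the job's properties block in a deployment manifest; the values below are placeholders rather than recommendations:

  properties:
    logstash:
      env:
      - FOO: 123                          # arbitrary environment variables for the Logstash process
      heap_percentage: 46                 # percentage used in the heap size calculation
      jvm_options:
      - -XX:+HeapDumpOnOutOfMemoryError   # placeholder additional JVM option
      log_level: DEBUG                    # e.g. WARN, DEBUG, INFO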
logstash.metadata_level
  Whether to include additional metadata throughout the event lifecycle. NONE = disabled, DEBUG = fully enabled.
  Default: NONE
logstash.plugins
  Array of hashes describing logstash plugins to install.
  Default: []
  Example:
    - name: logstash-output-cloudwatchlogs
      version: 2.0.0
logstash.queue.checkpoint.acks
  The maximum number of acked events before forcing a checkpoint.
  Default: 1024
logstash.queue.checkpoint.interval
  The interval in milliseconds at which a checkpoint is forced on the head page.
  Default: 1000
logstash.queue.checkpoint.writes
  The maximum number of written events before forcing a checkpoint.
  Default: 1024
logstash.queue.max_bytes
  The total capacity of the queue in number of bytes.
  Default: 1024mb
logstash.queue.max_events
  The maximum number of unread events in the queue.
  Default: 0
logstash.queue.page_capacity
  The size of the queue's page data files. The queue data consists of append-only data files separated into pages.
  Default: 250mb
logstash.queue.type
  Internal queueing model: "memory" for legacy in-memory queueing, "persisted" for disk-based acked queueing.
  Default: persisted
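For illustration, the persisted queue could be tuned with a manifest snippet along these lines (sizes and counts are placeholders, not recommendations):

  properties:
    logstash:
      queue:
        type: persisted          # or "memory" for legacy in-memory queueing
        max_bytes: 2048mb        # total on-disk capacity of the queue (placeholder)
        page_capacity: 250mb     # size of each append-only page file
        max_events: 0
        checkpoint:
          acks: 1024
          writes: 1024
          interval: 1000         # milliseconds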
logstash_ingestor.filters
  Filters to execute on the ingestors.
  Default: ""
logstash_ingestor.health.disable_post_start
  Skip post-start health checks?
  Default: false
logstash_ingestor.health.interval
  Logstash syslog health check interval (seconds).
  Default: 5
logstash_ingestor.health.timeout
  Logstash syslog health check timeout (seconds).
  Default: 300
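A minimal sketch of tuning or disabling the ingestor's post-start health check via these properties:

  properties:
    logstash_ingestor:
      health:
        disable_post_start: false   # set to true to skip the post-start health check
        interval: 5                 # seconds between checks
        timeout: 300                # health check timeout (seconds)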
logstash_ingestor.relp.port
  Port to listen on for RELP messages.
  Default: 2514
logstash_ingestor.syslog.port
  Port to listen on for syslog messages.
  Default: 5514
logstash_ingestor.syslog.transport
  Transport protocol to use.
  Default: tcp
logstash_ingestor.syslog.use_keepalive
  Instruct the socket to use TCP keepalives.
logstash_ingestor.syslog_tls.port
  Port to listen on for syslog-TLS messages (omit to disable).
logstash_ingestor.syslog_tls.skip_ssl_validation
  When false, verify the identity of the other end of the SSL connection against the CA.
  Default: false
logstash_ingestor.syslog_tls.ssl_cert
  Syslog-TLS SSL certificate (file contents, not a path). Required if logstash_ingestor.syslog_tls.port is set.
logstash_ingestor.syslog_tls.ssl_key
  Syslog-TLS SSL key (file contents, not a path). Required if logstash_ingestor.syslog_tls.port is set.
logstash_ingestor.syslog_tls.use_keepalive
  Instruct the socket to use TCP keepalives.
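Enabling the syslog-TLS listener might look roughly like the snippet below; the port is arbitrary and the certificate and key bodies are placeholders (the properties take file contents, not paths):

  properties:
    logstash_ingestor:
      syslog_tls:
        port: 6514                    # omit this property entirely to disable the listener
        skip_ssl_validation: false
        use_keepalive: true
        ssl_cert: |
          -----BEGIN CERTIFICATE-----
          (placeholder certificate body)
          -----END CERTIFICATE-----
        ssl_key: |
          -----BEGIN PRIVATE KEY-----
          (placeholder key body)
          -----END PRIVATE KEY-----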
logstash_parser.debug
  Debug level logging.
  Default: false
logstash_parser.deployment_dictionary
  A list of files concatenated into one deployment dictionary file. Each file contains a hash of job name to deployment name pairs used for the @source.deployment lookup.
  Default:
    - /var/vcap/packages/logsearch-config/deployment_lookup.yml
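The layout of deployment_lookup.yml itself is not documented here; as an assumption, a dictionary file of job name to deployment name pairs could look something like:

  # hypothetical dictionary contents: job name -> deployment name (used for the @source.deployment lookup)
  ingestor_syslog: logsearch
  router: cf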
logstash_parser.elasticsearch.data_hosts
  The list of Elasticsearch data node IPs.
logstash_parser.elasticsearch.document_id
  Use a specific, dynamic ID rather than an auto-generated identifier.
logstash_parser.elasticsearch.idle_flush_time
  How frequently to flush events if the output queue is not full.
logstash_parser.elasticsearch.index
  The specific, dynamic index name to write events to.
  Default: logstash-%{+YYYY.MM.dd}
logstash_parser.elasticsearch.index_type
  The specific, dynamic index type name to write events to.
  Default: '%{@type}'
logstash_parser.elasticsearch.routing
  The routing to be used when indexing a document.
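A minimal sketch of pointing the parser at an Elasticsearch cluster, assuming two data nodes at made-up addresses:

  properties:
    logstash_parser:
      elasticsearch:
        data_hosts:
        - 10.0.16.10              # placeholder data node IPs
        - 10.0.16.11
        index: logstash-%{+YYYY.MM.dd}
        index_type: '%{@type}'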
logstash_parser.enable_json_filter
  Toggles the if_it_looks_like_json.conf filter rule.
  Default: false
logstash_parser.filters
  The configuration to embed into the logstash filters section. Either a set of parsing rules as a string, or a list of hashes of the form [{name: path_to_parsing_rules.conf}]; see the sketch below.
  Default: ""
logstash_parser.inputs
  A list of input plugins, with a hash of options for each of them. Please refer to the example below.
  Example:
    inputs:
    - options:
        host: 192.168.1.1
        password: c1oudbunny
        user: logsearch
      plugin: rabbitmq
logstash_parser.message_max_size
  Maximum log message length. Anything larger is truncated (TODO: move this to ingestor?).
  Default: 1048576
logstash_parser.outputs
  A list of output plugins, with a hash of options for each of them. Please refer to the example below.
  Default:
    - options: {}
      plugin: elasticsearch
  Example:
    outputs:
    - options:
        collection: logs
        database: logsearch
        uri: 192.168.1.1
      plugin: mongodb
logstash_parser.timecop.reject_greater_than_hours
  Logs with timestamps more than this many hours in the future won't be parsed and will be tagged with fail/timecop.
  Default: 1
logstash_parser.timecop.reject_less_than_hours
  Logs with timestamps more than this many hours in the past won't be parsed and will be tagged with fail/timecop.
  Default: 24
logstash_parser.wait_for_templates
  A list of index templates that need to be present in Elasticsearch before the process starts.
  Default:
    - index_template
logstash_parser.workers
  The number of worker threads that logstash should use (default: auto = one per CPU).
  Default: auto
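Putting several of these properties together, a trimmed instance-group entry for this job in a BOSH deployment manifest might look roughly as follows (names, counts and addresses are placeholders):

  instance_groups:
  - name: ingestor
    instances: 2
    jobs:
    - name: ingestor_syslog
      release: logsearch
      properties:
        logstash_ingestor:
          syslog:
            port: 5514
            transport: tcp
        logstash_parser:
          elasticsearch:
            data_hosts:
            - 10.0.16.10          # placeholder
          workers: auto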
Templates

Templates are rendered and placed onto corresponding instances during the deployment process. This job's templates will be placed into the /var/vcap/jobs/ingestor_syslog/ directory (learn more).

  bin/ingestor_syslog.sh (from bin/ingestor_syslog)
  bin/post-start (from bin/post-start.erb)
  bin/pre-start (from bin/pre-start)
  config/bpm.yml (from config/bpm.yml.erb)
  config/filters_override.conf (from config/filters_override.conf.erb)
  config/filters_post.conf (from config/filters_post.conf.erb)
  config/filters_pre.conf (from config/filters_pre.conf.erb)
  config/input_and_output.conf (from config/input_and_output.conf.erb)
  config/jvm.options (from config/jvm.options.erb)
  config/logstash.yml (from config/logstash.yml.erb)
  config/syslog_tls.crt (from config/syslog_tls.crt.erb)
  config/syslog_tls.key (from config/syslog_tls.key.erb)
Packages

Packages are compiled and placed onto corresponding instances during the deployment process. Packages will be placed into the /var/vcap/packages/ directory.