

Logstash Fundamentals

Logstash is a centralized framework for log collection, processing, storage, and search. It can normalize data from many sources and dynamically route it to the destinations of your choosing.
With a wide range of input, filter, and output plugins, plus many native codecs that simplify ingestion, Logstash lets virtually any kind of event be enriched and transformed. By letting you work with more data, in both volume and variety, Logstash helps you derive insights from it.
Logstash can accept input from files, Syslog, TCP/UDP, stdin, and many other sources, and a wide variety of filters can be applied to transform the events in the collected logs.
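As a first taste, here is a minimal pipeline sketch (the file name is an assumption): it reads lines from stdin and prints each resulting event to stdout.

```conf
# minimal.conf -- read lines from stdin, print each event to stdout
input {
  stdin { }
}

output {
  stdout { codec => rubydebug }   # pretty-print the full event structure
}
```

Running `bin/logstash -f minimal.conf` and typing a line shows how Logstash wraps raw text in an event, adding fields such as `@timestamp` alongside the original `message`.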

Logstash Plug-ins

Processing is organized into one or more pipelines. In each pipeline, one or more input plug-ins receive or collect data, which is then placed on an internal queue. By default the queue is small and held only in memory, but you can configure it to be larger and persisted to disk to improve reliability and resilience.
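As a sketch, the queue behavior is controlled in logstash.yml; the settings below are real Logstash options, though the size and path shown here are illustrative choices.

```yaml
# logstash.yml -- queue settings (size and path are illustrative)
queue.type: persisted                  # default is "memory" (in-memory only)
queue.max_bytes: 1gb                   # cap on the disk space the queue may use
path.queue: /var/lib/logstash/queue    # where queue pages are written
```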

Logstash Instance

The processing threads read data from the queue in micro-batches and run it through the configured filter plug-ins in sequence. Logstash ships ready to use with a wide variety of plug-ins, each focused on a particular kind of processing; this is how data is parsed, processed, and enriched.
Once the data has been processed, the processing threads pass it to the appropriate output plug-ins, which are responsible for formatting and shipping the data onward (for example, to Elasticsearch).
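As a sketch, the batching behavior can also be tuned in logstash.yml; the values below are the documented defaults.

```yaml
# logstash.yml -- batching settings (values shown are the documented defaults)
pipeline.workers: 4        # processing threads; defaults to the host's CPU core count
pipeline.batch.size: 125   # maximum events a worker pulls from the queue per batch
pipeline.batch.delay: 50   # ms to wait for a full batch before processing a partial one
```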

Key features

  • Event Object
    It is Logstash's primary object and carries all of the data flowing through the pipeline. Logstash uses this object to hold the incoming data and any new fields produced during the filtering stage.

  • Pipeline
    It consists of the stages data flows through in Logstash, from input to output. The pipeline receives the input data, processes it as events, and then sends them to an output destination in the format preferred by the user or the end system.

  • Input
    Data must pass through this first stage of the Logstash pipeline before it can be processed further. Logstash provides a variety of plugins to pull data from many systems; File, Syslog, Redis, and Beats are among the most frequently used.

  • Filter
    This intermediate stage of Logstash is where the real event processing happens. Logstash provides a variety of plugins to help you parse and transform events into the desired format; Grok, Mutate, Drop, Clone, and GeoIP are among the most frequently used (a combined example follows this list).

  • Output
    In this final stage of the pipeline, output events can be formatted into the structure required by the destination systems. Once processing is finished, output plugins deliver the event to its target; Elasticsearch, File, Graphite, and Statsd are among the most frequently used.
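To tie the stages together, here is a sketch of a pipeline that uses several of the plugins named above. The Beats port, index name, and log format (Apache combined) are assumptions, not requirements.

```conf
# weblogs.conf -- input, filter, and output stages combined
input {
  beats {
    port => 5044                            # e.g. Filebeat ships access logs here
  }
}

filter {
  grok {
    ecs_compatibility => "disabled"         # keep classic field names (e.g. clientip)
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  geoip {
    ecs_compatibility => "disabled"
    source => "clientip"                    # field produced by the grok pattern above
  }
  mutate {
    remove_field => ["message"]             # drop the raw line once it has been parsed
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]      # assumed local cluster
    index => "weblogs-%{+YYYY.MM.dd}"       # daily index named from the event timestamp
  }
  stdout { codec => rubydebug }             # echo events while debugging
}
```

Grok parses the raw line into structured fields, GeoIP enriches the client address with location data, Mutate drops the now-redundant raw message, and the event is delivered to both Elasticsearch and the console.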

