8.6.2. Configuring data export to the Logstash ETL via the syslog protocol

8.6.2.1. Introduction

This procedure describes how to configure the connection to Logstash.
A pipeline developed by Gatewatcher makes it possible to retrieve the JSON content of the exported logs so that it can then be manipulated as needed with Logstash filters.
Configuring the connection between the GCenter and the Logstash ETL requires the steps described below.

Note

See the presentation of Connection to Syslog servers.
See the presentation of the exported data described in Data use.
The graphical interface of the data export function is described in `Data export` screen.

8.6.2.2. Prerequisites

  • User: member of Administrator group


8.6.2.3. Preliminary operations


8.6.2.4. Procedure to access the `Data export` window

../../_images/GCE103_MENUBAR.PNG
  • In the GCenter interface, click on the `Administration` menu (3).

  • Click on the `Log export` command from the `Data` submenu.
    The `Data export` window is displayed.

8.6.2.5. Procedure to set the data export #1 settings

../../_images/GCE103_DATAEXPORT-1.PNG
  • Click the `Data export #1` button (6).
    The `Settings for data export #1` area (2) is displayed.
  • Enter the necessary parameters.
    The list of items is detailed in `Data export` screen.

Item (number), name, and value to enter:

  • (4) `Enabled` selector: activated.
  • (5) `Name`: $SYSLOG_NAME
  • (9) `Hostname/IP address`: $LOGSTASH_IP
  • (10) `Syslog RFC`: 3164
  • (11) `Facility`: value of `facility` in the syslog header.
    Kernel by default; this header will be removed by the reception pipeline.
  • (12) `Protocol`: $PROTOCOL
  • (13) `Port`: $LOGSTASH_PORT
  • (14) `Interface`: GCenter interface used for the syslog export: $GCENTER_IFACE
  • (15) `Severity`: value of `severity` in the syslog header.
    Emergency by default; this header will be removed by the reception pipeline.
  • (16) `Formatting`: choice between the log formats:

    - ECS log format 1.0.0 for the Elastic Common Schema (ECS) format
    - Legacy retro-compatibility 2.5.3.102 for standard syslog export

  • (26) `Custom fields and values`: zone for defining custom fields and values.
    This zone contains:
  • (27) `Enabled` selector: activates the feature. Disabled by default.
  • (32) `Log selection`: zone for selecting the logs to be exported.
    This zone contains:
  • (31) `All logs`: check box to select all logs.
    The list of log types is displayed. The log types are `alerts` (30), `protocols` (29), and `system_logs` (28).
    The alert types are detailed in the note below.
    The protocol types are detailed in the note below: these protocols are the Sigflow protocols.
    `system_logs` has only one type: `notification`.
  • (33) `Filter by IP address or subnet`: selects the event sources by IP address or subnet.
    If the field is empty, all data is sent to the remote server.
  • (35) `Gcap involved in events`: zone for selecting the data to send: all data from the GCaps paired with the GCenter and selected here is sent to the remote server.
    This zone contains:
  • (34) `All (current and futures)`: check box to select all known GCaps.
    The list of GCaps is displayed. Each GCap can be selected independently.
  • (17) `Ip addresses`: filter by IP address or network.
    If the field is empty, all data is sent to the remote server.

Note

`Select All` selects all the protocols listed: a protocol that is not selected will not be exported.
If the GCap is newer than the GCenter, some protocols may be missing.
To export everything, disable this filter with `Deselect all`.

Note

The `TLS` zone and the `Verify CA` option enable encryption of the flow generated by the GCenter.
However, Logstash's `syslog` input is not compatible with encrypted data.
This feature therefore cannot be used here.
  • Validate using the `Save changes` button (18).
    The following message indicates that the update has been completed: `Updated with success`.

8.6.2.6. Procedure to be performed on the server

  • Configure the pipeline that receives the flow from the GCenter.


8.6.2.6.1. Logstash pipeline

The input used is Syslog.
In order to be compatible with any Syslog header, a grok pattern is specified.
The JSON content of the log is in the `syslog_message` field.
input {
  syslog {
    port => $LOGSTASH_PORT
    type => syslog
    grok_pattern => '^<%{NUMBER:syslog_priority}>(?:1 |)(?:%{SYSLOGTIMESTAMP:syslog_timestamp}|%{TIMESTAMP_ISO8601:syslog_timestamp}) %{SYSLOGHOST:syslog_hostname} (?:gatewatcher\[-\]:|gatewatcher - - \[-\]) %{GREEDYDATA:syslog_message}\n$'
  }
}
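To see what the grok pattern extracts, here is a rough Python equivalent. The alternation between the two timestamp formats is simplified into one permissive group, and the sample line is an illustrative assumption of what an RFC 3164 export might look like, not captured GCenter output:

```python
import re

# Simplified Python analogue of the grok pattern above: priority,
# optional RFC 5424 version marker, timestamp, hostname, the fixed
# "gatewatcher" tag, then the JSON payload captured as `message`.
SYSLOG_RE = re.compile(
    r"^<(?P<priority>\d+)>(?:1 )?"
    r"(?P<timestamp>\S+ +\d+ [\d:]+|\S+) "
    r"(?P<hostname>\S+) "
    r"(?:gatewatcher\[-\]:|gatewatcher - - \[-\]) "
    r"(?P<message>.*)$"
)

# Hypothetical RFC 3164 line, for illustration only.
sample = '<134>Oct 11 22:14:15 gcenter gatewatcher[-]: {"event_type": "alert"}'
match = SYSLOG_RE.match(sample)
```

Here `match.group("message")` holds the JSON document that the rest of the pipeline works on.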
Only the `syslog_message` field is preserved; its content is then parsed as JSON.
The original field (`syslog_message`) and the Elasticsearch-specific field (`@version`) are then removed.
filter {
  prune {
    whitelist_names => [ "syslog_message" ]
  }

  json {
    source => "syslog_message"
  }

  mutate {
    remove_field => [ "@version","syslog_message" ]
  }
}
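The effect of this filter chain can be sketched in Python: only `syslog_message` survives the prune, its JSON payload is merged into the event, and the bookkeeping fields are dropped. The function name is ours, for illustration:

```python
import json

def apply_filter(event: dict) -> dict:
    """Mimic the prune/json/mutate chain: keep only syslog_message,
    merge its parsed JSON content into the event, then remove the
    original field and the elasticsearch-specific @version field."""
    pruned = {"syslog_message": event["syslog_message"]}   # prune
    pruned.update(json.loads(pruned["syslog_message"]))    # json
    for field in ("@version", "syslog_message"):           # mutate
        pruned.pop(field, None)
    return pruned
```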
Any output can then be used.
In this example, the logs are written directly to disk as files:
output {
  file {
    path => '/usr/share/logstash/data/output/%{[type]}-%{+YYYY.MM.dd}.log'
    codec => json_lines
  }
}