Transferring Data Using Fluentd
This article introduces how to transfer data collected and accumulated in Hinemos to external systems using the Hub feature.
The tool we will be using is “Fluentd”.
<Data Flow from Hinemos to Fluentd and Other Tools>
The collected data can be transferred to Fluentd and other external big data infrastructures.
(Image from http://www.hinemos.info/ja/hinemos/feature/collectandstore)
<Prerequisites>
・An environment with Hinemos Manager and Hinemos Client installed *1 *2
・An environment with Fluentd installed *3
*1 Web Client or Rich Client
*2 This feature can be used without installing the Hinemos Agent
*3 See here for details regarding the installation of Fluentd
Configure the environment where Fluentd is installed so that it accepts HTTP connections.
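Fluentd's built-in `in_http` input can accept these connections. A minimal configuration sketch might look like the following (port 8888 is a common convention, and the `stdout` match is only there so that received events appear in Fluentd's log for verification; adjust both to your environment):

```
# Accept events over HTTP; the URL path of each request becomes the event tag
<source>
  @type http
  port 8888
  bind 0.0.0.0
</source>

# Print every received event to Fluentd's log so arrivals can be confirmed
<match **>
  @type stdout
</match>
```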
Next, configure the transfer setting.
・Transfer Data Type
This time, we will transfer the data of “Monitor (Event) History”.
Other data types you can transfer include “Job History”, “Performance Data”, etc.
Specify the IP address of the destination and append “debug” at the end (with Fluentd's HTTP input, the URL path becomes the event tag).
Select “Transfer in real time” to obtain the data with minimal delay.
・Enable the Setting
Check the checkbox to enable the setting.
*The checkbox is unchecked by default
That is all for the transfer setup.
Set up a resource monitor to collect CPU usage data; this data will be transferred.
Confirm that the monitor setting has been executed successfully.
Next, check the log of Fluentd.
The same message notified in the monitor result has been successfully output to the Fluentd log.
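If the message does not appear in the log, you can verify the Fluentd HTTP input independently of Hinemos by posting a test record to it directly. The sketch below assumes an `in_http` input on port 8888; the host, port, tag, and record contents are placeholders, and the record structure is hypothetical (Hinemos sends its own JSON format):

```python
import json
import urllib.request


def fluentd_post_url(host: str, port: int, tag: str) -> str:
    """Build the URL for Fluentd's in_http input.

    Records posted to /<tag> are emitted into Fluentd with that tag.
    """
    return f"http://{host}:{port}/{tag}"


def post_test_record(host: str = "192.168.0.10", port: int = 8888,
                     tag: str = "debug.test") -> int:
    # Hypothetical sample record, just to confirm the input is reachable
    record = {"message": "manual test record"}
    req = urllib.request.Request(
        fluentd_post_url(host, port, tag),
        data=("json=" + json.dumps(record)).encode("utf-8"),
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )
    # Returns the HTTP status code (200 when Fluentd accepts the record)
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

If this post succeeds but the Hinemos data still does not arrive, the problem is likely on the transfer-setting side rather than in Fluentd.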
<About each Transfer Interval>
・Transfer in regular interval
You can choose a transfer interval of 1, 3, 6, 12, or 24 hours.
・Transfer after storage
You can choose to transfer data after it has been stored for 10, 20, 30, 60, or 90 days.
・Transfer in real time
Data will be transferred each time monitoring runs.
That is all for the transfer setting with Hinemos and Fluentd!
The data sent to Fluentd can also be forwarded to Elasticsearch.
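As a rough sketch, forwarding to Elasticsearch is typically done with the `fluent-plugin-elasticsearch` output plugin (installed separately); the tag pattern, host, and port below are placeholders for your environment:

```
# Requires: fluent-gem install fluent-plugin-elasticsearch
<match debug.**>
  @type elasticsearch
  host localhost
  port 9200
  logstash_format true
</match>
```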
For details, please see this article.