0. Download the Splunk install package:
http://www.splunk.com/download/splunk-6.1.2-213098-linux-2.6-x86_64.rpm
1. Install Splunk Enterprise:
rpm -i splunk-6.1.2-213098-linux-2.6-x86_64.rpm
2. Start Splunk:
cd /opt/splunk/bin
./splunk start --accept-license
./splunk enable boot-start -user root
3. Install the universal forwarder:
rpm -i splunkforwarder-6.1.2-213098-linux-2.6-x86_64.rpm
4. Start the Splunk forwarder:
cd /opt/splunkforwarder/bin
./splunk start --accept-license
./splunk enable boot-start -user root
5. Change the forwarder admin password:
cd /opt/splunkforwarder/bin
./splunk edit user admin -password <new password> -role admin -auth admin:changeme
//the default credentials are admin:changeme; in my sample the password is changed to forwardme
6. Configure the universal forwarder to act as a deployment client:
./splunk set deploy-poll 127.0.0.1:8089
7. Configure the universal forwarder to forward to a specific receiving indexer:
./splunk add forward-server 127.0.0.1:9997 -auth admin:forwardme
8. Configure the forwarder's inputs.conf:
cd /opt/splunkforwarder/etc/system/local
gedit inputs.conf
[monitor://<the directory you would like to monitor>]    //my sample: /home/aimqa/Desktop/SG_JobsResults
disabled = false
sourcetype = <your sourcetype name, which you need to set up on the server>    //my sample: sg_production
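Using the sample values noted above, the finished stanza would read as follows (the monitor:// form is Splunk's on-disk syntax for directory monitors):

```ini
# my sample monitor stanza: watch the job-results directory
[monitor:///home/aimqa/Desktop/SG_JobsResults]
disabled = false
sourcetype = sg_production
```

Restart the forwarder (./splunk restart) after editing so the new input takes effect.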
9. Additional settings:
If you want to clone your data to more than one end server, you can send a copy of the data to another server as follows:
cd /opt/splunkforwarder/etc/system/local
gedit outputs.conf
Before editing, outputs.conf should look like this:
[tcpout]
defaultGroup=<target_group>
[tcpout:<target_group>]
server = <receiving_server1>:<port>
<attribute1> = <val1>
<attribute2> = <val2>
To set up data cloning, modify outputs.conf to:
[tcpout]
defaultGroup=<target_group1>,<target_group2>
[tcpout:<target_group1>]
server=<receiving_server1>:<port>
[tcpout:<target_group2>]
server=<receiving_server2>:<port>
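As a concrete sketch of the cloned setup, with the receiving indexer from step 7 filled in for the first group; the group names are assumptions for illustration, and the second server stays a placeholder for your own host and port:

```ini
[tcpout]
defaultGroup = primary_group,clone_group

[tcpout:primary_group]
server = 127.0.0.1:9997

# every event is also cloned to this second group
[tcpout:clone_group]
server = <receiving_server2>:<port>
```

With two groups in defaultGroup, the forwarder sends a full copy of the data to each group rather than load-balancing between them.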
10. Add more monitored directories, each assigned to a different sourcetype:
cd /opt/splunkforwarder/etc/system/local
gedit inputs.conf
add another stanza:
[monitor://<the directory you would like to monitor>]    //my sample: /home/aimqa/Desktop/SG_JobsResults
sourcetype = <another sourcetype name>
11. Now you can set up a data collector to capture the data you want to monitor and drop it into the monitored directory, so that the data is forwarded to the server for deeper searching. I used crontab on Linux to query data repeatedly and populate the monitored directory:
/sbin/service crond stop
crontab -e
0 * * * * /bin/sh <your bash file .sh>
//before starting the hourly updates, I queried all historical data and forwarded it to the server, then started crond to query new data once an hour, on the hour
/sbin/service crond start
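The script invoked from the crontab line can be sketched like this. The query command is a placeholder assumption (substitute your real database dump, API call, etc.), and /tmp is used here so the sketch runs anywhere; my real setup wrote to /home/aimqa/Desktop/SG_JobsResults:

```shell
#!/bin/sh
# Hourly collector sketch: query data and drop the result into the
# directory monitored by the universal forwarder (step 8).
MONITOR_DIR="${MONITOR_DIR:-/tmp/SG_JobsResults}"   # my real setup: /home/aimqa/Desktop/SG_JobsResults
STAMP=$(date +%Y%m%d_%H%M%S)
OUT="$MONITOR_DIR/jobs_$STAMP.log"

mkdir -p "$MONITOR_DIR"

# Placeholder for the real query command; each run writes a new file,
# which the forwarder picks up and sends on to the indexer.
echo "queried at $STAMP" > "$OUT"

echo "wrote $OUT"
```

Because the forwarder tails the directory continuously, each new file is forwarded shortly after the cron run finishes.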