Server side                          Kibana
Server side (or a standalone box)    JDK, Elasticsearch, Redis 2.6+ (data queue), Logstash (parses and filters the data)
Client side                          JDK, Logstash (ships the data; installed the same way, split out to reduce load on the client)
Pipeline: client Logstash -> Redis list -> server Logstash (grok filter) -> Elasticsearch -> Kibana
Downloads: https://www.elastic.co/downloads
https://mirrors.tuna.tsinghua.edu.cn/ELK/  (China mirror)

Elasticsearch and Logstash depend on the JDK, so install it first:
yum -y install java-1.8.0-openjdk java-1.8.0-openjdk-devel
java -version

Redis: build and install version 2.6 or later from source, as sketched below.
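A minimal build-from-source sketch; 2.8.24 is just an example version, any 2.6+ release works the same way:

# fetch, build, and install Redis from source
wget http://download.redis.io/releases/redis-2.8.24.tar.gz
tar -zxf redis-2.8.24.tar.gz
cd redis-2.8.24
make && make install
redis-server &   # starts on the default port 6379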

Logstash installation
http://www.open-open.com/lib/view/open1473661753307.html
Logstash's default external service port is 9292.
rpm -ivh logstash-2.0.0-1.noarch.rpm
Test: /opt/logstash/bin/logstash -e 'input{stdin{}}output{stdout{codec=>rubydebug}}'
Type hello world; the parsed event is echoed back, roughly as shown below.
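The rubydebug codec prints each event as a Ruby-style hash; the timestamp and host below are placeholders, yours will differ:

hello world
{
       "message" => "hello world",
      "@version" => "1",
    "@timestamp" => "2015-04-07T00:00:00.000Z",
          "host" => "localhost"
}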


vim /etc/logstash/conf.d/agent.conf

input {
   file {
     type => "ugo_nginx_access"   ##日志文件类型,自定义。好区分,类似于分组这种概念
     path => "/export1/log/access_20150407+00.log"  ##日志文件路径。
   }
   file {
     type => "nginx_access"
     path => "/usr/local/nginx/logs/python-access.log"
   }
}
output {
  # Push collected events into Redis, which acts as a buffer queue.
  redis {
    host => "103.41.54.16"   
    port => 6379
    data_type => "list"
    key => "logstash"
  }
}

Start: /opt/logstash/bin/logstash agent -f /etc/logstash/conf.d/agent.conf   (note: sometimes the agent starts but no data comes through; verify as shown below)
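Two quick checks when that happens, assuming redis-cli is available (host/port as in agent.conf above):

# syntax-check the agent config before starting it
/opt/logstash/bin/logstash agent --configtest -f /etc/logstash/conf.d/agent.conf
# the agent pushes events onto the "logstash" list, so its length should grow as logs arrive
redis-cli -h 103.41.54.16 -p 6379 llen logstash
redis-cli -h 103.41.54.16 -p 6379 lrange logstash 0 0   # peek at the oldest queued event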

Server side
Grok pattern testing: http://grokdebug.herokuapp.com/
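The filter below expects nginx access-log lines in the combined format with a quoted X-Forwarded-For field appended; a sample line (hypothetical values) to paste into the grok debugger:

192.168.1.10 - - [07/Apr/2015:12:00:00 +0800] "GET /index.html HTTP/1.1" 200 612 "-" "Mozilla/5.0" "10.0.0.1"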

Start: /opt/logstash/bin/logstash agent -f /usr/local/logstash/conf/fserver.conf
fserver.conf:
input {
    redis {
        host => "127.0.0.1"
        port => "6379"
        data_type => "list"
        key => "logstash"
        type => "redis-input"
    }
}

filter {
    grok {
        match => { "message" => "%{COMBINEDAPACHELOG} %{QS:x_forwarded_for}" }   nginx日志匹配
    }
    geoip {
      source => "clientip"
      target => "geoip"
      database => "/opt/logstash/GeoLiteCity.dat"
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    }

    mutate {
      convert => [ "[geoip][coordinates]", "float" ]
      convert => [ "response","integer" ]
      convert => [ "bytes","integer" ]
      replace => { "type" => "nginx_access" }
      remove_field => "message"
    }

    date {
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
    mutate {
      remove_field => "timestamp"
    }
}
output {
    elasticsearch {
        hosts => ["127.0.0.1:9200"]
        manage_template => true   # use the index template so the geoip/region fields are mapped correctly
        # http://blog.csdn.net/yanggd1987/article/details/50469113
        index => "logstash-nginx-access-%{+YYYY.MM.dd}"
    }
    stdout {codec => rubydebug}
}
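Once the server pipeline is running, a quick query confirms that events are reaching Elasticsearch (index name as configured above):

curl 'http://127.0.0.1:9200/logstash-nginx-access-*/_search?size=1&pretty'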


## This command deletes all logstash indices for April 2015.
curl -XDELETE 'http://10.1.1.99:9200/logstash-2015.04.*'
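Before deleting, it is worth listing the indices that would match; _cat/indices is a standard Elasticsearch API:

curl 'http://10.1.1.99:9200/_cat/indices?v'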

Elasticsearch installation
Elasticsearch serves HTTP on port 9200 by default; inter-node transport uses TCP port 9300.
rpm -ivh elasticsearch-2.0.0.rpm
vim /etc/elasticsearch/elasticsearch.yml and add:
node.name: node-1
network.host: 0.0.0.0
path.data: /data/elasticsearch/
http.port: 9200

mkdir -pv /data/elasticsearch
chown -R elasticsearch:elasticsearch /data/elasticsearch/
/etc/init.d/elasticsearch start

Test that the Elasticsearch service is up; expect an HTTP 200 status code:
curl -i http://localhost:9200
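The response is a JSON banner, roughly like this (name and version depend on your install):

HTTP/1.1 200 OK

{
  "name" : "node-1",
  "cluster_name" : "elasticsearch",
  "version" : { "number" : "2.0.0", ... },
  "tagline" : "You Know, for Search"
}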
The head plugin (a web UI for browsing the cluster):
wget https://github.com/mobz/elasticsearch-head/archive/master.zip
unzip master.zip
mv elasticsearch-head-master/ /usr/share/elasticsearch/plugins/head/
http://112.126.80.182:9200/_plugin/head/
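Alternatively, Elasticsearch 2.x ships a plugin installer that fetches head directly from GitHub:

/usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head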

Initialize the index mapping so that the timestamps of reported events are recognized as dates.
# The index name is fixed; create the mapping first, then import the data.
curl -XPUT http://localhost:9200/logstash-qos -d '
{
 "mappings" : {
  "_default_" : {
   "properties" : {
    "timestamp":{"type":"date"}
   }
  }
 }
}';
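To confirm the mapping took effect:

curl 'http://localhost:9200/logstash-qos/_mapping?pretty'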


Kibana installation
tar -zxf kibana-4.2.0-linux-x64.tar.gz
vim ./kibana/config/kibana.yml and set:
elasticsearch.url: http://192.168.1.23:9200
Run: ./kibana/bin/kibana
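Kibana listens on port 5601 by default; once it is up, open http://<kibana-host>:5601 in a browser and point an index pattern at logstash-nginx-access-*. A quick reachability check:

curl -I http://localhost:5601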

Kibana analysis errors: http://elasticsearch.cn/question/232





Further reading
http://www.99ya.net/archives/523.html (a real-time analysis platform for billions of log entries)
