
One-click deployment of the distributed tracing framework SkyWalking with docker-compose

程序员文章站 2022-05-25 15:51:51

 

  Once your application is dockerized, you run into all sorts of problems. For one, writing logs to local files is no longer convenient: you can mount the log directory onto the host, but once you use `--scale`, multiple replicas write to the same files and the records get mangled, so the best approach is to centralize logging. Another problem: when everything ran in a single process, a failed call could be traced through the call stack in the logs; after dockerization,

what used to be one process is split into several microservices, and at that point you really want distributed call-chain tracing, similar to the SvcTraceViewer tool in WCF.

 

Part 1: Setting up SkyWalking

  From the project's documentation on GitHub, the system roughly breaks down into three parts: storage, a collector, and probes (agents). For storage we will use the recommended

Elasticsearch; the collector will be deployed alongside ES; the probes have per-language implementations. In total that gives three Docker containers: es, kibana, and skywalking, which would be

painful to manage without a container-orchestration tool.

      Here is the directory structure used for this setup:

(Screenshot: directory structure for this setup.)

1.  elasticsearch.yml

    The ES configuration file. There is one pitfall here: you must set `network.publish_host: 0.0.0.0`, otherwise SkyWalking will not be able to connect to port 9300.

network.publish_host: 0.0.0.0
transport.tcp.port: 9300
network.host: 0.0.0.0
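The publish-host pitfall is easy to diagnose: from the collector's side of the compose network, the ES transport port either accepts TCP connections or it doesn't. A minimal stdlib-only probe, as a sketch (the `elasticsearch` hostname below is whatever your compose service is named):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Inside the compose network, the collector should be able to reach the
# transport port once ES publishes a reachable address:
#   port_open("elasticsearch", 9300)
```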

 

2. elasticsearch.dockerfile

    During `up`, this config file is copied into the container's config directory.

FROM elasticsearch:5.6.4

EXPOSE 9200 9300

COPY elasticsearch.yml /usr/share/elasticsearch/config/

 

3. application.yml

   The SkyWalking configuration file. Another pitfall here: in the ES connection settings, the configured clusterName must match the ES cluster's name, otherwise the collector cannot connect. Since the containers are wired together

with `links`, the ES address can simply be `elasticsearch`; the other addresses are set to 0.0.0.0.

 

# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

#cluster:
#  zookeeper:
#    hostPort: localhost:2181
#    sessionTimeout: 100000
naming:
  jetty:
    host: 0.0.0.0
    port: 10800
    contextPath: /
cache:
#  guava:
  caffeine:
remote:
  gRPC:
    host: 0.0.0.0
    port: 11800
agent_gRPC:
  gRPC:
    host: 0.0.0.0
    port: 11800
    # Set these two settings to enable SSL
    #sslCertChainFile: $path
    #sslPrivateKeyFile: $path

    # Set your own token to activate auth
    #authentication: xxxxxx
agent_jetty:
  jetty:
    host: 0.0.0.0
    port: 12800
    contextPath: /
analysis_register:
  default:
analysis_jvm:
  default:
analysis_segment_parser:
  default:
    bufferFilePath: ../buffer/
    bufferOffsetMaxFileSize: 10M
    bufferSegmentMaxFileSize: 500M
    bufferFileCleanWhenRestart: true
ui:
  jetty:
    host: 0.0.0.0
    port: 12800
    contextPath: /
storage:
  elasticsearch:
    clusterName: elasticsearch
    clusterTransportSniffer: true
    clusterNodes: elasticsearch:9300
    indexShardsNumber: 2
    indexReplicasNumber: 0
    highPerformanceMode: true
    ttl: 7
#storage:
#  h2:
#    url: jdbc:h2:~/memorydb
#    userName: sa
configuration:
  default:
#     namespace: xxxxx
# Alarm thresholds
    applicationApdexThreshold: 2000
    serviceErrorRateThreshold: 10.00
    serviceAverageResponseTimeThreshold: 2000
    instanceErrorRateThreshold: 10.00
    instanceAverageResponseTimeThreshold: 2000
    applicationErrorRateThreshold: 10.00
    applicationAverageResponseTimeThreshold: 2000
# Thermodynamic (heat map)
    thermodynamicResponseTimeStep: 50
    thermodynamicCountOfResponseTimeSteps: 40
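Because a clusterName mismatch only surfaces at runtime, a tiny pre-flight check can compare the configured value against what you expect before you build. This is a naive stdlib-only sketch (a real YAML parser would be more robust; the embedded snippet is just the storage section from above):

```python
import re

def yaml_value(text, key):
    """Naively pull the value of a 'key: value' line out of YAML-ish text."""
    m = re.search(r"^\s*%s\s*:\s*(\S+)\s*$" % re.escape(key), text, re.MULTILINE)
    return m.group(1) if m else None

app_yml = """\
storage:
  elasticsearch:
    clusterName: elasticsearch
    clusterNodes: elasticsearch:9300
"""

# The ES 5.x image's default cluster name is "elasticsearch" unless
# elasticsearch.yml overrides cluster.name, so this is what we compare to.
assert yaml_value(app_yml, "clusterName") == "elasticsearch"
```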

 

4.  skywalking.dockerfile

      Next comes downloading and installing SkyWalking itself, scripted as a Dockerfile.

FROM centos:7

LABEL username="hxc@qq.com"

WORKDIR /app

RUN yum install -y wget && \
    yum install -y java-1.8.0-openjdk

ADD http://mirrors.hust.edu.cn/apache/incubator/skywalking/5.0.0-rc2/apache-skywalking-apm-incubating-5.0.0-rc2.tar.gz /app

RUN tar -xf apache-skywalking-apm-incubating-5.0.0-rc2.tar.gz && \
    mv apache-skywalking-apm-incubating skywalking

RUN ls /app

# copy the collector configuration into the image
COPY application.yml /app/skywalking/config/application.yml

WORKDIR /app/skywalking/bin

USER root

RUN echo "tail -f /dev/null" >> /app/skywalking/bin/startup.sh

CMD ["/bin/sh","-c","/app/skywalking/bin/startup.sh"]
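One detail worth spelling out: startup.sh launches the collector and UI in the background and then returns, which would kill PID 1 and stop the container. Appending `tail -f /dev/null` leaves a long-running foreground process. The append itself can be sketched outside Docker (the script here is a stand-in, not the real startup.sh):

```shell
#!/bin/sh
# Stand-in for /app/skywalking/bin/startup.sh (illustrative only)
script=$(mktemp)
printf '%s\n' '#!/bin/sh' 'echo "collector started"' > "$script"

# Same trick as the Dockerfile's RUN echo ... >> startup.sh:
echo "tail -f /dev/null" >> "$script"

tail -n 1 "$script"   # the script now ends with a blocking foreground command
```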

 

5. docker-compose.yml

    Finally, orchestrate the three containers. Note that because the collector writes its data into ES, you must mount the ES data directory onto a large disk on the host, otherwise you will run out of space.

version: '3.1'

services:

  # Elasticsearch image
  elasticsearch:
    build:
      context: .
      dockerfile: elasticsearch.dockerfile
    # ports:
    #   - "9200:9200"
    #   - "9300:9300"
    volumes:
       - "/data/es2:/usr/share/elasticsearch/data"

  # Kibana for visual queries against ES, exposed on port 5601
  kibana:
    image: kibana
    links:
      - elasticsearch
    ports:
      - 5601:5601
    depends_on:
      - "elasticsearch"
      
  #skywalking
  skywalking:
    build:
      context: .
      dockerfile: skywalking.dockerfile
    ports:
      - "10800:10800"
      - "11800:11800"
      - "12800:12800"
      - "8080:8080"
    links:
      - elasticsearch
    depends_on:
      - "elasticsearch"
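The compose file above binds /data/es2 on the host to the ES data directory. Before bringing the stack up it is worth confirming the host path actually has room; a small sketch (the 10 GiB threshold is an arbitrary illustration, not a SkyWalking requirement):

```python
import shutil

def free_gib(path):
    """Free disk space at `path`, in GiB."""
    return shutil.disk_usage(path).free / 2**30

# e.g. warn before the ES data mount starts out nearly full
if free_gib("/") < 10:
    print("warning: less than 10 GiB free for Elasticsearch data")
```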

 

Part 2: One-click deployment

      To deploy this in Docker you also need docker-ce and docker-compose installed; you can follow the official installation guides for both.

 

1. Installing docker-ce

sudo yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-selinux \
docker-engine-selinux \
docker-engine

sudo yum install -y yum-utils \
device-mapper-persistent-data \
lvm2

sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo

sudo yum install docker-ce

 

Then start the Docker service; you can see the version is 18.06.1:

[root@localhost ~]# service docker start
Redirecting to /bin/systemctl start docker.service
[root@localhost ~]# docker -v
Docker version 18.06.1-ce, build e68fc7a

 

2. Installing docker-compose

sudo curl -L "https://github.com/docker/compose/releases/download/1.22.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

sudo chmod +x /usr/local/bin/docker-compose

 

3.  Finally, run `docker-compose up --build` on the CentOS host and you're done. If you don't want it occupying the terminal, add `-d` to run it in the background.

 

[root@localhost docker]# docker-compose up --build
creating network "docker_default" with the default driver
building elasticsearch
step 1/3 : from elasticsearch:5.6.4
 ---> 7a047c21aa48
step 2/3 : expose 9200 9300
 ---> using cache
 ---> 8d66bb57b09d
step 3/3 : copy elasticsearch.yml /usr/share/elasticsearch/config/
 ---> using cache
 ---> 02b516c03b95
successfully built 02b516c03b95
successfully tagged docker_elasticsearch:latest
building skywalking
step 1/12 : from centos:7
 ---> 5182e96772bf
step 2/12 : label username="hxc@qq.com"
 ---> using cache
 ---> b95b96a92042
step 3/12 : workdir /app
 ---> using cache
 ---> afdf4efe3426
step 4/12 : run yum install -y wget &&     yum install -y java-1.8.0-openjdk
 ---> using cache
 ---> 46be0ca0f7b5
step 5/12 : add http://mirrors.hust.edu.cn/apache/incubator/skywalking/5.0.0-rc2/apache-skywalking-apm-incubating-5.0.0-rc2.tar.gz /app

 ---> using cache
 ---> d5c30bcfd5ea
step 6/12 : run tar -xf apache-skywalking-apm-incubating-5.0.0-rc2.tar.gz &&     mv apache-skywalking-apm-incubating skywalking
 ---> using cache
 ---> 1438d08d18fa
step 7/12 : run ls /app
 ---> using cache
 ---> b594124672ea
step 8/12 : copy application.yml /app/skywalking/config/application.yml
 ---> using cache
 ---> 10eaf0805a65
step 9/12 : workdir /app/skywalking/bin
 ---> using cache
 ---> bc0f02291536
step 10/12 : user root
 ---> using cache
 ---> 4498afca5fe6
step 11/12 : run  echo "tail -f /dev/null" >> /app/skywalking/bin/startup.sh
 ---> using cache
 ---> 1c4be7c6b32a
step 12/12 : cmd ["/bin/sh","-c","/app/skywalking/bin/startup.sh" ]
 ---> using cache
 ---> ecfc97e4c97d
successfully built ecfc97e4c97d
successfully tagged docker_skywalking:latest
creating docker_elasticsearch_1 ... done
creating docker_skywalking_1    ... done
creating docker_kibana_1        ... done
attaching to docker_elasticsearch_1, docker_kibana_1, docker_skywalking_1
elasticsearch_1  | [2018-09-17t23:51:47,611][info ][o.e.n.node               ] [] initializing ...
elasticsearch_1  | [2018-09-17t23:51:47,729][info ][o.e.e.nodeenvironment    ] [fc_boh1] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/sda3)]], net usable_space [5gb], net total_space [22.1gb], spins? [possibly], types [xfs]
elasticsearch_1  | [2018-09-17t23:51:47,730][info ][o.e.e.nodeenvironment    ] [fc_boh1] heap size [1.9gb], compressed ordinary object pointers [true]
elasticsearch_1  | [2018-09-17t23:51:47,731][info ][o.e.n.node               ] node name [fc_boh1] derived from node id [fc_boh1ns_uw6jky_46ibg]; set [node.name] to override
elasticsearch_1  | [2018-09-17t23:51:47,732][info ][o.e.n.node               ] version[5.6.4], pid[1], build[8bbedf5/2017-10-31t18:55:38.105z], os[linux/3.10.0-327.el7.x86_64/amd64], jvm[oracle corporation/openjdk 64-bit server vm/1.8.0_151/25.151-b12]
elasticsearch_1  | [2018-09-17t23:51:47,732][info ][o.e.n.node               ] jvm arguments [-xms2g, -xmx2g, -xx:+useconcmarksweepgc, -xx:cmsinitiatingoccupancyfraction=75, -xx:+usecmsinitiatingoccupancyonly, -xx:+alwayspretouch, -xss1m, -djava.awt.headless=true, -dfile.encoding=utf-8, -djna.nosys=true, -djdk.io.permissionsusecanonicalpath=true, -dio.netty.nounsafe=true, -dio.netty.nokeysetoptimization=true, -dio.netty.recycler.maxcapacityperthread=0, -dlog4j.shutdownhookenabled=false, -dlog4j2.disable.jmx=true, -dlog4j.skipjansi=true, -xx:+heapdumponoutofmemoryerror, -des.path.home=/usr/share/elasticsearch]
skywalking_1     | skywalking collector started successfully!
elasticsearch_1  | [2018-09-17t23:51:49,067][info ][o.e.p.pluginsservice     ] [fc_boh1] loaded module [aggs-matrix-stats]
elasticsearch_1  | [2018-09-17t23:51:49,067][info ][o.e.p.pluginsservice     ] [fc_boh1] loaded module [ingest-common]
elasticsearch_1  | [2018-09-17t23:51:49,067][info ][o.e.p.pluginsservice     ] [fc_boh1] loaded module [lang-expression]
elasticsearch_1  | [2018-09-17t23:51:49,067][info ][o.e.p.pluginsservice     ] [fc_boh1] loaded module [lang-groovy]
elasticsearch_1  | [2018-09-17t23:51:49,067][info ][o.e.p.pluginsservice     ] [fc_boh1] loaded module [lang-mustache]
elasticsearch_1  | [2018-09-17t23:51:49,067][info ][o.e.p.pluginsservice     ] [fc_boh1] loaded module [lang-painless]
elasticsearch_1  | [2018-09-17t23:51:49,067][info ][o.e.p.pluginsservice     ] [fc_boh1] loaded module [parent-join]
elasticsearch_1  | [2018-09-17t23:51:49,068][info ][o.e.p.pluginsservice     ] [fc_boh1] loaded module [percolator]
elasticsearch_1  | [2018-09-17t23:51:49,068][info ][o.e.p.pluginsservice     ] [fc_boh1] loaded module [reindex]
elasticsearch_1  | [2018-09-17t23:51:49,069][info ][o.e.p.pluginsservice     ] [fc_boh1] loaded module [transport-netty3]
elasticsearch_1  | [2018-09-17t23:51:49,069][info ][o.e.p.pluginsservice     ] [fc_boh1] loaded module [transport-netty4]
elasticsearch_1  | [2018-09-17t23:51:49,069][info ][o.e.p.pluginsservice     ] [fc_boh1] no plugins loaded
skywalking_1     | skywalking web application started successfully!
elasticsearch_1  | [2018-09-17t23:51:51,950][info ][o.e.d.discoverymodule    ] [fc_boh1] using discovery type [zen]
kibana_1         | {"type":"log","@timestamp":"2018-09-17t23:51:53z","tags":["status","plugin:kibana@5.6.11","info"],"pid":12,"state":"green","message":"status changed from uninitialized to green - ready","prevstate":"uninitialized","prevmsg":"uninitialized"}
elasticsearch_1  | [2018-09-17t23:51:53,456][info ][o.e.n.node               ] initialized
elasticsearch_1  | [2018-09-17t23:51:53,457][info ][o.e.n.node               ] [fc_boh1] starting ...
kibana_1         | {"type":"log","@timestamp":"2018-09-17t23:51:53z","tags":["status","plugin:elasticsearch@5.6.11","info"],"pid":12,"state":"yellow","message":"status changed from uninitialized to yellow - waiting for elasticsearch","prevstate":"uninitialized","prevmsg":"uninitialized"}
kibana_1         | {"type":"log","@timestamp":"2018-09-17t23:51:53z","tags":["status","plugin:console@5.6.11","info"],"pid":12,"state":"green","message":"status changed from uninitialized to green - ready","prevstate":"uninitialized","prevmsg":"uninitialized"}
kibana_1         | {"type":"log","@timestamp":"2018-09-17t23:51:53z","tags":["error","elasticsearch","admin"],"pid":12,"message":"request error, retrying\nhead http://elasticsearch:9200/ => connect econnrefused 172.21.0.2:9200"}
kibana_1         | {"type":"log","@timestamp":"2018-09-17t23:51:53z","tags":["warning","elasticsearch","admin"],"pid":12,"message":"unable to revive connection: http://elasticsearch:9200/"}
kibana_1         | {"type":"log","@timestamp":"2018-09-17t23:51:53z","tags":["warning","elasticsearch","admin"],"pid":12,"message":"no living connections"}
kibana_1         | {"type":"log","@timestamp":"2018-09-17t23:51:53z","tags":["status","plugin:metrics@5.6.11","info"],"pid":12,"state":"green","message":"status changed from uninitialized to green - ready","prevstate":"uninitialized","prevmsg":"uninitialized"}
kibana_1         | {"type":"log","@timestamp":"2018-09-17t23:51:53z","tags":["status","plugin:elasticsearch@5.6.11","error"],"pid":12,"state":"red","message":"status changed from yellow to red - unable to connect to elasticsearch at http://elasticsearch:9200.","prevstate":"yellow","prevmsg":"waiting for elasticsearch"}
elasticsearch_1  | [2018-09-17t23:51:53,829][info ][o.e.t.transportservice   ] [fc_boh1] publish_address {172.21.0.2:9300}, bound_addresses {0.0.0.0:9300}
elasticsearch_1  | [2018-09-17t23:51:53,870][info ][o.e.b.bootstrapchecks    ] [fc_boh1] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
kibana_1         | {"type":"log","@timestamp":"2018-09-17t23:51:53z","tags":["status","plugin:timelion@5.6.11","info"],"pid":12,"state":"green","message":"status changed from uninitialized to green - ready","prevstate":"uninitialized","prevmsg":"uninitialized"}
kibana_1         | {"type":"log","@timestamp":"2018-09-17t23:51:53z","tags":["listening","info"],"pid":12,"message":"server running at http://0.0.0.0:5601"}
kibana_1         | {"type":"log","@timestamp":"2018-09-17t23:51:53z","tags":["status","ui settings","error"],"pid":12,"state":"red","message":"status changed from uninitialized to red - elasticsearch plugin is red","prevstate":"uninitialized","prevmsg":"uninitialized"}
kibana_1         | {"type":"log","@timestamp":"2018-09-17t23:51:56z","tags":["warning","elasticsearch","admin"],"pid":12,"message":"unable to revive connection: http://elasticsearch:9200/"}
kibana_1         | {"type":"log","@timestamp":"2018-09-17t23:51:56z","tags":["warning","elasticsearch","admin"],"pid":12,"message":"no living connections"}
elasticsearch_1  | [2018-09-17t23:51:57,094][info ][o.e.c.s.clusterservice   ] [fc_boh1] new_master {fc_boh1}{fc_boh1ns_uw6jky_46ibg}{tnmew5hyqm6o4aiqpu0uwa}{172.21.0.2}{172.21.0.2:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
elasticsearch_1  | [2018-09-17t23:51:57,129][info ][o.e.h.n.netty4httpservertransport] [fc_boh1] publish_address {172.21.0.2:9200}, bound_addresses {0.0.0.0:9200}
elasticsearch_1  | [2018-09-17t23:51:57,129][info ][o.e.n.node               ] [fc_boh1] started
elasticsearch_1  | [2018-09-17t23:51:57,157][info ][o.e.g.gatewayservice     ] [fc_boh1] recovered [0] indices into cluster_state
elasticsearch_1  | [2018-09-17t23:51:57,368][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [application_alarm_list_month] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:51:57,557][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[application_alarm_list_month][0]] ...]).
elasticsearch_1  | [2018-09-17t23:51:57,685][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [memory_pool_metric_month] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:51:57,742][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[memory_pool_metric_month][0]] ...]).
elasticsearch_1  | [2018-09-17t23:51:57,886][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [service_metric_day] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:51:57,962][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[service_metric_day][0]] ...]).
elasticsearch_1  | [2018-09-17t23:51:58,115][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [application_metric_hour] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:51:58,176][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[application_metric_hour][1]] ...]).
elasticsearch_1  | [2018-09-17t23:51:58,356][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [application_metric_month] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:51:58,437][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[application_metric_month][0]] ...]).
elasticsearch_1  | [2018-09-17t23:51:58,550][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [instance_mapping_month] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:51:58,601][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[instance_mapping_month][1]] ...]).
elasticsearch_1  | [2018-09-17t23:51:58,725][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [service_reference_metric_day] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
kibana_1         | {"type":"log","@timestamp":"2018-09-17t23:51:58z","tags":["warning"],"pid":12,"kibanaversion":"5.6.11","nodes":[{"version":"5.6.4","http":{"publish_address":"172.21.0.2:9200"},"ip":"172.21.0.2"}],"message":"you're running kibana 5.6.11 with some different versions of elasticsearch. update kibana or elasticsearch to the same version to prevent compatibility issues: v5.6.4 @ 172.21.0.2:9200 (172.21.0.2)"}
elasticsearch_1  | [2018-09-17t23:51:58,886][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[service_reference_metric_day][0]] ...]).
elasticsearch_1  | [2018-09-17t23:51:59,023][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [instance_reference_alarm] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:51:59,090][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[instance_reference_alarm][0]] ...]).
elasticsearch_1  | [2018-09-17t23:51:59,137][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [instance_alarm_list] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:51:59,258][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[instance_alarm_list][0]] ...]).
elasticsearch_1  | [2018-09-17t23:51:59,355][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [application_mapping_minute] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:51:59,410][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[application_mapping_minute][1], [application_mapping_minute][0]] ...]).
elasticsearch_1  | [2018-09-17t23:51:59,484][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [response_time_distribution_minute] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:51:59,562][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[response_time_distribution_minute][0]] ...]).
elasticsearch_1  | [2018-09-17t23:51:59,633][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [application] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:51:59,709][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[application][0]] ...]).
elasticsearch_1  | [2018-09-17t23:51:59,761][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [service_reference_metric_hour] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:51:59,839][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[service_reference_metric_hour][1]] ...]).
elasticsearch_1  | [2018-09-17t23:51:59,980][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [memory_metric_minute] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:00,073][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[memory_metric_minute][1], [memory_metric_minute][0]] ...]).
elasticsearch_1  | [2018-09-17t23:52:00,159][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [application_alarm_list_day] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:00,258][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[application_alarm_list_day][1]] ...]).
elasticsearch_1  | [2018-09-17t23:52:00,350][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [service_reference_alarm] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:00,463][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[service_reference_alarm][0]] ...]).
elasticsearch_1  | [2018-09-17t23:52:00,535][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [application_metric_day] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:00,599][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[application_metric_day][0]] ...]).
elasticsearch_1  | [2018-09-17t23:52:00,714][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [segment] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:00,766][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[segment][1], [segment][0]] ...]).
elasticsearch_1  | [2018-09-17t23:52:00,801][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [global_trace] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:00,865][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[global_trace][0]] ...]).
elasticsearch_1  | [2018-09-17t23:52:00,917][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [service_alarm_list] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:00,985][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[service_alarm_list][0]] ...]).
elasticsearch_1  | [2018-09-17t23:52:01,034][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [service_metric_month] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:01,116][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[service_metric_month][0]] ...]).
elasticsearch_1  | [2018-09-17t23:52:01,207][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [instance_reference_metric_day] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:01,289][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[instance_reference_metric_day][0]] ...]).
elasticsearch_1  | [2018-09-17t23:52:01,388][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [application_component_month] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:01,491][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[application_component_month][0]] ...]).
elasticsearch_1  | [2018-09-17t23:52:01,527][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [application_metric_minute] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:01,613][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[application_metric_minute][0]] ...]).
elasticsearch_1  | [2018-09-17t23:52:01,723][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [application_reference_metric_day] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:01,761][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[application_reference_metric_day][0]] ...]).
elasticsearch_1  | [2018-09-17t23:52:01,844][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [application_reference_metric_month] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:01,914][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[application_reference_metric_month][0]] ...]).
elasticsearch_1  | [2018-09-17t23:52:01,986][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [cpu_metric_minute] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:02,067][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[cpu_metric_minute][0]] ...]).
elasticsearch_1  | [2018-09-17t23:52:02,096][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [gc_metric_hour] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:02,174][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[gc_metric_hour][0]] ...]).
elasticsearch_1  | [2018-09-17t23:52:02,221][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [response_time_distribution_month] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:02,281][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[response_time_distribution_month][1]] ...]).
elasticsearch_1  | [2018-09-17t23:52:02,337][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [service_reference_metric_minute] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:02,421][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[service_reference_metric_minute][1]] ...]).
elasticsearch_1  | [2018-09-17t23:52:02,558][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [instance_mapping_minute] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:02,590][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[instance_mapping_minute][0]] ...]).
elasticsearch_1  | [2018-09-17t23:52:02,637][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [cpu_metric_month] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:02,694][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[cpu_metric_month][0]] ...]).
elasticsearch_1  | [2018-09-17t23:52:02,732][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [instance_metric_month] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:02,787][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[instance_metric_month][1]] ...]).
elasticsearch_1  | [2018-09-17t23:52:02,849][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [instance_mapping_hour] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:02,916][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[instance_mapping_hour][1]] ...]).
elasticsearch_1  | [2018-09-17t23:52:03,030][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [service_alarm] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:03,062][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[service_alarm][0]] ...]).
elasticsearch_1  | [2018-09-17t23:52:03,096][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [memory_pool_metric_minute] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:03,123][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[memory_pool_metric_minute][0]] ...]).
elasticsearch_1  | [2018-09-17t23:52:03,154][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [application_component_day] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:03,180][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[application_component_day][0]] ...]).
elasticsearch_1  | [2018-09-17t23:52:03,199][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [service_name] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:03,221][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[service_name][0]] ...]).
elasticsearch_1  | [2018-09-17t23:52:03,240][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [cpu_metric_day] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:03,268][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[cpu_metric_day][0]] ...]).
elasticsearch_1  | [2018-09-17t23:52:03,286][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [instance_reference_metric_hour] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:03,333][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[instance_reference_metric_hour][1]] ...]).
elasticsearch_1  | [2018-09-17t23:52:03,376][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [network_address] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:03,422][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[network_address][0]] ...]).
elasticsearch_1  | [2018-09-17t23:52:03,440][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [cpu_metric_hour] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:03,466][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[cpu_metric_hour][1]] ...]).
elasticsearch_1  | [2018-09-17t23:52:03,487][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [application_mapping_day] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:03,510][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[application_mapping_day][0]] ...]).
elasticsearch_1  | [2018-09-17t23:52:03,527][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [gc_metric_day] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:03,553][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[gc_metric_day][0]] ...]).
elasticsearch_1  | [2018-09-17t23:52:03,572][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [memory_metric_day] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:03,590][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[memory_metric_day][0]] ...]).
elasticsearch_1  | [2018-09-17t23:52:03,613][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [instance_reference_metric_minute] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:03,633][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[instance_reference_metric_minute][0]] ...]).
elasticsearch_1  | [2018-09-17t23:52:03,683][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [application_reference_metric_minute] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:03,731][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[application_reference_metric_minute][0]] ...]).
elasticsearch_1  | [2018-09-17t23:52:03,767][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [application_component_minute] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:03,796][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[application_component_minute][0]] ...]).
elasticsearch_1  | [2018-09-17t23:52:03,809][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [instance_alarm] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:03,829][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[instance_alarm][0]] ...]).
elasticsearch_1  | [2018-09-17t23:52:03,847][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [instance_metric_hour] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:03,867][info ][o.e.c.r.a.allocationservice] [fc_boh1] cluster health status changed from [yellow] to [green] (reason: [shards started [[instance_metric_hour][0]] ...]).
elasticsearch_1  | [2018-09-17t23:52:03,908][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [application_reference_alarm] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
kibana_1         | {"type":"log","@timestamp":"2018-09-17t23:52:03z","tags":["status","plugin:elasticsearch@5.6.11","info"],"pid":12,"state":"yellow","message":"status changed from red to yellow - no existing kibana index found","prevstate":"red","prevmsg":"unable to connect to elasticsearch at http://elasticsearch:9200."}
kibana_1         | {"type":"log","@timestamp":"2018-09-17t23:52:03z","tags":["status","ui settings","info"],"pid":12,"state":"yellow","message":"status changed from red to yellow - elasticsearch plugin is yellow","prevstate":"red","prevmsg":"elasticsearch plugin is red"}
elasticsearch_1  | [2018-09-17t23:52:03,953][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [.kibana] creating index, cause [api], templates [], shards [1]/[1], mappings [_default_, index-pattern, server, visualization, search, timelion-sheet, config, dashboard, url]
elasticsearch_1  | [2018-09-17t23:52:04,003][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [instance_reference_alarm_list] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:04,111][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [service_reference_metric_month] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:04,228][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [memory_pool_metric_day] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:04,285][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [segment_duration] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:04,342][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [service_metric_minute] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:04,395][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [response_time_distribution_hour] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:04,433][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [application_component_hour] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:04,475][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [memory_metric_month] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:04,512][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [application_alarm_list_hour] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:04,575][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [application_alarm] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:04,617][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [instance] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:04,675][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [application_reference_metric_hour] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:04,742][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [instance_metric_minute] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:04,841][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [memory_metric_hour] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:04,920][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [instance_reference_metric_month] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:04,987][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [service_reference_alarm_list] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:05,045][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [gc_metric_month] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
kibana_1         | {"type":"log","@timestamp":"2018-09-17t23:52:05z","tags":["status","plugin:elasticsearch@5.6.11","info"],"pid":12,"state":"green","message":"status changed from yellow to green - kibana index ready","prevstate":"yellow","prevmsg":"no existing kibana index found"}
kibana_1         | {"type":"log","@timestamp":"2018-09-17t23:52:05z","tags":["status","ui settings","info"],"pid":12,"state":"green","message":"status changed from yellow to green - ready","prevstate":"yellow","prevmsg":"elasticsearch plugin is yellow"}
elasticsearch_1  | [2018-09-17t23:52:05,106][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [application_reference_alarm_list] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:05,143][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [memory_pool_metric_hour] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:05,180][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [application_alarm_list_minute] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:05,225][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [instance_mapping_day] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:05,268][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [instance_metric_day] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:05,350][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [application_mapping_month] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:05,392][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [application_mapping_hour] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:05,434][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [response_time_distribution_day] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:05,470][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [service_metric_hour] creating index, cause [api], templates [], shards [2]/[0], mappings [type]
elasticsearch_1  | [2018-09-17t23:52:05,544][info ][o.e.c.m.metadatacreateindexservice] [fc_boh1] [gc_metric_minute] creating index, cause [api], templates [], shards [2]/[0], mappings [type]

 

From the log output above you can see that elasticsearch, kibana, and skywalking have all started successfully. You can also run `docker-compose ps` to confirm that all three containers are up, and `netstat` to check which ports are now open on the host.

[root@localhost docker]# docker ps
container id        image                  command                  created             status              ports                                                                                                  names
9aa90401ca16        kibana                 "/docker-entrypoint.…"   2 minutes ago       up 2 minutes        0.0.0.0:5601->5601/tcp                                                                                 docker_kibana_1
c551248e32af        docker_skywalking      "/bin/sh -c /app/sky…"   2 minutes ago       up 2 minutes        0.0.0.0:8080->8080/tcp, 0.0.0.0:10800->10800/tcp, 0.0.0.0:11800->11800/tcp, 0.0.0.0:12800->12800/tcp   docker_skywalking_1
765d38469ff1        docker_elasticsearch   "/docker-entrypoint.…"   2 minutes ago       up 2 minutes        9200/tcp, 9300/tcp                                                                                     docker_elasticsearch_1
[root@localhost docker]# netstat -tlnp
active internet connections (only servers)
proto recv-q send-q local address           foreign address         state       pid/program name    
tcp        0      0 192.168.122.1:53        0.0.0.0:*               listen      2013/dnsmasq        
tcp        0      0 0.0.0.0:22              0.0.0.0:*               listen      1141/sshd           
tcp        0      0 127.0.0.1:631           0.0.0.0:*               listen      1139/cupsd          
tcp        0      0 127.0.0.1:25            0.0.0.0:*               listen      1622/master         
tcp6       0      0 :::8080                 :::*                    listen      38262/docker-proxy  
tcp6       0      0 :::10800                :::*                    listen      38248/docker-proxy  
tcp6       0      0 :::22                   :::*                    listen      1141/sshd           
tcp6       0      0 ::1:631                 :::*                    listen      1139/cupsd          
tcp6       0      0 :::11800                :::*                    listen      38234/docker-proxy  
tcp6       0      0 ::1:25                  :::*                    listen      1622/master         
tcp6       0      0 :::12800                :::*                    listen      38222/docker-proxy  
tcp6       0      0 :::5601                 :::*                    listen      38274/docker-proxy  
[root@localhost docker]# 
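The port mappings above come from the docker-compose.yml that ties the three containers together. A minimal sketch is shown below; the service names follow the container names in `docker ps`, but the skywalking dockerfile name and the kibana environment variable are assumptions, so adjust them to match your own files:

```yaml
version: '2'
services:
  elasticsearch:
    build:
      context: .
      dockerfile: elasticsearch.dockerfile
    # 9200/9300 stay on the internal network only, matching `docker ps` above

  skywalking:
    build:
      context: .
      dockerfile: skywalking.dockerfile   # assumed filename
    links:
      - elasticsearch                     # lets application.yml reach es as "elasticsearch"
    ports:
      - "8080:8080"                       # web ui
      - "10800:10800"
      - "11800:11800"                     # grpc collector (used by the .net probe below)
      - "12800:12800"
    depends_on:
      - elasticsearch

  kibana:
    image: kibana
    links:
      - elasticsearch
    ports:
      - "5601:5601"
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200   # assumed; matches the kibana log output
```

With a file along these lines, `docker-compose up -d` brings the whole stack up in one shot, which is exactly the "一键部署" in the title.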

 

Next, open the visual UI on port 8080. The default username and password are both admin, and a rather good-looking UI appears.

(screenshot: the SkyWalking web UI after logging in)

 

三: The .NET probe

Pull the SkyWalking.AspNetCore probe from NuGet to instrument your code. The GitHub address is: https://github.com/openskywalking/skywalking-netcore
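In the project file this amounts to a single package reference, roughly as below; the version number here is only illustrative, so take whatever version NuGet currently offers:

```xml
<ItemGroup>
  <!-- SkyWalking.AspNetCore probe; the version shown is an assumption -->
  <PackageReference Include="SkyWalking.AspNetCore" Version="0.2.0" />
</ItemGroup>
```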

(screenshot: installing the SkyWalking.AspNetCore package from NuGet)

 

Register the probe in the Startup class, make an outgoing request to cnblogs.com inside the page handler, and then take a close look at what the resulting call trace looks like.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.DependencyInjection;
using SkyWalking.Extensions;
using SkyWalking.AspNetCore;
using System.Net;

namespace WebApplication1
{
    public class Startup
    {
        // This method gets called by the runtime. Use this method to add services to the container.
        // For more information on how to configure your application, visit https://go.microsoft.com/fwlink/?linkid=398940
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddSkyWalking(option =>
            {
                // The application code is what shows up in the sky-walking-ui
                option.ApplicationCode = "10001 测试站点";

                // Collector agent gRPC service addresses.
                option.DirectServers = "192.168.23.183:11800";
            });
        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IHostingEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }

            app.Run(async (context) =>
            {
                // The outgoing HTTP call below will appear as a downstream span in the trace.
                WebClient client = new WebClient();

                var str = client.DownloadString("http://cnblogs.com");

                await context.Response.WriteAsync(str);
            });
        }
    }
}

 

(screenshot: the trace view for the instrumented request, showing the call to cnblogs.com)

 

As you can see, the trace view is quite nice; it makes it easy to follow the code path quickly and to discover and pinpoint problems, and there are plenty more features waiting for you to dig into. That's it for this post; I hope it helps.

 

Demo download