Consul Deployment
1. Download the 64-bit Linux build from https://www.consul.io/downloads.html.
The download is a zip file; you can unzip it on Windows and then copy the consul binary straight over to the server.
2. Run Consul:
./consul agent -dev -client 192.168.p.p   (192.168.p.p is the server's IP)
A successful start prints the agent's startup banner and log output (similar to the example shown below in the service-registration step).
3. Visit http://192.168.p.p:8500 to open the Consul web UI.
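If you prefer the command line, you can also confirm the agent is reachable over its HTTP API. A quick sketch, assuming the same 192.168.p.p address and using Consul's standard /v1/status/leader endpoint:
curl http://192.168.p.p:8500/v1/status/leader
A running agent responds with the address of the current Raft leader.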
Registering a Service
Official tutorial: https://www.consul.io/intro/getting-started/services.html
Service Definition
1. First, create a directory for Consul configuration. Consul loads all configuration files in the configuration directory, so a common convention on Unix systems is to name the directory /etc/consul.d (the .d suffix means "this directory contains a set of configuration files").
sudo mkdir /etc/consul.d
Next, we'll write a service definition configuration file. Suppose we have a service named "web" running on port 80. We'll also give it a tag that we can use as an additional way to query the service:
echo '{"service": {"name": "web", "tags": ["rails"], "port": 80}}' | sudo tee /etc/consul.d/web.json
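For readability, the same definition can also be written as a multi-line file. This sketch is equivalent to the one-liner above except for the added check block, which uses Consul's standard HTTP health-check syntax and is purely illustrative, not part of the original walkthrough:
sudo tee /etc/consul.d/web.json <<'EOF'
{
  "service": {
    "name": "web",
    "tags": ["rails"],
    "port": 80,
    "check": {
      "http": "http://localhost:80/",
      "interval": "10s"
    }
  }
}
EOF
Note that if nothing is actually listening on port 80, this check will fail and the service will drop out of health-filtered lookups such as the ?passing query shown later; omit the check block to reproduce the output in this article exactly.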
Now restart the agent, pointing it at the configuration directory:
cd /opt
./consul agent -dev -config-dir=/etc/consul.d
==> Starting Consul agent...
==> Consul agent running!
Version: 'v1.2.2'
Node ID: 'f532e531-85e3-8426-8510-6aee9ee2b500'
Node name: 'localhost.localdomain'
Datacenter: 'dc1' (Segment: '<all>')
Server: true (Bootstrap: false)
Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, DNS: 8600)
Cluster Addr: 127.0.0.1 (LAN: 8301, WAN: 8302)
Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false
==> Log data will now stream in as it occurs:
2018/08/26 21:26:50 [DEBUG] agent: Using random ID "f532e531-85e3-8426-8510-6aee9ee2b500" as node ID
2018/08/26 21:26:50 [WARN] agent: Node name "localhost.localdomain" will not be discoverable via DNS due to invalid characters. Valid characters include all alpha-numerics and dashes.
2018/08/26 21:26:50 [INFO] raft: Initial configuration (index=1): [{Suffrage:Voter ID:f532e531-85e3-8426-8510-6aee9ee2b500 Address:127.0.0.1:8300}]
2018/08/26 21:26:50 [INFO] serf: EventMemberJoin: localhost.localdomain.dc1 127.0.0.1
2018/08/26 21:26:50 [INFO] serf: EventMemberJoin: localhost.localdomain 127.0.0.1
2018/08/26 21:26:50 [INFO] agent: Started DNS server 127.0.0.1:8600 (udp)
2018/08/26 21:26:50 [INFO] raft: Node at 127.0.0.1:8300 [Follower] entering Follower state (Leader: "")
2018/08/26 21:26:50 [INFO] consul: Adding LAN server localhost.localdomain (Addr: tcp/127.0.0.1:8300) (DC: dc1)
2018/08/26 21:26:50 [INFO] consul: Handled member-join event for server "localhost.localdomain.dc1" in area "wan"
2018/08/26 21:26:50 [DEBUG] agent/proxy: managed Connect proxy manager started
2018/08/26 21:26:50 [WARN] agent/proxy: running as root, will not start managed proxies
2018/08/26 21:26:50 [INFO] agent: Started DNS server 127.0.0.1:8600 (tcp)
2018/08/26 21:26:50 [INFO] agent: Started HTTP server on 127.0.0.1:8500 (tcp)
2018/08/26 21:26:50 [INFO] agent: started state syncer
2018/08/26 21:26:50 [WARN] raft: Heartbeat timeout from "" reached, starting election
2018/08/26 21:26:50 [INFO] raft: Node at 127.0.0.1:8300 [Candidate] entering Candidate state in term 2
2018/08/26 21:26:50 [DEBUG] raft: Votes needed: 1
2018/08/26 21:26:50 [DEBUG] raft: Vote granted from f532e531-85e3-8426-8510-6aee9ee2b500 in term 2. Tally: 1
2018/08/26 21:26:50 [INFO] raft: Election won. Tally: 1
2018/08/26 21:26:50 [INFO] raft: Node at 127.0.0.1:8300 [Leader] entering Leader state
2018/08/26 21:26:50 [INFO] consul: cluster leadership acquired
2018/08/26 21:26:50 [INFO] consul: New leader elected: localhost.localdomain
2018/08/26 21:26:50 [INFO] connect: initialized CA with provider "consul"
2018/08/26 21:26:50 [DEBUG] consul: Skipping self join check for "localhost.localdomain" since the cluster is too small
2018/08/26 21:26:50 [INFO] consul: member 'localhost.localdomain' joined, marking health alive
2018/08/26 21:26:50 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
2018/08/26 21:26:50 [INFO] agent: Synced service "web"
2018/08/26 21:26:50 [DEBUG] agent: Node info in sync
2018/08/26 21:26:52 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
2018/08/26 21:26:52 [DEBUG] agent: Service "web" in sync
2018/08/26 21:26:52 [DEBUG] agent: Node info in sync
2018/08/26 21:26:52 [DEBUG] agent: Service "web" in sync
2018/08/26 21:26:52 [DEBUG] agent: Node info in sync
2018/08/26 21:27:50 [DEBUG] consul: Skipping self join check for "localhost.localdomain" since the cluster is too small
2018/08/26 21:28:08 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
2018/08/26 21:28:08 [DEBUG] agent: Service "web" in sync
2018/08/26 21:28:08 [DEBUG] agent: Node info in sync
2018/08/26 21:28:30 [DEBUG] dns: request for name web.service.consul. type A class IN (took 1.864898ms) from client 127.0.0.1:60925 (udp)
2018/08/26 21:28:50 [DEBUG] manager: Rebalanced 1 servers, next active server is localhost.localdomain.dc1 (Addr: tcp/127.0.0.1:8300) (DC: dc1)
2018/08/26 21:28:50 [DEBUG] consul: Skipping self join check for "localhost.localdomain" since the cluster is too small
2018/08/26 21:29:23 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
2018/08/26 21:29:23 [DEBUG] agent: Service "web" in sync
2018/08/26 21:29:23 [DEBUG] agent: Node info in sync
2018/08/26 21:29:50 [DEBUG] consul: Skipping self join check for "localhost.localdomain" since the cluster is too small
2018/08/26 21:30:40 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
2018/08/26 21:30:40 [DEBUG] agent: Service "web" in sync
2018/08/26 21:30:40 [DEBUG] agent: Node info in sync
2018/08/26 21:30:46 [DEBUG] http: Request GET /v1/health/service/web?passing (1.221711ms) from=127.0.0.1:40608
2018/08/26 21:30:50 [DEBUG] consul: Skipping self join check for "localhost.localdomain" since the cluster is too small
2018/08/26 21:31:29 [DEBUG] manager: Rebalanced 1 servers, next active server is localhost.localdomain.dc1 (Addr: tcp/127.0.0.1:8300) (DC: dc1)
2018/08/26 21:31:50 [DEBUG] consul: Skipping self join check for "localhost.localdomain" since the cluster is too small
2018/08/26 21:32:00 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
2018/08/26 21:32:00 [DEBUG] agent: Service "web" in sync
2018/08/26 21:32:00 [DEBUG] agent: Node info in sync
2018/08/26 21:32:50 [DEBUG] consul: Skipping self join check for "localhost.localdomain" since the cluster is too small
2018/08/26 21:33:05 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
2018/08/26 21:33:05 [DEBUG] agent: Service "web" in sync
2018/08/26 21:33:05 [DEBUG] agent: Node info in sync
2018/08/26 21:33:50 [DEBUG] consul: Skipping self join check for "localhost.localdomain" since the cluster is too small
2018/08/26 21:34:13 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
2018/08/26 21:34:13 [DEBUG] agent: Service "web" in sync
2018/08/26 21:34:13 [DEBUG] agent: Node info in sync
2018/08/26 21:34:18 [DEBUG] manager: Rebalanced 1 servers, next active server is localhost.localdomain.dc1 (Addr: tcp/127.0.0.1:8300) (DC: dc1)
2018/08/26 21:34:50 [DEBUG] consul: Skipping self join check for "localhost.localdomain" since the cluster is too small
2018/08/26 21:35:40 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
2018/08/26 21:35:40 [DEBUG] agent: Service "web" in sync
2018/08/26 21:35:40 [DEBUG] agent: Node info in sync
2018/08/26 21:35:50 [DEBUG] consul: Skipping self join check for "localhost.localdomain" since the cluster is too small
2018/08/26 21:36:50 [DEBUG] consul: Skipping self join check for "localhost.localdomain" since the cluster is too small
2018/08/26 21:37:02 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
2018/08/26 21:37:02 [DEBUG] agent: Service "web" in sync
2018/08/26 21:37:02 [DEBUG] agent: Node info in sync
2018/08/26 21:37:08 [DEBUG] manager: Rebalanced 1 servers, next active server is localhost.localdomain.dc1 (Addr: tcp/127.0.0.1:8300) (DC: dc1)
^C 2018/08/26 21:37:15 [INFO] agent: Caught signal: interrupt
2018/08/26 21:37:15 [INFO] agent: Graceful shutdown disabled. Exiting
2018/08/26 21:37:15 [INFO] agent: Requesting shutdown
2018/08/26 21:37:15 [WARN] agent: dev mode disabled persistence, killing all proxies since we can't recover them
2018/08/26 21:37:15 [DEBUG] agent/proxy: Stopping managed Connect proxy manager
2018/08/26 21:37:15 [INFO] consul: shutting down server
2018/08/26 21:37:15 [WARN] serf: Shutdown without a Leave
2018/08/26 21:37:15 [WARN] serf: Shutdown without a Leave
2018/08/26 21:37:15 [INFO] manager: shutting down
2018/08/26 21:37:15 [INFO] agent: consul server down
2018/08/26 21:37:15 [INFO] agent: shutdown complete
2018/08/26 21:37:15 [INFO] agent: Stopping DNS server 127.0.0.1:8600 (tcp)
2018/08/26 21:37:15 [INFO] agent: Stopping DNS server 127.0.0.1:8600 (udp)
2018/08/26 21:37:15 [INFO] agent: Stopping HTTP server 127.0.0.1:8500 (tcp)
2018/08/26 21:37:15 [INFO] agent: Waiting for endpoints to shut down
2018/08/26 21:37:15 [INFO] agent: Endpoints down
2018/08/26 21:37:15 [INFO] agent: Exit code: 1
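As an aside, a full restart like the one above isn't strictly required to pick up service-definition changes: the agent can reload its configuration in place. Either of the following works (consul reload is the built-in subcommand; the pgrep invocation is just one way to find the agent's PID):
./consul reload
# or, equivalently, signal the running agent:
kill -HUP $(pgrep consul)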
Querying Services
Once the agent is up and the service is synced, we can query the service using either the DNS API or the HTTP API.
DNS API
Let's first query our service using the DNS API. With the DNS API, the DNS name of a service is NAME.service.consul. By default, all DNS names live in the consul namespace, although this is configurable. The "service" subdomain tells Consul we are querying a service, and NAME is the name of the service.
For the web service we registered, these conventions and settings yield the fully qualified domain name web.service.consul:
dig @127.0.0.1 -p 8600 web.service.consul
; <<>> DiG 9.9.4-RedHat-9.9.4-61.el7 <<>> @127.0.0.1 -p 8600 web.service.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 5363
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 2
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;web.service.consul. IN A
;; ANSWER SECTION:
web.service.consul. 0 IN A 127.0.0.1
;; ADDITIONAL SECTION:
web.service.consul. 0 IN TXT "consul-network-segment="
;; Query time: 3 msec
;; SERVER: 127.0.0.1#8600(127.0.0.1)
;; WHEN: Sun Aug 26 21:28:30 EDT 2018
;; MSG SIZE rcvd: 99
Finally, we can also use the DNS API to filter services by tag. The format of a tag-based service query is TAG.NAME.service.consul. In the example below, we ask Consul for all web services with the "rails" tag. Since we registered our service with that tag, we get a successful response:
dig @127.0.0.1 -p 8600 rails.web.service.consul
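The A-record queries above return only addresses. To retrieve the registered port as well, the DNS API supports SRV lookups; for example:
dig @127.0.0.1 -p 8600 web.service.consul SRV
The answer section of an SRV response includes the service port (80 for our web service) alongside the node name.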
HTTP API
In addition to the DNS API, the HTTP API can also be used to query services:
curl http://localhost:8500/v1/catalog/service/web
The catalog API lists all nodes hosting a given service. As we will see later with health checks, you typically want to query only healthy instances whose checks are passing; this is what DNS is doing under the hood. Here's a query that looks up only the healthy instances:
curl 'http://localhost:8500/v1/health/service/web?passing'
[
  {
    "Node": {
      "ID": "f532e531-85e3-8426-8510-6aee9ee2b500",
      "Node": "localhost.localdomain",
      "Address": "127.0.0.1",
      "Datacenter": "dc1",
      "TaggedAddresses": {
        "lan": "127.0.0.1",
        "wan": "127.0.0.1"
      },
      "Meta": {
        "consul-network-segment": ""
      },
      "CreateIndex": 9,
      "ModifyIndex": 10
    },
    "Service": {
      "ID": "web",
      "Service": "web",
      "Tags": [
        "rails"
      ],
      "Address": "",
      "Meta": null,
      "Port": 80,
      "EnableTagOverride": false,
      "ProxyDestination": "",
      "Connect": {
        "Native": false,
        "Proxy": null
      },
      "CreateIndex": 10,
      "ModifyIndex": 10
    },
    "Checks": [
      {
        "Node": "localhost.localdomain",
        "CheckID": "serfHealth",
        "Name": "Serf Health Status",
        "Status": "passing",
        "Notes": "",
        "Output": "Agent alive and reachable",
        "ServiceID": "",
        "ServiceName": "",
        "ServiceTags": [],
        "Definition": {},
        "CreateIndex": 9,
        "ModifyIndex": 9
      }
    ]
  }
]
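Finally, besides dropping files into the configuration directory, services can be registered dynamically through the HTTP API with no restart at all. A minimal sketch against the standard /v1/agent/service/register endpoint (the web2 service on port 8080 is a hypothetical example, not part of the original walkthrough):
curl --request PUT \
  --data '{"Name": "web2", "Tags": ["rails"], "Port": 8080}' \
  http://localhost:8500/v1/agent/service/register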