HA FastDFS Distributed File System: Detailed Deployment Steps
程序员文章站
2024-03-08 14:19:18
The images uploaded to test the distributed image server built in this article are shown below
(the files are 20160501.jpg, 20160502.jpg and 20160503.jpg; for the other three, 20160701.jpg, 20160702.jpg and 20160703.jpg, just download a few images from the web yourself):
apt-get update && apt-get -y install git wget build-essential
git clone https://github.com/happyfish100/libfastcommon.git
cd libfastcommon
./make.sh
./make.sh install
cd ..
wget https://github.com/happyfish100/fastdfs/archive/V5.11.tar.gz
tar xzf V5.11.tar.gz
cd fastdfs-5.11
./make.sh
./make.sh install
aaa@qq.com:~/FastDFS# ls /etc/fdfs
client.conf.sample storage.conf.sample tracker.conf.sample
aaa@qq.com:~/FastDFS# cp /etc/fdfs/tracker.conf.sample /etc/fdfs/tracker.conf
aaa@qq.com:~/FastDFS# sed -i -- 's/^base_path=\/home\/yuqing\/fastdfs/base_path=\/fdfs\/tracker/g' /etc/fdfs/tracker.conf
aaa@qq.com:~# mkdir -p /fdfs/tracker
aaa@qq.com:~# chmod 777 /fdfs/tracker/
aaa@qq.com:~# /usr/bin/fdfs_trackerd /etc/fdfs/tracker.conf start
aaa@qq.com:~# find / -name trackerd.log
/fdfs/tracker/logs/trackerd.log
aaa@qq.com:~# tail -f /fdfs/tracker/logs/trackerd.log
aaa@qq.com:~# ps -ef | grep 22122
root 7388 752 0 19:15 pts/0 00:00:00 grep 22122
(Only the grep itself shows up: ps lists command lines, and the port number 22122 never appears in fdfs_trackerd's command line. To check the port, use netstat -unltp | grep fdfs instead.)
aaa@qq.com:~#
aaa@qq.com:~# ls /usr/bin/ | grep fdfs
fdfs_appender_test
fdfs_appender_test1
fdfs_append_file
fdfs_crc32
fdfs_delete_file
fdfs_download_file
fdfs_file_info
fdfs_monitor
fdfs_storaged
fdfs_test
fdfs_test1
fdfs_trackerd
fdfs_upload_appender
fdfs_upload_file
aaa@qq.com:~#
Start the tracker: # /etc/init.d/fdfs_trackerd start
(On first successful start, the data and logs directories are created under /fdfs/tracker.) You can verify that the tracker started in either of two ways:
(1) Check whether port 22122 is being listened on: netstat -unltp|grep fdfs
aaa@qq.com:~/FastDFS# netstat -unltp|grep fdfs
tcp 0 0 0.0.0.0:22122 0.0.0.0:* LISTEN 7409/fdfs_trackerd
aaa@qq.com:~/FastDFS#
(2) Check the tracker's startup log for errors:
tail -f /fdfs/tracker/logs/trackerd.log
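Checking the log by eye works, but when scripting the deployment it helps to scan for error-level entries automatically. A minimal sketch; `log_clean` is a hypothetical helper name, and the level tags (ERROR, CRIT, EMERG) are the syslog-style names listed in the FastDFS config comments, so verify the pattern against your own trackerd.log:

```shell
#!/bin/sh
# Sketch: return success only if a FastDFS log contains no error-level
# entries. The level names match the syslog-style levels documented in
# tracker.conf; adjust the pattern to your actual log format.
log_clean() {   # usage: log_clean <logfile>
    ! grep -Eq 'ERROR|CRIT|EMERG' "$1"
}
```

For example: `log_clean /fdfs/tracker/logs/trackerd.log && echo "tracker log is clean"`.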
Stop the tracker:
/etc/init.d/fdfs_trackerd stop
Configure the FastDFS tracker to start on boot:
cat > /etc/rc.local
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
## FastDFS Tracker
/etc/init.d/fdfs_trackerd start
exit 0
III. Configure FastDFS Storage (192.168.10.23, 192.168.10.24, 192.168.10.25, 192.168.10.26, 192.168.10.27, 192.168.10.28)
1. Copy the FastDFS storage sample configuration file and rename it:
aaa@qq.com:~/FastDFS# ls /etc/fdfs
client.conf.sample storage.conf.sample tracker.conf.sample
aaa@qq.com:~/FastDFS# cp /etc/fdfs/storage.conf.sample /etc/fdfs/storage.conf
aaa@qq.com:~# mkdir -p /fdfs/storage
aaa@qq.com:~# chmod 777 /fdfs/storage/
2. Edit the storage configuration file (taking the storage.conf of a group1 storage node as an example): # vi /etc/fdfs/storage.conf
Change the following items:
disabled=false # enable this config file
group_name=group1 # group name (group1 for the first group, group2 for the second, and so on)
port=23000 # storage port; every storage in the same group must use the same port
base_path=/fdfs/storage # storage log directory
store_path0=/fdfs/storage # storage path
store_path_count=1 # number of storage paths; must match the number of store_pathN entries
tracker_server=192.168.10.21:22122 # tracker server IP address and port
tracker_server=192.168.10.22:22122 # add one line per additional tracker
http.server_port=8888 # HTTP port
(Leave the other parameters at their defaults; for detailed explanations see the official notes:
http://bbs.chinaunix.net/thread-1941456-1-1.html )
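These edits are mechanical, so they can be scripted against a fresh copy of the sample file. A sketch using this cluster's values; `configure_storage` is a hypothetical helper name, and appending the second tracker_server line assumes the sample ships with exactly one such line (true for the stock 5.11 sample):

```shell
#!/bin/sh
# Sketch: apply the storage.conf changes listed above. Pass the conf file
# and this node's group name; paths and tracker IPs are this cluster's.
configure_storage() {   # usage: configure_storage <conf-file> <group-name>
    sed -i \
        -e "s|^group_name *=.*|group_name=$2|" \
        -e 's|^base_path *=.*|base_path=/fdfs/storage|' \
        -e 's|^store_path0 *=.*|store_path0=/fdfs/storage|' \
        -e 's|^tracker_server *=.*|tracker_server=192.168.10.21:22122|' \
        -e 's|^http\.server_port *=.*|http.server_port=8888|' \
        "$1"
    # the sample has a single tracker_server line; append the second tracker
    echo 'tracker_server=192.168.10.22:22122' >> "$1"
}
```

Usage on a group1 node: `cp /etc/fdfs/storage.conf.sample /etc/fdfs/storage.conf && configure_storage /etc/fdfs/storage.conf group1`.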
aaa@qq.com:~/FastDFS# cat > /etc/fdfs/storage.conf
# is this config file disabled
# false for enabled
# true for disabled
disabled=false
# the name of the group this storage server belongs to
#
# comment or remove this item for fetching from tracker server,
# in this case, use_storage_id must set to true in tracker.conf,
# and storage_ids.conf must be configed correctly.
group_name=group3
# bind an address of this host
# empty for bind all addresses of this host
bind_addr=
# if bind an address of this host when connect to other servers
# (this storage server as a client)
# true for binding the address configed by above parameter: "bind_addr"
# false for binding any address of this host
client_bind=true
# the storage server port
port=23000
# connect timeout in seconds
# default value is 30s
connect_timeout=30
# network timeout in seconds
# default value is 30s
network_timeout=60
# heart beat interval in seconds
heart_beat_interval=30
# disk usage report interval in seconds
stat_report_interval=60
# the base path to store data and log files
base_path=/fdfs/storage
# max concurrent connections the server supported
# default value is 256
# more max_connections means more memory will be used
max_connections=256
# the buff size to recv / send data
# this parameter must more than 8KB
# default value is 64KB
# since V2.00
buff_size = 256KB
# accept thread count
# default value is 1
# since V4.07
accept_threads=1
# work thread count, should <= max_connections
# work thread deal network io
# default value is 4
# since V2.00
work_threads=4
# if disk read / write separated
## false for mixed read and write
## true for separated read and write
# default value is true
# since V2.00
disk_rw_separated = true
# disk reader thread count per store base path
# for mixed read / write, this parameter can be 0
# default value is 1
# since V2.00
disk_reader_threads = 1
# disk writer thread count per store base path
# for mixed read / write, this parameter can be 0
# default value is 1
# since V2.00
disk_writer_threads = 1
# when no entry to sync, try read binlog again after X milliseconds
# must > 0, default value is 200ms
sync_wait_msec=50
# after sync a file, usleep milliseconds
# 0 for sync successively (never call usleep)
sync_interval=0
# storage sync start time of a day, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
sync_start_time=00:00
# storage sync end time of a day, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
sync_end_time=23:59
# write to the mark file after sync N files
# default value is 500
write_mark_file_freq=500
# path(disk or mount point) count, default value is 1
store_path_count=1
# store_path#, based 0, if store_path0 not exists, it's value is base_path
# the paths must be exist
store_path0=/fdfs/storage
#store_path1=/home/yuqing/fastdfs2
# subdir_count * subdir_count directories will be auto created under each
# store_path (disk), value can be 1 to 256, default value is 256
subdir_count_per_path=256
# tracker_server can ocur more than once, and tracker_server format is
# "host:port", host can be hostname or ip address
tracker_server=192.168.10.21:22122
tracker_server=192.168.10.22:22122
#standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level=info
#unix group name to run this program,
#not set (empty) means run by the group of current user
run_by_group=
#unix username to run this program,
#not set (empty) means run by current user
run_by_user=
# allow_hosts can ocur more than once, host can be hostname or ip address,
# "*" (only one asterisk) means match all ip addresses
# we can use CIDR ips like 192.168.5.64/26
# and also use range like these: 10.0.1.[0-254] and host[01-08,20-25].domain.com
# for example:
# allow_hosts=10.0.1.[1-15,20]
# allow_hosts=host[01-08,20-25].domain.com
# allow_hosts=192.168.5.64/26
allow_hosts=*
# the mode of the files distributed to the data path
# 0: round robin(default)
# 1: random, distributted by hash code
file_distribute_path_mode=0
# valid when file_distribute_to_path is set to 0 (round robin),
# when the written file count reaches this number, then rotate to next path
# default value is 100
file_distribute_rotate_count=100
# call fsync to disk when write big file
# 0: never call fsync
# other: call fsync when written bytes >= this bytes
# default value is 0 (never call fsync)
fsync_after_written_bytes=0
# sync log buff to disk every interval seconds
# must > 0, default value is 10 seconds
sync_log_buff_interval=10
# sync binlog buff / cache to disk every interval seconds
# default value is 60 seconds
sync_binlog_buff_interval=10
# sync storage stat info to disk every interval seconds
# default value is 300 seconds
sync_stat_file_interval=300
# thread stack size, should >= 512KB
# default value is 512KB
thread_stack_size=512KB
# the priority as a source server for uploading file.
# the lower this value, the higher its uploading priority.
# default value is 10
upload_priority=10
# the NIC alias prefix, such as eth in Linux, you can see it by ifconfig -a
# multi aliases split by comma. empty value means auto set by OS type
# default values is empty
if_alias_prefix=
# if check file duplicate, when set to true, use FastDHT to store file indexes
# 1 or yes: need check
# 0 or no: do not check
# default value is 0
check_file_duplicate=0
# file signature method for check file duplicate
## hash: four 32 bits hash code
## md5: MD5 signature
# default value is hash
# since V4.01
file_signature_method=hash
# namespace for storing file indexes (key-value pairs)
# this item must be set when check_file_duplicate is true / on
key_namespace=FastDFS
# set keep_alive to 1 to enable persistent connection with FastDHT servers
# default value is 0 (short connection)
keep_alive=0
# you can use "#include filename" (not include double quotes) directive to
# load FastDHT server list, when the filename is a relative path such as
# pure filename, the base path is the base path of current/this config file.
# must set FastDHT server list when check_file_duplicate is true / on
# please see INSTALL of FastDHT for detail
##include /home/yuqing/fastdht/conf/fdht_servers.conf
# if log to access log
# default value is false
# since V4.00
use_access_log = false
# if rotate the access log every day
# default value is false
# since V4.00
rotate_access_log = false
# rotate access log time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.00
access_log_rotate_time=00:00
# if rotate the error log every day
# default value is false
# since V4.02
rotate_error_log = false
# rotate error log time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.02
error_log_rotate_time=00:00
# rotate access log when the log file exceeds this size
# 0 means never rotates log file by log file size
# default value is 0
# since V4.02
rotate_access_log_size = 0
# rotate error log when the log file exceeds this size
# 0 means never rotates log file by log file size
# default value is 0
# since V4.02
rotate_error_log_size = 0
# keep days of the log files
# 0 means do not delete old log files
# default value is 0
log_file_keep_days = 0
# if skip the invalid record when sync file
# default value is false
# since V4.02
file_sync_skip_invalid_record=false
# if use connection pool
# default value is false
# since V4.05
use_connection_pool = false
# connections whose the idle time exceeds this time will be closed
# unit: second
# default value is 3600
# since V4.05
connection_pool_max_idle_time = 3600
# use the ip address of this storage server if domain_name is empty,
# else this domain name will ocur in the url redirected by the tracker server
http.domain_name=
# the port of the web server on this storage server
http.server_port=8888
aaa@qq.com:~/FastDFS#
3. Create the base data directory (matching the base_path setting):
mkdir -p /fdfs/storage
4. Open the storage port (default 23000) in the firewall: # vi /etc/sysconfig/iptables
Add the following line:
## FastDFS Storage Port
-A INPUT -m state --state NEW -m tcp -p tcp --dport 23000 -j ACCEPT
Restart the firewall:
# service iptables restart
(Note: /etc/sysconfig/iptables and service iptables are Red Hat conventions; on the Ubuntu hosts used here, persist the rule with ufw or iptables-persistent instead.)
5. Start the storage daemon:
# /etc/init.d/fdfs_storaged start
(On first successful start, the data directory data and log directory logs are created under /fdfs/storage.)
After starting each node, watch its log with tail -f /fdfs/storage/logs/storaged.log: you will see the storage connect to the trackers, a message naming the leader tracker, and entries as the other nodes in the same group join.
Check whether port 23000 is being listened on: netstat -unltp|grep fdfs
Once all storage nodes are up, inspect the cluster from any storage node with:
# /usr/bin/fdfs_monitor /etc/fdfs/storage.conf
All storage nodes should be reported as ACTIVE.
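The ACTIVE check can also be scripted by counting status lines in the monitor output. A sketch; it assumes the 5.x fdfs_monitor format in which each storage's "ip_addr = …" line ends with the node status, so confirm against your actual output first:

```shell
#!/bin/sh
# Sketch: count storage nodes reported ACTIVE in fdfs_monitor output,
# read on stdin. Assumes the status appears on each ip_addr line.
count_active() {
    grep -c 'ip_addr.*ACTIVE'
}
# on a storage node:
#   /usr/bin/fdfs_monitor /etc/fdfs/storage.conf | count_active
```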
6. Stop the storage daemon:
# /etc/init.d/fdfs_storaged stop
7. Configure the FastDFS storage daemon to start on boot:
cat > /etc/rc.local
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
## FastDFS Storage
/etc/init.d/fdfs_storaged start
exit 0
IV. File Upload Test (192.168.10.21)
1. Edit the client configuration file on the tracker servers:
On both tracker servers, run: find / -name client.conf.sample
/etc/fdfs/client.conf.sample
On both tracker servers, run: cp /etc/fdfs/client.conf.sample /etc/fdfs/client.conf
On both tracker servers, run: vi /etc/fdfs/client.conf
base_path=/fdfs/tracker
tracker_server=192.168.10.21:22122
tracker_server=192.168.10.22:22122
2. Run the file upload commands:
On both tracker servers, run: ls /usr/bin/ | grep fdfs
fdfs_appender_test
fdfs_appender_test1
fdfs_append_file
fdfs_crc32
fdfs_delete_file
fdfs_download_file
fdfs_file_info
fdfs_monitor
fdfs_storaged
fdfs_test
fdfs_test1
fdfs_trackerd
fdfs_upload_appender
fdfs_upload_file
On both tracker servers, run: apt-get -y install lrzsz
On both tracker servers, run: cd /opt # then transfer the test images from the Windows 10 workstation into this directory (e.g. with rz)
aaa@qq.com:/opt# ls
20160501.jpg 20160502.jpg 20160503.jpg
aaa@qq.com:/opt# ls
20160701.jpg 20160702.jpg 20160703.jpg
Upload the first three images from one tracker and the remaining three from the other:
/usr/bin/fdfs_upload_file /etc/fdfs/client.conf /opt/20160501.jpg
/usr/bin/fdfs_upload_file /etc/fdfs/client.conf /opt/20160502.jpg
/usr/bin/fdfs_upload_file /etc/fdfs/client.conf /opt/20160503.jpg
/usr/bin/fdfs_upload_file /etc/fdfs/client.conf /opt/20160701.jpg
/usr/bin/fdfs_upload_file /etc/fdfs/client.conf /opt/20160702.jpg
/usr/bin/fdfs_upload_file /etc/fdfs/client.conf /opt/20160703.jpg
The returned file IDs:
aaa@qq.com:/opt# ls
20160501.jpg 20160502.jpg 20160503.jpg
aaa@qq.com:/opt# /usr/bin/fdfs_upload_file /etc/fdfs/client.conf /opt/20160501.jpg
group1/M00/00/00/wKgKF1dWMIuACRrOAACUbdn5ZBU223.jpg
aaa@qq.com:/opt# /usr/bin/fdfs_upload_file /etc/fdfs/client.conf /opt/20160502.jpg
group1/M00/00/00/wKgKGFdWMKCAL0IWAAEI-MQMIa8607.jpg
aaa@qq.com:/opt# /usr/bin/fdfs_upload_file /etc/fdfs/client.conf /opt/20160503.jpg
group1/M00/00/00/wKgKF1dWMKqAauxIAACWFy2WNSs175.jpg
aaa@qq.com:/opt#
(Getting file IDs back like this means the uploads succeeded.)
The returned file IDs:
aaa@qq.com:/opt# ls
20160701.jpg 20160702.jpg 20160703.jpg
aaa@qq.com:/opt# /usr/bin/fdfs_upload_file /etc/fdfs/client.conf /opt/20160701.jpg
group1/M00/00/00/wKgKGFdWMPCAS1n0AABZm0qVaPQ119.jpg
aaa@qq.com:/opt# /usr/bin/fdfs_upload_file /etc/fdfs/client.conf /opt/20160702.jpg
group1/M00/00/00/wKgKF1dWMPaAJcnBAAJzRAIgGB4713.jpg
aaa@qq.com:/opt# /usr/bin/fdfs_upload_file /etc/fdfs/client.conf /opt/20160703.jpg
group1/M00/00/00/wKgKGFdWMPqAK4JKAAAvRRpP8l4687.jpg
aaa@qq.com:/opt#
(Getting file IDs back like this means the uploads succeeded.)
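When the upload is driven from a script rather than by hand, it is worth validating the shape of the returned ID before handing it to downstream code. A small sketch; `is_file_id` is a hypothetical helper, and the pattern simply mirrors the group/M00/xx/xx/name.ext IDs shown above:

```shell
#!/bin/sh
# Sketch: check that a string looks like the file IDs returned above,
# i.e. groupN/Mxx/<hex>/<hex>/<base64-ish name>.<ext>.
is_file_id() {   # usage: is_file_id <candidate-id>
    printf '%s\n' "$1" |
        grep -Eq '^group[0-9]+/M[0-9]{2}(/[0-9A-Fa-f]{2}){2}/[A-Za-z0-9_-]+\.[A-Za-z0-9]+$'
}
```

For example: `id=$(/usr/bin/fdfs_upload_file /etc/fdfs/client.conf /opt/20160501.jpg); is_file_id "$id" || echo "upload failed"`.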
VI. Install Nginx on Each Storage Node (192.168.10.23, 192.168.10.24, 192.168.10.25, 192.168.10.26, 192.168.10.27, 192.168.10.28)
1. What fastdfs-nginx-module is for
FastDFS places files on storage servers via the tracker, but storage servers within a group replicate files to one another, so there is a sync delay. Suppose the tracker uploads a file to 192.168.10.23 and returns the file ID to the client as soon as the upload succeeds; FastDFS then replicates the file to the group peer 192.168.10.24. If the client uses that file ID against 192.168.10.24 before replication has finished, the file cannot be served there yet. fastdfs-nginx-module redirects (or proxies) such requests to the source server, avoiding errors caused by replication lag. (The extracted fastdfs-nginx-module is used when building Nginx below.)
Installation notes: https://github.com/happyfish100/fastdfs-nginx-module/blob/master/INSTALL
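A side note on the redirect above: the source server is recoverable from the file ID itself. In FastDFS's naming scheme (an implementation detail, so verify it on your version) the file name is URL-safe base64, and its first four decoded bytes are the source storage server's IPv4 address. A sketch:

```shell
#!/bin/sh
# Sketch: recover the source storage IP encoded in a FastDFS file ID.
# The file name is URL-safe base64; its first 4 decoded bytes are the
# uploader's IPv4 address (FastDFS naming internals -- verify per version).
decode_storage_ip() {   # usage: decode_storage_ip <file-id>
    name=${1##*/}        # strip the group1/M00/00/00/ prefix
    # first 8 base64 chars decode to 6 bytes; bytes 1-4 are the IP
    set -- $(printf '%s' "$name" | cut -c1-8 | tr '_-' '/+' \
             | base64 -d | od -An -tu1)
    echo "$1.$2.$3.$4"
}
```

For example, `decode_storage_ip group1/M00/00/00/wKgKF1dWMIuACRrOAACUbdn5ZBU223.jpg` prints 192.168.10.23, i.e. that upload landed on the first group1 node.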
2. Download fastdfs-nginx-module (v1.16) onto each of the six storage nodes:
git clone https://github.com/happyfish100/fastdfs-nginx-module.git
3. Install the packages needed to build Nginx
apt-get update && apt-get -y install libtool libpcre3 libpcre3-dev zlib1g-dev openssl libssl-dev
4. Build and install Nginx (adding the fastdfs-nginx-module module)
wget http://nginx.org/download/nginx-1.8.1.tar.gz
tar -zxvf nginx-1.8.1.tar.gz
cd nginx-1.8.1
./configure --prefix=/usr/local/nginx --add-module=/root/fastdfs-nginx-module/src
make && make install
7. Copy the configuration file shipped with the fastdfs-nginx-module source into /etc/fdfs and edit it
# cp /root/fastdfs-nginx-module/src/mod_fastdfs.conf /etc/fdfs/
# vi /etc/fdfs/mod_fastdfs.conf
(1) Key mod_fastdfs.conf settings for the first storage group:
connect_timeout=10
base_path=/tmp
tracker_server=192.168.10.21:22122
tracker_server=192.168.10.22:22122
storage_server_port=23000
group_name=group1
url_have_group_name = true
store_path0=/fdfs/storage
group_count = 3
[group1]
group_name=group1
storage_server_port=23000
store_path_count=1
store_path0=/fdfs/storage
[group2]
group_name=group2
storage_server_port=23000
store_path_count=1
store_path0=/fdfs/storage
[group3]
group_name=group3
storage_server_port=23000
store_path_count=1
store_path0=/fdfs/storage
aaa@qq.com:~/nginx-1.8.1# cat > /etc/fdfs/mod_fastdfs.conf
# connect timeout in seconds
# default value is 30s
connect_timeout=30
# network recv and send timeout in seconds
# default value is 30s
network_timeout=30
# the base path to store log files
base_path=/tmp
# if load FastDFS parameters from tracker server
# since V1.12
# default value is false
load_fdfs_parameters_from_tracker=true
# storage sync file max delay seconds
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# since V1.12
# default value is 86400 seconds (one day)
storage_sync_file_max_delay = 86400
# if use storage ID instead of IP address
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# default value is false
# since V1.13
use_storage_id = false
# specify storage ids filename, can use relative or absolute path
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# since V1.13
storage_ids_filename = storage_ids.conf
# FastDFS tracker_server can ocur more than once, and tracker_server format is
# "host:port", host can be hostname or ip address
# valid only when load_fdfs_parameters_from_tracker is true
tracker_server=192.168.10.21:22122
tracker_server=192.168.10.22:22122
# the port of the local storage server
# the default value is 23000
storage_server_port=23000
# the group name of the local storage server
group_name=group3
# if the url / uri including the group name
# set to false when uri like /M00/00/00/xxx
# set to true when uri like ${group_name}/M00/00/00/xxx, such as group1/M00/xxx
# default value is false
url_have_group_name = true
# path(disk or mount point) count, default value is 1
# must same as storage.conf
store_path_count=1
# store_path#, based 0, if store_path0 not exists, it's value is base_path
# the paths must be exist
# must same as storage.conf
store_path0=/fdfs/storage
#store_path1=/home/yuqing/fastdfs1
# standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level=info
# set the log filename, such as /usr/local/apache2/logs/mod_fastdfs.log
# empty for output to stderr (apache and nginx error_log file)
log_filename=
# response mode when the file not exist in the local file system
## proxy: get the content from other storage server, then send to client
## redirect: redirect to the original storage server (HTTP Header is Location)
response_mode=proxy
# the NIC alias prefix, such as eth in Linux, you can see it by ifconfig -a
# multi aliases split by comma. empty value means auto set by OS type
# this paramter used to get all ip address of the local host
# default values is empty
if_alias_prefix=
# use "#include" directive to include HTTP config file
# NOTE: #include is an include directive, do NOT remove the # before include
#include http.conf
# if support flv
# default value is false
# since v1.15
flv_support = true
# flv file extension name
# default value is flv
# since v1.15
flv_extension = flv
# set the group count
# set to none zero to support multi-group
# set to 0 for single group only
# groups settings section as [group1], [group2], ..., [groupN]
# default value is 0
# since v1.14
group_count = 3
# group settings for group #1
# since v1.14
# when support multi-group, uncomment following section
[group1]
group_name=group1
storage_server_port=23000
store_path_count=1
store_path0=/fdfs/storage
#store_path1=/home/yuqing/fastdfs1
# group settings for group #2
# since v1.14
# when support multi-group, uncomment following section as neccessary
[group2]
group_name=group2
storage_server_port=23000
store_path_count=1
store_path0=/fdfs/storage
# group settings for group #2
# since v1.14
# when support multi-group, uncomment following section as neccessary
[group3]
group_name=group3
storage_server_port=23000
store_path_count=1
store_path0=/fdfs/storage
aaa@qq.com:~/nginx-1.8.1#
(2) The mod_fastdfs.conf for the other storage groups differs only in group_name, e.g. for the second group:
group_name=group2
8. Copy some of FastDFS's configuration files into /etc/fdfs
aaa@qq.com:~/nginx-1.8.1# ls /etc/fdfs
client.conf.sample mod_fastdfs.conf storage.conf storage.conf.sample tracker.conf.sample
aaa@qq.com:~/nginx-1.8.1# ls /root/FastDFS/conf/
anti-steal.jpg client.conf http.conf mime.types storage.conf storage_ids.conf tracker.conf
aaa@qq.com:~/nginx-1.8.1# cp /root/FastDFS/conf/http.conf /root/FastDFS/conf/mime.types /etc/fdfs
aaa@qq.com:~/nginx-1.8.1# ls /etc/fdfs
client.conf.sample mime.types storage.conf tracker.conf.sample
http.conf mod_fastdfs.conf storage.conf.sample
aaa@qq.com:~/nginx-1.8.1#
9. Under the /fdfs/storage storage directory, create a symlink pointing at the directory where the data actually lives
# ln -s /fdfs/storage/data/ /fdfs/storage/data/M00
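In other words, the M00 component of a file URL must resolve to the store path's data directory; that is all the symlink does. A sketch wrapping it (with store_path_count=1, as configured here, only M00 is needed; with more store paths you would create one link per path):

```shell
#!/bin/sh
# Sketch: create the M00 link under a store path's data directory so
# URLs like /group1/M00/00/00/x.jpg map onto .../data/00/00/x.jpg.
link_store_path() {   # usage: link_store_path <store-path>, e.g. /fdfs/storage
    mkdir -p "$1/data"
    ln -sfn "$1/data" "$1/data/M00"
}
```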
10. Configure Nginx; a minimal nginx.conf example:
cat > /usr/local/nginx/conf/nginx.conf
user root;
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 8888;
        server_name localhost;

        location ~ /group([0-9])/M00 {
            #alias /fdfs/storage/data;
            ngx_fastdfs_module;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
Notes:
A. The 8888 port must match http.server_port=8888 in /etc/fdfs/storage.conf; http.server_port defaults to 8888, so if you want port 80 instead, change both values accordingly.
B. When a storage serves multiple groups, access paths carry the group name, e.g. /group1/M00/00/00/xxx, and the matching Nginx config is:
location ~ /group([0-9])/M00 {
    ngx_fastdfs_module;
}
C. If downloads keep returning 404, change user nobody on the first line of nginx.conf to user root and restart Nginx.
11. Open Nginx's port 8888 in the firewall
# vi /etc/sysconfig/iptables
Add:
## Nginx Port
-A INPUT -m state --state NEW -m tcp -p tcp --dport 8888 -j ACCEPT
Restart the firewall: # service iptables restart
12. Start Nginx
# /usr/local/nginx/sbin/nginx
ngx_http_fastdfs_set pid=xxx
(To reload Nginx: /usr/local/nginx/sbin/nginx -s reload)
To start Nginx on boot, add /usr/local/nginx/sbin/nginx to /etc/rc.local:
cat > /etc/rc.local
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
## FastDFS Storage
/etc/init.d/fdfs_storaged start
## Start Nginx
/usr/local/nginx/sbin/nginx
exit 0
aaa@qq.com:~# netstat -unltp|grep nginx
tcp 0 0 0.0.0.0:8888 0.0.0.0:* LISTEN 501/nginx
aaa@qq.com:~#
13. Access the uploaded files from a browser
group1/M00/00/00/wKgKF1dWMIuACRrOAACUbdn5ZBU223.jpg
group1/M00/00/00/wKgKGFdWMKCAL0IWAAEI-MQMIa8607.jpg
group1/M00/00/00/wKgKF1dWMKqAauxIAACWFy2WNSs175.jpg
group1/M00/00/00/wKgKGFdWMPCAS1n0AABZm0qVaPQ119.jpg
group1/M00/00/00/wKgKF1dWMPaAJcnBAAJzRAIgGB4713.jpg
group1/M00/00/00/wKgKGFdWMPqAK4JKAAAvRRpP8l4687.jpg
http://192.168.10.23:8888/group1/M00/00/00/wKgKF1dWMIuACRrOAACUbdn5ZBU223.jpg
http://192.168.10.23:8888/group1/M00/00/00/wKgKGFdWMKCAL0IWAAEI-MQMIa8607.jpg
http://192.168.10.23:8888/group1/M00/00/00/wKgKF1dWMKqAauxIAACWFy2WNSs175.jpg
http://192.168.10.23:8888/group1/M00/00/00/wKgKGFdWMPCAS1n0AABZm0qVaPQ119.jpg
http://192.168.10.23:8888/group1/M00/00/00/wKgKF1dWMPaAJcnBAAJzRAIgGB4713.jpg
http://192.168.10.23:8888/group1/M00/00/00/wKgKGFdWMPqAK4JKAAAvRRpP8l4687.jpg
http://192.168.10.23:8888/group2/M00/00/00/wKgKF1dWMIuACRrOAACUbdn5ZBU223.jpg
http://192.168.10.23:8888/group2/M00/00/00/wKgKGFdWMKCAL0IWAAEI-MQMIa8607.jpg
http://192.168.10.23:8888/group2/M00/00/00/wKgKF1dWMKqAauxIAACWFy2WNSs175.jpg
http://192.168.10.23:8888/group2/M00/00/00/wKgKGFdWMPCAS1n0AABZm0qVaPQ119.jpg
http://192.168.10.23:8888/group2/M00/00/00/wKgKF1dWMPaAJcnBAAJzRAIgGB4713.jpg
http://192.168.10.23:8888/group2/M00/00/00/wKgKGFdWMPqAK4JKAAAvRRpP8l4687.jpg
http://192.168.10.23:8888/group3/M00/00/00/wKgKF1dWMIuACRrOAACUbdn5ZBU223.jpg
http://192.168.10.23:8888/group3/M00/00/00/wKgKGFdWMKCAL0IWAAEI-MQMIa8607.jpg
http://192.168.10.23:8888/group3/M00/00/00/wKgKF1dWMKqAauxIAACWFy2WNSs175.jpg
http://192.168.10.23:8888/group3/M00/00/00/wKgKGFdWMPCAS1n0AABZm0qVaPQ119.jpg
http://192.168.10.23:8888/group3/M00/00/00/wKgKF1dWMPaAJcnBAAJzRAIgGB4713.jpg
http://192.168.10.23:8888/group3/M00/00/00/wKgKGFdWMPqAK4JKAAAvRRpP8l4687.jpg
http://192.168.10.24:8888/group1/M00/00/00/wKgKF1dWMIuACRrOAACUbdn5ZBU223.jpg
http://192.168.10.24:8888/group1/M00/00/00/wKgKGFdWMKCAL0IWAAEI-MQMIa8607.jpg
http://192.168.10.24:8888/group1/M00/00/00/wKgKF1dWMKqAauxIAACWFy2WNSs175.jpg
http://192.168.10.24:8888/group1/M00/00/00/wKgKGFdWMPCAS1n0AABZm0qVaPQ119.jpg
http://192.168.10.24:8888/group1/M00/00/00/wKgKF1dWMPaAJcnBAAJzRAIgGB4713.jpg
http://192.168.10.24:8888/group1/M00/00/00/wKgKGFdWMPqAK4JKAAAvRRpP8l4687.jpg
http://192.168.10.24:8888/group2/M00/00/00/wKgKF1dWMIuACRrOAACUbdn5ZBU223.jpg
http://192.168.10.24:8888/group2/M00/00/00/wKgKGFdWMKCAL0IWAAEI-MQMIa8607.jpg
http://192.168.10.24:8888/group2/M00/00/00/wKgKF1dWMKqAauxIAACWFy2WNSs175.jpg
http://192.168.10.24:8888/group2/M00/00/00/wKgKGFdWMPCAS1n0AABZm0qVaPQ119.jpg
http://192.168.10.24:8888/group2/M00/00/00/wKgKF1dWMPaAJcnBAAJzRAIgGB4713.jpg
http://192.168.10.24:8888/group2/M00/00/00/wKgKGFdWMPqAK4JKAAAvRRpP8l4687.jpg
http://192.168.10.24:8888/group3/M00/00/00/wKgKF1dWMIuACRrOAACUbdn5ZBU223.jpg
http://192.168.10.24:8888/group3/M00/00/00/wKgKGFdWMKCAL0IWAAEI-MQMIa8607.jpg
http://192.168.10.24:8888/group3/M00/00/00/wKgKF1dWMKqAauxIAACWFy2WNSs175.jpg
http://192.168.10.24:8888/group3/M00/00/00/wKgKGFdWMPCAS1n0AABZm0qVaPQ119.jpg
http://192.168.10.24:8888/group3/M00/00/00/wKgKF1dWMPaAJcnBAAJzRAIgGB4713.jpg
http://192.168.10.24:8888/group3/M00/00/00/wKgKGFdWMPqAK4JKAAAvRRpP8l4687.jpg
http://192.168.10.25:8888/group1/M00/00/00/wKgKF1dWMIuACRrOAACUbdn5ZBU223.jpg
http://192.168.10.25:8888/group1/M00/00/00/wKgKGFdWMKCAL0IWAAEI-MQMIa8607.jpg
http://192.168.10.25:8888/group1/M00/00/00/wKgKF1dWMKqAauxIAACWFy2WNSs175.jpg
http://192.168.10.25:8888/group1/M00/00/00/wKgKGFdWMPCAS1n0AABZm0qVaPQ119.jpg
http://192.168.10.25:8888/group1/M00/00/00/wKgKF1dWMPaAJcnBAAJzRAIgGB4713.jpg
http://192.168.10.25:8888/group1/M00/00/00/wKgKGFdWMPqAK4JKAAAvRRpP8l4687.jpg
http://192.168.10.25:8888/group2/M00/00/00/wKgKF1dWMIuACRrOAACUbdn5ZBU223.jpg
http://192.168.10.25:8888/group2/M00/00/00/wKgKGFdWMKCAL0IWAAEI-MQMIa8607.jpg
http://192.168.10.25:8888/group2/M00/00/00/wKgKF1dWMKqAauxIAACWFy2WNSs175.jpg
http://192.168.10.25:8888/group2/M00/00/00/wKgKGFdWMPCAS1n0AABZm0qVaPQ119.jpg
http://192.168.10.25:8888/group2/M00/00/00/wKgKF1dWMPaAJcnBAAJzRAIgGB4713.jpg
http://192.168.10.25:8888/group2/M00/00/00/wKgKGFdWMPqAK4JKAAAvRRpP8l4687.jpg
http://192.168.10.25:8888/group3/M00/00/00/wKgKF1dWMIuACRrOAACUbdn5ZBU223.jpg
http://192.168.10.25:8888/group3/M00/00/00/wKgKGFdWMKCAL0IWAAEI-MQMIa8607.jpg
http://192.168.10.25:8888/group3/M00/00/00/wKgKF1dWMKqAauxIAACWFy2WNSs175.jpg
http://192.168.10.25:8888/group3/M00/00/00/wKgKGFdWMPCAS1n0AABZm0qVaPQ119.jpg
http://192.168.10.25:8888/group3/M00/00/00/wKgKF1dWMPaAJcnBAAJzRAIgGB4713.jpg
http://192.168.10.25:8888/group3/M00/00/00/wKgKGFdWMPqAK4JKAAAvRRpP8l4687.jpg
http://192.168.10.26:8888/group1/M00/00/00/wKgKF1dWMIuACRrOAACUbdn5ZBU223.jpg
http://192.168.10.26:8888/group1/M00/00/00/wKgKGFdWMKCAL0IWAAEI-MQMIa8607.jpg
http://192.168.10.26:8888/group1/M00/00/00/wKgKF1dWMKqAauxIAACWFy2WNSs175.jpg
http://192.168.10.26:8888/group1/M00/00/00/wKgKGFdWMPCAS1n0AABZm0qVaPQ119.jpg
http://192.168.10.26:8888/group1/M00/00/00/wKgKF1dWMPaAJcnBAAJzRAIgGB4713.jpg
http://192.168.10.26:8888/group1/M00/00/00/wKgKGFdWMPqAK4JKAAAvRRpP8l4687.jpg
http://192.168.10.26:8888/group2/M00/00/00/wKgKF1dWMIuACRrOAACUbdn5ZBU223.jpg
http://192.168.10.26:8888/group2/M00/00/00/wKgKGFdWMKCAL0IWAAEI-MQMIa8607.jpg
http://192.168.10.26:8888/group2/M00/00/00/wKgKF1dWMKqAauxIAACWFy2WNSs175.jpg
http://192.168.10.26:8888/group2/M00/00/00/wKgKGFdWMPCAS1n0AABZm0qVaPQ119.jpg
http://192.168.10.26:8888/group2/M00/00/00/wKgKF1dWMPaAJcnBAAJzRAIgGB4713.jpg
http://192.168.10.26:8888/group2/M00/00/00/wKgKGFdWMPqAK4JKAAAvRRpP8l4687.jpg
http://192.168.10.26:8888/group3/M00/00/00/wKgKF1dWMIuACRrOAACUbdn5ZBU223.jpg
http://192.168.10.26:8888/group3/M00/00/00/wKgKGFdWMKCAL0IWAAEI-MQMIa8607.jpg
http://192.168.10.26:8888/group3/M00/00/00/wKgKF1dWMKqAauxIAACWFy2WNSs175.jpg
http://192.168.10.26:8888/group3/M00/00/00/wKgKGFdWMPCAS1n0AABZm0qVaPQ119.jpg
http://192.168.10.26:8888/group3/M00/00/00/wKgKF1dWMPaAJcnBAAJzRAIgGB4713.jpg
http://192.168.10.26:8888/group3/M00/00/00/wKgKGFdWMPqAK4JKAAAvRRpP8l4687.jpg
http://192.168.10.27:8888/group1/M00/00/00/wKgKF1dWMIuACRrOAACUbdn5ZBU223.jpg
http://192.168.10.27:8888/group1/M00/00/00/wKgKGFdWMKCAL0IWAAEI-MQMIa8607.jpg
http://192.168.10.27:8888/group1/M00/00/00/wKgKF1dWMKqAauxIAACWFy2WNSs175.jpg
http://192.168.10.27:8888/group1/M00/00/00/wKgKGFdWMPCAS1n0AABZm0qVaPQ119.jpg
http://192.168.10.27:8888/group1/M00/00/00/wKgKF1dWMPaAJcnBAAJzRAIgGB4713.jpg
http://192.168.10.27:8888/group1/M00/00/00/wKgKGFdWMPqAK4JKAAAvRRpP8l4687.jpg
http://192.168.10.27:8888/group2/M00/00/00/wKgKF1dWMIuACRrOAACUbdn5ZBU223.jpg
http://192.168.10.27:8888/group2/M00/00/00/wKgKGFdWMKCAL0IWAAEI-MQMIa8607.jpg
http://192.168.10.27:8888/group2/M00/00/00/wKgKF1dWMKqAauxIAACWFy2WNSs175.jpg
http://192.168.10.27:8888/group2/M00/00/00/wKgKGFdWMPCAS1n0AABZm0qVaPQ119.jpg
http://192.168.10.27:8888/group2/M00/00/00/wKgKF1dWMPaAJcnBAAJzRAIgGB4713.jpg
http://192.168.10.27:8888/group2/M00/00/00/wKgKGFdWMPqAK4JKAAAvRRpP8l4687.jpg
http://192.168.10.27:8888/group3/M00/00/00/wKgKF1dWMIuACRrOAACUbdn5ZBU223.jpg
http://192.168.10.27:8888/group3/M00/00/00/wKgKGFdWMKCAL0IWAAEI-MQMIa8607.jpg
http://192.168.10.27:8888/group3/M00/00/00/wKgKF1dWMKqAauxIAACWFy2WNSs175.jpg
http://192.168.10.27:8888/group3/M00/00/00/wKgKGFdWMPCAS1n0AABZm0qVaPQ119.jpg
http://192.168.10.27:8888/group3/M00/00/00/wKgKF1dWMPaAJcnBAAJzRAIgGB4713.jpg
http://192.168.10.27:8888/group3/M00/00/00/wKgKGFdWMPqAK4JKAAAvRRpP8l4687.jpg
http://192.168.10.28:8888/group1/M00/00/00/wKgKF1dWMIuACRrOAACUbdn5ZBU223.jpg
http://192.168.10.28:8888/group1/M00/00/00/wKgKGFdWMKCAL0IWAAEI-MQMIa8607.jpg
http://192.168.10.28:8888/group1/M00/00/00/wKgKF1dWMKqAauxIAACWFy2WNSs175.jpg
http://192.168.10.28:8888/group1/M00/00/00/wKgKGFdWMPCAS1n0AABZm0qVaPQ119.jpg
http://192.168.10.28:8888/group1/M00/00/00/wKgKF1dWMPaAJcnBAAJzRAIgGB4713.jpg
http://192.168.10.28:8888/group1/M00/00/00/wKgKGFdWMPqAK4JKAAAvRRpP8l4687.jpg
http://192.168.10.28:8888/group2/M00/00/00/wKgKF1dWMIuACRrOAACUbdn5ZBU223.jpg
http://192.168.10.28:8888/group2/M00/00/00/wKgKGFdWMKCAL0IWAAEI-MQMIa8607.jpg
http://192.168.10.28:8888/group2/M00/00/00/wKgKF1dWMKqAauxIAACWFy2WNSs175.jpg
http://192.168.10.28:8888/group2/M00/00/00/wKgKGFdWMPCAS1n0AABZm0qVaPQ119.jpg
http://192.168.10.28:8888/group2/M00/00/00/wKgKF1dWMPaAJcnBAAJzRAIgGB4713.jpg
http://192.168.10.28:8888/group2/M00/00/00/wKgKGFdWMPqAK4JKAAAvRRpP8l4687.jpg
http://192.168.10.28:8888/group3/M00/00/00/wKgKF1dWMIuACRrOAACUbdn5ZBU223.jpg
http://192.168.10.28:8888/group3/M00/00/00/wKgKGFdWMKCAL0IWAAEI-MQMIa8607.jpg
http://192.168.10.28:8888/group3/M00/00/00/wKgKF1dWMKqAauxIAACWFy2WNSs175.jpg
http://192.168.10.28:8888/group3/M00/00/00/wKgKGFdWMPCAS1n0AABZm0qVaPQ119.jpg
http://192.168.10.28:8888/group3/M00/00/00/wKgKF1dWMPaAJcnBAAJzRAIgGB4713.jpg
http://192.168.10.28:8888/group3/M00/00/00/wKgKGFdWMPqAK4JKAAAvRRpP8l4687.jpg
VII. Install Nginx on the Tracker Nodes (192.168.10.21, 192.168.10.22)
1. Nginx on the trackers provides reverse proxying, load balancing and caching for HTTP access.
2. Install the packages needed to build Nginx
apt-get update && apt-get -y install libtool libpcre3 libpcre3-dev zlib1g-dev openssl libssl-dev
3. Download ngx_cache_purge-2.3.tar.gz to /root and extract it
wget http://labs.frickle.com/files/ngx_cache_purge-2.3.tar.gz
tar -zxvf ngx_cache_purge-2.3.tar.gz
5. Build and install Nginx (adding the ngx_cache_purge module)
wget http://nginx.org/download/nginx-1.8.1.tar.gz
tar -zxvf nginx-1.8.1.tar.gz
cd nginx-1.8.1
./configure --prefix=/usr/local/nginx --add-module=/root/ngx_cache_purge-2.3
make && make install
6. Configure Nginx for load balancing and caching
# cat > /usr/local/nginx/conf/nginx.conf
user root;
worker_processes 1;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;
events {
worker_connections 1024;
use epoll;
}
http {
include mime.types;
default_type application/octet-stream;
#log_format main '$remote_addr - $remote_user [$time_local] "$request" '
# '$status $body_bytes_sent "$http_referer" '
# '"$http_user_agent" "$http_x_forwarded_for"';
#access_log logs/access.log main;
sendfile on;
tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
#gzip on;
#cache-related settings
server_names_hash_bucket_size 128;
client_header_buffer_size 32k;
large_client_header_buffers 4 32k;
client_max_body_size 300m;
proxy_redirect off;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 16k;
proxy_buffers 4 64k;
proxy_busy_buffers_size 128k;
proxy_temp_file_write_size 128k;
#cache storage path, directory levels, shared memory zone size, maximum disk usage, and inactivity expiry
proxy_cache_path /fdfs/cache/nginx/proxy_cache levels=1:2
keys_zone=http-cache:200m max_size=1g inactive=30d;
proxy_temp_path /fdfs/cache/nginx/proxy_cache/tmp;
#servers for group1
upstream fdfs_group1 {
server 192.168.10.23:8888 weight=1 max_fails=2 fail_timeout=30s;
server 192.168.10.24:8888 weight=1 max_fails=2 fail_timeout=30s;
}
#servers for group2
upstream fdfs_group2 {
server 192.168.10.25:8888 weight=1 max_fails=2 fail_timeout=30s;
server 192.168.10.26:8888 weight=1 max_fails=2 fail_timeout=30s;
}
#servers for group3
upstream fdfs_group3 {
server 192.168.10.27:8888 weight=1 max_fails=2 fail_timeout=30s;
server 192.168.10.28:8888 weight=1 max_fails=2 fail_timeout=30s;
}
server {
listen 8000;
server_name localhost;
#charset koi8-r;
#access_log logs/host.access.log main;
#load-balancing locations for each group
location /group1/M00 {
proxy_next_upstream http_502 http_504 error timeout invalid_header;
proxy_cache http-cache;
proxy_cache_valid 200 304 12h;
proxy_cache_key $uri$is_args$args;
proxy_pass http://fdfs_group1;
expires 30d;
}
location /group2/M00 {
proxy_next_upstream http_502 http_504 error timeout invalid_header;
proxy_cache http-cache;
proxy_cache_valid 200 304 12h;
proxy_cache_key $uri$is_args$args;
proxy_pass http://fdfs_group2;
expires 30d;
}
location /group3/M00 {
proxy_next_upstream http_502 http_504 error timeout invalid_header;
proxy_cache http-cache;
proxy_cache_valid 200 304 12h;
proxy_cache_key $uri$is_args$args;
proxy_pass http://fdfs_group3;
expires 30d;
}
#access control for cache purging
location ~ /purge(/.*) {
allow 127.0.0.1;
allow 192.168.10.0/24;
deny all;
proxy_cache_purge http-cache $1$is_args$args;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
}
}
Create the cache directories referenced by the nginx configuration above:
# mkdir -p /fdfs/cache/nginx/proxy_cache
# mkdir -p /fdfs/cache/nginx/proxy_cache/tmp
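With proxy_cache_key set to $uri$is_args$args and levels=1:2, nginx derives each cache file's on-disk location under the directories created above from the MD5 of the cache key: the last hex character names the first-level directory and the two characters before it the second level. A sketch of that mapping (cache_file_path is a hypothetical helper for illustration, not part of the deployment):

```shell
#!/bin/bash
# Compute where nginx would store a cached response, given the
# proxy_cache_path base directory and the proxy_cache_key value.
cache_file_path() {
    local base=$1 key=$2
    local md5
    md5=$(printf '%s' "$key" | md5sum | cut -d' ' -f1)
    local l1=${md5: -1}      # levels=1:2 -> 1 hex char from the end
    local l2=${md5: -3:2}    # then the 2 chars before it
    printf '%s/%s/%s/%s\n' "$base" "$l1" "$l2" "$md5"
}

cache_file_path /fdfs/cache/nginx/proxy_cache \
    /group1/M00/00/00/wKgKF1dWMIuACRrOAACUbdn5ZBU223.jpg
```

The /purge location in the configuration above removes exactly such an entry, e.g. requesting http://192.168.10.21:8000/purge/group1/M00/00/00/wKgKF1dWMIuACRrOAACUbdn5ZBU223.jpg from an allowed address.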
7. Open the corresponding port in the system firewall
Note: /etc/sysconfig/iptables and "service iptables restart" are CentOS conventions; the hosts in this deployment are Debian/Ubuntu (packages installed via apt-get), so add the rule directly, for example:
# iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 8000 -j ACCEPT
To keep the rule across reboots, persist it with a tool such as iptables-persistent.
8. Start Nginx
# /usr/local/nginx/sbin/nginx
Reload the Nginx configuration (the -s reload signal re-reads the config without a full restart)
# /usr/local/nginx/sbin/nginx -s reload
Set Nginx to start at boot by adding /usr/local/nginx/sbin/nginx to /etc/rc.local:
cat > /etc/rc.local
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
## FastDFS Tracker
/etc/init.d/fdfs_trackerd start
## start Nginx
/usr/local/nginx/sbin/nginx
exit 0
aaa@qq.com:~# netstat -unltp|grep nginx
tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN 499/nginx
aaa@qq.com:~#
group1/M00/00/00/wKgKF1dWMIuACRrOAACUbdn5ZBU223.jpg
group1/M00/00/00/wKgKGFdWMKCAL0IWAAEI-MQMIa8607.jpg
group1/M00/00/00/wKgKF1dWMKqAauxIAACWFy2WNSs175.jpg
group1/M00/00/00/wKgKGFdWMPCAS1n0AABZm0qVaPQ119.jpg
group1/M00/00/00/wKgKF1dWMPaAJcnBAAJzRAIgGB4713.jpg
group1/M00/00/00/wKgKGFdWMPqAK4JKAAAvRRpP8l4687.jpg
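Each FastDFS file ID above has a fixed shape: the group name (which selects the upstream in the tracker Nginx config), the store-path index (M00 is the first store path on the storage node), two levels of data directories, and the generated filename. A small sketch splitting an ID into those routing components (split_file_id is a hypothetical helper, not part of FastDFS):

```shell
#!/bin/bash
# Split a FastDFS file ID into the parts used for routing and storage lookup.
split_file_id() {
    local id=$1                  # e.g. group1/M00/00/00/xxx.jpg
    local group=${id%%/*}        # group name -> picks the nginx upstream
    local rest=${id#*/}
    local store_path=${rest%%/*} # M00 = first store path on the storage node
    local file_path=${rest#*/}   # two-level directory plus filename
    printf '%s %s %s\n' "$group" "$store_path" "$file_path"
}

split_file_id group1/M00/00/00/wKgKF1dWMIuACRrOAACUbdn5ZBU223.jpg
# -> group1 M00 00/00/wKgKF1dWMIuACRrOAACUbdn5ZBU223.jpg
```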
The files can now be accessed through the Nginx running on each tracker.
(1) Via the Nginx on Tracker1
http://192.168.10.21:8000/group1/M00/00/00/wKgKF1dWMIuACRrOAACUbdn5ZBU223.jpg
http://192.168.10.21:8000/group2/M00/00/00/wKgKF1dWMIuACRrOAACUbdn5ZBU223.jpg
http://192.168.10.21:8000/group1/M00/00/00/wKgKGFdWMKCAL0IWAAEI-MQMIa8607.jpg
http://192.168.10.21:8000/group2/M00/00/00/wKgKGFdWMKCAL0IWAAEI-MQMIa8607.jpg
http://192.168.10.21:8000/group1/M00/00/00/wKgKF1dWMKqAauxIAACWFy2WNSs175.jpg
http://192.168.10.21:8000/group2/M00/00/00/wKgKF1dWMKqAauxIAACWFy2WNSs175.jpg
http://192.168.10.21:8000/group1/M00/00/00/wKgKGFdWMPCAS1n0AABZm0qVaPQ119.jpg
http://192.168.10.21:8000/group2/M00/00/00/wKgKGFdWMPCAS1n0AABZm0qVaPQ119.jpg
http://192.168.10.21:8000/group1/M00/00/00/wKgKF1dWMPaAJcnBAAJzRAIgGB4713.jpg
http://192.168.10.21:8000/group2/M00/00/00/wKgKF1dWMPaAJcnBAAJzRAIgGB4713.jpg
http://192.168.10.21:8000/group1/M00/00/00/wKgKGFdWMPqAK4JKAAAvRRpP8l4687.jpg
http://192.168.10.21:8000/group2/M00/00/00/wKgKGFdWMPqAK4JKAAAvRRpP8l4687.jpg
(2) Via the Nginx on Tracker2
http://192.168.10.22:8000/group1/M00/00/00/wKgKF1dWMIuACRrOAACUbdn5ZBU223.jpg
http://192.168.10.22:8000/group2/M00/00/00/wKgKF1dWMIuACRrOAACUbdn5ZBU223.jpg
http://192.168.10.22:8000/group1/M00/00/00/wKgKGFdWMKCAL0IWAAEI-MQMIa8607.jpg
http://192.168.10.22:8000/group2/M00/00/00/wKgKGFdWMKCAL0IWAAEI-MQMIa8607.jpg
http://192.168.10.22:8000/group1/M00/00/00/wKgKF1dWMKqAauxIAACWFy2WNSs175.jpg
http://192.168.10.22:8000/group2/M00/00/00/wKgKF1dWMKqAauxIAACWFy2WNSs175.jpg
http://192.168.10.22:8000/group1/M00/00/00/wKgKGFdWMPCAS1n0AABZm0qVaPQ119.jpg
http://192.168.10.22:8000/group2/M00/00/00/wKgKGFdWMPCAS1n0AABZm0qVaPQ119.jpg
http://192.168.10.22:8000/group1/M00/00/00/wKgKF1dWMPaAJcnBAAJzRAIgGB4713.jpg
http://192.168.10.22:8000/group2/M00/00/00/wKgKF1dWMPaAJcnBAAJzRAIgGB4713.jpg
http://192.168.10.22:8000/group1/M00/00/00/wKgKGFdWMPqAK4JKAAAvRRpP8l4687.jpg
http://192.168.10.22:8000/group2/M00/00/00/wKgKGFdWMPqAK4JKAAAvRRpP8l4687.jpg
As the access tests above show, the Nginx on each tracker independently load-balances across the backend storage groups. But for the FastDFS cluster to expose a single, unified file-access address, the two tracker Nginx instances themselves still need to be put behind an HA cluster.
8. Use a Keepalived + Nginx high-availability load-balancing cluster to balance across the Nginx on the two tracker nodes
1. Reference: "Dubbo **** -- High-Availability Architecture -- Lesson 08 -- Keepalived+Nginx for High-Availability Load Balancing"
2. In the Keepalived+Nginx high-availability load-balancing cluster, configure the reverse proxy and load balancing for the tracker-node Nginx instances
(apply the same configuration to the Nginx on both 192.168.10.31 and 192.168.10.32)
cat > /etc/nginx/nginx.conf
user root;
worker_processes 1;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
#log_format main '$remote_addr - $remote_user [$time_local] "$request" '
# '$status $body_bytes_sent "$http_referer" '
# '"$http_user_agent" "$http_x_forwarded_for"';
#access_log logs/access.log main;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
#gzip on;
## FastDFS Tracker Proxy
upstream fastdfs_tracker {
server 192.168.10.21:8000 weight=1 max_fails=2 fail_timeout=30s;
server 192.168.10.22:8000 weight=1 max_fails=2 fail_timeout=30s;
}
server {
listen 88;
server_name localhost;
#charset koi8-r;
#access_log logs/host.access.log main;
location / {
root html;
index index.html index.htm;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
## FastDFS Proxy
location /dfs {
root html;
index index.html index.htm;
proxy_pass http://fastdfs_tracker/;
proxy_set_header Host $http_host;
proxy_set_header Cookie $http_cookie;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
client_max_body_size 300m;
}
}
}
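Because location /dfs is combined with proxy_pass http://fastdfs_tracker/ (note the trailing slash), nginx substitutes the matched /dfs prefix before forwarding, so a request for /dfs/group1/... reaches the tracker Nginx on port 8000 as /group1/... (strictly, nginx may emit a doubled leading slash in this prefix-without-trailing-slash form, which the upstream file lookup tolerates). A sketch of the effective rewrite (upstream_uri is a hypothetical helper mirroring the behavior, not nginx itself):

```shell
#!/bin/bash
# Mirror the effective URI rewrite of:
#   location /dfs { proxy_pass http://fastdfs_tracker/; }
upstream_uri() {
    # strip the matched "/dfs" location prefix; the remainder is what
    # the tracker Nginx receives
    printf '%s\n' "${1#/dfs}"
}

upstream_uri /dfs/group1/M00/00/00/wKgKF1dWMIuACRrOAACUbdn5ZBU223.jpg
# -> /group1/M00/00/00/wKgKF1dWMIuACRrOAACUbdn5ZBU223.jpg
```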
aaa@qq.com:~# cat > /etc/hosts
127.0.0.1 localhost
127.0.1.1 contoso31.com contoso31
192.168.10.31 contoso31.com contoso31
cat > /etc/keepalived/keepalived.conf
global_defs {
notification_email {
aaa@qq.com
}
notification_email_from aaa@qq.com
smtp_server smtp.qq.com
smtp_connect_timeout 30
router_id contoso31.com
}
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 51
priority 101
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.10.100
}
}
aaa@qq.com:~# cat > /etc/hosts
127.0.0.1 localhost
127.0.1.1 contoso32.com contoso32
192.168.10.32 contoso32.com contoso32
cat > /etc/keepalived/keepalived.conf
global_defs {
notification_email {
aaa@qq.com
}
notification_email_from aaa@qq.com
smtp_server smtp.qq.com
smtp_connect_timeout 30
router_id contoso32.com
}
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 51
priority 99
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.10.100
}
}
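As configured, Keepalived tracks only node liveness; if Nginx itself dies on the MASTER, the VIP does not move. A common optional addition (not part of the original setup; the script path and values are assumptions) is a vrrp_script health check:

```
# /etc/keepalived/keepalived.conf -- optional addition (sketch)
vrrp_script chk_nginx {
    script "/etc/keepalived/check_nginx.sh"   # hypothetical script, e.g.: pidof nginx || exit 1
    interval 2      # run every 2 seconds
    weight -5       # lower priority while the check fails
    fall 2          # require 2 consecutive failures
}
vrrp_instance VI_1 {
    # ...existing settings from above...
    track_script {
        chk_nginx
    }
}
```

With weight -5, two consecutive failed checks drop the MASTER's effective priority from 101 to 96, below the BACKUP's 99, so the VIP fails over.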
aaa@qq.com:~# systemctl enable keepalived
Synchronizing state for keepalived.service with sysvinit using update-rc.d...
Executing /usr/sbin/update-rc.d keepalived defaults
Executing /usr/sbin/update-rc.d keepalived enable
aaa@qq.com:~# systemctl restart keepalived
aaa@qq.com:~# systemctl status keepalived
● keepalived.service - LSB: Starts keepalived
Loaded: loaded (/etc/init.d/keepalived)
Active: active (running) since Wed 2016-06-08 00:59:20 CST; 8s ago
Process: 1852 ExecStop=/etc/init.d/keepalived stop (code=exited, status=0/SUCCESS)
Process: 1857 ExecStart=/etc/init.d/keepalived start (code=exited, status=0/SUCCESS)
CGroup: /system.slice/keepalived.service
├─1860 /usr/sbin/keepalived
├─1861 /usr/sbin/keepalived
└─1862 /usr/sbin/keepalived
Jun 08 00:59:20 contoso31 Keepalived_vrrp[1862]: Opening file '/etc/keepalived/keepalived.conf'.
Jun 08 00:59:20 contoso31 Keepalived_vrrp[1862]: Configuration is using : 62874 Bytes
Jun 08 00:59:20 contoso31 Keepalived_vrrp[1862]: Using LinkWatch kernel netlink reflector...
Jun 08 00:59:20 contoso31 Keepalived_healthcheckers[1861]: Registering Kernel netlink reflector
Jun 08 00:59:20 contoso31 Keepalived_healthcheckers[1861]: Registering Kernel netlink command channel
Jun 08 00:59:20 contoso31 Keepalived_healthcheckers[1861]: Opening file '/etc/keepalived/keepalived.conf'.
Jun 08 00:59:20 contoso31 Keepalived_healthcheckers[1861]: Configuration is using : 7417 Bytes
Jun 08 00:59:20 contoso31 Keepalived_healthcheckers[1861]: Using LinkWatch kernel netlink reflector...
Jun 08 00:59:21 contoso31 Keepalived_vrrp[1862]: VRRP_Instance(VI_1) Transition to MASTER STATE
Jun 08 00:59:22 contoso31 Keepalived_vrrp[1862]: VRRP_Instance(VI_1) Entering MASTER STATE
aaa@qq.com:~# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:5f:09:8d brd ff:ff:ff:ff:ff:ff
inet 192.168.10.31/24 brd 192.168.10.255 scope global eth0
valid_lft forever preferred_lft forever
inet 192.168.10.100/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe5f:98d/64 scope link
valid_lft forever preferred_lft forever
aaa@qq.com:~#
aaa@qq.com:~# systemctl enable keepalived
Synchronizing state for keepalived.service with sysvinit using update-rc.d...
Executing /usr/sbin/update-rc.d keepalived defaults
Executing /usr/sbin/update-rc.d keepalived enable
aaa@qq.com:~# systemctl restart keepalived
aaa@qq.com:~# systemctl status keepalived
● keepalived.service - LSB: Starts keepalived
Loaded: loaded (/etc/init.d/keepalived)
Active: active (running) since Wed 2016-06-08 00:59:47 CST; 9s ago
Process: 1853 ExecStop=/etc/init.d/keepalived stop (code=exited, status=0/SUCCESS)
Process: 1858 ExecStart=/etc/init.d/keepalived start (code=exited, status=0/SUCCESS)
CGroup: /system.slice/keepalived.service
├─1861 /usr/sbin/keepalived
├─1862 /usr/sbin/keepalived
└─1863 /usr/sbin/keepalived
Jun 08 00:59:47 contoso32 Keepalived_vrrp[1863]: Opening file '/etc/keepalived/keepalived.conf'.
Jun 08 00:59:47 contoso32 Keepalived_vrrp[1863]: Configuration is using : 62872 Bytes
Jun 08 00:59:47 contoso32 Keepalived_vrrp[1863]: Using LinkWatch kernel netlink reflector...
Jun 08 00:59:47 contoso32 Keepalived_vrrp[1863]: VRRP_Instance(VI_1) Entering BACKUP STATE
Jun 08 00:59:47 contoso32 keepalived[1858]: Starting keepalived: keepalived.
Jun 08 00:59:47 contoso32 Keepalived_healthcheckers[1862]: Registering Kernel netlink reflector
Jun 08 00:59:47 contoso32 Keepalived_healthcheckers[1862]: Registering Kernel netlink command channel
Jun 08 00:59:47 contoso32 Keepalived_healthcheckers[1862]: Opening file '/etc/keepalived/keepalived.conf'.
Jun 08 00:59:47 contoso32 Keepalived_healthcheckers[1862]: Configuration is using : 7415 Bytes
Jun 08 00:59:47 contoso32 Keepalived_healthcheckers[1862]: Using LinkWatch kernel netlink reflector...
aaa@qq.com:~# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:df:e5:e2 brd ff:ff:ff:ff:ff:ff
inet 192.168.10.32/24 brd 192.168.10.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fedf:e5e2/64 scope link
valid_lft forever preferred_lft forever
aaa@qq.com:~#
aaa@qq.com:~# systemctl enable nginx
Synchronizing state for nginx.service with sysvinit using update-rc.d...
Executing /usr/sbin/update-rc.d nginx defaults
Executing /usr/sbin/update-rc.d nginx enable
aaa@qq.com:~# systemctl restart nginx
aaa@qq.com:~# systemctl status nginx
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled)
Active: active (running) since Wed 2016-06-08 01:02:20 CST; 7s ago
Process: 1916 ExecStop=/sbin/start-stop-daemon --quiet --stop --retry QUIT/5 --pidfile /run/nginx.pid (code=exited, status=0/SUCCESS)
Process: 1921 ExecStart=/usr/sbin/nginx -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
Process: 1919 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
Main PID: 1924 (nginx)
CGroup: /system.slice/nginx.service
├─1924 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
└─1925 nginx: worker process
aaa@qq.com:~#
aaa@qq.com:~# systemctl enable nginx
Synchronizing state for nginx.service with sysvinit using update-rc.d...
Executing /usr/sbin/update-rc.d nginx defaults
Executing /usr/sbin/update-rc.d nginx enable
aaa@qq.com:~# systemctl restart nginx
aaa@qq.com:~# systemctl status nginx
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled)
Active: active (running) since Wed 2016-06-08 01:03:24 CST; 12s ago
Process: 1914 ExecStop=/sbin/start-stop-daemon --quiet --stop --retry QUIT/5 --pidfile /run/nginx.pid (code=exited, status=0/SUCCESS)
Process: 1918 ExecStart=/usr/sbin/nginx -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
Process: 1917 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
Main PID: 1922 (nginx)
CGroup: /system.slice/nginx.service
├─1922 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
└─1923 nginx: worker process
aaa@qq.com:~#
4. Access the files in the FastDFS cluster through the VIP (192.168.10.100) of the Keepalived+Nginx high-availability load-balancing cluster
http://192.168.10.100:88/dfs/group1/M00/00/00/wKgKF1dWMIuACRrOAACUbdn5ZBU223.jpg
http://192.168.10.100:88/dfs/group1/M00/00/00/wKgKGFdWMKCAL0IWAAEI-MQMIa8607.jpg
http://192.168.10.100:88/dfs/group1/M00/00/00/wKgKF1dWMKqAauxIAACWFy2WNSs175.jpg
http://192.168.10.100:88/dfs/group1/M00/00/00/wKgKGFdWMPCAS1n0AABZm0qVaPQ119.jpg
http://192.168.10.100:88/dfs/group1/M00/00/00/wKgKF1dWMPaAJcnBAAJzRAIgGB4713.jpg
http://192.168.10.100:88/dfs/group1/M00/00/00/wKgKGFdWMPqAK4JKAAAvRRpP8l4687.jpg
http://192.168.10.100:88/dfs/group2/M00/00/00/wKgKF1dWMIuACRrOAACUbdn5ZBU223.jpg
http://192.168.10.100:88/dfs/group2/M00/00/00/wKgKGFdWMKCAL0IWAAEI-MQMIa8607.jpg
http://192.168.10.100:88/dfs/group2/M00/00/00/wKgKF1dWMKqAauxIAACWFy2WNSs175.jpg
http://192.168.10.100:88/dfs/group2/M00/00/00/wKgKGFdWMPCAS1n0AABZm0qVaPQ119.jpg
http://192.168.10.100:88/dfs/group2/M00/00/00/wKgKF1dWMPaAJcnBAAJzRAIgGB4713.jpg
http://192.168.10.100:88/dfs/group2/M00/00/00/wKgKGFdWMPqAK4JKAAAvRRpP8l4687.jpg
http://192.168.10.100:88/dfs/group3/M00/00/00/wKgKF1dWMIuACRrOAACUbdn5ZBU223.jpg
http://192.168.10.100:88/dfs/group3/M00/00/00/wKgKGFdWMKCAL0IWAAEI-MQMIa8607.jpg
http://192.168.10.100:88/dfs/group3/M00/00/00/wKgKF1dWMKqAauxIAACWFy2WNSs175.jpg
http://192.168.10.100:88/dfs/group3/M00/00/00/wKgKGFdWMPCAS1n0AABZm0qVaPQ119.jpg
http://192.168.10.100:88/dfs/group3/M00/00/00/wKgKF1dWMPaAJcnBAAJzRAIgGB4713.jpg
http://192.168.10.100:88/dfs/group3/M00/00/00/wKgKGFdWMPqAK4JKAAAvRRpP8l4687.jpg
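The full chain (VIP, tracker Nginx, storage Nginx) can be verified by generating the URL list above and probing each entry. A sketch (the file IDs are the six uploaded earlier; the curl probe is left commented out so the generator also runs offline):

```shell
#!/bin/bash
vip=192.168.10.100
port=88
files="wKgKF1dWMIuACRrOAACUbdn5ZBU223.jpg
wKgKGFdWMKCAL0IWAAEI-MQMIa8607.jpg
wKgKF1dWMKqAauxIAACWFy2WNSs175.jpg
wKgKGFdWMPCAS1n0AABZm0qVaPQ119.jpg
wKgKF1dWMPaAJcnBAAJzRAIgGB4713.jpg
wKgKGFdWMPqAK4JKAAAvRRpP8l4687.jpg"

# Emit one VIP URL per group/file combination (3 groups x 6 files = 18 URLs).
build_urls() {
    for g in group1 group2 group3; do
        for f in $files; do
            echo "http://$vip:$port/dfs/$g/M00/00/00/$f"
        done
    done
}

build_urls
# On a live cluster, check that each URL returns HTTP 200:
# build_urls | while read -r u; do
#     code=$(curl -s -o /dev/null -w '%{http_code}' "$u")
#     echo "$code $u"
# done
```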