After nginx receives a request, it forwards it to the backend web server cluster.
Here we want to log each forwarded request so we can analyze how many requests each backend web server handles.
http {
    log_format main ' $remote_user [$time_local] $http_x_forwarded_for $remote_addr $request '
                    '$http_x_forwarded_for '
                    '$upstream_addr '
                    'ups_resp_time: $upstream_response_time '
                    'request_time: $request_time';

    access_log logs/access.log main;

    server {
    }
    ...
}
The resulting log entries look like this:
- [31/May/2013:00:01:03 -0700] - xxx.ip.addr.xxx GET /portal/index.html HTTP/1.1 - 192.168.100.15:8188 ups_resp_time: 0.010 request_time: 0.011
- [31/May/2013:00:01:03 -0700] - xxx.ip.addr.xxx GET /portal/index.html HTTP/1.1 - 192.168.100.16:8188 ups_resp_time: 0.006 request_time: 0.006
- [31/May/2013:00:01:03 -0700] - xxx.ip.addr.xxx GET /portal/index.html HTTP/1.1 - 192.168.100.15:8188 ups_resp_time: 0.013 request_time: 0.013
- [31/May/2013:00:01:03 -0700] - xxx.ip.addr.xxx GET /portal/index.html HTTP/1.1 - 192.168.100.17:8188 ups_resp_time: 0.003 request_time: 0.003
- [31/May/2013:00:01:03 -0700] - xxx.ip.addr.xxx GET /portal/index.html HTTP/1.1 - 192.168.100.18:8188 ups_resp_time: 0.004 request_time: 0.004
- [31/May/2013:00:01:03 -0700] - xxx.ip.addr.xxx GET /portal/index.html HTTP/1.1 - 192.168.100.15:8188 ups_resp_time: 0.012 request_time: 0.013
- [31/May/2013:00:01:03 -0700] - xxx.ip.addr.xxx GET /portal/index.html HTTP/1.1 - 192.168.100.18:8188 ups_resp_time: 0.005 request_time: 0.005
- [31/May/2013:00:01:03 -0700] - xxx.ip.addr.xxx GET /portal/index.html HTTP/1.1 - 192.168.100.16:8188 ups_resp_time: 0.011 request_time: 0.011
- [31/May/2013:00:01:03 -0700] - xxx.ip.addr.xxx GET /portal/index.html HTTP/1.1 - 192.168.100.15:8188 ups_resp_time: 0.447 request_time: 0.759
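To turn log lines like these into per-backend request counts, a small script can group entries by the `$upstream_addr` field. A minimal sketch, assuming the log format above (the sample lines mirror the log excerpt; the function and regex names are illustrative):

```python
import re
from collections import Counter

# $upstream_addr is the only "ip:port" token in each line of this format.
UPSTREAM_RE = re.compile(r"(\d{1,3}(?:\.\d{1,3}){3}:\d+)")

def count_per_backend(lines):
    """Count how many requests each upstream server handled."""
    counts = Counter()
    for line in lines:
        m = UPSTREAM_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts

sample = [
    '- [31/May/2013:00:01:03 -0700] - 1.2.3.4 GET /portal/index.html HTTP/1.1 - 192.168.100.15:8188 ups_resp_time: 0.010 request_time: 0.011',
    '- [31/May/2013:00:01:03 -0700] - 1.2.3.4 GET /portal/index.html HTTP/1.1 - 192.168.100.16:8188 ups_resp_time: 0.006 request_time: 0.006',
    '- [31/May/2013:00:01:03 -0700] - 1.2.3.4 GET /portal/index.html HTTP/1.1 - 192.168.100.15:8188 ups_resp_time: 0.013 request_time: 0.013',
]

print(count_per_backend(sample))
```

In production the same grouping is often done with a one-liner over logs/access.log (e.g. awk on the upstream field), but the idea is identical.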
The complete configuration file, nginx.conf:
# greatwqs@163.com Install on 2012-08-11 linux
# user devwqs;

# 2 Intel(R) Xeon(R) CPUs
worker_processes 4;
worker_cpu_affinity 00000001 00000010 00000100 00001000;

# error_log logs/error.log;
# error_log logs/error.log notice;
error_log logs/error.log error;

pid logs/nginx.pid;

# allowed number of open file descriptors
worker_rlimit_nofile 25600;

events {
    # available on Linux 2.6 and later
    use epoll;
    worker_connections 51200;
}

http {
    include mime.types;
    default_type application/octet-stream;

    # log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                 '$status $body_bytes_sent "$http_referer" '
    #                 '"$http_user_agent" "$http_x_forwarded_for"'
    #                 '"$upstream_addr"' '"$upstream_response_time"';
    log_format main ' $remote_user [$time_local] $http_x_forwarded_for $remote_addr $request '
                    '$http_x_forwarded_for '
                    '$upstream_addr '
                    'ups_resp_time: $upstream_response_time '
                    'request_time: $request_time';

    access_log logs/access.log main;

    sendfile on;
    # tcp_nopush on;

    keepalive_requests 200;
    keepalive_timeout 20;

    gzip on;

    client_body_buffer_size 128k;
    client_body_timeout 60s;
    client_max_body_size 10m;

    # proxy_buffer_size 8k;
    # proxy_busy_buffers_size 64k;
    proxy_temp_file_write_size 64k;

    # portal-cluster
    upstream portal-cluster {
        # http://192.168.100.15:8188/portal/
        server 192.168.100.15:8188 weight=5 max_fails=5 fail_timeout=30s;
        # http://192.168.100.16:8188/portal/
        server 192.168.100.16:8188 weight=5 max_fails=5 fail_timeout=30s;
        # http://192.168.100.17:8188/portal/
        server 192.168.100.17:8188 weight=5 max_fails=5 fail_timeout=30s;
        # http://192.168.100.18:8188/portal/
        server 192.168.100.18:8188 weight=5 max_fails=5 fail_timeout=30s;
    }

    # manage-cluster
    upstream manage-cluster {
        # http://192.168.100.25:8189/manage/
        server 192.168.100.25:8189 weight=4 max_fails=5 fail_timeout=30s;
        # http://192.168.100.26:8189/manage/
        server 192.168.100.26:8189 weight=6 max_fails=5 fail_timeout=30s;
    }

    # External Internet.
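The `weight=` values above make nginx send proportionally more traffic to heavier peers: in manage-cluster, the 4/6 split means roughly 40% of requests go to .25 and 60% to .26. Modern nginx implements this as smooth weighted round-robin, which interleaves picks instead of sending bursts to one server. A minimal sketch of that algorithm (the Peer class and pick function are illustrative, not nginx's actual C code):

```python
class Peer:
    def __init__(self, addr, weight):
        self.addr = addr
        self.weight = weight      # the configured weight= value
        self.current_weight = 0   # adjusted on every pick

def pick(peers):
    """Smooth weighted round-robin: picks are spread evenly over time."""
    total = sum(p.weight for p in peers)
    for p in peers:
        p.current_weight += p.weight
    best = max(peers, key=lambda p: p.current_weight)
    best.current_weight -= total
    return best

# manage-cluster weights from the config above: 4 and 6
peers = [Peer("192.168.100.25:8189", 4), Peer("192.168.100.26:8189", 6)]
picks = [pick(peers).addr for _ in range(10)]
print(picks.count("192.168.100.25:8189"),
      picks.count("192.168.100.26:8189"))  # prints "4 6"
```

Over any window of 10 picks the 4/6 ratio holds exactly, and the two backends alternate rather than receiving runs of consecutive requests.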
    server {
        listen 80;
        server_name www.huaxixiang.com;
        access_log logs/host.access.log main;

        location /portal/ {
            # root html;
            # index index.html index.htm;
            # nginx HTTP headers passed through to the Tomcat app:
            # proxy_set_header Host $host;
            # proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header Host $host;
            proxy_set_header X-Real-Ip $remote_addr;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_pass http://portal-cluster;
        }

        # nginx status
        location /nginx_status {
            # copied from http://blog.kovyrin.net/2006/04/29/monitoring-nginx-with-rrdtool/
            stub_status on;
            access_log off;
            allow 192.168.100.100;
            # deny all;
        }

        location / {
            root html;
            index index.html index.htm;
        }

        error_page 404 /404.html;

        # redirect server error pages to the static page /50x.html
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }

    # External Internet.
    server {
        listen 80;
        server_name manage.huaxixiang.com;
        access_log logs/host.access.log main;

        location /manage/ {
            proxy_pass http://manage-cluster;
        }

        location / {
            root html;
            index index.html index.htm;
        }

        error_page 404 /404.html;

        # redirect server error pages to the static page /50x.html
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
Viewing nginx status:
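Requesting /nginx_status (allowed only from 192.168.100.100 in the config above) returns the plain-text stub_status report: active connections, cumulative accepts/handled/requests counters, and the current Reading/Writing/Waiting connection states. A small parser sketch, assuming the standard four-line stub_status layout (the sample numbers are illustrative, not from this server):

```python
def parse_stub_status(text):
    """Parse the four-line text report from nginx's stub_status module."""
    lines = text.strip().splitlines()
    active = int(lines[0].split(":")[1])                      # "Active connections: N"
    accepts, handled, requests = (int(n) for n in lines[2].split())
    fields = lines[3].split()                                 # "Reading: a Writing: b Waiting: c"
    return {
        "active": active,
        "accepts": accepts,
        "handled": handled,
        "requests": requests,
        "reading": int(fields[1]),
        "writing": int(fields[3]),
        "waiting": int(fields[5]),
    }

sample = (
    "Active connections: 291\n"
    "server accepts handled requests\n"
    " 16630948 16630948 31070465\n"
    "Reading: 6 Writing: 179 Waiting: 106\n"
)

print(parse_stub_status(sample))
```

accepts equal to handled means no connections were dropped; a growing gap between them usually indicates a resource limit (e.g. worker_connections) being hit.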