HTTP/1.1 Notes 5: Connections (blog category: Infrastructure, Firefox browser)
Persistent Connections
advantages:
- By opening and closing fewer TCP connections, CPU time is saved
in routers and hosts (clients, servers, proxies, gateways,
tunnels, or caches), and memory used for TCP protocol control
blocks can be saved in hosts.
- HTTP requests and responses can be pipelined on a connection.
Pipelining allows a client to make multiple requests without
waiting for each response, allowing a single TCP connection to
be used much more efficiently, with much lower elapsed time.
- Network congestion is reduced by reducing the number of packets
caused by TCP opens, and by allowing TCP sufficient time to
determine the congestion state of the network.
- Latency on subsequent requests is reduced since there is no time
spent in TCP's connection opening handshake.
- HTTP can evolve more gracefully, since errors can be reported
without the penalty of closing the TCP connection. Clients using
future versions of HTTP might optimistically try a new feature,
but if communicating with an older server, retry with old
semantics after an error is reported.
All mainstream browsers support persistent connections.
In Firefox, for example, the network.http.max-persistent-connections-per-server parameter can be set; the default is 6.
The network.http.keep-alive.timeout parameter defaults to 300 seconds.
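As a quick sketch of connection reuse (assuming Python's standard http.client and a reachable HTTP/1.1 server; the host example.com and the paths are placeholders), several requests can share one TCP connection:

```python
import http.client

# Minimal sketch: all three requests travel over the same TCP connection
# instead of opening a new socket for each one. Host and paths are placeholders.
conn = http.client.HTTPConnection("example.com", 80, timeout=10)

for path in ("/", "/a", "/b"):
    conn.request("GET", path)
    resp = conn.getresponse()
    resp.read()                 # the body must be fully read before the connection can be reused
    print(path, resp.status, resp.getheader("Connection"))

conn.close()                    # explicitly end the persistent connection
```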
An HTTP/1.1 server MAY assume that an HTTP/1.1 client intends to maintain a persistent connection unless a Connection header including the connection-token "close" was sent in the request
If the server chooses to close the connection immediately after sending the response, it SHOULD send a Connection header including the connection-token close
An HTTP/1.1 client MAY expect a connection to remain open, but would decide to keep it open based on whether the response from a server contains a Connection header with the connection-token close
In case the client does not want to maintain a connection for more than that request, it SHOULD send a Connection header including the connection-token close
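A minimal client-side sketch of the close token (same assumptions: Python's http.client, placeholder host):

```python
import http.client

# The client sends "Connection: close" because it only wants this one
# response, then checks whether the server echoed the close token.
conn = http.client.HTTPConnection("example.com", 80, timeout=10)
conn.request("GET", "/", headers={"Connection": "close"})
resp = conn.getresponse()
resp.read()

if (resp.getheader("Connection") or "").lower() == "close":
    print("server will close the connection after this response")
conn.close()
```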
A client that supports persistent connections MAY "pipeline" its requests (i.e., send multiple requests without waiting for each response)
A server MUST send its responses to those requests in the same order that the requests were received
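Python's http.client does not pipeline, so a rough raw-socket sketch (placeholder host and paths) has to show the idea: every request is written before any response is read, and the responses come back in request order:

```python
import socket

# Rough sketch of HTTP/1.1 pipelining over a raw socket. Host and paths
# are placeholders.
HOST = "example.com"
PATHS = ["/", "/a", "/b"]

requests = []
for i, path in enumerate(PATHS):
    req = f"GET {path} HTTP/1.1\r\nHost: {HOST}\r\n"
    if i == len(PATHS) - 1:
        req += "Connection: close\r\n"      # ask the server to close after the last response
    requests.append((req + "\r\n").encode())

with socket.create_connection((HOST, 80), timeout=10) as sock:
    sock.sendall(b"".join(requests))        # all three requests on the wire before any response is read
    data = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:                       # server closed after the final response
            break
        data += chunk

# Crude check: count status lines. A real client would parse each response's
# framing (Content-Length or chunked coding) to split the byte stream apart.
print(data.count(b"HTTP/1.1 "), "responses, in the same order as the requests")
```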
It is especially important that proxies correctly implement the properties of the Connection header field
The proxy server MUST signal persistent connections separately with its clients and the origin servers (or other proxy servers) that it connects to
Each persistent connection applies to only one transport link
Message Transmission Requirements
HTTP/1.1 servers SHOULD maintain persistent connections and use TCP's flow control mechanisms to resolve temporary overloads, rather than terminating connections with the expectation that clients will retry
An HTTP/1.1 (or later) client sending a message-body SHOULD monitor the network connection for an error status while it is transmitting the request
If the client sees an error status, it SHOULD immediately cease transmitting the body
The purpose of the 100 (Continue) status is to allow a client that is sending a request message with a request body to determine if the origin server is willing to accept the request (based on the request headers) before the client sends the request body
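A rough sketch of that handshake over a raw socket (the standard http.client does not itself pause for the interim 100 response; the host, path, and body below are placeholders):

```python
import socket

# Expect: 100-continue sketch: send only the headers, wait for the server's
# verdict, and transmit the body only if a 100 (Continue) comes back.
HOST, PATH = "example.com", "/upload"
BODY = b'{"key": "value"}'

head = (
    f"POST {PATH} HTTP/1.1\r\n"
    f"Host: {HOST}\r\n"
    "Content-Type: application/json\r\n"
    f"Content-Length: {len(BODY)}\r\n"
    "Expect: 100-continue\r\n"
    "Connection: close\r\n"
    "\r\n"
).encode()

with socket.create_connection((HOST, 80), timeout=10) as sock:
    sock.sendall(head)                          # headers only, body withheld
    interim = sock.recv(4096)                   # wait for the server's verdict
    if interim.startswith(b"HTTP/1.1 100"):
        sock.sendall(BODY)                      # headers accepted, now send the body
        final = sock.recv(4096)
        print(final.split(b"\r\n", 1)[0])       # final status line
    else:
        # Server answered with a final status (or closed) instead of 100.
        print(interim.split(b"\r\n", 1)[0])
```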
If an HTTP/1.1 client sends a request which includes a request body, but which does not include an Expect request-header field with the "100-continue" expectation, and if the client is not directly connected to an HTTP/1.1 origin server, and if the client sees the connection close before receiving any status from the server, the client SHOULD retry the request
If the client does retry this request, it MAY use the "binary exponential backoff" algorithm to be assured of obtaining a reliable response
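A minimal sketch of such a retry loop with binary exponential backoff (the host, path, body, and retry cap are assumptions for illustration):

```python
import http.client
import random
import time

# Retry the request when the connection drops before any status arrives,
# waiting a random delay drawn from a window that doubles on every failure.
HOST, PATH = "example.com", "/upload"
BODY = b'{"key": "value"}'
MAX_RETRIES = 5

def post_with_backoff():
    for attempt in range(MAX_RETRIES + 1):
        try:
            conn = http.client.HTTPConnection(HOST, 80, timeout=10)
            conn.request("POST", PATH, body=BODY,
                         headers={"Content-Type": "application/json"})
            resp = conn.getresponse()
            data = resp.read()
            conn.close()
            return resp.status, data
        except (http.client.HTTPException, ConnectionError, OSError):
            if attempt == MAX_RETRIES:
                raise
            # Binary exponential backoff before the next attempt.
            time.sleep(random.uniform(0, 2 ** attempt))

status, _ = post_with_backoff()
print(status)
```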