
How to configure TCP BBR as the default congestion control algorithm?

https://access.redhat.com/solutions/3713681

 SOLUTION Verified - Updated February 22, 2019 00:33


Environment

  • Red Hat Enterprise Linux 8
  • TCP (Transmission Control Protocol)

Issue

  • How to configure TCP BBR as the default congestion control algorithm?
  • How do I use Google's "Bottleneck Bandwidth and Round-trip propagation time" on RHEL 8?

Resolution

  • Use of this algorithm is controlled via sysctl.

  • To change at runtime, use the command:

    # sysctl -w net.ipv4.tcp_congestion_control=bbr
  • To make this persistent across reboots, add it to /etc/sysctl.conf, for example (applying the setting without a reboot is shown at the end of this list):

    # echo "net.ipv4.tcp_congestion_control = bbr" >> /etc/sysctl.conf
  • To verify the current setting, check sysctl -a:

    # sysctl -a | egrep -e congestion
    net.ipv4.tcp_allowed_congestion_control = reno cubic bbr
    net.ipv4.tcp_available_congestion_control = reno cubic bbr
    net.ipv4.tcp_congestion_control = bbr
  • A change in the congestion control algorithm only affects new TCP connections. Existing connections will have to be restarted for the change to affect them.
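
  • The setting in /etc/sysctl.conf is only read at boot. A minimal sketch of applying it immediately, assuming bbr may still be missing from net.ipv4.tcp_available_congestion_control (tcp_bbr is the module name shipped with the standard RHEL 8 kernel; the sysctl -p output assumes the file contains only this setting):

    # modprobe tcp_bbr
    # sysctl -p
    net.ipv4.tcp_congestion_control = bbr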

Root Cause

  • The default congestion control algorithm in RHEL 8 is cubic.

  • A congestion control algorithm affects only the rate of transmission for a TCP connection. The algorithm in use on a system will have no effect on how fast the system receives traffic.

  • To function correctly, the BBR congestion control algorithm needs to be able to "pace" the rate of outgoing TCP segments. The RHEL 8 kernel's TCP stack has pacing functionality built in.

  • As an alternative to the built-in TCP stack pacing functionality, the fq qdisc (queuing discipline) can provide the pacing functionality to the TCP stack. If the fq qdisc is operational on the outgoing interface, then BBR will use the qdisc for pacing instead (a sketch of enabling fq is shown at the end of this list). This may result in slightly lower CPU usage, but results will vary depending on the environment.

  • The kernel TCP stack pacing functionality was added in upstream v4.13 by the following commit: tcp: internal implementation for pacing
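
  • A sketch of the fq alternative described above, assuming ens3 is the outgoing interface: the fq qdisc can be attached directly with tc, or made the default for qdiscs created after the change via the net.core.default_qdisc sysctl.

    # tc qdisc replace dev ens3 root fq
    # sysctl -w net.core.default_qdisc=fq
    net.core.default_qdisc = fq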

Diagnostic Steps

  • The sysctl net.ipv4.tcp_congestion_control controls which congestion control algorithm will be used for new TCP connections. To check its current value:

    # sysctl net.ipv4.tcp_congestion_control
    net.ipv4.tcp_congestion_control = bbr
    
  • The ss command with the -i option will list the congestion control algorithm in use by the current TCP connections. In the example below it is bbr:

    # ss -tin sport = :22
    State     Recv-Q       Send-Q                Local Address:Port              Peer Address:Port                                                                                                            
    ESTAB     0            0                   192.168.122.102:22               192.168.122.1:44364      
         bbr wscale:7,7 rto:202 rtt:1.246/1.83 ato:44 mss:1448 pmtu:1500 rcvmss:1448 advmss:1448 cwnd:168 bytes_acked:72441 bytes_received:9405 segs_out:258 segs_in:371 data_segs_out:250 data_segs_in:150 bbr:(bw:591.0Mbps,mrtt:0.142,pacing_gain:2.88672,cwnd_gain:2.88672) send 1561.9Mbps lastsnd:4 lastrcv:7 lastack:4 pacing_rate 1767.4Mbps delivery_rate 591.0Mbps app_limited busy:362ms rcv_rtt:1 rcv_space:28960 rcv_ssthresh:40640 minrtt:0.123
    
  • The ip link or ip addr commands can be used to see the current queuing discipline (qdisc) attached to an interface. In the example below, interface ens3 has the pfifo_fast qdisc in place (an alternative check with tc is shown in the next item):

    # ip link show ens3
    2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 52:54:00:30:62:3f brd ff:ff:ff:ff:ff:ff
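
  • The tc command can also be used to list the qdisc attached to an interface. The output below is illustrative for the same pfifo_fast example and will vary between systems:

    # tc qdisc show dev ens3
    qdisc pfifo_fast 0: root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1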