
A Brief Look at a golang fasthttp Pitfall

程序员文章站 2022-03-07 15:21:33

A simple system, structured as follows:

[Figure: architecture diagram — service A forwards external HTTP requests to service B via fasthttp]

Our service A accepts external HTTP requests and forwards them to service B using golang's fasthttp; the flow is very simple. After running in production for a while, we found that service B had stopped receiving any requests at all. Checking service A's logs, we saw large numbers of the following error:

[Figure: service A's error log, showing repeated "no free connections available to host" errors]

  The error message suggests the connections were exhausted. Entering service A's container (both service A and service B run under docker) and running netstat -anlp, we found a large number of TCP connections in the ESTABLISHED state. We use long-lived (keep-alive) connections, which raised two puzzling questions: 1. fasthttp reuses connections, so why were there so many TCP connections? 2. Why could these connections no longer be used, producing the error above?
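For reference, this is roughly how we counted the connections; the netstat sample below is fabricated for illustration (real addresses and PIDs omitted):

```shell
# Inside service A's container we counted connections like this:
#   netstat -anlp | grep ESTABLISHED | wc -l
# Reproducing the count on a fabricated sample of that output:
cat <<'EOF' > /tmp/netstat_sample.txt
tcp  0  0  172.17.0.2:41372  172.17.0.3:8080  ESTABLISHED
tcp  0  0  172.17.0.2:41374  172.17.0.3:8080  ESTABLISHED
tcp  0  0  172.17.0.2:41376  172.17.0.3:8080  ESTABLISHED
EOF
grep -c ESTABLISHED /tmp/netstat_sample.txt
```

In the real container the count was in the hundreds, all pointing at service B.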

  Starting from the fasthttp client source: when forwarding a request we call

f.client.DoTimeout(req, resp, f.execTimeout), where f.client is a fasthttp.HostClient and f.execTimeout is set to 5s.
Tracing the code leads to this method in client.go:

func (c *HostClient) doNonNilReqResp(req *Request, resp *Response) (bool, error) {
    if req == nil {
        panic("BUG: req cannot be nil")
    }
    if resp == nil {
        panic("BUG: resp cannot be nil")
    }

    atomic.StoreUint32(&c.lastUseTime, uint32(time.Now().Unix()-startTimeUnix))

    // Free up resources occupied by response before sending the request,
    // so the GC may reclaim these resources (e.g. response body).
    resp.Reset()

    // If we detected a redirect to another schema
    if req.schemaUpdate {
        c.IsTLS = bytes.Equal(req.URI().Scheme(), strHTTPS)
        c.Addr = addMissingPort(string(req.Host()), c.IsTLS)
        c.addrIdx = 0
        c.addrs = nil
        req.schemaUpdate = false
        req.SetConnectionClose()
    }

    cc, err := c.acquireConn()
    if err != nil {
        return false, err
    }
    conn := cc.c

    resp.parseNetConn(conn)

    if c.WriteTimeout > 0 {
        // Set Deadline every time, since golang has fixed the performance issue
        // See https://github.com/golang/go/issues/15133#issuecomment-271571395 for details
        currentTime := time.Now()
        if err = conn.SetWriteDeadline(currentTime.Add(c.WriteTimeout)); err != nil {
            c.closeConn(cc)
            return true, err
        }
    }

    resetConnection := false
    if c.MaxConnDuration > 0 && time.Since(cc.createdTime) > c.MaxConnDuration && !req.ConnectionClose() {
        req.SetConnectionClose()
        resetConnection = true
    }

    userAgentOld := req.Header.UserAgent()
    if len(userAgentOld) == 0 {
        req.Header.userAgent = c.getClientName()
    }
    bw := c.acquireWriter(conn)
    err = req.Write(bw)

    if resetConnection {
        req.Header.ResetConnectionClose()
    }

    if err == nil {
        err = bw.Flush()
    }
    if err != nil {
        c.releaseWriter(bw)
        c.closeConn(cc)
        return true, err
    }
    c.releaseWriter(bw)

    if c.ReadTimeout > 0 {
        // Set Deadline every time, since golang has fixed the performance issue
        // See https://github.com/golang/go/issues/15133#issuecomment-271571395 for details
        currentTime := time.Now()
        if err = conn.SetReadDeadline(currentTime.Add(c.ReadTimeout)); err != nil {
            c.closeConn(cc)
            return true, err
        }
    }

    if !req.Header.IsGet() && req.Header.IsHead() {
        resp.SkipBody = true
    }
    if c.DisableHeaderNamesNormalizing {
        resp.Header.DisableNormalizing()
    }

    br := c.acquireReader(conn)
    if err = resp.ReadLimitBody(br, c.MaxResponseBodySize); err != nil {
        c.releaseReader(br)
        c.closeConn(cc)
        // Don't retry in case of ErrBodyTooLarge since we will just get the same again.
        retry := err != ErrBodyTooLarge
        return retry, err
    }
    c.releaseReader(br)

    if resetConnection || req.ConnectionClose() || resp.ConnectionClose() {
        c.closeConn(cc)
    } else {
        c.releaseConn(cc)
    }

    return false, err
}

  Note the c.acquireConn() call: it fetches a connection from the connection pool, creating a new one if no idle connection is available. It is implemented as follows:

func (c *HostClient) acquireConn() (*clientConn, error) {
    var cc *clientConn
    createConn := false
    startCleaner := false

    var n int
    c.connsLock.Lock()
    n = len(c.conns)
    if n == 0 {
        maxConns := c.MaxConns
        if maxConns <= 0 {
            maxConns = DefaultMaxConnsPerHost
        }
        if c.connsCount < maxConns {
            c.connsCount++
            createConn = true
            if !c.connsCleanerRun {
                startCleaner = true
                c.connsCleanerRun = true
            }
        }
    } else {
        n--
        cc = c.conns[n]
        c.conns[n] = nil
        c.conns = c.conns[:n]
    }
    c.connsLock.Unlock()

    if cc != nil {
        return cc, nil
    }
    if !createConn {
        return nil, ErrNoFreeConns
    }

    if startCleaner {
        go c.connsCleaner()
    }

    conn, err := c.dialHostHard()
    if err != nil {
        c.decConnsCount()
        return nil, err
    }
    cc = acquireClientConn(conn)

    return cc, nil
}

Here ErrNoFreeConns is errors.New("no free connections available to host") — exactly the error our service reported. The immediate cause is clear: !createConn, meaning no new connection could be created, because the connection count had already reached maxConns = DefaultMaxConnsPerHost = 512 (the default). But this alone doesn't explain why connections were neither recycled nor reused. Going back over the business code, we noticed that many requests from service A to service B were ending in timeouts, i.e. hitting f.execTimeout = 5s.
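The pool logic above boils down to a counted semaphore over reusable connections. A minimal stdlib-only sketch of the same accounting (the pool type and its idle field are invented for illustration, not fasthttp's code):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

var errNoFreeConns = errors.New("no free connections available to host")

// pool mimics HostClient's connection accounting: at most maxConns
// connections may exist at once; acquire fails immediately (no blocking)
// once the cap is reached and no idle connection is available.
type pool struct {
	mu       sync.Mutex
	count    int   // total connections ever created and still alive
	maxConns int   // stands in for MaxConns / DefaultMaxConnsPerHost
	idle     []int // stands in for the idle *clientConn slice
}

func (p *pool) acquire() (int, error) {
	p.mu.Lock()
	defer p.mu.Unlock()
	if n := len(p.idle); n > 0 {
		c := p.idle[n-1] // reuse an idle connection
		p.idle = p.idle[:n-1]
		return c, nil
	}
	if p.count >= p.maxConns {
		return 0, errNoFreeConns // the error our service logged
	}
	p.count++ // "dial" a brand-new connection
	return p.count, nil
}

func (p *pool) release(c int) {
	p.mu.Lock()
	p.idle = append(p.idle, c)
	p.mu.Unlock()
}

func main() {
	p := &pool{maxConns: 2}
	a, _ := p.acquire()
	b, _ := p.acquire()
	_, err := p.acquire() // no idle conns and cap reached: exhausted
	fmt.Println(a, b, err)
	p.release(a)
	c, err2 := p.acquire() // a released conn is reused
	fmt.Println(c, err2)
}
```

If every acquired connection is released (or closed, decrementing the count), the pool never exhausts; our bug was connections being held without either happening.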

Reading the source again from the top finally revealed the trick.

func clientDoDeadline(req *Request, resp *Response, deadline time.Time, c clientDoer) error {
    timeout := -time.Since(deadline)
    if timeout <= 0 {
        return ErrTimeout
    }

    var ch chan error
    chv := errorChPool.Get()
    if chv == nil {
        chv = make(chan error, 1)
    }
    ch = chv.(chan error)

    // Make req and resp copies, since on timeout they no longer
    // may be accessed.
    reqCopy := AcquireRequest()
    req.copyToSkipBody(reqCopy)
    swapRequestBody(req, reqCopy)
    respCopy := AcquireResponse()
    if resp != nil {
        // Not calling resp.copyToSkipBody(respCopy) here to avoid
        // unexpected messing with headers
        respCopy.SkipBody = resp.SkipBody
    }

    // Note that the request continues execution on ErrTimeout until
    // client-specific ReadTimeout exceeds. This helps limiting load
    // on slow hosts by MaxConns* concurrent requests.
    //
    // Without this 'hack' the load on slow host could exceed MaxConns*
    // concurrent requests, since timed out requests on client side
    // usually continue execution on the host.

    var mu sync.Mutex
    var timedout bool
    // this goroutine acquires the connection and sends the request
    go func() {
        errDo := c.Do(reqCopy, respCopy)
        mu.Lock()
        {
            if !timedout {
                if resp != nil {
                    respCopy.copyToSkipBody(resp)
                    swapResponseBody(resp, respCopy)
                }
                swapRequestBody(reqCopy, req)
                ch <- errDo
            }
        }
        mu.Unlock()

        ReleaseResponse(respCopy)
        ReleaseRequest(reqCopy)
    }()
    // this part handles the timeout
    tc := AcquireTimer(timeout)
    var err error
    select {
    case err = <-ch:
    case <-tc.C:
        mu.Lock()
        {
            timedout = true
            err = ErrTimeout
        }
        mu.Unlock()
    }
    ReleaseTimer(tc)

    select {
    case <-ch:
    default:
    }
    errorChPool.Put(chv)

    return err
}

  Now we can see how the request timeout is handled. When a request times out, the main flow immediately returns a timeout error, while the goroutine keeps waiting for the response. Service B, however, would sometimes throw an exception and never respond to the request at all, so the connection was never released. This finally explains why so many connections stayed occupied until none were left to use.

  Finally, just as I was inwardly grumbling about how such an excellent framework as fasthttp could have this problem — a server that throws exceptions (and never responds) can fill up the connection pool? — I took another look at the code. It turns out:

// DoTimeout performs the given request and waits for response during
// the given timeout duration.
//
// Request must contain at least non-zero RequestURI with full url (including
// scheme and host) or non-zero Host header + RequestURI.
//
// The function doesn't follow redirects. Use Get* for following redirects.
//
// Response is ignored if resp is nil.
//
// ErrTimeout is returned if the response wasn't returned during
// the given timeout.
//
// ErrNoFreeConns is returned if all HostClient.MaxConns connections
// to the host are busy.
//
// It is recommended obtaining req and resp via AcquireRequest
// and AcquireResponse in performance-critical code.
//
// Warning: DoTimeout does not terminate the request itself. The request will
// continue in the background and the response will be discarded.
// If requests take too long and the connection pool gets filled up please
// try setting a ReadTimeout.
func (c *HostClient) DoTimeout(req *Request, resp *Response, timeout time.Duration) error {
    return clientDoTimeout(req, resp, timeout, c)
}

  The method's own comment said it all along — look at the last paragraph: after a timeout, the request continues in the background and the response is discarded; if requests take too long, the connection pool fills up. That is exactly the problem we hit. The fix is to set the ReadTimeout field; as I understand it, once a request has been sent, if no response arrives within ReadTimeout, the client closes (and thereby releases) the connection.

  That is the lesson from this episode. Remember: when using fasthttp, set the ReadTimeout field.
