How to enable multithreading in Flask

In a previous post I explained that Flask's thread support rests mainly on two classes, LocalStack and Local. Local has two attributes, __storage__ and __ident_func__; the latter obtains the current thread id, which is how requests arriving from different threads are kept apart.
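To make that concrete, here is a minimal sketch of the idea (SimpleLocal is my own name, not werkzeug's): a Local-style object keys one storage dict per thread id, via __ident_func__():

```python
import threading

class SimpleLocal:
    """Stripped-down sketch of werkzeug's Local: one dict slot per thread id."""

    def __init__(self):
        # Write through object.__setattr__ so our own __setattr__ isn't triggered.
        object.__setattr__(self, '__storage__', {})
        object.__setattr__(self, '__ident_func__', threading.get_ident)

    def __getattr__(self, name):
        try:
            return self.__storage__[self.__ident_func__()][name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        # Each thread writes into its own sub-dict, keyed by its thread id.
        self.__storage__.setdefault(self.__ident_func__(), {})[name] = value


local = SimpleLocal()
local.request = 'from main thread'

seen = {}

def handler():
    # The worker thread sees and mutates only its own slot.
    local.request = 'from worker thread'
    seen['worker'] = local.request

t = threading.Thread(target=handler)
t.start()
t.join()
```

After the worker finishes, `local.request` in the main thread is still `'from main thread'`: the two threads never touched each other's data, which is exactly what makes per-request state safe under threading.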
This post looks at how Flask actually turns multithreading on.

Start with the app.run() method:
```python
def run(self, host=None, port=None, debug=None, **options):
    from werkzeug.serving import run_simple
    if host is None:
        host = '127.0.0.1'
    if port is None:
        server_name = self.config['SERVER_NAME']
        if server_name and ':' in server_name:
            port = int(server_name.rsplit(':', 1)[1])
        else:
            port = 5000
    if debug is not None:
        self.debug = bool(debug)
    options.setdefault('use_reloader', self.debug)
    options.setdefault('use_debugger', self.debug)
    try:
        run_simple(host, port, self, **options)  # execution enters this function
    finally:
        # Reset the first request information if the development server
        # reset normally.  This makes it possible to restart the server
        # without reloader and that stuff from an interactive shell.
        self._got_first_request = False
```
After those checks and defaults, execution enters run_simple(). Here is its source (docstring shortened to the two parameters that matter here):
```python
def run_simple(hostname, port, application, use_reloader=False,
               use_debugger=False, use_evalex=True, extra_files=None,
               reloader_interval=1, reloader_type='auto', threaded=False,
               processes=1, request_handler=None, static_files=None,
               passthrough_errors=False, ssl_context=None):
    """Start a WSGI application. Optional features include a reloader,
    multithreading and fork support.

    :param threaded: should the process handle each request in a separate
                     thread?
    :param processes: if greater than 1 then handle each request in a new
                      process up to this maximum number of concurrent
                      processes.
    """
    if not isinstance(port, int):
        raise TypeError('port must be an integer')
    if use_debugger:
        from werkzeug.debug import DebuggedApplication
        application = DebuggedApplication(application, use_evalex)
    if static_files:
        from werkzeug.wsgi import SharedDataMiddleware
        application = SharedDataMiddleware(application, static_files)

    def log_startup(sock):
        display_hostname = hostname not in ('', '*') and hostname or 'localhost'
        if ':' in display_hostname:
            display_hostname = '[%s]' % display_hostname
        quit_msg = '(Press CTRL+C to quit)'
        port = sock.getsockname()[1]
        _log('info', ' * Running on %s://%s:%d/ %s',
             ssl_context is None and 'http' or 'https',
             display_hostname, port, quit_msg)

    def inner():
        try:
            fd = int(os.environ['WERKZEUG_SERVER_FD'])
        except (LookupError, ValueError):
            fd = None
        srv = make_server(hostname, port, application, threaded,
                          processes, request_handler,
                          passthrough_errors, ssl_context,
                          fd=fd)
        if fd is None:
            log_startup(srv.socket)
        srv.serve_forever()

    if use_reloader:
        # If we're not running already in the subprocess that is the
        # reloader we want to open up a socket early to make sure the
        # port is actually available.
        if os.environ.get('WERKZEUG_RUN_MAIN') != 'true':
            if port == 0 and not can_open_by_fd:
                raise ValueError('Cannot bind to a random port with enabled '
                                 'reloader if the Python interpreter does '
                                 'not support socket opening by fd.')

            # Create and destroy a socket so that any exceptions are
            # raised before we spawn a separate Python interpreter and
            # lose this ability.
            address_family = select_ip_version(hostname, port)
            s = socket.socket(address_family, socket.SOCK_STREAM)
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            s.bind(get_sockaddr(hostname, port, address_family))
            if hasattr(s, 'set_inheritable'):
                s.set_inheritable(True)

            # If we can open the socket by file descriptor, then we can just
            # reuse this one and our socket will survive the restarts.
            if can_open_by_fd:
                os.environ['WERKZEUG_SERVER_FD'] = str(s.fileno())
                s.listen(LISTEN_QUEUE)
                log_startup(s)
            else:
                s.close()

        # Do not use relative imports, otherwise "python -m werkzeug.serving"
        # breaks.
        from werkzeug._reloader import run_with_reloader
        run_with_reloader(inner, extra_files, reloader_interval, reloader_type)
    else:
        inner()  # the default path
```
After another round of checks the default path reaches inner(), a closure defined inside run_simple(). inner() calls make_server(); its source:
```python
def make_server(host=None, port=None, app=None, threaded=False, processes=1,
                request_handler=None, passthrough_errors=False,
                ssl_context=None, fd=None):
    """Create a new server instance that is either threaded, or forks
    or just processes one request after another.
    """
    if threaded and processes > 1:
        raise ValueError("cannot have a multithreaded and "
                         "multi process server.")
    elif threaded:
        return ThreadedWSGIServer(host, port, app, request_handler,
                                  passthrough_errors, ssl_context, fd=fd)
    elif processes > 1:
        return ForkingWSGIServer(host, port, app, processes, request_handler,
                                 passthrough_errors, ssl_context, fd=fd)
    else:
        return BaseWSGIServer(host, port, app, request_handler,
                              passthrough_errors, ssl_context, fd=fd)
```
At this point the picture is clear: to get multithreading or multiprocessing you set the threaded or processes parameter, and both are passed straight down from app.run():

app.run(**options) ---> run_simple(threaded, processes) ---> make_server(threaded, processes)

By default Flask is single-threaded and single-process; to enable threading, just pass the parameter to run: app.run(threaded=True).

As make_server shows, werkzeug provides three server classes: ThreadedWSGIServer, ForkingWSGIServer and BaseWSGIServer, with BaseWSGIServer used by default.

Taking threads as the example, look at the ThreadedWSGIServer class:
```python
class ThreadedWSGIServer(ThreadingMixIn, BaseWSGIServer):
    # Inherits from ThreadingMixIn and BaseWSGIServer
    """A WSGI server that does threading."""
    multithread = True
    daemon_threads = True
```

werkzeug defines ThreadingMixIn as an alias for the standard library's mix-in:

```python
ThreadingMixIn = socketserver.ThreadingMixIn
```

and socketserver.ThreadingMixIn looks like this:

```python
class ThreadingMixIn:
    """Mix-in class to handle each request in a new thread."""

    # Decides how threads will act upon termination of the
    # main process
    daemon_threads = False

    def process_request_thread(self, request, client_address):
        """Same as in BaseServer but as a thread.

        In addition, exception handling is done here.
        """
        try:
            self.finish_request(request, client_address)
            self.shutdown_request(request)
        except:
            self.handle_error(request, client_address)
            self.shutdown_request(request)

    def process_request(self, request, client_address):
        """Start a new thread to process the request."""
        t = threading.Thread(target=self.process_request_thread,
                             args=(request, client_address))
        t.daemon = self.daemon_threads
        t.start()
```
process_request simply spawns a new thread for every incoming request.
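This behavior is easy to watch in isolation, because socketserver.ThreadingMixIn is exactly the class werkzeug aliases. A tiny threaded echo server (EchoHandler and query are names invented for this demo) shows each request being handled in a fresh thread:

```python
import socket
import socketserver
import threading

class EchoHandler(socketserver.BaseRequestHandler):
    """Echo the payload back, prefixed with the name of the handling thread."""
    def handle(self):
        data = self.request.recv(1024)
        name = threading.current_thread().name.encode()
        self.request.sendall(name + b':' + data)

class ThreadedTCPServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
    daemon_threads = True  # the same flag ThreadedWSGIServer sets

def query(port, payload):
    # Open a connection, send the payload, and return the server's reply.
    with socket.create_connection(('127.0.0.1', port)) as conn:
        conn.sendall(payload)
        return conn.recv(1024)

server = ThreadedTCPServer(('127.0.0.1', 0), EchoHandler)  # port 0 = any free port
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

replies = [query(port, b'hi') for _ in range(2)]
server.shutdown()
server.server_close()
```

The two replies carry different thread names, because process_request created a new thread for each connection.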
Finally, a very small application to verify all of the above:
```python
from flask import Flask
from flask import _request_ctx_stack

app = Flask(__name__)

@app.route('/')
def index():
    print(_request_ctx_stack._local.__ident_func__())
    while True:   # deliberately block this request (explained below)
        pass
    return '<h1>hello</h1>'

app.run()  # use app.run(threaded=True) to enable multithreading
```
_request_ctx_stack._local.__ident_func__ corresponds to the get_ident() function, which returns the current thread's id. Why add the while True at the end of the view? Look at get_ident()'s documentation:
Return a non-zero integer that uniquely identifies the current thread amongst other threads that exist simultaneously. This may be used to identify per-thread resources. Even though on some platforms thread identities may appear to be allocated consecutive numbers starting at 1, this behavior should not be relied upon, and the number should be seen purely as a magic cookie. **A thread's identity may be reused for another thread after it exits.**
The key sentence is in bold: a thread's id may be reused after that thread exits. That is why the view contains the infinite loop: it blocks each request, making it easy to observe distinct ids. Two situations follow:

1. Without multithreading, the first request blocks the server outright, and every later request blocks as well.
2. With multithreading enabled, each request prints a different thread id.
Results:

Situation 1:

* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
139623180527360
Situation 2:

* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
140315469436672
140315477829376
140315486222080
140315316901632
140315105163008
140315096770304
140315088377600
The results speak for themselves.
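The same effect can be reproduced without Flask: as long as two threads are alive at the same moment, get_ident() must hand them different values. A small stdlib-only sketch (the Barrier plays the role of the blog's while True, keeping both threads alive simultaneously):

```python
import threading

ids = []
barrier = threading.Barrier(3)  # two workers plus the main thread

def worker():
    ids.append(threading.get_ident())
    barrier.wait()  # hold this thread alive until everyone has arrived

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
barrier.wait()      # at this point both workers exist simultaneously
for t in threads:
    t.join()
```

Because both workers were alive at once when they recorded their ids, the two values in `ids` are guaranteed to differ; id reuse can only happen after a thread exits.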
To sum up: Flask supports multithreading but does not enable it by default. Also, app.run() is only suitable for development; in production, serve the app with a WSGI server such as uWSGI or Gunicorn.
Further notes:

Threads or processes in Flask?

Flask's default mode is single-process, single-threaded and blocking. For production, the app can be deployed behind nginx + Gunicorn.
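A typical Gunicorn invocation might look like this (a sketch: myapp:app is a placeholder for your module and Flask instance, and the worker/thread counts are only illustrative):

```shell
# 4 worker processes, 2 threads each, bound to localhost:8000
gunicorn --workers 4 --threads 2 --bind 127.0.0.1:8000 myapp:app
```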
But during development, how do you test concurrent requests (say, with an artificial delay)? It is very simple:

app.run() accepts two relevant parameters, threaded and processes, which turn on thread and process support respectively.

1. threaded: enable multithreading; defaults to False.
2. processes: number of processes; defaults to 1.

To enable them:
```python
if __name__ == '__main__':
    app.run(threaded=True)
    # app.run(processes=4)
```
Note: choose either multithreading or multiprocessing; the two cannot be enabled at the same time.
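That restriction is exactly the first guard in make_server. A stdlib-only mimic (pick_server is a hypothetical name; strings stand in for the three server classes) makes the dispatch easy to test:

```python
def pick_server(threaded=False, processes=1):
    """Mimic werkzeug's make_server dispatch; strings stand in for the classes."""
    if threaded and processes > 1:
        # Both options set at once is rejected, as in werkzeug
        raise ValueError("cannot have a multithreaded and multi process server.")
    elif threaded:
        return 'ThreadedWSGIServer'
    elif processes > 1:
        return 'ForkingWSGIServer'
    return 'BaseWSGIServer'
```

Calling pick_server(threaded=True, processes=4) raises ValueError, mirroring the error you would get from app.run(threaded=True, processes=4).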
That concludes this walkthrough of enabling multithreading in Flask.