Python Multiprocessing vs. Multithreading: An Efficiency Comparison
There is an unwritten rule in the Python world: CPU-bound tasks are best handled with multiple processes, while IO-bound tasks are best handled with multiple threads. This post puts that rule to a simple test.
In general, multithreading has an edge over multiprocessing, because creating a process is relatively expensive. In Python, however, the GIL (global interpreter lock) means that multiple threads running a CPU-bound workload effectively execute one at a time, and the extra cost of switching between threads often makes them slower than plain single-threaded code. That is why CPU-bound tasks in Python are usually handled with multiple processes: each process has its own interpreter and its own GIL, so they do not get in each other's way.
For IO-bound tasks, by contrast, the CPU spends much of its time waiting while the operating system talks to the outside world, e.g. reading and writing files or communicating over the network. The GIL is released during those waits, so threads can genuinely run concurrently.
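As a minimal illustration of that point (not part of the original benchmark), the snippet below sleeps for one second in a background thread while the main thread also sleeps for one second; because time.sleep() releases the GIL, the total wall time is roughly one second rather than two:

import threading
import time

def wait_a_bit():
    time.sleep(1)  # the GIL is released while sleeping

start = time.time()
t = threading.Thread(target=wait_a_bit)
t.start()          # background thread sleeps...
time.sleep(1)      # ...while the main thread sleeps too
t.join()
print("elapsed: {:.2f}s".format(time.time() - start))  # ~1s, not ~2s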
So much for the theory; here is a simple simulated test. Heavy computation is stood in for by repeated calls to math.sin() + math.cos(), and IO-bound work is simulated with time.sleep(). Python offers several ways to implement multiprocessing and multithreading, so all of them are included to see whether they differ in efficiency:
- Multiprocessing: the joblib "multiprocessing" backend, multiprocessing.Pool.map, Pool.apply_async, concurrent.futures.ProcessPoolExecutor
- Multithreading: the joblib "threading" backend, threading.Thread, concurrent.futures.ThreadPoolExecutor
from multiprocessing import Pool
from threading import Thread
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor
import time, os, math
from joblib import Parallel, delayed, parallel_backend

def f_io(a):  # IO-bound
    time.sleep(5)

def f_compute(a):  # CPU-bound
    for _ in range(int(1e7)):
        math.sin(40) + math.cos(40)
    return

def normal(sub_f):
    for i in range(6):
        sub_f(i)
    return

def joblib_process(sub_f):
    with parallel_backend("multiprocessing", n_jobs=6):
        res = Parallel()(delayed(sub_f)(j) for j in range(6))
    return

def joblib_thread(sub_f):
    with parallel_backend('threading', n_jobs=6):
        res = Parallel()(delayed(sub_f)(j) for j in range(6))
    return

def mp(sub_f):
    with Pool(processes=6) as p:
        res = p.map(sub_f, list(range(6)))
    return

def asy(sub_f):
    with Pool(processes=6) as p:
        result = []
        for j in range(6):
            a = p.apply_async(sub_f, args=(j,))
            result.append(a)
        res = [j.get() for j in result]

def thread(sub_f):
    threads = []
    for j in range(6):
        t = Thread(target=sub_f, args=(j,))
        threads.append(t)
        t.start()
    for t in threads:
        t.join()

def thread_pool(sub_f):
    with ThreadPoolExecutor(max_workers=6) as executor:
        res = [executor.submit(sub_f, j) for j in range(6)]

def process_pool(sub_f):
    with ProcessPoolExecutor(max_workers=6) as executor:
        res = executor.map(sub_f, list(range(6)))

def showtime(f, sub_f, name):
    start_time = time.time()
    f(sub_f)
    print("{} time: {:.4f}s".format(name, time.time() - start_time))

def main(sub_f):
    showtime(normal, sub_f, "normal")
    print()
    print("------ multiprocessing ------")
    showtime(joblib_process, sub_f, "joblib multiprocess")
    showtime(mp, sub_f, "pool")
    showtime(asy, sub_f, "async")
    showtime(process_pool, sub_f, "process_pool")
    print()
    print("----- multithreading -----")
    showtime(joblib_thread, sub_f, "joblib thread")
    showtime(thread, sub_f, "thread")
    showtime(thread_pool, sub_f, "thread_pool")

if __name__ == "__main__":
    print("----- CPU-bound -----")
    sub_f = f_compute
    main(sub_f)
    print()
    print("----- IO-bound -----")
    sub_f = f_io
    main(sub_f)
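One caveat about the thread_pool and process_pool variants above: executor.submit() and executor.map() only surface exceptions from the workers once the results are actually consumed, so a failing task would go unnoticed here (the with block still waits for every task to finish, so the timings remain valid). A sketch of how one might collect results explicitly with concurrent.futures.as_completed, assuming the same sub_f signature as in the benchmark:

from concurrent.futures import ThreadPoolExecutor, as_completed

def thread_pool_collect(sub_f):
    with ThreadPoolExecutor(max_workers=6) as executor:
        futures = [executor.submit(sub_f, j) for j in range(6)]
        # .result() blocks until the task is done and re-raises any
        # exception that was thrown inside the worker.
        return [f.result() for f in as_completed(futures)]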
Results:
----- CPU-bound -----
normal time: 15.1212s

------ multiprocessing ------
joblib multiprocess time: 8.2421s
pool time: 8.5439s
async time: 8.3229s
process_pool time: 8.1722s

----- multithreading -----
joblib thread time: 21.5191s
thread time: 21.3865s
thread_pool time: 22.5104s

----- IO-bound -----
normal time: 30.0305s

------ multiprocessing ------
joblib multiprocess time: 5.0345s
pool time: 5.0188s
async time: 5.0256s
process_pool time: 5.0263s

----- multithreading -----
joblib thread time: 5.0142s
thread time: 5.0055s
thread_pool time: 5.0064s
Each method above uses six processes/threads. For the CPU-bound task the ranking is: multiprocessing > single process/thread > multithreading. For the IO-bound task it is: multithreading > multiprocessing > single process/thread, although multithreading and multiprocessing are nearly tied there, since both are dominated by the five-second sleep.
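A practical takeaway, sketched below as a hypothetical helper (not part of the original code): choose the executor type based on the kind of workload, threads for IO-bound functions and processes for CPU-bound ones.

from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def run_tasks(fn, args, io_bound, workers=6):
    # Threads are enough when the GIL is released (IO-bound work);
    # otherwise fall back to separate processes.
    executor_cls = ThreadPoolExecutor if io_bound else ProcessPoolExecutor
    with executor_cls(max_workers=workers) as executor:
        return list(executor.map(fn, args))

With the f_io and f_compute functions from the benchmark above, run_tasks(f_io, range(6), io_bound=True) would finish in roughly the length of a single sleep, while run_tasks(f_compute, range(6), io_bound=False) spreads the computation across six processes (remember the if __name__ == "__main__" guard when using processes).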