
Setting up a PyTorch environment on Linux


Installing the Samba service

The Samba service makes it easy to access Linux directories from Windows; compared with command-line tools such as Xshell, it is mainly used for copying files and managing directories.

Installation command:

sudo apt-get install samba samba-common

Create a directory to share over Samba and set its permissions:

sudo mkdir /home/os/window_share
sudo chmod 777 /home/os/window_share

Add a user and set that user's Samba password:

useradd other
/home# smbpasswd -a other
New SMB password:111111
Retype new SMB password:111111
Added user other.

Add a share definition to the Samba configuration file (/etc/samba/smb.conf):

[share]
   comment = share folder
   browseable = yes
   path = /home/os/window_share
   create mask = 0777
   directory mask = 0777
   valid users = other
   force user = other
   force group = other
   public = yes
   writable = yes
   available = yes
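Before restarting the service, the configuration can be checked for syntax errors. A minimal sketch using Samba's own testparm tool, which reads /etc/samba/smb.conf by default:

# print the parsed configuration without prompting; errors are reported up front
testparm -s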

Restart the Samba service (any one of the following works):

systemctl restart smbd
sudo service smbd reload
sudo service smbd restart

From Windows, open the share by entering the server address in File Explorer:

\\10.78.5.1
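The share can also be reached from another Linux machine with smbclient; a sketch assuming the share name [share] and the user other defined above (the IP is the same example address):

# list the share interactively as user "other"; smbclient prompts for the SMB password
smbclient //10.78.5.1/share -U other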


Installing CUDA on Linux

Check the graphics card model:

# lspci |grep -i vga
04:00.0 VGA compatible controller: ASPEED Technology, Inc. ASPEED Graphics Family (rev 41)
19:00.0 VGA compatible controller: NVIDIA Corporation Device 1e02 (rev a1)
65:00.0 VGA compatible controller: NVIDIA Corporation Device 1e02 (rev a1)

Look up device ID 1e02 on NVIDIA's device ID lookup page; as shown below, this machine has two TITAN RTX cards installed.

Name: TU102 [TITAN RTX]

The installed driver version is:

Driver Version: 440.33.01
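If the NVIDIA driver is already installed, both the card names and the driver version can be read directly from nvidia-smi instead of decoding PCI IDs; a quick sketch:

# print one line per GPU with its marketing name and the installed driver version
nvidia-smi --query-gpu=name,driver_version --format=csv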

The CUDA release has to match the local Linux distribution, so first check the server's Linux version:

# cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.4 LTS"

Download the matching CUDA release; the newest CUDA at the time of writing is cuda_11.0.3.

https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&target_distro=Ubuntu&target_version=1804&target_type=runfilelocal

The latest stable PyTorch release is 1.6.0, which is built against CUDA 10.2, so I installed CUDA 10.2 instead:

sudo -i
sudo chmod a+x cuda_10.2.89_440.33.01_linux.run
sudo ./cuda_10.2.89_440.33.01_linux.run

During installation, accept the defaults (yes) except for the driver component, which should be set to no because the matching GPU driver is already installed.
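For a non-interactive install, the runfile also accepts command-line flags; a sketch assuming the same runfile, installing only the toolkit and skipping the bundled driver:

# unattended install of the CUDA toolkit only (no driver, no prompts)
sudo ./cuda_10.2.89_440.33.01_linux.run --silent --toolkit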

Confirm that CUDA is installed:

/Downloads$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Wed_Oct_23_19:24:38_PDT_2019
Cuda compilation tools, release 10.2, V10.2.89
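If nvcc is not found after installation, the toolkit's bin and lib64 directories usually have to be added to the shell environment; a sketch assuming the default install prefix /usr/local/cuda-10.2 (append the lines to ~/.bashrc to make them permanent):

# make the CUDA 10.2 compiler and libraries visible to the current shell
export PATH=/usr/local/cuda-10.2/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-10.2/lib64:$LD_LIBRARY_PATH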

Setting up Anaconda 3

Download Anaconda3 and run the installer:

Anaconda3-2020.02-Linux-x86_64.sh

bash Anaconda3-2020.02-Linux-x86_64.sh -p /usr/local/anaconda3

Create a shared Anaconda install, so that other users can also use the tools installed by the root user:

su # switch to the root user (Anaconda3 was installed to /usr/local/anaconda3 above)
groupadd anaconda # create the anaconda group

sudo adduser other anaconda # add each user who needs access to the anaconda group

chgrp -R anaconda /usr/local/anaconda3 # hand the directory over to the group
chmod 777 -R /usr/local/anaconda3 # set read/write permissions

chmod g+s /usr/local/anaconda3 # new files inherit the group
chmod g+s `find /usr/local/anaconda3 -type d` # subdirectories also inherit the group
chmod g-w /usr/local/anaconda3/envs # remove write access to the shared envs directory
source /usr/local/anaconda3/bin/activate # activate the base environment as root

  • Environments created by the root user are stored in /usr/local/anaconda3/envs and can be used by every member of the anaconda group.
  • Environments created by individual users are stored in ~/.conda/envs, but all downloaded packages are shared in /usr/local/anaconda3/pkgs, so a package someone else has already installed (for example the slow-to-download PyTorch) does not have to be downloaded again.

Each user then creates their own virtual environments as needed, as shown below.
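A minimal sketch of a user creating a personal environment (the name my_env is arbitrary):

# create a private Python 3.7 environment; it ends up under ~/.conda/envs/my_env
conda create -n my_env python=3.7
conda activate my_env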

Creating a shared PyTorch environment

Install as the root user. Because installing online is slow, install from packages that were downloaded in advance:

conda install pytorch-1.5.0-py3.7_cuda10.2.89_cudnn7.6.5_0.tar.bz2

Install and update the remaining dependencies:

conda install torchvision-0.6.0-py37_cu102.tar.bz2
conda install pytorch torchvision cudatoolkit=10.2 
pip install opencv_python-4.2.0.34-cp37-cp37m-manylinux1_x86_64.whl
pip install matplotlib

Test whether the installation succeeded

// Test as the root user:

(base) root@host:~$ python
Python 3.7.6 (default, Jan  8 2020, 19:59:22)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>>

// Test as another user:

other@host:~$ python
Python 3.7.6 (default, Jan  8 2020, 19:59:22)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>>
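Importing torch only confirms that the package is on the path; to check that this build actually sees the GPUs, a quick sketch (works for any user in the anaconda group):

# should print True and the number of visible GPUs (2 for the two TITAN RTX cards)
python -c "import torch; print(torch.cuda.is_available(), torch.cuda.device_count())"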


Troubleshooting

1. qt.qpa.screen: QXcbConnection: Could not connect to display localhost:12.0
Could not connect to any X display.

The program runs on a remote server and cannot open a GUI window, so it has to run inside a virtual display.

Workaround:
https://www.jianshu.com/p/7df287155ce0

How to run: xvfb-run python3 …
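If the GUI is only needed because the script plots with matplotlib, an alternative to a virtual display is to force matplotlib's non-interactive backend and save the figures to files; a sketch (the script name is hypothetical):

# render figures without any X display; the script must write plots with savefig()
MPLBACKEND=Agg python3 your_script.py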

2. RuntimeError: Model replicas must have an equal number of parameters

The server has multiple GPUs. To run a program on a specific GPU, prefix the launch command with CUDA_VISIBLE_DEVICES=0, where 0 is the index of a GPU on the server (it can be 0, 1, 2, 3, and so on) and determines which GPUs are visible to the program.

UserWarning: Single-Process Multi-GPU is not the recommended mode for DDP. In this mode, each DDP instance operates on multiple devices and creates multiple module replicas within one process. The overhead of scatter/gather and GIL contention in every forward pass can slow down training. Please consider using one DDP instance per device or per module replica by explicitly setting device_ids or CUDA_VISIBLE_DEVICES. NB: There is a known issue in nn.parallel.replicate that prevents a single DDP instance to operate on multiple model replicas.
"Single-Process Multi-GPU is not the recommended mode for "

Temporary setting:

Linux: export CUDA_VISIBLE_DEVICES=1
Windows: set CUDA_VISIBLE_DEVICES=1

Permanent setting:

Linux:
Append export CUDA_VISIBLE_DEVICES=1 to the end of ~/.bashrc, then run source ~/.bashrc.
Windows:
Add the variable in the system environment-variable settings (My Computer → Properties).
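To confirm what a process actually sees after the variable is set, a quick sketch:

# only GPU 1 is exposed, so torch reports a single device (numbered 0 inside the process)
CUDA_VISIBLE_DEVICES=1 python -c "import torch; print(torch.cuda.device_count())"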