PyTorch: functions in torch.cuda
import torch

device_count = torch.cuda.device_count()  # number of available GPUs
current_device = torch.cuda.current_device()  # index of the currently selected device
device_name = torch.cuda.get_device_name(current_device)  # name of the device; defaults to the current device
device_capability = torch.cuda.get_device_capability(current_device)  # CUDA compute capability (major, minor) of the device; defaults to the current device
device_properties = torch.cuda.get_device_properties(current_device)  # properties of the device; defaults to the current device
# device_properties = torch.cuda.get_device_properties(0)
is_available = torch.cuda.is_available()  # whether CUDA is currently available
device_cuda = torch.device("cuda")  # GPU device object
print('device_count: {device_count}'.format(device_count=device_count))
print('current_device: {current_device}'.format(current_device=current_device))
print('device_name: {device_name}'.format(device_name=device_name))
print('device_capability: {device_capability}'.format(device_capability=device_capability))
print('device_properties: {device_properties}'.format(device_properties=device_properties))
print('is_available: {is_available}'.format(is_available=is_available))
print('device_cuda: {device_cuda}'.format(device_cuda=device_cuda))
Output:
device_count: 1
current_device: 0
device_name: GeForce RTX 2060 SUPER
device_capability: (7, 5)
device_properties: _CudaDeviceProperties(name='GeForce RTX 2060 SUPER', major=7, minor=5, total_memory=8192MB, multi_processor_count=34)
is_available: True
device_cuda: cuda
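The object returned by get_device_properties exposes the fields shown in the repr above as attributes, so individual values can be read directly; a minimal sketch:
props = torch.cuda.get_device_properties(0)
print(props.name)                      # 'GeForce RTX 2060 SUPER'
print(props.major, props.minor)        # compute capability: 7 5
print(props.total_memory / 1024 ** 2)  # total memory in MB (the attribute itself is in bytes)
print(props.multi_processor_count)     # 34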
References:
https://pytorch.org/docs/stable/cuda.html
https://www.cntofu.com/book/169/docs/1.0/cuda.md
(More to be added later…)
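As a start on that list, a few other commonly used torch.cuda helpers are sketched below; this is only an illustrative snippet, not output from the run above:
torch.cuda.set_device(0)               # select GPU 0 as the current device
print(torch.cuda.memory_allocated(0))  # bytes of GPU memory currently occupied by tensors
print(torch.cuda.memory_reserved(0))   # bytes of GPU memory reserved by the caching allocator
torch.cuda.synchronize()               # wait for all kernels on the current device to finish
torch.cuda.empty_cache()               # release cached, unoccupied memory back to the driver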
==============================================================================
Below is a short snippet that runs on the GPU:
import torch

x = torch.tensor([1, 2, 3])
print('x: {x}'.format(x=x))
if torch.cuda.is_available():
    device = torch.device("cuda")  # GPU -- torch.device('cpu') would be the CPU
    # device = torch.device('cpu')
    print('device: {device}'.format(device=device))  # device: cuda
    y = torch.ones_like(x, device=device)  # create a tensor directly on the GPU
    x = x.to(device)  # equivalent to .to("cuda")
    z = x + y
    print('z: {z}'.format(z=z))
    print(z.to("cpu", torch.double))  # to() moves a Tensor between CPU and GPU (hardware support required); it can also change the dtype at the same time
Output:
device: cuda
z: tensor([2, 3, 4], device='cuda:0')
tensor([2., 3., 4.], dtype=torch.float64)
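The same to() pattern also works for whole models (any nn.Module); a minimal sketch, with the layer and batch sizes chosen arbitrarily for illustration:
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nn.Linear(3, 1).to(device)         # moves all parameters and buffers of the module to the GPU
inputs = torch.randn(4, 3, device=device)  # inputs must live on the same device as the model
outputs = model(inputs)
print(outputs.device)                      # cuda:0 when a GPU is available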
Original article: https://blog.csdn.net/qq757056521/article/details/107585381