Introduction to Tensors and Tensor Creation
1. A tensor is a multi-dimensional array
A 0-dimensional tensor is a scalar, a 1-dimensional tensor is a vector, a 2-dimensional tensor is a matrix, and tensors with three or more dimensions generalize these.
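A quick check of the number of dimensions with dim() (a minimal sketch; the values are arbitrary):
import torch

s = torch.tensor(3.14)               # scalar: 0-dimensional
v = torch.tensor([1., 2., 3.])       # vector: 1-dimensional
m = torch.tensor([[1, 2], [3, 4]])   # matrix: 2-dimensional
print(s.dim(), v.dim(), m.dim())     # 0 1 2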
Variable is a data type in torch.autograd used mainly to wrap a Tensor for automatic differentiation;
data: the wrapped Tensor;
grad: the gradient of data;
grad_fn: the function that created the Tensor; it records which operation produced the data and is the key to automatic differentiation;
requires_grad: indicates whether a gradient is required;
is_leaf: indicates whether the tensor is a leaf node in the computation graph;
Since PyTorch 0.4.0, Variable has been merged into Tensor
dtype: the data type of the tensor, e.g. torch.FloatTensor or torch.cuda.FloatTensor; torch.float32 is the most commonly used, and image labels are usually torch.int64
shape: the shape of the tensor, e.g. (64, 3, 224, 224), corresponding to batch size, channels, height and width
device: the device the tensor lives on, CPU or GPU (see the small example after this list)
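A minimal sketch printing these attributes on a CPU tensor (the comments show the expected values):
import torch

x = torch.tensor([1., 2., 3.], requires_grad=True)   # a leaf tensor created by the user
y = (x * 2).sum()
y.backward()

print(x.data)                       # the wrapped data: tensor([1., 2., 3.])
print(x.grad)                       # gradient of y w.r.t. x: tensor([2., 2., 2.])
print(x.grad_fn, y.grad_fn)         # None for the leaf; a SumBackward node for y
print(x.requires_grad, x.is_leaf)   # True True
print(x.dtype, x.shape, x.device)   # torch.float32 torch.Size([3]) cpu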
2. Creating tensors
2.1 Creating a tensor directly
2.2 Creating tensors from numeric values
2.3 Creating tensors from probability distributions
2.1 Creating a tensor directly
# 2.1 Create a tensor directly with torch.tensor
import torch
help(torch.tensor)
Help on built-in function tensor:
tensor(...)
tensor(data, dtype=None, device=None, requires_grad=False, pin_memory=False) -> Tensor
Constructs a tensor with :attr:`data`.
.. warning::
:func:`torch.tensor` always copies :attr:`data`. If you have a Tensor
``data`` and want to avoid a copy, use :func:`torch.Tensor.requires_grad_`
or :func:`torch.Tensor.detach`.
If you have a NumPy ``ndarray`` and want to avoid a copy, use
:func:`torch.as_tensor`.
.. warning::
When data is a tensor `x`, :func:`torch.tensor` reads out 'the data' from whatever it is passed,
and constructs a leaf variable. Therefore ``torch.tensor(x)`` is equivalent to ``x.clone().detach()``
and ``torch.tensor(x, requires_grad=True)`` is equivalent to ``x.clone().detach().requires_grad_(True)``.
The equivalents using ``clone()`` and ``detach()`` are recommended.
Args:
data (array_like): Initial data for the tensor. Can be a list, tuple,
NumPy ``ndarray``, scalar, and other types.
dtype (:class:`torch.dtype`, optional): the desired data type of returned tensor.
Default: if ``None``, infers data type from :attr:`data`.
device (:class:`torch.device`, optional): the desired device of returned tensor.
Default: if ``None``, uses the current device for the default tensor type
(see :func:`torch.set_default_tensor_type`). :attr:`device` will be the CPU
for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional): If autograd should record operations on the
returned tensor. Default: ``False``.
pin_memory (bool, optional): If set, returned tensor would be allocated in
the pinned memory. Works only for CPU tensors. Default: ``False``.
Example::
>>> torch.tensor([[0.1, 1.2], [2.2, 3.1], [4.9, 5.2]])
tensor([[ 0.1000, 1.2000],
[ 2.2000, 3.1000],
[ 4.9000, 5.2000]])
>>> torch.tensor([0, 1]) # Type inference on data
tensor([ 0, 1])
>>> torch.tensor([[0.11111, 0.222222, 0.3333333]],
dtype=torch.float64,
device=torch.device('cuda:0')) # creates a torch.cuda.DoubleTensor
tensor([[ 0.1111, 0.2222, 0.3333]], dtype=torch.float64, device='cuda:0')
>>> torch.tensor(3.14159) # Create a scalar (zero-dimensional tensor)
tensor(3.1416)
>>> torch.tensor([]) # Create an empty tensor (of size (0,))
tensor([])
a = torch.tensor([[0,1,2],[2,3,4]])
print(a)
b= torch.tensor([0,1],dtype=torch.float64,device=torch.device('cuda:0'))
print(b)
tensor([[0, 1, 2],
[2, 3, 4]])
tensor([0., 1.], device='cuda:0', dtype=torch.float64)
# Create a tensor from a NumPy ndarray. Why not just create a tensor directly? (from_numpy shares memory with the ndarray, see below)
help(torch.from_numpy)
Help on built-in function from_numpy:
from_numpy(...)
from_numpy(ndarray) -> Tensor
Creates a :class:`Tensor` from a :class:`numpy.ndarray`.
The returned tensor and :attr:`ndarray` share the same memory. Modifications to
the tensor will be reflected in the :attr:`ndarray` and vice versa. The returned
tensor is not resizable.
It currently accepts :attr:`ndarray` with dtypes of ``numpy.float64``,
``numpy.float32``, ``numpy.float16``, ``numpy.int64``, ``numpy.int32``,
``numpy.int16``, ``numpy.int8``, ``numpy.uint8``, and ``numpy.bool``.
Example::
>>> a = numpy.array([1, 2, 3])
>>> t = torch.from_numpy(a)
>>> t
tensor([ 1, 2, 3])
>>> t[0] = -1
>>> a
array([-1, 2, 3])
import numpy
# a and t share the same underlying memory: changing one changes the other
a = numpy.array([3,6,8],dtype=numpy.float64)
t = torch.from_numpy(a)
print(a)
print(t)
# id() compares the Python wrapper objects, not the shared buffer, so the ids differ
print(id(a))
print(id(t))
id(a) == id(t)
# Modify the ndarray and check whether the tensor changes: it does!
a[1] = 0
print(a)
print(t)
[3. 6. 8.]
tensor([3., 6., 8.], dtype=torch.float64)
1587656816080
1587068834584
[3. 0. 8.]
tensor([3., 0., 8.], dtype=torch.float64)
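Because id() only compares the Python wrapper objects, the two ids above differ even though the memory is shared. A more direct check is to compare the underlying buffer addresses (a minimal sketch):
import numpy as np
import torch

a = np.array([3., 6., 8.])
t = torch.from_numpy(a)
# same underlying buffer even though the Python objects (and their ids) differ
print(a.__array_interface__['data'][0] == t.data_ptr())   # True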
2.2 Creating tensors from numeric values
# 2.2 Create tensors from numeric values
help(torch.zeros)
Help on built-in function zeros:
zeros(...)
zeros(*size, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor
Returns a tensor filled with the scalar value `0`, with the shape defined
by the variable argument :attr:`size`.
Args:
size (int...): a sequence of integers defining the shape of the output tensor.
Can be a variable number of arguments or a collection like a list or tuple.
out (Tensor, optional): the output tensor
dtype (:class:`torch.dtype`, optional): the desired data type of returned tensor.
Default: if ``None``, uses a global default (see :func:`torch.set_default_tensor_type`).
layout (:class:`torch.layout`, optional): the desired layout of returned Tensor.
Default: ``torch.strided``.
device (:class:`torch.device`, optional): the desired device of returned tensor.
Default: if ``None``, uses the current device for the default tensor type
(see :func:`torch.set_default_tensor_type`). :attr:`device` will be the CPU
for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional): If autograd should record operations on the
returned tensor. Default: ``False``.
Example::
>>> torch.zeros(2, 3)
tensor([[ 0., 0., 0.],
[ 0., 0., 0.]])
>>> torch.zeros(5)
tensor([ 0., 0., 0., 0., 0.])
a = torch.zeros(2,3,dtype=torch.float64)
print(a)
b = torch.tensor([1])
print(b)
# Write the generated zeros into b via the out= argument
t = torch.zeros((3,3),out=b)
print(b)
print(t)
tensor([[0., 0., 0.],
[0., 0., 0.]], dtype=torch.float64)
tensor([1])
tensor([[0, 0, 0],
[0, 0, 0],
[0, 0, 0]])
tensor([[0, 0, 0],
[0, 0, 0],
[0, 0, 0]])
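The out= argument does more than copy values: the tensor returned by torch.zeros is written into b and returned, which is why b and t print the same 3x3 zeros. A quick check, assuming the cell above was run:
print(id(t) == id(b), t is b)   # True True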
help(torch.zeros_like)
Help on built-in function zeros_like:
zeros_like(...)
zeros_like(input, dtype=None, layout=None, device=None, requires_grad=False) -> Tensor
Returns a tensor filled with the scalar value `0`, with the same size as
:attr:`input`. ``torch.zeros_like(input)`` is equivalent to
``torch.zeros(input.size(), dtype=input.dtype, layout=input.layout, device=input.device)``.
.. warning::
As of 0.4, this function does not support an :attr:`out` keyword. As an alternative,
the old ``torch.zeros_like(input, out=output)`` is equivalent to
``torch.zeros(input.size(), out=output)``.
Args:
input (Tensor): the size of :attr:`input` will determine size of the output tensor
dtype (:class:`torch.dtype`, optional): the desired data type of returned Tensor.
Default: if ``None``, defaults to the dtype of :attr:`input`.
layout (:class:`torch.layout`, optional): the desired layout of returned tensor.
Default: if ``None``, defaults to the layout of :attr:`input`.
device (:class:`torch.device`, optional): the desired device of returned tensor.
Default: if ``None``, defaults to the device of :attr:`input`.
requires_grad (bool, optional): If autograd should record operations on the
returned tensor. Default: ``False``.
Example::
>>> input = torch.empty(2, 3)
>>> torch.zeros_like(input)
tensor([[ 0., 0., 0.],
[ 0., 0., 0.]])
a = torch.empty(3,4)
print(a)
b = torch.zeros_like(a)
print(b)
tensor([[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.]])
tensor([[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.]])
help(torch.full)
Help on built-in function full:
full(...)
full(size, fill_value, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor
Returns a tensor of size :attr:`size` filled with :attr:`fill_value`.
Args:
size (int...): a list, tuple, or :class:`torch.Size` of integers defining the
shape of the output tensor.
fill_value: the number to fill the output tensor with.
out (Tensor, optional): the output tensor
dtype (:class:`torch.dtype`, optional): the desired data type of returned tensor.
Default: if ``None``, uses a global default (see :func:`torch.set_default_tensor_type`).
layout (:class:`torch.layout`, optional): the desired layout of returned Tensor.
Default: ``torch.strided``.
device (:class:`torch.device`, optional): the desired device of returned tensor.
Default: if ``None``, uses the current device for the default tensor type
(see :func:`torch.set_default_tensor_type`). :attr:`device` will be the CPU
for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional): If autograd should record operations on the
returned tensor. Default: ``False``.
Example::
>>> torch.full((2, 3), 3.141592)
tensor([[ 3.1416, 3.1416, 3.1416],
[ 3.1416, 3.1416, 3.1416]])
# Fill a 3x4 matrix with the value 2.4
a = torch.full((3,4),2.4)
print(a)
tensor([[2.4000, 2.4000, 2.4000, 2.4000],
[2.4000, 2.4000, 2.4000, 2.4000],
[2.4000, 2.4000, 2.4000, 2.4000]])
help(torch.full_like)
Help on built-in function full_like:
full_like(...)
full_like(input, fill_value, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor
Returns a tensor with the same size as :attr:`input` filled with :attr:`fill_value`.
``torch.full_like(input, fill_value)`` is equivalent to
``torch.full(input.size(), fill_value, dtype=input.dtype, layout=input.layout, device=input.device)``.
Args:
input (Tensor): the size of :attr:`input` will determine size of the output tensor
fill_value: the number to fill the output tensor with.
dtype (:class:`torch.dtype`, optional): the desired data type of returned Tensor.
Default: if ``None``, defaults to the dtype of :attr:`input`.
layout (:class:`torch.layout`, optional): the desired layout of returned tensor.
Default: if ``None``, defaults to the layout of :attr:`input`.
device (:class:`torch.device`, optional): the desired device of returned tensor.
Default: if ``None``, defaults to the device of :attr:`input`.
requires_grad (bool, optional): If autograd should record operations on the
returned tensor. Default: ``False``.
a = torch.zeros(2,3)
b = torch.full_like(a,2)
print(b)
tensor([[2., 2., 2.],
[2., 2., 2.]])
help(torch.arange)
Help on built-in function arange:
arange(...)
arange(start=0, end, step=1, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor
Returns a 1-D tensor of size :math:`\left\lceil \frac{\text{end} - \text{start}}{\text{step}} \right\rceil`
with values from the interval ``[start, end)`` taken with common difference
:attr:`step` beginning from `start`.
Note that non-integer :attr:`step` is subject to floating point rounding errors when
comparing against :attr:`end`; to avoid inconsistency, we advise adding a small epsilon to :attr:`end`
in such cases.
.. math::
\text{out}_{{i+1}} = \text{out}_{i} + \text{step}
Args:
start (Number): the starting value for the set of points. Default: ``0``.
end (Number): the ending value for the set of points
step (Number): the gap between each pair of adjacent points. Default: ``1``.
out (Tensor, optional): the output tensor
dtype (:class:`torch.dtype`, optional): the desired data type of returned tensor.
Default: if ``None``, uses a global default (see :func:`torch.set_default_tensor_type`). If `dtype` is not given, infer the data type from the other input
arguments. If any of `start`, `end`, or `stop` are floating-point, the
`dtype` is inferred to be the default dtype, see
:meth:`~torch.get_default_dtype`. Otherwise, the `dtype` is inferred to
be `torch.int64`.
layout (:class:`torch.layout`, optional): the desired layout of returned Tensor.
Default: ``torch.strided``.
device (:class:`torch.device`, optional): the desired device of returned tensor.
Default: if ``None``, uses the current device for the default tensor type
(see :func:`torch.set_default_tensor_type`). :attr:`device` will be the CPU
for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional): If autograd should record operations on the
returned tensor. Default: ``False``.
Example::
>>> torch.arange(5)
tensor([ 0, 1, 2, 3, 4])
>>> torch.arange(1, 4)
tensor([ 1, 2, 3])
>>> torch.arange(1, 2.5, 0.5)
tensor([ 1.0000, 1.5000, 2.0000])
# Create a 1-D arithmetic-sequence tensor
# the interval is half-open: [start, end)
a = torch.arange(4)
print(a)
b = torch.arange(1,5)
print(b)
# with an explicit step: values are start, start+step, ... while they stay below end
c = torch.arange(1,7,2)
print(c)
d = torch.arange(2,8,3)
print(d)
tensor([0, 1, 2, 3])
tensor([1, 2, 3, 4])
tensor([1, 3, 5])
tensor([2, 5])
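The length of an arange result follows the formula from the help text, ceil((end - start) / step); a quick sanity check for the last example:
import math
print(len(torch.arange(2, 8, 3)), math.ceil((8 - 2) / 3))   # 2 2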
help(torch.linspace)
Help on built-in function linspace:
linspace(...)
linspace(start, end, steps=100, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor
Returns a one-dimensional tensor of :attr:`steps`
equally spaced points between :attr:`start` and :attr:`end`.
The output tensor is 1-D of size :attr:`steps`.
Args:
start (float): the starting value for the set of points
end (float): the ending value for the set of points
steps (int): number of points to sample between :attr:`start`
and :attr:`end`. Default: ``100``.
out (Tensor, optional): the output tensor
dtype (:class:`torch.dtype`, optional): the desired data type of returned tensor.
Default: if ``None``, uses a global default (see :func:`torch.set_default_tensor_type`).
layout (:class:`torch.layout`, optional): the desired layout of returned Tensor.
Default: ``torch.strided``.
device (:class:`torch.device`, optional): the desired device of returned tensor.
Default: if ``None``, uses the current device for the default tensor type
(see :func:`torch.set_default_tensor_type`). :attr:`device` will be the CPU
for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional): If autograd should record operations on the
returned tensor. Default: ``False``.
Example::
>>> torch.linspace(3, 10, steps=5)
tensor([ 3.0000, 4.7500, 6.5000, 8.2500, 10.0000])
>>> torch.linspace(-10, 10, steps=5)
tensor([-10., -5., 0., 5., 10.])
>>> torch.linspace(start=-10, end=10, steps=5)
tensor([-10., -5., 0., 5., 10.])
>>> torch.linspace(start=-10, end=10, steps=1)
tensor([-10.])
# Create an evenly spaced 1-D tensor
# steps is the number of points; the interval [start, end] is closed
a = torch.linspace(-10,10,1)
print(a)
# step = (end - start) / (steps - 1) = (6 - 2) / (2 - 1) = 4
b = torch.linspace(2,6,2)
print(b)
c = torch.linspace(1,7,3)
print(c)
tensor([-10.])
tensor([2., 6.])
tensor([1., 4., 7.])
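The spacing between neighbouring points is (end - start) / (steps - 1), which the last example confirms:
c = torch.linspace(1, 7, 3)
print(c[1] - c[0])   # tensor(3.), i.e. (7 - 1) / (3 - 1)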
# Create a logarithmically spaced 1-D tensor; the base defaults to 10
a = torch.logspace(1,4,2)
print(a)
# Specify a different base
b = torch.logspace(1,5,2,base=2)
print(b)
tensor([ 10., 10000.])
tensor([ 2., 32.])
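torch.logspace(start, end, steps, base) is simply base raised to torch.linspace(start, end, steps), up to floating-point rounding:
print(torch.logspace(1, 4, 2))         # tensor([   10., 10000.])
print(10 ** torch.linspace(1, 4, 2))   # the same values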
# torch.eye: ones on the main diagonal (square if only one size is given)
a = torch.eye(2,3)
print(a)
tensor([[1., 0., 0.],
[0., 1., 0.]])
# Sample from a normal (Gaussian) distribution with torch.normal
# returns a tensor whose elements are drawn from separate normal distributions
# the arange below uses the half-open interval [1, 5), so mean and std are both [1, 2, 3, 4]
# Case 1: mean and std are both tensors (element-wise parameters)
mean = torch.arange(1,5,dtype=torch.float)
std = torch.arange(1,5,dtype=torch.float)
t_normal = torch.normal(mean, std)
print("mean:{}\n std:{}".format(mean,std))
print(t_normal)
# Case 2: mean and std are both scalars (an explicit size= is required)
r_normal = torch.normal(0.,1.,size=(4,))
print(r_normal)
# Case 3: one of them is a tensor and the other a scalar
b_normal = torch.normal(mean,1)
print(b_normal)
c_normal = torch.normal(0,std)
print(c_normal)
mean:tensor([1., 2., 3., 4.])
std:tensor([1., 2., 3., 4.])
tensor([0.8883, 1.5417, 5.5050, 1.3753])
tensor([ 1.2955, -1.0583, 0.6835, -1.5387])
tensor([0.0636, 1.5368, 3.5891, 1.3137])
tensor([ 0.7599, 1.5384, -3.1771, 1.3020])
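The shape of the result follows the tensor arguments (or the explicit size= when both are scalars); seeding makes the draws reproducible. A minimal sketch checking only the shapes:
torch.manual_seed(0)
m = torch.zeros(2, 3)
s = torch.ones(2, 3)
print(torch.normal(m, s).shape)                  # torch.Size([2, 3]): one draw per (mean, std) pair
print(torch.normal(0., 1., size=(2, 3)).shape)   # torch.Size([2, 3])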
2.3 Creating tensors from probability distributions
# 2.3 Create tensors from probability distributions
# randn: samples from the standard normal distribution (mean 0, variance 1)
a = torch.randn(1)
print(a)
b = torch.randn(2,4)
print(b)
# c has the same shape as b
c = torch.randn_like(b)
print(c)
tensor([0.8118])
tensor([[ 0.0483, -0.2114, -1.1071, -0.2426],
[-0.6061, 0.7965, 2.1534, -0.2683]])
tensor([[ 0.2241, -0.1371, -1.0448, -0.2493],
[ 0.6166, -0.4598, -0.4811, 0.6507]])
help(torch.randn)
Help on built-in function randn:
randn(...)
randn(*size, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor
Returns a tensor filled with random numbers from a normal distribution
with mean `0` and variance `1` (also called the standard normal
distribution).
.. math::
\text{out}_{i} \sim \mathcal{N}(0, 1)
The shape of the tensor is defined by the variable argument :attr:`size`.
Args:
size (int...): a sequence of integers defining the shape of the output tensor.
Can be a variable number of arguments or a collection like a list or tuple.
out (Tensor, optional): the output tensor
dtype (:class:`torch.dtype`, optional): the desired data type of returned tensor.
Default: if ``None``, uses a global default (see :func:`torch.set_default_tensor_type`).
layout (:class:`torch.layout`, optional): the desired layout of returned Tensor.
Default: ``torch.strided``.
device (:class:`torch.device`, optional): the desired device of returned tensor.
Default: if ``None``, uses the current device for the default tensor type
(see :func:`torch.set_default_tensor_type`). :attr:`device` will be the CPU
for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional): If autograd should record operations on the
returned tensor. Default: ``False``.
Example::
>>> torch.randn(4)
tensor([-2.1436, 0.9966, 2.3426, -0.6366])
>>> torch.randn(2, 3)
tensor([[ 1.5954, 2.8929, -1.0923],
[ 1.1719, -0.4709, -0.1996]])
# rand: uniform distribution on the interval [0, 1)
a = torch.rand(2)
print(a)
b = torch.rand_like(a)
print(b)
tensor([0.6405, 0.3735])
tensor([0.6588, 0.3158])
# randint: integers drawn uniformly from [low, high)
a = torch.randint(2,3,size=(4,))
print(a)
tensor([2, 2, 2, 2])
help(torch.randint_like)
Help on built-in function randint_like:
randint_like(...)
randint_like(input, low=0, high, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor
Returns a tensor with the same shape as Tensor :attr:`input` filled with
random integers generated uniformly between :attr:`low` (inclusive) and
:attr:`high` (exclusive).
.. note:
With the global dtype default (``torch.float32``), this function returns
a tensor with dtype ``torch.int64``.
Args:
input (Tensor): the size of :attr:`input` will determine size of the output tensor
low (int, optional): Lowest integer to be drawn from the distribution. Default: 0.
high (int): One above the highest integer to be drawn from the distribution.
dtype (:class:`torch.dtype`, optional): the desired data type of returned Tensor.
Default: if ``None``, defaults to the dtype of :attr:`input`.
layout (:class:`torch.layout`, optional): the desired layout of returned tensor.
Default: if ``None``, defaults to the layout of :attr:`input`.
device (:class:`torch.device`, optional): the desired device of returned tensor.
Default: if ``None``, defaults to the device of :attr:`input`.
requires_grad (bool, optional): If autograd should record operations on the
returned tensor. Default: ``False``.
# randperm: a random permutation of the integers 0 to n-1; n is the length of the result
a = torch.randperm(3)
print(a)
tensor([1, 0, 2])
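A common use of randperm is shuffling samples by indexing with the permutation (a minimal sketch with toy data):
data = torch.arange(12).reshape(4, 3)   # 4 samples with 3 features each
idx = torch.randperm(data.size(0))      # a random ordering of the 4 rows
shuffled = data[idx]                    # rows reordered, contents unchanged
print(shuffled.shape)                   # torch.Size([4, 3])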
# bernoulli: draws from a Bernoulli distribution (0-1 / two-point distribution) with success probabilities given by input
# each element of the result is either 0 or 1
# input must be a tensor of probabilities
a = torch.tensor(0.3)
print(a)
b = torch.bernoulli(a)
print(b)
tensor(0.3000)
tensor(1.)
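With a tensor of probabilities, bernoulli produces an element-wise 0/1 sample, for example a dropout-style keep mask (a minimal sketch):
keep_prob = torch.full((2, 4), 0.8)   # keep each element with probability 0.8
mask = torch.bernoulli(keep_prob)     # 0.s and 1.s with the same shape as keep_prob
print(mask.shape)                     # torch.Size([2, 4])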
Original article: https://blog.csdn.net/weixin_43687366/article/details/107440275