
【Python Machine Learning Handbook】Chapter 2: Loading Data

程序员文章站 2022-06-15 17:22:14
#2.1 Loading scikit-learn's sample datasets
from sklearn import datasets
digits=datasets.load_digits()#name this dataset digits
digits
{'data': array([[ 0.,  0.,  5., ...,  0.,  0.,  0.],
        [ 0.,  0.,  0., ..., 10.,  0.,  0.],
        [ 0.,  0.,  0., ..., 16.,  9.,  0.],
        ...,
        [ 0.,  0.,  1., ...,  6.,  0.,  0.],
        [ 0.,  0.,  2., ..., 12.,  0.,  0.],
        [ 0.,  0., 10., ..., 12.,  1.,  0.]]),
 'target': array([0, 1, 2, ..., 8, 9, 8]),
 'target_names': array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]),
 'images': array([[[ 0.,  0.,  5., ...,  1.,  0.,  0.],
         [ 0.,  0., 13., ..., 15.,  5.,  0.],
         [ 0.,  3., 15., ..., 11.,  8.,  0.],
         ...,
         [ 0.,  4., 11., ..., 12.,  7.,  0.],
         [ 0.,  2., 14., ..., 12.,  0.,  0.],
         [ 0.,  0.,  6., ...,  0.,  0.,  0.]],
 
        [[ 0.,  0.,  0., ...,  5.,  0.,  0.],
         [ 0.,  0.,  0., ...,  9.,  0.,  0.],
         [ 0.,  0.,  3., ...,  6.,  0.,  0.],
         ...,
         [ 0.,  0.,  1., ...,  6.,  0.,  0.],
         [ 0.,  0.,  1., ...,  6.,  0.,  0.],
         [ 0.,  0.,  0., ..., 10.,  0.,  0.]],
 
        [[ 0.,  0.,  0., ..., 12.,  0.,  0.],
         [ 0.,  0.,  3., ..., 14.,  0.,  0.],
         [ 0.,  0.,  8., ..., 16.,  0.,  0.],
         ...,
         [ 0.,  9., 16., ...,  0.,  0.,  0.],
         [ 0.,  3., 13., ..., 11.,  5.,  0.],
         [ 0.,  0.,  0., ..., 16.,  9.,  0.]],
 
        ...,
 
        [[ 0.,  0.,  1., ...,  1.,  0.,  0.],
         [ 0.,  0., 13., ...,  2.,  1.,  0.],
         [ 0.,  0., 16., ..., 16.,  5.,  0.],
         ...,
         [ 0.,  0., 16., ..., 15.,  0.,  0.],
         [ 0.,  0., 15., ..., 16.,  0.,  0.],
         [ 0.,  0.,  2., ...,  6.,  0.,  0.]],
 
        [[ 0.,  0.,  2., ...,  0.,  0.,  0.],
         [ 0.,  0., 14., ..., 15.,  1.,  0.],
         [ 0.,  4., 16., ..., 16.,  7.,  0.],
         ...,
         [ 0.,  0.,  0., ..., 16.,  2.,  0.],
         [ 0.,  0.,  4., ..., 16.,  2.,  0.],
         [ 0.,  0.,  5., ..., 12.,  0.,  0.]],
 
        [[ 0.,  0., 10., ...,  1.,  0.,  0.],
         [ 0.,  2., 16., ...,  1.,  0.,  0.],
         [ 0.,  0., 15., ..., 15.,  0.,  0.],
         ...,
         [ 0.,  4., 16., ..., 16.,  6.,  0.],
         [ 0.,  8., 16., ..., 16.,  8.,  0.],
         [ 0.,  1.,  8., ..., 12.,  1.,  0.]]]),
 'DESCR': ".. _digits_dataset:\n\nOptical recognition of handwritten digits dataset\n--------------------------------------------------\n\n**Data Set Characteristics:**\n\n    :Number of Instances: 5620\n    :Number of Attributes: 64\n    :Attribute Information: 8x8 image of integer pixels in the range 0..16.\n    :Missing Attribute Values: None\n    :Creator: E. Alpaydin (alpaydin '@' boun.edu.tr)\n    :Date: July; 1998\n\nThis is a copy of the test set of the UCI ML hand-written digits datasets\nhttps://archive.ics.uci.edu/ml/datasets/Optical+Recognition+of+Handwritten+Digits\n\nThe data set contains images of hand-written digits: 10 classes where\neach class refers to a digit.\n\nPreprocessing programs made available by NIST were used to extract\nnormalized bitmaps of handwritten digits from a preprinted form. From a\ntotal of 43 people, 30 contributed to the training set and different 13\nto the test set. 32x32 bitmaps are divided into nonoverlapping blocks of\n4x4 and the number of on pixels are counted in each block. This generates\nan input matrix of 8x8 where each element is an integer in the range\n0..16. This reduces dimensionality and gives invariance to small\ndistortions.\n\nFor info on NIST preprocessing routines, see M. D. Garris, J. L. Blue, G.\nT. Candela, D. L. Dimmick, J. Geist, P. J. Grother, S. A. Janet, and C.\nL. Wilson, NIST Form-Based Handprint Recognition System, NISTIR 5469,\n1994.\n\n.. topic:: References\n\n  - C. Kaynak (1995) Methods of Combining Multiple Classifiers and Their\n    Applications to Handwritten Digit Recognition, MSc Thesis, Institute of\n    Graduate Studies in Science and Engineering, Bogazici University.\n  - E. Alpaydin, C. Kaynak (1998) Cascading Classifiers, Kybernetika.\n  - Ken Tang and Ponnuthurai N. Suganthan and Xi Yao and A. Kai Qin.\n    Linear dimensionalityreduction using relevance weighted LDA. School of\n    Electrical and Electronic Engineering Nanyang Technological University.\n    2005.\n  - Claudio Gentile. A New Approximate Maximal Margin Classification\n    Algorithm. NIPS. 2000."}
features=digits.data#name the feature matrix features
target=digits.target#name the target vector target
features[0]#show the first sample
array([ 0.,  0.,  5., 13.,  9.,  1.,  0.,  0.,  0.,  0., 13., 15., 10.,
       15.,  5.,  0.,  0.,  3., 15.,  2.,  0., 11.,  8.,  0.,  0.,  4.,
       12.,  0.,  0.,  8.,  8.,  0.,  0.,  5.,  8.,  0.,  0.,  9.,  8.,
        0.,  0.,  4., 11.,  0.,  1., 12.,  7.,  0.,  0.,  2., 14.,  5.,
       10., 12.,  0.,  0.,  0.,  0.,  6., 13., 10.,  0.,  0.,  0.])
#2.2 Creating simulated data
#for regression
from sklearn.datasets import make_regression
features,target,coefficients=make_regression(n_samples=100,#100 samples in total
                                           n_features=3,#3 features per sample
                                           n_informative=3,#number of features used to generate the target vector; ideally all of them are informative
                                           n_targets=1,
                                           noise=0,
                                           coef=True,#whether to return the coefficients; True must be capitalized
                                           random_state=1)#random seed, so the same pseudo-random numbers are generated on every run
print("Features Matrix\n",features[:3])#"Features Matrix" is just a label string; this prints the first 3 samples
#Note: the "feature matrix" here is simply the samples' feature vectors stacked as rows.
#Despite the shared word 特征 in Chinese, it is unrelated to the eigenvalues and eigenvectors
#of a matrix (the vectors a matrix only stretches, and by how much), which are what we
#study when we want to see where a matrix has its greatest effect.
Features Matrix
 [[ 1.29322588 -0.61736206 -0.11044703]
 [-2.793085    0.36633201  1.93752881]
 [ 0.80186103 -0.18656977  0.0465673 ]]
print("你好\n","我很好谢谢","你呢\n","我也是")#compare with the output above: \n inside a string is a line break
你好
 我很好谢谢 你呢
 我也是
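One way to sanity-check the simulated regression data (a sketch; LinearRegression is my addition, not part of the original): with noise=0 the target is an exact linear function of the features, so an ordinary least-squares fit recovers the generating coefficients:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression

features, target, coefficients = make_regression(
    n_samples=100, n_features=3, n_informative=3,
    n_targets=1, noise=0, coef=True, random_state=1)

# With noise=0 there is nothing to estimate away, so the fitted
# weights should match the coefficients make_regression used
model = LinearRegression().fit(features, target)
print(np.allclose(model.coef_, coefficients))
```

Adding noise>0 would make the recovered weights only approximate, which is a handy way to see what the noise parameter does.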
#for classification
from sklearn.datasets import make_classification
features,target=make_classification(n_samples=100,#100 samples
                                  n_features=3,#3 features per sample
                                  n_informative=3,#number of features used to generate the target vector
                                  n_redundant=0,#redundant features: random linear combinations of the informative ones
                                  n_classes=2,#number of classes
                                  weights=[0.25,0.75],#generates imbalanced data: about 25% in class 0 and 75% in class 1
                                  random_state=1)#seed
target#the target vector: the class assigned to each sample
array([1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0,
       1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
       1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1,
       1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0,
       1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0])
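To confirm that weights=[0.25,0.75] really produced the intended imbalance, counting the labels with np.bincount (my addition) is a quick check:

```python
import numpy as np
from sklearn.datasets import make_classification

features, target = make_classification(
    n_samples=100, n_features=3, n_informative=3,
    n_redundant=0, n_classes=2, weights=[0.25, 0.75],
    random_state=1)

# Class 1 should be roughly three times as common as class 0
print(np.bincount(target))
```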
#for clustering
from sklearn.datasets import make_blobs #generates data around cluster centers; the function must be imported from the module
features,target=make_blobs(n_samples=100,#100 samples
                                  n_features=3,#3 features per sample
                                 centers=3,#number of clusters to generate
                                  cluster_std=0.5,#standard deviation of each cluster
                                  shuffle=True,#whether to shuffle the samples
                                  random_state=1)#seed
import matplotlib.pyplot as plt
plt.scatter(features[:,0],features[:,1],c=target)#scatter plot of the three clusters; c sets the color, e.g. c="tomato" gives tomato red
plt.show()
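Besides plotting, you can check how make_blobs distributed the samples; counting the labels with np.bincount (my addition) shows a near-even split across the 3 clusters:

```python
import numpy as np
from sklearn.datasets import make_blobs

features, target = make_blobs(n_samples=100, n_features=3,
                              centers=3, cluster_std=0.5,
                              shuffle=True, random_state=1)

# make_blobs divides n_samples almost evenly among the centers
print(np.bincount(target))
```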

#2.3 #2.4 Loading data files with pandas
import pandas as pd
url='D:\\研究生文件\\数据挖掘作业\\cpi.xls'#use double backslashes: a single backslash starts an escape sequence in Python
dataframe=pd.read_excel(url)
dataframe
指标 居民消费价格指数(上月=100)_当期
0 地区 全国
1 频度
2 单位 -
3 2001-01 100.9
4 2001-02 100.1
... ... ...
234 2020-04 99.14
235 2020-05 99.21
236 2020-06 99.93
237 2020-07 100.62
238 2020-08 100.4

239 rows × 2 columns

dataframe2=pd.read_excel(url,header=2)#the header counts as a row too; use the sheet_name parameter to select a worksheet
dataframe2
频度
0 单位 -
1 2001-01 100.9
2 2001-02 100.1
3 2001-03 99.4
4 2001-04 100.2
... ... ...
232 2020-04 99.14
233 2020-05 99.21
234 2020-06 99.93
235 2020-07 100.62
236 2020-08 100.4

237 rows × 2 columns
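The header= behavior can be demonstrated without the local cpi.xls file; the sketch below uses pd.read_csv on an inline CSV with hypothetical values mimicking the sheet's layout:

```python
import io
import pandas as pd

# A tiny inline CSV standing in for the local cpi.xls file (hypothetical values)
csv_text = (
    "指标,居民消费价格指数\n"
    "地区,全国\n"
    "频度,月\n"
    "2001-01,100.9\n"
    "2001-02,100.1\n"
)

# header=2 tells pandas the third line holds the column names,
# so the metadata rows above it are dropped from the data
dataframe2 = pd.read_csv(io.StringIO(csv_text), header=2)
print(dataframe2.columns.tolist())
print(dataframe2.shape)
```

The same header= argument works for read_excel, which is why header=2 above shifted the CPI table's column names down.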

features[:,1]#show the second column
array([-6.9192931 , -7.45371354,  4.54555429, -6.8957298 , -3.09039485,
       -3.14663284,  3.90174996,  4.09565644, -6.57077396, -2.22009312,
       -6.90950507, -6.73806205,  4.497097  , -3.94215434, -3.51470417,
       -6.80321521, -7.67112707, -3.38198702, -3.21254297,  4.88689987,
        5.42750424, -7.98387   , -6.99449559,  4.91167761,  4.37524407,
       -3.68929729, -4.12886905, -5.87227928, -2.85749944, -2.61177727,
       -2.72080053, -7.76716249, -7.09574322, -2.92789422,  4.97746113,
       -3.84324331, -6.36363885, -7.97241357, -7.23104461, -6.6564586 ,
        4.84319899, -3.6032657 ,  4.5076094 , -3.42805091, -2.53333406,
        4.032513  , -6.94846603, -2.90084324,  4.65028697, -1.7898901 ,
       -6.79829619,  4.77147767,  4.36050322,  3.49977809,  4.63770026,
       -6.82826149, -2.96697984, -7.33669343,  4.28113741, -2.79333408,
        3.97653657,  5.21024339,  4.49702358, -7.02787738,  4.41793286,
        4.97416254,  5.09000622,  4.0276528 , -7.1497692 , -6.96674708,
       -3.47019164, -7.12030441, -7.78013149, -2.42757008, -3.03819694,
       -3.48673198, -6.68515087,  4.10149724, -3.52544569, -2.85350566,
        4.44661516, -6.72812545,  4.51194088,  4.20138611, -7.74016104,
       -2.94976842,  4.19843408, -1.99569671, -7.15818092, -6.86924086,
       -3.2212658 , -7.68625504,  4.90601549, -7.06240196, -3.00610388,
        4.7193956 , -2.87047907,  3.98236791,  4.30053341, -3.40534366])
features[:4,:]#show the first four rows
array([[ -4.03447086,  -6.9192931 ,  -8.09919678],
       [ -4.34290202,  -7.45371354,  -7.84194671],
       [ -2.0997983 ,   4.54555429, -10.03279089],
       [ -3.91596232,  -6.8957298 ,  -8.01418979]])
digits.data.shape#the shape of the original data
(1797, 64)
digits.target#the labels of the data
array([0, 1, 2, ..., 8, 9, 8])
