
(Repost) How to convert a Sklearn Bunch dataset into a Pandas DataFrame?


Reposted from: https://vimsky.com/article/4362.html

from sklearn.datasets import load_iris
import pandas as pd

data = load_iris()
print(type(data))
# Output: <class 'sklearn.utils.Bunch'>
data1 = pd.  # Is there a Pandas method to accomplish this?

Best approach
You can build the DataFrame manually with the pd.DataFrame constructor, passing a numpy array (data) and a list of column names (columns). To get everything into a single DataFrame, concatenate the features and the target (labels) into one numpy array with np.c_[...] (note the [] indexing operator):

import numpy as np
import pandas as pd
from sklearn.datasets import load_iris

# save load_iris() sklearn dataset to iris
# if you'd like to check dataset type use: type(load_iris())
# if you'd like to view list of attributes use: dir(load_iris())
iris = load_iris()

# np.c_ concatenates arrays column-wise;
# here it joins the iris['data'] and iris['target'] arrays
# for the pandas columns argument: concat iris['feature_names'] list
# and a string list (in this case one string); you can name this anything you'd like;
# the original dataset would probably call this column ['Species']
data1 = pd.DataFrame(data= np.c_[iris['data'], iris['target']],
                     columns= iris['feature_names'] + ['target'])
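
As a side note that is not part of the original answer: scikit-learn 0.23 and later can return pandas objects directly via the as_frame parameter, which makes the numpy concatenation unnecessary. A minimal sketch:

from sklearn.datasets import load_iris

# as_frame=True makes the loader return pandas objects instead of numpy arrays
iris = load_iris(as_frame=True)
data1 = iris.frame    # features plus a 'target' column in one DataFrame
X = iris.data         # features as a DataFrame
y = iris.target       # labels as a Series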


Inspecting what load_iris() returns:

type(load_iris())
sklearn.utils.Bunch

dir(load_iris())
['DESCR', 'data', 'feature_names', 'target', 'target_names']

Part of the Bunch contents:

 'feature_names': ['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)'],
 'target': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
        1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
        1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
        2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
        2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]),
 'target_names': array(['setosa', 'versicolor', 'virginica'])
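
Since 'target' stores integer codes and 'target_names' stores the corresponding labels, a human-readable species column (like the ['Species'] column mentioned in the comments above) can be added by indexing one with the other. A small sketch, not from the original post:

import pandas as pd
from sklearn.datasets import load_iris

iris = load_iris()
df = pd.DataFrame(iris['data'], columns=iris['feature_names'])
# map the integer codes 0/1/2 to 'setosa'/'versicolor'/'virginica'
df['species'] = iris['target_names'][iris['target']]
df.head()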

Second approach
The "best approach" above is not general enough for every dataset in scikit-learn; for example, it does not work for the Boston housing dataset. Here is another, more general solution that also does not require numpy.

from sklearn import datasets
import pandas as pd

# note: load_boston was deprecated in scikit-learn 1.0 and removed in 1.2
boston_data = datasets.load_boston()
df_boston = pd.DataFrame(boston_data.data,columns=boston_data.feature_names)
df_boston['target'] = pd.Series(boston_data.target)
df_boston.head()

As a general-purpose function:

def sklearn_to_df(sklearn_dataset):
    df = pd.DataFrame(sklearn_dataset.data, columns=sklearn_dataset.feature_names)
    df['target'] = pd.Series(sklearn_dataset.target)
    return df

df_boston = sklearn_to_df(datasets.load_boston())
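
The same helper works for any loader whose Bunch exposes data, feature_names and target. For example, applied to the diabetes dataset (a usage sketch, not from the original post, reusing the imports and sklearn_to_df defined above):

df_diabetes = sklearn_to_df(datasets.load_diabetes())
print(df_diabetes.shape)    # (442, 11): 10 features plus the target column
df_diabetes.head()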

Converting a DataFrame to an array: df = df.values

Converting an array to a DataFrame:
import pandas as pd
df = pd.DataFrame(df)

df = df.values.flatten()  # add flatten() at the end when you need a single 1-D row, which is convenient for statistical analysis
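
Putting these conversions together, a minimal round-trip sketch (the values and the column names 'a'/'b' are made up for illustration):

import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(6).reshape(3, 2), columns=['a', 'b'])
arr = df.values             # DataFrame -> 2-D numpy array
df2 = pd.DataFrame(arr)     # array -> DataFrame (default integer column names)
flat = df.values.flatten()  # 1-D array, handy for quick statistics
print(flat)                 # [0 1 2 3 4 5]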
