
Defining a Schema and Generating Parquet Files in Python

程序员文章站 2022-06-28 23:17:13

Both the Java and Python implementations convert Avro to Parquet, with the schema defined in Avro. What we try here is defining a Parquet schema directly, then filling in data accordingly and generating a Parquet file.

I. Simple Field Definitions

1. Define the schema and generate the parquet file

import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

# Define the schema
schema = pa.schema([
    ('id', pa.int32()),
    ('email', pa.string())
])

# Prepare the data
ids = pa.array([1, 2], type=pa.int32())
emails = pa.array(['first@example.com', 'second@example.com'], pa.string())

# Build the parquet data
batch = pa.RecordBatch.from_arrays(
    [ids, emails],
    schema=schema
)
table = pa.Table.from_batches([batch])

# Write the parquet file plain.parquet
pq.write_table(table, 'plain.parquet')

2. Verify the parquet data file

We can use the parquet-tools utility to look at the data and schema of the plain.parquet file:

$ parquet-tools schema plain.parquet
message schema {
    optional int32 id;
    optional binary email (string);
}
$ parquet-tools cat --json plain.parquet
{"id":1,"email":"first@example.com"}
{"id":2,"email":"second@example.com"}


No problem; it matches what we expected. We can also use pyarrow code to get the schema and data:

schema = pq.read_schema('plain.parquet')
print(schema)

df = pd.read_parquet('plain.parquet')
print(df.to_json())

The output is:

id: int32
  -- field metadata --
  parquet:field_id: '1'
email: string
  -- field metadata --
  parquet:field_id: '2'
{"id":{"0":1,"1":2},"email":{"0":"first@example.com","1":"second@example.com"}}

II. Nested Field Definitions

The schema definition below adds a nested object: under address there are two child fields, email_address and post_address. The schema definition and the code to generate the parquet file are as follows:

import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

# Child fields of the nested object
address_fields = [
    ('email_address', pa.string()),
    ('post_address', pa.string()),
]

# Define the parquet schema; address nests address_fields
schema = pa.schema([
    ('id', pa.int32()),
    ('address', pa.struct(address_fields)),
])

# Prepare the data
ids = pa.array([1, 2], type=pa.int32())
addresses = pa.array(
    [('first@example.com', 'city1'), ('second@example.com', 'city2')],
    pa.struct(address_fields)
)

# Build the parquet data
batch = pa.RecordBatch.from_arrays(
    [ids, addresses],
    schema=schema
)
table = pa.Table.from_batches([batch])

# Write the parquet data to a file
pq.write_table(table, 'nested.parquet')

1. Verify the parquet data file

Again use parquet-tools to look at the nested.parquet file:

$ parquet-tools schema nested.parquet
message schema {
    optional int32 id;
    optional group address {
        optional binary email_address (string);
        optional binary post_address (string);
    }
}
$ parquet-tools cat --json nested.parquet
{"id":1,"address":{"email_address":"first@example.com","post_address":"city1"}}
{"id":2,"address":{"email_address":"second@example.com","post_address":"city2"}}


The schema shown by parquet-tools makes no mention of struct, but it does reflect the nesting relationship between address and its child fields.

Now let's see what the schema and data of nested.parquet look like when read with pyarrow code:

schema = pq.read_schema("nested.parquet")
print(schema)

df = pd.read_parquet('nested.parquet')
print(df.to_json())

Output:

id: int32
  -- field metadata --
  parquet:field_id: '1'
address: struct<email_address: string, post_address: string>
  child 0, email_address: string
    -- field metadata --
    parquet:field_id: '3'
  child 1, post_address: string
    -- field metadata --
    parquet:field_id: '4'
  -- field metadata --
  parquet:field_id: '2'
{"id":{"0":1,"1":2},"address":{"0":{"email_address":"first@example.com","post_address":"city1"},"1":{"email_address":"second@example.com","post_address":"city2"}}}

The data is of course the same. The slight difference is that in the displayed schema, address is identified as struct<email_address: string, post_address: string>, explicitly marking it as a struct type rather than only showing the nesting levels.

This concludes the article on defining a schema and generating parquet files with Python. For more on the topic, see the related articles.