Moving Big Data to the Cloud, Lesson 1: A Pitfall-Avoidance Guide to MaxCompute Authorization and External Table Operations

**I. Creating Sub-Accounts and Binding AccessKey Credentials**
If this is your first time logging in to the Shujia (数加) platform and using DataWorks with a sub-account, confirm the following:
• The enterprise alias of the parent account to which the sub-account belongs.
• The sub-account's username and password.
• The sub-account's AccessKey ID and AccessKey Secret.
• That the parent account has enabled console logon for the sub-account.
• That the parent account allows the sub-account to manage its own AccessKeys.
**1. Creating a Sub-Account**
The flow has three parts: (1) create the sub-account, (2) bind its AccessKey, and (3) assign it a role in DataWorks. To create the sub-account:
(1) Log in to the RAM console with your Alibaba Cloud account (the parent account).
(2) In the left navigation pane, under People Management, click Users.
(3) Click Create User.
(4) Enter a logon name and a display name.
(5) In the Access Mode section, select Console Password Logon.
(6) Click OK.

```
Note:
(1) Click Add User to create multiple RAM users at once.
(2) After the RAM users are created, be sure to save the usernames and passwords and send them to the sub-account users.
```


**2. Creating an AccessKey for the RAM Sub-Account**
The AccessKey is essential for the tasks a developer creates in DataWorks to run. Unlike the account and password used at logon, it is used for mutual authentication between Alibaba Cloud services, so the parent account needs to create an AccessKey for the sub-account. An AccessKey consists of an AccessKey ID and an AccessKey Secret. After creating one, keep both parts as secure as possible and never share them; if there is any risk of leakage, disable and rotate the key promptly. If the cloud account allows RAM users to manage their own AccessKeys, a RAM user can also create one itself.
To create an AccessKey for a sub-account:
(1) In the left navigation pane, under People Management, click Users.
(2) In the User Logon Name/Display Name list, click the target RAM user.
(3) In the User AccessKey section, click Create AccessKey.
(4) Click OK.

```
Note:
A mobile verification code is required the first time you create an AccessKey.
The AccessKey Secret is shown only at creation time and cannot be retrieved later, so keep it safe. If an AccessKey is leaked or lost, create a new one; at most two AccessKeys can exist per user.
```


**3. Granting Permissions to the RAM Sub-Account**
To let the sub-account create DataWorks workspaces, grant it the AliyunDataWorksFullAccess policy.
(1) In the left navigation pane, under People Management, click Users.
(2) In the User Logon Name/Display Name list, find the target RAM user.
(3) Click Add Permissions; the principal is filled in automatically.
(4) In the policy name list on the left, click the policies to grant to the RAM user.
(5) Click OK.
(6) Click Finish.

```
Note: in the selection box on the right, click × next to a policy to remove it.
```


**II. Authorizing Sub-Accounts to Create Functions and Access Resources in Production, and OSS External Table Authorization**
**1. Creating Functions and Authorizing Resource Access in the Production Environment**
After the sub-account logs in to the DataWorks console, open Workspace Management and, under Member Management, assign the sub-account an appropriate role. Each role's permissions can be viewed in the permission list on the Workspace Management page. Member roles added here are isolated from the production environment. The following shows how to authorize creating functions and accessing resources in production.
(1) Create a new role and grant it privileges.

```

-- Create the role:
create role worker;
-- Assign the role to the RAM account (ram$建伟MaxCompute:banzhan is the RAM account):
grant worker TO ram$建伟MaxCompute:banzhan;
-- Grant privileges to the role:
grant CreateInstance, CreateResource, CreateFunction, CreateTable, List ON PROJECT wei_wwww TO ROLE worker;
```
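
To verify that the grants took effect, you can inspect the role from the MaxCompute client (a quick check using the standard security commands):

```
-- List all roles defined in the current project.
list roles;
-- Show the privileges held by the worker role.
describe role worker;
-- Show the grants held by the RAM sub-account.
show grants for ram$建伟MaxCompute:banzhan;
```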


(2) Create a UDF.

```
CREATE FUNCTION banzha_udf as 'com.aliyun.udf.test.UDF_DEMO' using '1218.jar';
```


This assumes the 1218.jar package has already been uploaded as a resource; for how to upload resources, see the video for the first migration lesson.
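If the JAR has not been uploaded yet, it can be registered as a project resource from odpscmd first (a minimal sketch; the local path is illustrative):

```
-- Register the local JAR as a project resource; -f overwrites a resource with the same name.
add jar /local/path/1218.jar -f;
-- Confirm the resource is present.
list resources;
```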
**2. Authorizing OSS Access**
MaxCompute can read OSS data directly only after you grant the relevant OSS data permissions to the account MaxCompute uses for access. Without this authorization, creating an external table fails with the following error:
![image.png](https://ucc.alicdn.com/pic/developer-ecology/d7fb840cc455495ab962527df023c70a.png)
In that case, we need to authorize access to OSS.
There are two ways to do so:
(1) When MaxCompute and OSS have the same owner account, log in with the Alibaba Cloud account and complete [one-click authorization](https://ram.console.aliyun.com/?spm=a2c4g.11186623.2.9.7cec6a06eVvMBs#/role/authorize?request=%7B%22Requests%22:%20%7B%22request1%22:%20%7B%22RoleName%22:%20%22AliyunODPSDefaultRole%22,%20%22TemplateId%22:%20%22DefaultRole%22%7D%7D,%20%22ReturnUrl%22:%20%22https:%2F%2Fram.console.aliyun.com%2F%22,%20%22Service%22:%20%22ODPS%22%7D). You can also grant the sub-account the AliyunOSSFullAccess policy (full OSS management) in the RAM console.
(2) Custom authorization:
a. Create a new RAM role, oss-admin.
b. Edit the role's trust policy as follows.

```
-- When MaxCompute and OSS have the same owner account, use the following.
{
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "odps.aliyuncs.com"
        ]
      }
    }
  ],
  "Version": "1"
}


-- When MaxCompute and OSS have different owner accounts, use the following.
{
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "<MaxCompute owner account ID>@odps.aliyuncs.com"
        ]
      }
    }
  ],
  "Version": "1"
}
```


c. Grant the role the permissions needed to access OSS (the AliyunODPSRolePolicy policy):

```
{
  "Version": "1",
  "Statement": [
    {
      "Action": [
        "oss:ListBuckets",
        "oss:GetObject",
        "oss:ListObjects",
        "oss:PutObject",
        "oss:DeleteObject",
        "oss:AbortMultipartUpload",
        "oss:ListParts"
      ],
      "Resource": "*",
      "Effect": "Allow"
    }
  ]
}
-- Other permissions can be added as needed.
```


d. Attach the AliyunODPSRolePolicy permission to the role.
**III. Guide to Creating OSS External Tables**
**1. External Table Creation Syntax**
**(1) Example of creating an external table:**

```
CREATE EXTERNAL TABLE IF NOT EXISTS fc_rcfile
(   `id` int, 
    `name` string
)
PARTITIONED BY ( `time_ds` string)
ROW FORMAT SERDE 
'org.apache.hadoop.hive.serde2.columnar.LazyBinaryColumnarSerDe' 
STORED AS RCFILE
LOCATION 'oss://oss-cn-beijing-internal.aliyuncs.com/oss-huabei2/jianwei/fc_rcfile/';
```


**(2) Notes on LOCATION**
LOCATION must point to an OSS directory; by default, the system reads all files under that directory.
Use OSS internal (intranet) endpoints; otherwise OSS traffic fees are incurred.
OSS external tables currently cannot be accessed through a public (Internet) endpoint.
A single file referenced via STORED AS currently cannot exceed 3 GB; split larger files.
Keep the OSS data in the same region where MaxCompute is activated. MaxCompute is deployed only in some regions, and cross-region data connectivity is not guaranteed.

```
The OSS connection format is oss://oss-cn-shanghai-internal.aliyuncs.com/<bucket>/<directory>/. Do not append a file name after the directory. The following usages are incorrect:
http://oss-odps-test.oss-cn-shanghai-internal.aliyuncs.com/Demo/  -- http connections are not supported.
https://oss-odps-test.oss-cn-shanghai-internal.aliyuncs.com/Demo/ -- https connections are not supported.
oss://oss-odps-test.oss-cn-shanghai-internal.aliyuncs.com/Demo -- wrong connection address.
oss://oss-cn-shanghai-internal.aliyuncs.com/oss-odps-test/Demo/vehicle.csv
-- do not specify a file name.
```
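
For contrast, a correct LOCATION following the format above looks like this (bucket and directory are illustrative):

```
LOCATION 'oss://oss-cn-shanghai-internal.aliyuncs.com/oss-odps-test/Demo/'
```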


**(3) Notes on the creation syntax**
The syntax is very close to Hive's, but note the following.

```
a. The STORED AS keyword here is not the STORED BY keyword used when creating ordinary unstructured external tables; it is currently specific to reading open-source-compatible data. STORED AS is followed by a file format name, such as ORC/PARQUET/RCFILE/SEQUENCEFILE/TEXTFILE.
b. The external table's column schema must match the schema of the data actually stored on OSS.
c. ROW FORMAT SERDE: optional; required only for special formats such as TEXTFILE.
d. WITH SERDEPROPERTIES: when OSS access is authorized in STS mode, this clause must set the odps.properties.rolearn property to the ARN of the specific RAM role being used. You can indicate the file format via the <serde class> while also configuring STORED AS <file format>.

Taking the ORC file format as an example:
CREATE EXTERNAL TABLE [IF NOT EXISTS] <external_table>
(<column schemas>)
[PARTITIONED BY (partition column schemas)]
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
WITH SERDEPROPERTIES ('odps.properties.rolearn'='${rolearn}')
STORED AS ORC
LOCATION 'oss://${endpoint}/${bucket}/${userfilePath}/'

e. The serde class for each file format:
• ALIORC: com.aliyun.apsara.serde.AliOrcSerDe
• SEQUENCEFILE: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
• TEXTFILE: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
• RCFILE: org.apache.hadoop.hive.serde2.columnar.LazyBinaryColumnarSerDe
• ORC: org.apache.hadoop.hive.ql.io.orc.OrcSerde
• ORCFILE: org.apache.hadoop.hive.ql.io.orc.OrcSerde
• PARQUET: org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe
• AVRO: org.apache.hadoop.hive.serde2.avro.AvroSerDe
```
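
For instance, combining the serde table with the earlier template, a PARQUET external table could be declared as follows (a sketch; the table, bucket, and path names are hypothetical):

```
CREATE EXTERNAL TABLE IF NOT EXISTS fc_parquet
(
    `id` int,
    `name` string
)
PARTITIONED BY (`time_ds` string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS PARQUET
LOCATION 'oss://oss-cn-beijing-internal.aliyuncs.com/oss-huabei2/jianwei/fc_parquet/';
```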


**(4) Creating external tables with ARN-based and AK-based authentication**
**a. Creating an external table with the ARN of the RAM role**
When OSS access is authorized in STS mode, the odps.properties.rolearn property must be set to the ARN of the specific RAM role used:

```
WITH SERDEPROPERTIES ('odps.properties.rolearn'='${rolearn}')
```


For example:

```
CREATE EXTERNAL TABLE IF NOT EXISTS fc_csv
(
vehicleId string,
recordId string,
patientId string,
calls string,
locationLatitute string,
locationLongtitue string,
recordTime string,
direction string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES
    ('separatorChar'=',',
    'odps.properties.rolearn'='acs:ram::1928466352305391:role/oss-admin'
    )
STORED AS TEXTFILE
LOCATION 'oss://oss-cn-beijing-internal.aliyuncs.com/oss-odps-bucket/extra_test/fc_csv';
```


**b. Creating an external table with a plaintext AccessKey (not recommended)**
If you do not use STS authorization, there is no need to set the odps.properties.rolearn property; instead, pass the plaintext AccessKey ID and AccessKey Secret directly in LOCATION.
When LOCATION references OSS with a plaintext AK, it is written as follows:

```
LOCATION 'oss://${accessKeyId}:${accessKeySecret}@${endpoint}/${bucket}/${userPath}/'
```


For example:

```
CREATE EXTERNAL TABLE IF NOT EXISTS fc_csv1
(
vehicleId string,
recordId string,
patientId string,
calls string,
locationLatitute string,
locationLongtitue string,
recordTime string,
direction string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES
    ('separatorChar'=',',
    'odps.properties.rolearn'='acs:ram::1928466352305391:role/oss-admin'
    )
STORED AS TEXTFILE
-- LOCATION 'oss://oss-cn-beijing-internal.aliyuncs.com/oss-odps-bucket/extra_test/fc_csv';
LOCATION 'oss://LTAI4FfgVEQQwsNQ*******:J8FGZaoj2CMcunFrVn1FrL*****wM@oss-cn-beijing-internal.aliyuncs.com/oss-odps-bucket/extra_test/fc_csv';
```


**2. Creating an RCFile External Table**
(1) Query the Hive table schema

```
show create table fc_rcfile;
```


The result is as follows:

```
CREATE TABLE `fc_rcfile`(
`id` int, 
`name` string)
PARTITIONED BY ( 
`time_ds` string)
ROW FORMAT SERDE 
'org.apache.hadoop.hive.serde2.columnar.LazyBinaryColumnarSerDe' 
STORED AS INPUTFORMAT 
'org.apache.hadoop.hive.ql.io.RCFileInputFormat' 
OUTPUTFORMAT 
'org.apache.hadoop.hive.ql.io.RCFileOutputFormat'
LOCATION
'hdfs://emr-header-1.cluster-138804:9000/user/hive/warehouse/extra_demo.db/fc_rcfile'
```


(2) Create the external table in MaxCompute

```
CREATE EXTERNAL TABLE IF NOT EXISTS fc_rcfile
(  `id` int, 
`name` string)
PARTITIONED BY ( `time_ds` string)
ROW FORMAT SERDE 
'org.apache.hadoop.hive.serde2.columnar.LazyBinaryColumnarSerDe' 
STORED AS RCFILE
LOCATION 'oss://oss-cn-beijing-internal.aliyuncs.com/oss-huabei2/jianwei/fc_rcfile/';
```


(3) Add partitions

```
alter table fc_rcfile ADD PARTITION (time_ds = '20191209');
alter table fc_rcfile ADD PARTITION (time_ds = '20191210');
alter table fc_rcfile ADD PARTITION (time_ds = '20191211');
-- To create partitions in bulk, consider using the MMA migration tool.
```
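
Alternatively, when the partition directories already exist under the table's OSS LOCATION, the partitions can be discovered in one pass with MSCK (a sketch; availability depends on your MaxCompute version):

```
-- Scan the OSS LOCATION and add all missing partitions automatically.
msck repair table fc_rcfile add partitions;
```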


(4) Query the data

```
select * from fc_rcfile where time_ds = '20191209' ;
select * from fc_rcfile where time_ds = '20191210' ;
select * from fc_rcfile where time_ds = '20191211' ;
```


**3. Creating a JSON External Table**
(1) Create the JSON external table

```
CREATE EXTERNAL TABLE `student`(
`student` map<string,string> COMMENT 'from deserializer', 
`class` map<string,string> COMMENT 'from deserializer', 
`teacher` map<string,string> COMMENT 'from deserializer')
COMMENT 'Student course information'
ROW FORMAT SERDE 
'org.apache.hive.hcatalog.data.JsonSerDe' 
STORED AS TEXTFILE
LOCATION
'oss://oss-cn-beijing-internal.aliyuncs.com/oss-huabei2/jianwei/student';
```


(2) Upload the JSON data files to the corresponding bucket in the OSS console.
(3) Query the external table's data.
At first, the query reports the error shown below:
![image.png](https://ucc.alicdn.com/pic/developer-ecology/43ea80dc1e574411a230b65e448b8c08.png)
Solution: enable the Hive-compatibility flag.

```
set odps.sql.hive.compatible=true;
```
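
Note that set applies only to the current session. To avoid repeating it in every script, the flag can also be set project-wide with setproject (a sketch; run it as the project owner or an administrator):

```
-- Make Hive compatibility the project-wide default.
setproject odps.sql.hive.compatible=true;
```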


Re-running the query then returns the JSON data correctly.
![image.png](https://ucc.alicdn.com/pic/developer-ecology/f16eefa5c9574cd0980d037afac7d4f1.png)
Below is the same data queried in Hive; the two results are consistent.
![image.png](https://ucc.alicdn.com/pic/developer-ecology/75f9810c59614d609945807c21e580f8.png) 
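Because the columns are maps, individual keys can also be selected directly (an illustrative query; the key name 'name' is hypothetical and depends on your JSON data):

```
set odps.sql.hive.compatible=true;
-- Pull a single key out of the student map column.
select student['name'] as student_name from student;
```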
**4. Creating a CSV External Table**
(1) Create the CSV external table. An example statement:

```
CREATE EXTERNAL TABLE IF NOT EXISTS fc_csv
(
vehicleId string,
recordId string,
patientId string,
calls string,
locationLatitute string,
locationLongtitue string,
recordTime string,
direction string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES
    ('separatorChar'=','
    )
STORED AS TEXTFILE
LOCATION 'oss://oss-cn-beijing-internal.aliyuncs.com/oss-huabei2/jianwei/fc_csv';
```


(2) Query the data

```
set odps.sql.hive.compatible=true;
select *  from fc_csv;
```


Without the Hive-compatibility flag, the query fails with the following error:

```
FAILED: ODPS-0123131:User defined function exception - internal error - Fatal Error Happended
```


![image.png](https://ucc.alicdn.com/pic/developer-ecology/7b3452bf007c42c385e08bf0818f43d6.png) 
![image.png](https://ucc.alicdn.com/pic/developer-ecology/12ccf468125d465c852dd6772a0f446e.png) 
**5. Creating an External Table over Compressed Files**
When creating the external table, specify the column delimiter with field.delim. Using delimiter instead either raises an error or leaves the data unsplit. Both variants are shown below.
The property flags described below also need to be set.
![image.png](https://ucc.alicdn.com/pic/developer-ecology/5bdbbeafb48d43708e99d8582e39f151.png) 
(1) Create the external table

```
-- a. Column delimiter defined as delimiter (incorrect)

drop TABLE if exists oss_gzip;
CREATE EXTERNAL TABLE IF NOT EXISTS  `oss_gzip` (
`userid` string,
`job` string,
`education` string,
`region` string
)
PARTITIONED BY (dt STRING COMMENT 'date')
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES (
'delimiter'='\t',
'odps.text.option.gzip.input.enabled'='true',
'odps.text.option.gzip.output.enabled'='true'
)
STORED AS TEXTFILE
LOCATION 'oss://oss-cn-beijing-internal.aliyuncs.com/oss-huabei2/jianwei/gzipfile/';
```


Querying the table shows that the data was not split on the intended delimiter, as shown below:
![image.png](https://ucc.alicdn.com/pic/developer-ecology/646a4cf86e1145f8b635724da9718e45.png) 
![image.png](https://ucc.alicdn.com/pic/developer-ecology/56059c96320a499da497876604595b7c.png)

```
-- b. Column delimiter defined as field.delim (correct)

drop TABLE if exists oss_gzip2;
CREATE EXTERNAL TABLE IF NOT EXISTS  `oss_gzip2` (
`userid` string,
`job` string,
`education` string,
`region` string
)
PARTITIONED BY (dt STRING COMMENT 'date')
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES (
'field.delim'='\t',
'odps.text.option.gzip.input.enabled'='true',
'odps.text.option.gzip.output.enabled'='true'
)
STORED AS TEXTFILE
LOCATION 'oss://oss-cn-beijing-internal.aliyuncs.com/oss-huabei2/jianwei/gzipfile/';
```


![image.png](https://ucc.alicdn.com/pic/developer-ecology/5f033d1c9de241fab4ec35e3be50aadb.png) 
![image.png](https://ucc.alicdn.com/pic/developer-ecology/1e5d350a811f41ec9702f710b53643b3.png)

```
Note: use field.delim to define the column delimiter; delimiter does not work.
```


**6. Creating an External Table with New Data Types**
If the table's columns involve new (MaxCompute 2.0) data types, enable the new-type flag first.

```
set odps.sql.type.system.odps2=true;
```


Otherwise, the following error is reported:
![image.png](https://ucc.alicdn.com/pic/developer-ecology/f8bfa73208db4639b13a43386bf640c9.png)

```
set odps.sql.type.system.odps2=true ;
drop TABLE if exists oss_gzip3;
CREATE EXTERNAL TABLE IF NOT EXISTS  `oss_gzip3` (
`userid` FLOAT ,
`job` string,
`education` string,
`region` VARCHAR(20)
)
PARTITIONED BY (dt STRING COMMENT '日期')
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES (
'field.delim'='\t',
'odps.text.option.gzip.input.enabled'='true',
'odps.text.option.gzip.output.enabled'='true'
)
STORED AS TEXTFILE
LOCATION 'oss://oss-cn-beijing-internal.aliyuncs.com/oss-huabei2/jianwei/flag_file/';

-- Add the corresponding partition:
alter table oss_gzip3 add if NOT EXISTS  partition(dt='20191224');

-- Query the data:
select * from oss_gzip3  where dt='20191224';
```


**IV. Using Information Schema Metadata to View Project and Table Activity and Calculate Costs**
**1. Installing the Package with the Parent Account**
Before you start, install the Information Schema permission package as the Project Owner to gain access to the project's metadata.
The following error appears when the Information Schema package is not installed or the sub-account lacks the relevant permissions:
![image.png](https://ucc.alicdn.com/pic/developer-ecology/18575dec059144ccbcc2f83fccf3b352.png)
There are two ways to install the Information Schema permission package:
(1) Run the following command in the MaxCompute command-line tool (odpscmd):

```
odps@myproject1>install package information_schema.systables;
```


(2) In DataWorks, run the following statement in Data Development (数据开发) > Ad Hoc Query (临时查询):

```
install package information_schema.systables;
```
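
Either way, the installation can be confirmed by listing the packages visible to the project (a quick check):

```
show packages;
```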


**2. Granting the Sub-Account Access**

```
grant read on package information_schema.systables to role worker;
```


**3. Querying the Metadata**

```
select * from information_schema.tasks_history limit 20;
```


![image.png](https://ucc.alicdn.com/pic/developer-ecology/3fe036e613f04b7d96ab38bfbbe8d67b.png) 
**The columns of TASKS\_HISTORY are as follows:**
![image.png](https://ucc.alicdn.com/pic/developer-ecology/0f5223221fe44a489f5a58eba8f8a123.png) 
![image.png](https://ucc.alicdn.com/pic/developer-ecology/ee2771064d3148d6b142150d1f7b3622.png) 
**4. Computing SQL Costs from TASKS\_HISTORY**
SQL jobs are billed on a pay-as-you-go basis: each SQL job you run is charged according to its input data volume and SQL complexity. The fee is incurred when the SQL job completes and is settled in one batch the following day.

```
Developer edition SQL pricing:
  Cost of one SQL job = input data volume * unit price (CNY 0.15/GB)

Standard edition (pay-as-you-go) SQL pricing:
  Cost of one SQL job = input data volume * SQL complexity * unit price (CNY 0.3/GB)
```


Input data volume: the amount of data a SQL statement actually scans. Most SQL statements benefit from partition pruning and column pruning, so this value is usually far smaller than the size of the source table.
In information\_schema.tasks\_history, the input\_bytes column is the amount of data actually scanned (the input data volume above) and the complexity column is the SQL complexity, so the cost of a job can be computed as: cost = input\_bytes / 1024^3 (GB) * complexity * CNY 0.3/GB.
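
Putting this together, a query along the following lines estimates the cost of recent jobs (a sketch for the standard pay-as-you-go edition; tasks_history is partitioned by ds, and the date value is illustrative):

```
-- Estimate per-job SQL cost: bytes scanned -> GB, times complexity, times CNY 0.3/GB.
select inst_id,
       input_bytes,
       complexity,
       input_bytes / 1024 / 1024 / 1024 * complexity * 0.3 as estimated_cost_cny
from information_schema.tasks_history
where ds = '20191224'  -- hypothetical date partition
limit 20;
```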

 

 

[Original article](https://link.zhihu.com/?target=https%3A//yq.aliyun.com/articles/739645%3Futm_content%3Dg_1000094672)

This article is Alibaba Cloud content and may not be reproduced without permission.