Paginating HBase with Hive

Posted by 张映 on 2019-10-15

Category: hadoop/spark/scala


HBase can do pagination with scan plus filters, and there are plenty of write-ups on that online already. This post shows how to implement pagination through Hive SQL instead.

I. Single-table pagination, step by step

1. Fetch the first page of data

hive> select * from hive_hbase_user where contact['mobile']>0 sort by key desc limit 2;

............ output omitted ............

Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 7.54 sec   HDFS Read: 9329 HDFS Write: 333 SUCCESS
Stage-Stage-2: Map: 1  Reduce: 1   Cumulative CPU: 4.41 sec   HDFS Read: 6924 HDFS Write: 294 SUCCESS
Total MapReduce CPU Time Spent: 11 seconds 950 msec
OK
2019-10-15 14:35:13,397 INFO  [0f1041a5-c5c9-4b0b-a8bc-aad78f202e61 main] mapred.FileInputFormat: Total input paths to process : 1
1000520191014	{"mobile":"333333","tel":"44444"}	{"age":"34","sex":"male"}	{"password":"111111","username":"tanktest212"}
1000520190926	{"mobile":"11111111","tel":"2222222"}	{"age":"32","sex":"male"}	{"password":"111111","username":"tanktest"}
Time taken: 62.669 seconds, Fetched: 2 row(s)   // painfully slow with the ordering step added

The hive_hbase_user table, and the hive_hbase_test table mentioned below, were created in the previous post.

See: integrating Hive with HBase
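
For reference, here is a minimal sketch of what the integrated table's DDL might look like, reconstructed from the query output above. Only the contact column is confirmed by the queries; the other family names and the underlying HBase table name are my guesses, and the authoritative DDL is in the previous post linked above.

-- Hypothetical DDL sketch for hive_hbase_user; only 'contact' is confirmed
-- by the queries above, the other names are assumptions.
CREATE EXTERNAL TABLE hive_hbase_user (
  key     string,                 -- the HBase rowkey
  contact map<string,string>,     -- e.g. mobile, tel
  info    map<string,string>,     -- e.g. age, sex
  account map<string,string>      -- e.g. username, password
)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES (
  'hbase.columns.mapping' = ':key,contact:,info:,account:'
)
TBLPROPERTIES ('hbase.table.name' = 'user');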

2. Fetch the second page of data

hive> select * from hive_hbase_user where contact['mobile']>0 and key <1000520190926 sort by key desc limit 2;

............ output omitted ............

MapReduce Jobs Launched:
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 7.93 sec   HDFS Read: 9702 HDFS Write: 345 SUCCESS
Stage-Stage-2: Map: 1  Reduce: 1   Cumulative CPU: 4.3 sec   HDFS Read: 6916 HDFS Write: 306 SUCCESS
Total MapReduce CPU Time Spent: 12 seconds 230 msec
OK
2019-10-15 14:46:27,389 INFO  [0f1041a5-c5c9-4b0b-a8bc-aad78f202e61 main] mapred.FileInputFormat: Total input paths to process : 1
1000420190926	{"mobile":"15833345678","tel":"02142345678"}	{"age":"37","sex":"male3"}	{"password":"pass3","username":"test3"}
1000320190926	{"mobile":"15822345678","tel":"02132345678"}	{"age":"36","sex":"male2"}	{"password":"pass2","username":"newtest2"}
Time taken: 65.752 seconds, Fetched: 2 row(s)

Take the key of the last row on the previous page and carry it into the WHERE clause for the next page. Why is this safe? Two conditions hold: first, rowkeys in HBase are unique; second, the result set is totally ordered by key (the queries above use sort by, which only yields a total order because the job ran with a single reducer, as the "Reduce: 1" in the logs shows; order by guarantees it), so no row can be skipped or fetched twice. The drawback: Hive runs on MapReduce, and in a word, it is slow.
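
The pattern generalizes to any page size. Below is a sketch of a reusable statement driven by a Hive variable; the file name next_page.hql and the variable name last_key are placeholders of my own, and quoting the value avoids Hive implicitly casting the string rowkey for the comparison.

-- next_page.hql (hypothetical file name): keyset pagination template.
-- Invoke as: hive --hivevar last_key=1000520190926 -f next_page.hql
select *
from hive_hbase_user
where contact['mobile'] > 0
  and key < '${hivevar:last_key}'   -- quoted: key is a string column
order by key desc
limit 2;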

II. Multi-table pagination

1. Fetch the first page of data

hive> select a.key,a.contact['mobile'],b.sex from hive_hbase_user a join hive_hbase_test b on a.key=b.key
    > where a.contact['mobile']>0 order by a.key desc limit 2;

2. Fetch the second page of data

hive> select a.key,a.contact['mobile'],b.sex from hive_hbase_user a join hive_hbase_test b on a.key=b.key
    > where a.contact['mobile']>0 and a.key<1000520190926 order by a.key desc limit 2;
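
The join case can be templated the same way. A sketch follows, reusing the hypothetical last_key variable from above; because the join condition is a.key = b.key, the same bound may optionally be repeated on b.key so the filter is pushed down to both HBase scans.

-- Join pagination template (sketch); the variable and page size are placeholders.
select a.key, a.contact['mobile'], b.sex
from hive_hbase_user a
join hive_hbase_test b on a.key = b.key
where a.contact['mobile'] > 0
  and a.key < '${hivevar:last_key}'
  and b.key < '${hivevar:last_key}'   -- redundant given the join, but pushable
order by a.key desc
limit 2;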


Please credit the source when reposting.
Author: 海底苍鹰
URL: http://blog.51yip.com/hadoop/2199.html