An Analysis of HBase BucketAllocatorException
Recently, the following WARN log was observed on an HBase cluster:
```
2020-04-18 16:17:03,081 WARN [regionserver/xxx-BucketCacheWriter-1] bucket.BucketCache: Failed allocation for 604acc82edd349ca906939af14464bcb_175674734;
org.apache.hadoop.hbase.io.hfile.bucket.BucketAllocatorException: Allocation too big size=1114202; adjust BucketCache sizes hbase.bucketcache.bucket.sizes to accomodate if size seems reasonable and you want it cached.
```
Roughly, the message says: the block is too large (size=1114202 bytes, about 1.06 MB) for the BucketAllocator to find a bucket for it; if you consider that size reasonable and want such blocks cached, adjust the parameter hbase.bucketcache.bucket.sizes.
By default, the largest block that the HBase BucketCache can cache is 512 KB, i.e. hbase.bucketcache.bucket.sizes=5120,9216,17408,33792,41984,50176,58368,66560,99328,132096,197632,263168,394240,525312, 14 size classes in total (each bucket size is the block size plus 1 KB of headroom, e.g. 525312 = 512 * 1024 + 1024). To cache larger blocks, we can extend the list, e.g. hbase.bucketcache.bucket.sizes=5120,9216,17408,33792,41984,50176,58368,66560,99328,132096,197632,263168,394240,525312,1049600,2098176, which allows blocks of up to 2 MB.
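As a concrete illustration, the extended list above would be set in hbase-site.xml roughly like this (the property name is the real one from the log; the two extra values 1049600 and 2098176 are 1 MB + 1 KB and 2 MB + 1 KB respectively):

```xml
<property>
  <name>hbase.bucketcache.bucket.sizes</name>
  <value>5120,9216,17408,33792,41984,50176,58368,66560,99328,132096,197632,263168,394240,525312,1049600,2098176</value>
</property>
```

A RegionServer restart is needed for the new bucket sizes to take effect.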
Let's take a quick look at the relevant source code. The class involved is:

hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketAllocator.java

BucketAllocator is responsible for organizing and managing the buckets, and for allocating cache space for blocks.
```java
/**
 * Allocate a block with specified size. Return the offset
 * @param blockSize size of block
 * @throws BucketAllocatorException
 * @throws CacheFullException
 * @return the offset in the IOEngine
 */
public synchronized long allocateBlock(int blockSize) throws CacheFullException,
    BucketAllocatorException {
  assert blockSize > 0;
  BucketSizeInfo bsi = roundUpToBucketSizeInfo(blockSize);
  if (bsi == null) {
    throw new BucketAllocatorException("Allocation too big size=" + blockSize +
      "; adjust BucketCache sizes " + BlockCacheFactory.BUCKET_CACHE_BUCKETS_KEY +
      " to accomodate if size seems reasonable and you want it cached.");
  }
  long offset = bsi.allocateBlock();

  // Ask caller to free up space and try again!
  if (offset < 0)
    throw new CacheFullException(blockSize, bsi.sizeIndex());
  usedSize += bucketSizes[bsi.sizeIndex()];
  return offset;
}
```
If the call to roundUpToBucketSizeInfo() returns null, a BucketAllocatorException is thrown. Let's look at roundUpToBucketSizeInfo():
```java
/**
 * Round up the given block size to bucket size, and get the corresponding
 * BucketSizeInfo
 */
public BucketSizeInfo roundUpToBucketSizeInfo(int blockSize) {
  for (int i = 0; i < bucketSizes.length; ++i)
    if (blockSize <= bucketSizes[i])
      return bucketSizeInfos[i];
  return null;
}
```
This method compares the given blockSize against the bucketSizes array starting from index 0; as soon as blockSize <= bucketSizes[i], the block is assigned a slot of size bucketSizes[i], described by bucketSizeInfos[i]. If no bucket size is large enough, it returns null.
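The round-up behavior can be sketched as a standalone simulation (a minimal sketch using the default size array; the class and method names here are illustrative, not the actual HBase code):

```java
public class BucketRoundUp {
    // Default bucket sizes: each is N KB of block plus 1 KB of headroom.
    static final int[] BUCKET_SIZES = { 5120, 9216, 17408, 33792, 41984, 50176,
            58368, 66560, 99328, 132096, 197632, 263168, 394240, 525312 };

    // Returns the smallest bucket size >= blockSize, or -1 when the block is
    // too big for every bucket (the BucketAllocatorException case).
    static int roundUp(int blockSize) {
        for (int size : BUCKET_SIZES) {
            if (blockSize <= size) {
                return size;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        System.out.println(roundUp(66000));    // lands in the 66560 bucket
        System.out.println(roundUp(1114202));  // -1: larger than 525312, fails
    }
}
```

A 1114202-byte block (the size from the log above) exceeds the largest default bucket (525312), which is exactly why the allocation failed.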
Let's look at how the bucketSizes array is initialized:
```java
private static final int DEFAULT_BUCKET_SIZES[] = { 4 * 1024 + 1024, 8 * 1024 + 1024,
    16 * 1024 + 1024, 32 * 1024 + 1024, 40 * 1024 + 1024, 48 * 1024 + 1024,
    56 * 1024 + 1024, 64 * 1024 + 1024, 96 * 1024 + 1024, 128 * 1024 + 1024,
    192 * 1024 + 1024, 256 * 1024 + 1024, 384 * 1024 + 1024, 512 * 1024 + 1024 };

private final int[] bucketSizes;

BucketAllocator(long availableSpace, int[] bucketSizes) throws BucketAllocatorException {
  this.bucketSizes = bucketSizes == null ? DEFAULT_BUCKET_SIZES : bucketSizes;
  Arrays.sort(this.bucketSizes);
  ...
}
```
As you can see, when bucketSizes == null the DEFAULT_BUCKET_SIZES array is used, and the array is then sorted. This sort is what makes the linear comparison in the previous step correct: scanning a sorted array from index 0 guarantees the first match is the smallest bucket that fits.

The values of DEFAULT_BUCKET_SIZES are exactly the default values of the hbase.bucketcache.bucket.sizes parameter (e.g. 4 * 1024 + 1024 = 5120).
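The effect of that sort on the subsequent scan can be illustrated with a short sketch (illustrative code, not part of HBase):

```java
import java.util.Arrays;

public class SortedScanDemo {
    // Find the smallest bucket that fits blockSize. This only works on a
    // sorted array, which is why the constructor calls Arrays.sort() first.
    static int smallestFit(int[] sortedSizes, int blockSize) {
        for (int size : sortedSizes) {
            if (blockSize <= size) {
                return size;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        // A user-supplied hbase.bucketcache.bucket.sizes list in arbitrary order.
        int[] sizes = { 132096, 5120, 525312, 66560 };
        Arrays.sort(sizes); // becomes {5120, 66560, 132096, 525312}

        // With the sorted array the scan returns the tightest fit, 66560.
        // Scanning the original order would have stopped at 132096,
        // wasting roughly half of the allocated slot.
        System.out.println(smallestFit(sizes, 60000));
    }
}
```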
Please credit the source when reposting! You are welcome to follow my WeChat public account 【hbase工作笔记】.