
Rand index: principle and NumPy/PyTorch implementations


What is the Rand index?

I find that the Wikipedia article explains the definition of the Rand index well, so I will not restate it in my own words. For completeness, and in case Wikipedia is hard to reach from mainland China, I reproduce the relevant parts here; if you can, I still recommend reading the original article on Wikipedia.

Rand Index

The Rand index or Rand measure (named after William M. Rand) in statistics, and in particular in data clustering, is a measure of the similarity between two data clusterings. A form of the Rand index may be defined that is adjusted for the chance grouping of elements; this is the adjusted Rand index. From a mathematical standpoint, the Rand index is related to accuracy, but is applicable even when class labels are not used.

Definition

Given a set of $n$ elements $S = \{o_1, ..., o_n\}$ and two partitions of $S$ to compare, $X = \{X_1, ..., X_r\}$, a partition of $S$ into $r$ subsets, and $Y = \{Y_1, ..., Y_s\}$, a partition of $S$ into $s$ subsets, define the following:

  • $a$, the number of pairs of elements in $S$ that are in the same subset in $X$ and in the same subset in $Y$
  • $b$, the number of pairs of elements in $S$ that are in different subsets in $X$ and in different subsets in $Y$
  • $c$, the number of pairs of elements in $S$ that are in the same subset in $X$ and in different subsets in $Y$
  • $d$, the number of pairs of elements in $S$ that are in different subsets in $X$ and in the same subset in $Y$

The Rand index, $R$, is:
$$R = \frac{a+b}{a+b+c+d} = \frac{a+b}{C_n^2} = \frac{a+b}{n(n-1)/2}$$
Intuitively, $a+b$ can be considered as the number of agreements between $X$ and $Y$, and $c+d$ as the number of disagreements between $X$ and $Y$.

Since the denominator is the total number of pairs, $C_n^2 = n(n-1)/2$, the Rand index represents the frequency of occurrence of agreements over the total pairs, or the probability that $X$ and $Y$ will agree on a randomly chosen pair.

Similarly, one can also view the Rand index as a measure of the percentage of correct decisions made by the algorithm. It can be computed using the following formula:
$$RI = \frac{TP + TN}{TP + FP + FN + TN}$$
where $TP$ is the number of true positives, $TN$ is the number of true negatives, $FP$ is the number of false positives, and $FN$ is the number of false negatives.
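To make the definition concrete, here is a minimal brute-force sketch (my own illustration, not from the Wikipedia article): it counts $a$, $b$, $c$, $d$ over all element pairs of two flat label arrays and returns $R = (a+b)/\binom{n}{2}$. The function name rand_index_bruteforce and the toy labels are purely illustrative.

import numpy as np

def rand_index_bruteforce(labels_x, labels_y):
    # labels_x[i], labels_y[i]: which subset of X / Y element i belongs to
    labels_x, labels_y = np.asarray(labels_x), np.asarray(labels_y)
    n = len(labels_x)
    a = b = c = d = 0
    for i in range(n):
        for j in range(i + 1, n):
            same_x = labels_x[i] == labels_x[j]
            same_y = labels_y[i] == labels_y[j]
            if same_x and same_y:
                a += 1      # same subset in X and in Y
            elif not same_x and not same_y:
                b += 1      # different subsets in X and in Y
            elif same_x:
                c += 1      # same subset in X, different subsets in Y
            else:
                d += 1      # different subsets in X, same subset in Y
    return (a + b) / (n * (n - 1) / 2)

# Example: two clusterings of 6 elements
print(rand_index_bruteforce([0, 0, 0, 1, 1, 1], [0, 0, 1, 1, 2, 2]))  # 10/15 ≈ 0.667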

Properties

The Rand index has a value between 0 and 1, with 0 indicating that the two data clusterings do not agree on any pair of points and 1 indicating that the data clusterings are exactly the same.

In mathematical terms, a, b, c, d are defined as follows:

  • $a = |S^*|$, where $S^* = \{(o_i, o_j) \mid o_i, o_j \in X_k,\ o_i, o_j \in Y_l\}$
  • $b = |S^*|$, where $S^* = \{(o_i, o_j) \mid o_i \in X_{k_1}, o_j \in X_{k_2},\ o_i \in Y_{l_1}, o_j \in Y_{l_2}\}$
  • $c = |S^*|$, where $S^* = \{(o_i, o_j) \mid o_i, o_j \in X_k,\ o_i \in Y_{l_1}, o_j \in Y_{l_2}\}$
  • $d = |S^*|$, where $S^* = \{(o_i, o_j) \mid o_i \in X_{k_1}, o_j \in X_{k_2},\ o_i, o_j \in Y_l\}$
    for some $1 \leq i, j \leq n$, $i \neq j$, $1 \leq k, k_1, k_2 \leq r$, $k_1 \neq k_2$, $1 \leq l, l_1, l_2 \leq s$, $l_1 \neq l_2$.

Relationship with classification accuracy

The Rand index can also be viewed through the prism of binary classification accuracy over the pairs of elements in $S$. The two class labels are "$o_i$ and $o_j$ are in the same subset in $X$ and $Y$" and "$o_i$ and $o_j$ are in different subsets in $X$ and $Y$".

In that setting, $a$ is the number of pairs correctly labeled as belonging to the same subset (true positives), and $b$ is the number of pairs correctly labeled as belonging to different subsets (true negatives).
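The same pair-level counts can be computed with vectorized comparisons instead of an explicit double loop. The sketch below is my own illustration (treating $X$ as the prediction and $Y$ as the ground truth, so $a, b, c, d$ map to $TP, TN, FP, FN$); the name pair_confusion is hypothetical.

import numpy as np

def pair_confusion(labels_x, labels_y):
    x, y = np.asarray(labels_x), np.asarray(labels_y)
    n = len(x)
    iu = np.triu_indices(n, k=1)                    # all unordered pairs (i, j) with i < j
    same_x = (x[:, None] == x[None, :])[iu]         # pair lies in one subset of X ("predicted same")
    same_y = (y[:, None] == y[None, :])[iu]         # pair lies in one subset of Y ("truly same")
    TP = np.sum(same_x & same_y)
    TN = np.sum(~same_x & ~same_y)
    FP = np.sum(same_x & ~same_y)
    FN = np.sum(~same_x & same_y)
    return TP, TN, FP, FN

TP, TN, FP, FN = pair_confusion([0, 0, 0, 1, 1, 1], [0, 0, 1, 1, 2, 2])
print((TP + TN) / (TP + TN + FP + FN))   # same Rand index as the brute-force version above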

The contingency table

Given a set $S$ of $n$ elements, and two groupings or partitions (e.g. clusterings) of these elements, namely $X = \{X_1, X_2, ..., X_r\}$ and $Y = \{Y_1, Y_2, ..., Y_s\}$, the overlap between $X$ and $Y$ can be summarized in a contingency table $[n_{ij}]$ where each entry $n_{ij}$ denotes the number of objects in common between $X_i$ and $Y_j$: $n_{ij} = |X_i \cap Y_j|$.

$X$ \ $Y$    $Y_1$      $Y_2$      ...    $Y_s$      Sums
$X_1$        $n_{11}$   $n_{12}$   ...    $n_{1s}$   $a_1$
$X_2$        $n_{21}$   $n_{22}$   ...    $n_{2s}$   $a_2$
...          ...        ...        ...    ...        ...
$X_r$        $n_{r1}$   $n_{r2}$   ...    $n_{rs}$   $a_r$
Sums         $b_1$      $b_2$      ...    $b_s$      $N$
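As a sketch of how such a table can be produced and used in practice (my own example, not from the article; it assumes the labels are consecutive integers starting at 0), here is one way to build $[n_{ij}]$ from two label arrays and recover the Rand index from it with binomial coefficients:

import numpy as np

def contingency_table(labels_x, labels_y):
    # n_ij = number of elements that fall in subset i of X and subset j of Y
    x, y = np.asarray(labels_x), np.asarray(labels_y)
    T = np.zeros((x.max() + 1, y.max() + 1), dtype=np.int64)
    np.add.at(T, (x, y), 1)                     # scatter-add one count per element
    return T

def rand_index_from_table(T):
    comb2 = lambda m: m * (m - 1) / 2           # C(m, 2)
    N = T.sum()
    a = comb2(T).sum()                          # pairs grouped together in both X and Y
    same_x = comb2(T.sum(axis=1)).sum()         # pairs grouped together in X  (= a + c)
    same_y = comb2(T.sum(axis=0)).sum()         # pairs grouped together in Y  (= a + d)
    b = comb2(N) - same_x - same_y + a          # pairs separated in both X and Y
    return (a + b) / comb2(N)

T = contingency_table([0, 0, 0, 1, 1, 1], [0, 0, 1, 1, 2, 2])
print(T)                                        # [[2 1 0]
                                                #  [0 1 2]]
print(rand_index_from_table(T))                 # ≈ 0.667 again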

Adjusted Rand index

The adjusted Rand index is the corrected-for-chance version of the Rand index. Such a correction for chance establishes a baseline by using the expected similarity of all pair-wise comparisons between clusterings specified by a random model. Traditionally, the Rand Index was corrected using the Permutation Model for clusterings (the number and size of clusters within a clustering are fixed, and all random clusterings are generated by shuffling the elements between the fixed clusters). However, the premises of the permutation model are frequently violated; in many clustering scenarios, either the number of clusters or the size distribution of those clusters vary drastically. For example, consider that in K-means the number of clusters is fixed by the practitioner, but the sizes of those clusters are inferred from the data. Variations of the adjusted Rand Index account for different models of random clusterings.
Though the Rand Index may only yield a value between 0 and +1, the adjusted Rand index can yield negative values if the index is less than the expected index.
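For completeness, here is a minimal sketch of the permutation-model adjusted Rand index computed from the same kind of contingency table (my own code, using the standard expected-index correction; the function name is illustrative). If scikit-learn is available, sklearn.metrics.adjusted_rand_score gives the same quantity directly from label arrays.

import numpy as np

def adjusted_rand_index_from_table(T):
    # ARI = (Index - ExpectedIndex) / (MaxIndex - ExpectedIndex), under the permutation model
    T = np.asarray(T, dtype=np.float64)
    comb2 = lambda m: m * (m - 1) / 2
    n = T.sum()
    index = comb2(T).sum()                       # sum_ij C(n_ij, 2)
    sum_rows = comb2(T.sum(axis=1)).sum()        # sum_i C(a_i, 2)
    sum_cols = comb2(T.sum(axis=0)).sum()        # sum_j C(b_j, 2)
    expected = sum_rows * sum_cols / comb2(n)
    max_index = (sum_rows + sum_cols) / 2
    return (index - expected) / (max_index - expected)

print(adjusted_rand_index_from_table([[2, 1, 0], [0, 1, 2]]))   # ≈ 0.24 for the toy table above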


All of the above is taken from Wikipedia; I have merely reproduced it here. Again, if you can, please go to Wikipedia and read the original article.

What follows is my own plain-language explanation of the Rand index. I hope it helps a little. If there are any mistakes, please point them out. Thank you.


A binary-classification example

(Figure: the prediction foreground $X_1$ and the ground-truth foreground $Y_1$ drawn as two partially overlapping circles, dividing the universe $U$ into the four regions $A$, $B$, $C$, $D$.)
As shown in the figure above, let us use the simplest possible example to explain how the Rand index is computed in code.
Assume $X$ is the prediction result: $X = \{X_1, X_2\}$, with $X_1 + X_2 = U$, where $U$ is the universe.
$Y$ is the ground truth (GT): $Y = \{Y_1, Y_2\}$, with $Y_1 + Y_2 = U$.
$X_1$ is the left circle and $Y_1$ is the right circle; we call $X_1$ and $Y_1$ the foreground and $X_2$ and $Y_2$ the background.
Now overlay the two and suppose we get the situation shown above, where the foregrounds only partially overlap. The whole universe $U$ is divided into four parts $A$, $B$, $C$, $D$, with $A + B + C + D = U$.

The contingency table $T$ is therefore:

$X$ \ $Y$    $Y_1$           $Y_2$           Sums
$X_1$        $A = n_{11}$    $B = n_{12}$    $a_1$
$X_2$        $C = n_{21}$    $D = n_{22}$    $a_2$
Sums         $b_1$           $b_2$           $N$

The arrows in the figure represent pairs:
For the four subsets $A$, $B$, $C$, $D$ there are 10 kinds of pair connections in total: $4 + C_4^2 = 10$ (4 within-subset kinds plus 6 cross-subset kinds).
We roughly split these 10 kinds of pairs into two groups, the $a+b$ group and the $c+d$ group, drawn in blue and red respectively.
Take the red pair (1) as an example: in $X$ the two endpoints belong to the same subset (both are in $X_1$), but in $Y$ they do not, since the left endpoint is in $Y_2$ and the right endpoint is in $Y_1$. So the red pair (1) belongs to the $c+d$ group. The remaining cases follow the same reasoning and are not listed one by one.
The figure also shows that $c+d$ is easier to compute than $a+b$, so we usually rewrite the Rand index as:
$$R = \frac{a+b}{n(n-1)/2} = 1 - \frac{c+d}{n(n-1)/2}$$
And:
$$\begin{aligned}
c + d & = (1) + (2) + (3) + (4) \\
& = n_{11} \cdot n_{12} + n_{11} \cdot n_{21} + n_{12} \cdot n_{22} + n_{21} \cdot n_{22} \\
& = [C_{n_{11}+n_{12}}^2 - C_{n_{11}}^2 - C_{n_{12}}^2] + [C_{n_{11}+n_{21}}^2 - C_{n_{11}}^2 - C_{n_{21}}^2] + [C_{n_{12}+n_{22}}^2 - C_{n_{12}}^2 - C_{n_{22}}^2] + [C_{n_{21}+n_{22}}^2 - C_{n_{21}}^2 - C_{n_{22}}^2] \\
& = [C_{a_1}^2 - C_{n_{11}}^2 - C_{n_{12}}^2] + [C_{b_1}^2 - C_{n_{11}}^2 - C_{n_{21}}^2] + [C_{b_2}^2 - C_{n_{12}}^2 - C_{n_{22}}^2] + [C_{a_2}^2 - C_{n_{21}}^2 - C_{n_{22}}^2] \\
& = [C_{a_1}^2 + C_{a_2}^2 + C_{b_1}^2 + C_{b_2}^2] - 2 \cdot [C_{n_{11}}^2 + C_{n_{12}}^2 + C_{n_{21}}^2 + C_{n_{22}}^2] \\
& = [a_1^2 + a_2^2 + b_1^2 + b_2^2]/2 - [n_{11}^2 + n_{12}^2 + n_{21}^2 + n_{22}^2] \\
& = [\texttt{T.sum(1).pow(2).sum()} + \texttt{T.sum(0).pow(2).sum()}]/2 - \texttt{T.pow(2).sum()}
\end{aligned}$$
Here $T$ denotes the contingency table. (Note that replacing each $C_m^2 = (m^2 - m)/2$ by $m^2/2$ is valid in the aggregate because $a_1 + a_2 = b_1 + b_2 = n_{11} + n_{12} + n_{21} + n_{22} = N$, so all the linear terms cancel.) The whole simplification exists to reach the matrix expression in the last step. We could compute $c+d$ directly with the expression after the second equals sign, but when there are more than two classes that approach is very inefficient (it cannot avoid for loops). Once simplified to the last step, no loops are needed at all: everything is a matrix operation (a single line of code), which is both concise and efficient.
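A quick numerical check of the last step (my own sketch, with arbitrary made-up counts): the matrix expression and the direct sum of pairwise products give the same $c+d$ for a 2x2 contingency table.

import numpy as np

T = np.array([[7., 3.],
              [2., 8.]])         # an arbitrary 2x2 contingency table [[n11, n12], [n21, n22]]
n11, n12, n21, n22 = T.ravel()

direct = n11 * n12 + n11 * n21 + n12 * n22 + n21 * n22   # (1) + (2) + (3) + (4)
matrix = (np.power(T.sum(1), 2).sum() + np.power(T.sum(0), 2).sum()) / 2 - np.power(T, 2).sum()
print(direct, matrix)            # both print 75.0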

NumPy implementation

import numpy as np
def Rand_index_numpy(predMasks, gtMasks):
    '''
    predMasks: numpy array of binary masks, prediction result; shape: [r, H, W], (r >= 1)
    gtMasks: numpy array of binary masks, ground truth; shape: [s, H, W], (s >= 1)
    '''
    # Append one extra class to the GT: everything outside the s foreground classes becomes background
    gtMasks = np.concatenate([gtMasks, np.clip(1 - np.sum(gtMasks, axis=0, keepdims=True), a_min=0, a_max=1)], axis=0)
    # Append one extra class to the prediction: everything outside the r foreground classes becomes background
    predMasks = np.concatenate([predMasks, np.clip(1 - np.sum(predMasks, axis=0, keepdims=True), a_min=0, a_max=1)], axis=0)
    # The contingency table: T[i, j] = number of pixels in GT class i and predicted class j
    T = (np.expand_dims(gtMasks, axis=1) * predMasks).sum(-1).sum(-1).astype(np.float32)
    # Total number of pixels
    N = T.sum()
    RI = 1 - ((np.power(T.sum(0), 2).sum() + np.power(T.sum(1), 2).sum()) / 2 - np.power(T, 2).sum()) / (N * (N - 1) / 2)
    return RI
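A possible usage sketch (my own example data, not part of the original post): build one-hot foreground masks from random label maps, with label 0 treated as background, and call Rand_index_numpy. If scikit-learn >= 0.24 is installed, sklearn.metrics.rand_score on the flattened label maps should give the same value, because the background channel appended inside the function plays the role of the extra class.

import numpy as np

rng = np.random.default_rng(0)
gt_labels = rng.integers(0, 3, size=(8, 8))      # ground-truth label map with values {0, 1, 2}, 0 = background
pred_labels = rng.integers(0, 3, size=(8, 8))    # predicted label map with values {0, 1, 2}, 0 = background

# One-hot foreground masks of shape [s, H, W] and [r, H, W]; the background class is added inside the function
gtMasks = np.stack([(gt_labels == k).astype(np.float32) for k in (1, 2)])
predMasks = np.stack([(pred_labels == k).astype(np.float32) for k in (1, 2)])

print(Rand_index_numpy(predMasks, gtMasks))

# Optional cross-check (requires scikit-learn >= 0.24, which introduced rand_score):
# from sklearn.metrics import rand_score
# print(rand_score(gt_labels.ravel(), pred_labels.ravel()))   # should match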

PyTorch implementation

import torch
def Rand_index_torch(predMasks, gtMasks):
    '''
    predMasks: Tensor of binary masks, prediction result; shape: [r, H, W], (r >= 1)
    gtMasks: Tensor of binary masks, ground truth; shape: [s, H, W], (s >= 1)
    '''
    # Append one extra class to the GT: everything outside the s foreground classes becomes background
    gtMasks = torch.cat([gtMasks, torch.clamp(1 - gtMasks.sum(0, keepdim=True), min=0)], dim=0)
    # Append one extra class to the prediction: everything outside the r foreground classes becomes background
    predMasks = torch.cat([predMasks, torch.clamp(1 - predMasks.sum(0, keepdim=True), min=0)], dim=0)
    # The contingency table: T[i, j] = number of pixels in GT class i and predicted class j
    T = (gtMasks.unsqueeze(1) * predMasks).sum(-1).sum(-1).float()
    # Total number of pixels
    N = T.sum()
    RI = 1 - ((T.sum(0).pow(2).sum() + T.sum(1).pow(2).sum()) / 2 - T.pow(2).sum()) / (N * (N - 1) / 2)
    return RI
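And a matching sanity check for the PyTorch version (again my own toy data): fed with the same binary masks, Rand_index_torch and Rand_index_numpy should agree up to floating-point error.

import torch
import numpy as np

rng = np.random.default_rng(0)
gt_labels = rng.integers(0, 3, size=(8, 8))      # 0 = background, 1 and 2 = foreground classes
pred_labels = rng.integers(0, 3, size=(8, 8))

gtMasks_np = np.stack([(gt_labels == k).astype(np.float32) for k in (1, 2)])
predMasks_np = np.stack([(pred_labels == k).astype(np.float32) for k in (1, 2)])

ri_np = Rand_index_numpy(predMasks_np, gtMasks_np)
ri_torch = Rand_index_torch(torch.from_numpy(predMasks_np), torch.from_numpy(gtMasks_np))
print(ri_np, ri_torch.item())                    # the two values agree up to floating-point error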