
Coursera Machine Learning Course Notes (9)


I. Notes

1. Distinguishing when anomaly detection (Anomaly detection) and supervised learning (Supervised learning) each apply:

  • Anomaly detection: very few positive examples (anomalies) and a large number of negative examples
  • Supervised learning: plenty of both positive and negative examples

Typically, an anomaly detection algorithm learns a model p(x) from the large set of negative examples. The few positive examples are not used to fit the model because future anomalies may be of entirely new types.
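As a small illustration of this idea (not part of the assignment code), the sketch below fits an independent Gaussian to each feature of a training set of mostly negative examples and flags a new example as an anomaly when its estimated density falls below a threshold; Xtrain, x and epsilon are hypothetical names.

% Sketch only: per-feature Gaussian density estimation for anomaly detection.
% Xtrain (m x n), the new example x (1 x n) and the threshold epsilon are assumed to exist.
mu = mean(Xtrain);                    % 1 x n vector of feature means
sigma2 = var(Xtrain, 1);              % 1 x n vector of variances, normalized by 1/m
p = prod((1 ./ sqrt(2 * pi * sigma2)) .* exp(-(x - mu) .^ 2 ./ (2 * sigma2)));   % p(x)
is_anomaly = (p < epsilon);           % flag x as anomalous when its density is low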

2. In a movie recommender system, even when we do not know which features should represent each movie, nor the theta values, the collaborative filtering (collaborative filtering) algorithm can learn the features and the corresponding theta parameters simultaneously.

Once suitable features have been learned, they can be used not only to recommend movies to users (the algorithm's primary job) but also to find movies similar to a given movie, by computing the distance between the feature vectors of different movies.
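As a sketch of that second use (hypothetical names: X is the learned num_movies x num_features matrix and i is the index of the query movie), the movies closest to movie i in feature space can be found like this:

% Sketch only: find the 5 movies most similar to movie i.
dists = sqrt(sum(bsxfun(@minus, X, X(i,:)) .^ 2, 2));   % distance from every movie to movie i
[~, idx] = sort(dists);                                 % ascending: closest movies first
similar = idx(2:6);                                     % idx(1) is movie i itself, so skip it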

II. Homework

1. Estimate Gaussian Parameters

estimateGaussian.m:

function [mu sigma2] = estimateGaussian(X)
%ESTIMATEGAUSSIAN This function estimates the parameters of a 
%Gaussian distribution using the data in X
%   [mu sigma2] = estimateGaussian(X), 
%   The input X is the dataset with each n-dimensional data point in one row
%   The output is an n-dimensional vector mu, the mean of the data set
%   and the variances sigma^2, an n x 1 vector
% 

% Useful variables
[m, n] = size(X);

% You should return these values correctly
mu = zeros(n, 1);
sigma2 = zeros(n, 1);

% ====================== YOUR CODE HERE ======================
% Instructions: Compute the mean of the data and the variances
%               In particular, mu(i) should contain the mean of
%               the data for the i-th feature and sigma2(i)
%               should contain variance of the i-th feature.
%

for i = 1:n
    mu(i) = mean(X(:,i));
    sigma2(i) = var(X(:,i)) * (m-1) / m;     % var normalizes by 1/(m-1) by default; here we need the 1/m estimate
end

% =============================================================
end
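The loop above is perfectly fine; an equivalent vectorized version (a sketch giving the same result) would be:

% Equivalent vectorized sketch of the loop body above.
mu = mean(X)';            % n x 1 vector of per-feature means
sigma2 = var(X, 1)';      % var(X, 1) normalizes by 1/m rather than 1/(m-1)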

2. Select Threshold

selectThreshold.m:

function [bestEpsilon bestF1] = selectThreshold(yval, pval)
%SELECTTHRESHOLD Find the best threshold (epsilon) to use for selecting
%outliers
%   [bestEpsilon bestF1] = SELECTTHRESHOLD(yval, pval) finds the best
%   threshold to use for selecting outliers based on the results from a
%   validation set (pval) and the ground truth (yval).
%

bestEpsilon = 0;
bestF1 = 0;
F1 = 0;

stepsize = (max(pval) - min(pval)) / 1000;
for epsilon = min(pval):stepsize:max(pval)

    % ====================== YOUR CODE HERE ======================
    % Instructions: Compute the F1 score of choosing epsilon as the
    %               threshold and place the value in F1. The code at the
    %               end of the loop will compare the F1 score for this
    %               choice of epsilon and set it to be the best epsilon if
    %               it is better than the current choice of epsilon.
    %               
    % Note: You can use predictions = (pval < epsilon) to get a binary vector
    %       of 0's and 1's of the outlier predictions

    predictions = (pval < epsilon);           % in predictions, 1 = anomaly (positive), 0 = normal (negative)
    tp = sum((predictions == 1) & (yval == 1));       % number of true positives
    fp = sum((predictions == 1) & (yval == 0));       % number of false positives
    fn = sum((predictions == 0) & (yval == 1));       % number of false negatives

    P = tp / (tp + fp);     % precision
    R = tp / (tp + fn);     % recall

    F1 = 2 * P * R / (P + R);
    % =============================================================

    if F1 > bestF1
       bestF1 = F1;
       bestEpsilon = epsilon;
    end
end

end
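A short usage sketch (assuming, as in the exercise script, that pval and p hold the estimated densities for the cross-validation set and the full data set, and yval the cross-validation labels):

% Usage sketch; pval, yval and p are assumed to come from the exercise script.
[epsilon, F1] = selectThreshold(yval, pval);
outliers = find(p < epsilon);     % indices of the examples flagged as anomalies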

3. Collaborative Filtering Cost

4. Collaborative Filtering Gradient

5. Regularized Cost

6. Regularized Gradient

Parts 3 through 6 are all implemented in a single file.
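For reference, the regularized cost that the code below computes, summing only over pairs (i, j) with r(i, j) = 1, is

J = \frac{1}{2}\sum_{(i,j):\,r(i,j)=1}\left((\theta^{(j)})^{T}x^{(i)} - y^{(i,j)}\right)^{2} + \frac{\lambda}{2}\sum_{j=1}^{n_u}\sum_{k=1}^{n}\left(\theta_k^{(j)}\right)^{2} + \frac{\lambda}{2}\sum_{i=1}^{n_m}\sum_{k=1}^{n}\left(x_k^{(i)}\right)^{2}

Setting lambda = 0 recovers the unregularized cost, and the gradients are the partial derivatives of J with respect to each x_k^(i) and theta_k^(j).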
cofiCostFunc.m:

function [J, grad] = cofiCostFunc(params, Y, R, num_users, num_movies, ...
                                  num_features, lambda)
%COFICOSTFUNC Collaborative filtering cost function
%   [J, grad] = COFICOSTFUNC(params, Y, R, num_users, num_movies, ...
%   num_features, lambda) returns the cost and gradient for the
%   collaborative filtering problem.
%

% Unfold the U and W matrices from params
X = reshape(params(1:num_movies*num_features), num_movies, num_features);
Theta = reshape(params(num_movies*num_features+1:end), ...
                num_users, num_features);


% You need to return the following values correctly
J = 0;
X_grad = zeros(size(X));
Theta_grad = zeros(size(Theta));

% ====================== YOUR CODE HERE ======================
% Instructions: Compute the cost function and gradient for collaborative
%               filtering. Concretely, you should first implement the cost
%               function (without regularization) and make sure it is
%               matches our costs. After that, you should implement the 
%               gradient and use the checkCostFunction routine to check
%               that the gradient is correct. Finally, you should implement
%               regularization.
%
% Notes: X - num_movies  x num_features matrix of movie features
%        Theta - num_users  x num_features matrix of user features
%        Y - num_movies x num_users matrix of user ratings of movies
%        R - num_movies x num_users matrix, where R(i, j) = 1 if the 
%            i-th movie was rated by the j-th user
%
% You should set the following variables correctly:
%
%        X_grad - num_movies x num_features matrix, containing the 
%                 partial derivatives w.r.t. to each element of X
%        Theta_grad - num_users x num_features matrix, containing the 
%                     partial derivatives w.r.t. to each element of Theta
%

sum_Theta = sum(sum(Theta .^ 2));    % regularization term over Theta
sum_X = sum(sum(X .^ 2));            % regularization term over X
J = 0.5 * sum(sum((R .* (X * Theta' - Y)) .^ 2)) + lambda / 2 * (sum_Theta + sum_X) ;

for i = 1:num_movies
    % Gradient w.r.t. the features of movie i; R(i,:) zeroes out users who did not rate it
    X_grad(i,:) = R(i,:) .* (X(i,:) * Theta' - Y(i,:)) * Theta + lambda * X(i,:);
end

for j = 1:num_users
    % Gradient w.r.t. the parameters of user j; R(:,j) zeroes out movies this user did not rate
    Theta_grad(j,:) = (R(:,j) .* (X * Theta(j,:)' - Y(:,j)))' * X + lambda * Theta(j,:);
end

% =============================================================

grad = [X_grad(:); Theta_grad(:)];

end
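As suggested in the function header, the gradient can be sanity-checked against numerical gradients before training. A usage sketch (the checkCostFunction routine and its optional lambda argument come from the exercise's starter code, so treat the exact call as an assumption):

% Sketch: compare analytic and numerical gradients (checkCostFunction ships with the exercise).
checkCostFunction;          % check the unregularized gradient
checkCostFunction(1.5);     % check the regularized gradient with lambda = 1.5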
