PCA with images
% Face recognition by Santiago Serrano
% Face recognition code
clear all
close all
clc
% number of images in your training set
M=10;
% Chosen std and mean.
% It can be any number that is close to the std and mean of most of the images.
um=100;
ustd=8
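The setup above (M training images, a target mean um and std ustd) is the opening of a classic eigenfaces pipeline. A minimal Python sketch of the core PCA step, using hypothetical random data in place of real face images, might look like:

```python
import numpy as np

# Hypothetical toy data: M flattened "face images" of D pixels each.
M, D = 10, 64
rng = np.random.default_rng(0)
faces = rng.random((M, D))

# Normalize each image to the chosen mean/std (um, ustd), as in the snippet.
um, ustd = 100.0, 8.0
faces = (faces - faces.mean(axis=1, keepdims=True)) / faces.std(axis=1, keepdims=True)
faces = faces * ustd + um

# Eigenfaces: subtract the mean face, then use the M x M "small covariance"
# trick (A A^T instead of the D x D A^T A) to get the eigenvectors cheaply.
mean_face = faces.mean(axis=0)
A = faces - mean_face                       # M x D
evals, evecs = np.linalg.eigh(A @ A.T)      # eigendecompose the M x M matrix
eigenfaces = A.T @ evecs                    # D x M; columns are eigenfaces
eigenfaces /= np.linalg.norm(eigenfaces, axis=0)

# Project an image onto the eigenface basis to get its feature vector.
weights = (faces[0] - mean_face) @ eigenfaces
```

Recognition then compares these weight vectors between a probe image and the training set, typically by Euclidean distance.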
function normal = normalization(x,kind)
% by Li Yang BNU MATH Email:farutoliyang@gmail.com
% last modified 2009.2.24
if nargin < 2
    kind = 2; % kind = 1 or 2 selects the first or second kind of normalization
end
[m,n] = size(x);
normal = zeros(m,n);
%% normalize the data x to [0,1] i
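The MATLAB source is truncated, so the exact meaning of the two kinds is not shown. A Python sketch under the assumption that kind 1 is column-wise min-max scaling to [0,1] and kind 2 is column-wise z-score standardization:

```python
import numpy as np

def normalization(x, kind=2):
    """Column-wise normalization (mapping of 'kind' is an assumption).

    kind=1: min-max scaling of each column to [0, 1]
    kind=2: z-score standardization of each column
    """
    x = np.asarray(x, dtype=float)
    if kind == 1:
        mn, mx = x.min(axis=0), x.max(axis=0)
        return (x - mn) / (mx - mn)
    return (x - x.mean(axis=0)) / x.std(axis=0, ddof=1)
```

For example, `normalization([[1., 2.], [3., 4.]], kind=1)` maps each column onto [0, 1].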
Particle System API
by David McAllister
version 1.11
February 2, 1999 (Groundhog Day)
http://www.cs.unc.edu/~davemc/Particle

Running the PSpray Demo

To run the PSpray demo, click on pspray.exe and the demo will start by drawing a fountain. It uses m
Programming Exercise 1: Linear Regression
Machine Learning

Introduction

In this exercise, you will implement linear regression and get to see it work on data. Before starting on this programming exercise, we strongly recommend watching the video l
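The exercise's core computation can be sketched in Python; this is an illustrative batch gradient-descent implementation on made-up data, not the course's official starter code:

```python
import numpy as np

def gradient_descent(X, y, alpha=0.1, iters=2000):
    # Batch gradient descent for linear regression with squared-error cost.
    m = len(y)
    Xb = np.column_stack([np.ones(m), X])        # add intercept column
    theta = np.zeros(Xb.shape[1])
    for _ in range(iters):
        theta -= (alpha / m) * Xb.T @ (Xb @ theta - y)
    return theta

# Toy data drawn from y = 2x + 1 (hypothetical, for illustration).
X = np.array([0.0, 1.0, 2.0, 3.0])
y = 2 * X + 1
theta = gradient_descent(X, y)
```

With noise-free data, `theta` converges to the true intercept and slope, `[1, 2]`.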
Kernel principal component analysis (KPCA) for dimensionality reduction, feature extraction, and fault detection - KPCA_v2.zip and data.rar
Last edited by iqiukp on 2018-11-9 15:02.
Applications of kernel principal component analysis (KPCA) in dimensionality reduction, feature extraction, and fault detection. Main functions: (1) nonlinear principal component extraction from training and test data (dimensionality reduction, feature extraction); (2) computation of the SPE and T2 statistics and their control limits; (3) fault detection.
Reference: Lee J M, Yoo C K, Choi S W, et al.
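The post's first function (nonlinear principal component extraction) can be sketched in Python. This is a generic RBF-kernel KPCA with kernel centering, not the code in the attachment; the SPE and T2 statistics would then be computed from the resulting scores:

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.1):
    # Gaussian kernel K[i, j] = exp(-gamma * ||a_i - b_j||^2)
    d2 = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

def kpca_fit_transform(X, n_components=2, gamma=0.1):
    # Kernel PCA: eigendecompose the centered kernel matrix and project.
    n = len(X)
    K = rbf_kernel(X, X, gamma)
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one    # center in feature space
    evals, evecs = np.linalg.eigh(Kc)
    idx = np.argsort(evals)[::-1][:n_components]  # largest eigenvalues first
    alphas = evecs[:, idx] / np.sqrt(evals[idx])
    return Kc @ alphas

# Toy data (hypothetical): 20 samples with 3 variables.
rng = np.random.default_rng(1)
scores = kpca_fit_transform(rng.random((20, 3)))
```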
Speech recognition: the LAS architecture

where φ and ψ are MLP networks. After training, the α_i distribution is typically very sharp and focuses on only a few frames of h; c_i car

Table 1: WER comparison on the clean and noisy Google voice search task. The CLDNN-HMM system is the s
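The attention step the excerpt describes (a sharp α distribution over the encoder frames h, pooled into a context c_i) can be sketched as follows; the shapes are hypothetical, and single linear maps stand in for the MLPs φ and ψ:

```python
import numpy as np

def attention_context(s_i, h, W_phi, W_psi):
    # e_u = <phi(s_i), psi(h_u)>; linear maps stand in for the MLPs
    # phi and psi mentioned in the excerpt (an assumption).
    q = W_phi @ s_i               # query from the decoder state s_i
    keys = h @ W_psi.T            # one key per encoder frame h_u
    e = keys @ q                  # scalar energy per frame
    alpha = np.exp(e - e.max())
    alpha /= alpha.sum()          # softmax over frames
    c = alpha @ h                 # context c_i: weighted sum of frames
    return alpha, c

# Hypothetical shapes: 50 encoder frames of dim 8, decoder state of dim 6.
rng = np.random.default_rng(0)
h = rng.random((50, 8))
alpha, c = attention_context(rng.random(6), h,
                             rng.random((4, 6)), rng.random((4, 8)))
```

A "sharp" α means most of the probability mass lands on a few entries of `alpha`, so `c` summarizes only those frames.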
MATLAB code: a data normalization function.
function [X,log2_ds] = normalize(X,precision,min_order,epsilon)
% Normalize scattering coefficients by parents
% This function is for internal use only. It may change or be removed in a
% future release.
This article does not cover the theory behind normalization, only its implementation (in fact, the theory becomes clear once you read the code). The code is as follows:
import numpy as np

def Normalize(data):
    # mean normalization: (x - mean) / (max - min)
    m = np.mean(data)
    mx = max(data)
    mn = min(data)
    return [(float(i) - m) / (mx - mn) for i in data]
The code is only five lines and not complicated, but one thing matters: compute the mean and the matrix's maximum and minimum once, store them in variables, and only then use those variables inside your loop. If you recompute these values on every loop iteration, normalization becomes painfully slow; the author learned this the hard way.
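As the note above says, hoisting the statistics out of the loop is what matters. Better still, the whole normalization can be vectorized with NumPy; this sketch is equivalent to the function above for 1-D input:

```python
import numpy as np

def normalize_fast(data):
    # Compute the statistics once, then apply them in one vectorized pass.
    a = np.asarray(data, dtype=float)
    return (a - a.mean()) / (a.max() - a.min())

out = normalize_fast([1, 2, 3, 4])
```

For `[1, 2, 3, 4]` the mean is 2.5 and the range is 3, so the result is `[-0.5, -1/6, 1/6, 0.5]`.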