File name: 基于VQ方法的人工智能模糊系统 (An artificial-intelligence fuzzy system based on the VQ method)
  Category: Machine learning
  Development tool:
  File size: 391 KB
  Downloads: 0
  Upload time: 2019-04-06
  Provider: dreamr*******
Detailed description: A classic machine-learning resource; the recommended approach is VQ. [Excerpt from: Learning Algorithms for Fuzzy Inference Systems Using Vector Quantization, in "From Natural to Artificial Intelligence - Algorithms and Applications", http://dx.doi.org/10.5772/intechopen.79925]

Specifically, it is known that learning methods using vector quantization (VQ) and the steepest descent method (SDM) are superior to other methods. In these learning methods, VQ is used only to determine the initial parameters of the antecedent part of the fuzzy rules.

A membership value $\mu_i$ of the antecedent part for input $x$ is expressed as

$$\mu_i = \prod_{j=1}^{m} M_{ij}(x_j) \quad (2)$$

Then, the output $y^*$ of the fuzzy inference method is obtained as

$$y^* = \frac{\sum_{i=1}^{n} \mu_i w_i}{\sum_{i=1}^{n} \mu_i} \quad (3)$$

If a Gaussian membership function is used, then $M_{ij}$ is expressed as

$$M_{ij}(x_j) = \exp\left(-\frac{(x_j - c_{ij})^2}{2 b_{ij}^2}\right) \quad (4)$$

where $c_{ij}$ and $b_{ij}$ denote the center and width values of $M_{ij}$, respectively.

The objective function $E$ is determined to evaluate the inference error between the desirable output $y^r$ and the inference output $y^*$. Let $D = \{(x_1^p, \ldots, x_m^p, y^p) \mid p \in Z_P\}$ and $D^* = \{(x_1^p, \ldots, x_m^p) \mid p \in Z_P\}$ be the set of learning data and the set of input parts of $D$, respectively. The objective of learning is to minimize the following mean square error (MSE):

$$E = \frac{1}{P} \sum_{p=1}^{P} (y_p^* - y_p^r)^2 \quad (5)$$

where $y_p^*$ and $y_p^r$ mean the inference and desired outputs for the $p$th input $x^p$.

In order to minimize the objective function $E$, each parameter $c_{ij}$, $b_{ij}$, and $w_i$ is updated based on SDM using the following relations:

$$w_i(t+1) = w_i(t) - K_w \frac{\mu_i}{\sum_{k=1}^{n} \mu_k} (y^* - y^r) \quad (6)$$

$$c_{ij}(t+1) = c_{ij}(t) - K_c \frac{\mu_i}{\sum_{k=1}^{n} \mu_k} (y^* - y^r)(w_i - y^*) \frac{x_j - c_{ij}}{b_{ij}^2} \quad (7)$$

$$b_{ij}(t+1) = b_{ij}(t) - K_b \frac{\mu_i}{\sum_{k=1}^{n} \mu_k} (y^* - y^r)(w_i - y^*) \frac{(x_j - c_{ij})^2}{b_{ij}^3} \quad (8)$$

where $t$ is the iteration time and $K_w$, $K_c$, and $K_b$ are learning constants [1].

The learning algorithm for the conventional fuzzy inference model is shown as follows.

Learning Algorithm A
Step A1: The threshold $\theta$ of the inference error and the maximum number of learning times $T_{max}$ are set. Let $n_0$ be the initial number of rules. Let $t = 1$.
Step A2: The parameters $b_{ij}$, $c_{ij}$, and $w_i$ are set randomly.
Step A3: Let $p = 1$.
Step A4: A datum $(x_1^p, \ldots, x_m^p, y^p) \in D$ is given.
Step A5: From Eqs. (2) and (3), $\mu_i$ and $y^*$ are computed.
Step A6: Parameters $w_i$, $c_{ij}$, and $b_{ij}$ are updated by Eqs. (6), (7), and (8).
Step A7: If $p = P$, then go to Step A8; if $p < P$, then go to Step A4 with $p \leftarrow p + 1$.
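The inference of Eqs. (2)-(3) and the SDM updates of Eqs. (6)-(8) can be sketched in a few lines of NumPy. This is a minimal illustration, not the chapter's code; the function names and the learning-constant values are assumptions:

```python
import numpy as np

def infer(x, c, b, w):
    """Simplified fuzzy inference for one input x (m,).
    c, b: (n_rules, m) Gaussian centers/widths; w: (n_rules,) consequents."""
    # mu_i = prod_j exp(-(x_j - c_ij)^2 / (2 b_ij^2))   (Eqs. 2 and 4)
    mu = np.exp(-((x - c) ** 2) / (2.0 * b ** 2)).prod(axis=1)
    y = mu @ w / mu.sum()                               # weighted average (Eq. 3)
    return mu, y

def sdm_step(x, y_r, c, b, w, Kw=0.1, Kc=0.1, Kb=0.1):
    """One SDM update for a single sample (x, y_r), following Eqs. (6)-(8)."""
    mu, y = infer(x, c, b, w)
    g = (mu / mu.sum()) * (y - y_r)          # shared factor of Eqs. (6)-(8)
    h = (g * (w - y))[:, None]               # extra factor (w_i - y*) for c, b
    dw = Kw * g                              # Eq. (6)
    dc = Kc * h * (x - c) / b ** 2           # Eq. (7)
    db = Kb * h * (x - c) ** 2 / b ** 3      # Eq. (8)
    return c - dc, b - db, w - dw
```

All deltas are computed from the pre-update parameters before any of them is applied, which keeps the three updates of Eqs. (6)-(8) consistent with one another.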

Step A8: If $E(t) > \theta$ and $t < T_{max}$, then go to Step A3 with $t \leftarrow t + 1$; else, if $E(t) \le \theta$ and $t \le T_{max}$, then the algorithm terminates.
Step A9: If $t > T_{max}$ and $E(t) > \theta$, then go to Step A2 with $n \leftarrow n + 1$ and $t = 1$.

In particular, Algorithm SDM is defined as follows.

Algorithm SDM(c, b, w)
$\theta$: threshold of the inference error
$T_{max}$: the maximum number of learning times
$n$: the number of rules
Input: current parameters
Output: parameters $c$, $b$, and $w$ after learning
Steps A3 to A8 of Algorithm A are performed.

2.2. Neural gas method

Vector quantization techniques encode a data space $V \subseteq R^m$ utilizing only a finite set $C = \{c_i \mid i \in Z_r\}$ of reference vectors [18].

Let the winner vector $c_{i(v)}$ be defined for any vector $v \in V$ as

$$i(v) = \arg\min_{i \in Z_r} \|v - c_i\| \quad (9)$$

By using the finite set $C$, the space $V$ is partitioned as

$$V_i = \{v \in V \mid \|v - c_i\| \le \|v - c_j\| \text{ for } j \in Z_r\} \quad (10)$$

where $V = \bigcup_{i \in Z_r} V_i$ and $V_i \cap V_j = \emptyset$ for $i \ne j$.

The evaluation function for the partition is defined by

$$E = \sum_{i=1}^{r} \sum_{v \in V_i} \|v - c_i\|^2 \quad (11)$$

where $n_i = |V_i|$.

Let us introduce the neural gas method as follows [18]. For any input data vector $v$, the neighborhood ranking $c_{i_k}$ for $k \in Z_r$ is determined, $c_{i_k}$ being the reference vector for which there are $k$ vectors $c_j$ with

$$\|v - c_j\| < \|v - c_{i_k}\| \quad (12)$$

Let the number $k$ associated with each vector $c_i$ be denoted by $k_i(v, c_i)$. Then, the adaptation step for adjusting the parameters is given by

$$\Delta c_i = \varepsilon \cdot h_\lambda(k_i(v, c_i)) \cdot (v - c_i) \quad (13)$$

$$h_\lambda(k_i(v, c_i)) = \exp(-k_i(v, c_i)/\lambda) \quad (14)$$

where $\varepsilon \in [0, 1]$ and $\lambda > 0$. Let the probability of $v$ being selected from $V$ be denoted by $p(v)$.

The flowchart of the conventional neural gas algorithm is shown in Figure 1 [18], where $\varepsilon_{int}$, $\varepsilon_{fin}$, and $T_{max}$ are learning constants and the maximum number of learning steps, respectively.
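A single neural gas adaptation step (Eqs. (12)-(14)) amounts to ranking the reference vectors by distance to the input and moving every one of them toward it with exponentially decaying strength. A minimal NumPy sketch (the function name and parameter values are assumptions):

```python
import numpy as np

def neural_gas_step(v, C, eps, lam):
    """Move reference vectors C (r, m) toward input v (m,) by rank."""
    d = np.linalg.norm(C - v, axis=1)       # distances ||v - c_i||
    k = np.argsort(np.argsort(d))           # neighborhood rank k_i(v, c_i), Eq. (12)
    h = np.exp(-k / lam)                    # h_lambda(k), Eq. (14)
    return C + eps * h[:, None] * (v - C)   # c_i + Delta c_i, Eq. (13)
```

In the annealed schedule of Figure 1, `eps` would not stay constant but decay as $\varepsilon(t) = \varepsilon_{int}(\varepsilon_{fin}/\varepsilon_{int})^{t/T_{max}}$ over the learning iterations.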
The method is called learning algorithm NG. Using the set $D^*$, a decision procedure for the center and width parameters is given as follows.

Algorithm Center(c)
$C_i$, $i \in Z_n$: clusters; $p(x)$: the probability of $x$ being selected, for $x \in D^*$
Step 1: By using $p(x)$ for $x \in D^*$, the NG method of Figure 1 [16, 18] is performed. As a result, the set $C$ of reference vectors for $D^*$ is determined, where $|C| = n$.
Step 2: Each center parameter is assigned to a reference vector. Let

$$c_i = \frac{1}{n_i} \sum_{x \in C_i} x \quad (15)$$

where $C_i$ and $n_i$ are the set and the number of learning data belonging to the $i$th cluster, $C = \bigcup_{i=1}^{n} C_i$, and $P = \sum_{i=1}^{n} n_i$.

As a result, the center and width parameters are determined from Algorithm Center(c).

Figure 1. Flowchart of the conventional neural gas algorithm: given $\varepsilon_{int}$, $\varepsilon_{fin}$, and $T_{max}$, let $t = 1$ and select each reference vector randomly; given $v \in V$ with probability $p(v)$, determine the neighborhood ranking $k_i(v, c_i)$ for $i \in Z_r$ and update each $c_i$ by Eqs. (13) and (14) with $\varepsilon(t) = \varepsilon_{int}(\varepsilon_{fin}/\varepsilon_{int})^{t/T_{max}}$; repeat with $t \leftarrow t + 1$ until $t = T_{max}$.

Learning Algorithm B using Algorithm Center(c) is introduced as follows [16, 17].

Learning Algorithm B
$\theta$: threshold of the MSE
$T_{NG}$: maximum number of learning times for NG
$T_{max}$: maximum number of learning times for SDM
$M$: the size of ranges
$n$: the number of rules
Step 1: Initialize: let $n \leftarrow n_0$ and $t \leftarrow 1$.
Step 2: The center and width parameters are determined from Algorithm Center(c) and the set $D^*$.
Step 3: Parameters $c$, $b$, and $w$ are updated using Algorithm SDM(c, b, w).
Step 4: If $E(t) \le \theta$, then the algorithm terminates; else go to Step 3 with $n \leftarrow n + 1$ and $t \leftarrow t + 1$.

2.3. The probability distribution of input data based on the rate of change of output

It is known that many rules are needed at or near the places where the output data change quickly in fuzzy modeling. Then, how can we find the rate of output change? The probability $p_M(x)$ is one method to perform it. As shown in Eqs.
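Step 2 of Algorithm Center (Eq. (15)) is a cluster-mean assignment: each datum joins the cluster of its nearest reference vector, and each center becomes the mean of its cluster. A minimal sketch (the function name is an assumption, and empty clusters are not handled):

```python
import numpy as np

def centers_from_clusters(X, C_ref):
    """X: (P, m) learning data; C_ref: (n, m) reference vectors from NG.
    Returns the cluster-mean centers of Eq. (15) and each datum's cluster label."""
    # nearest reference vector for every datum (defines the clusters C_i)
    labels = np.linalg.norm(X[:, None, :] - C_ref[None, :, :], axis=2).argmin(axis=1)
    # c_i = (1/n_i) * sum of the data in cluster C_i   (Eq. 15)
    centers = np.array([X[labels == i].mean(axis=0) for i in range(len(C_ref))])
    return centers, labels
```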
(16) and (17), any input datum where the output changes quickly is selected with high probability, and any input datum where the output changes slowly is selected with low probability, where $M$ is the size of the range considering the output change. Based on the literature [13], the probability distribution is defined as follows.

Algorithm Prob($p_M(x)$)
Input: $D = \{(x^p, y^p) \mid p \in Z_P\}$ and $D^* = \{x^p \mid p \in Z_P\}$
Output: $p_M(x)$
Step 1: Given an input datum $x' \in D^*$, we determine the neighborhood ranking $(x'_0, x'_1, \ldots, x'_k, \ldots, x'_{P-1})$ of the vector $x'$, with $x'_0 = x'$, $x'_1$ being closest to $x'$, and $x'_k$ ($k = 0, \ldots, P-1$) being the vector $x$ for which there are $k$ vectors $x_j$ with $\|x' - x_j\| < \|x' - x'_k\|$.
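The neighborhood ranking of Step 1 can be computed with a single sort over distances. The excerpt truncates before Eqs. (16) and (17), so only this ranking step is sketched (the function name is an assumption):

```python
import numpy as np

def neighborhood_ranking(X, idx):
    """Indices of all data in X (P, m) ordered by distance to X[idx]:
    position 0 is x' itself, position 1 its nearest neighbor, and so on."""
    d = np.linalg.norm(X - X[idx], axis=1)
    return np.argsort(d)
```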
