File name: Attribute Weighting for Averaged One-Dependence Estimators
  Category: Machine Learning
  Development tool:
  File size: 373 KB
  Downloads: 0
  Upload date: 2019-02-23
  Provider: xiangzho********
Description: This is a paper I wrote on improving the classification performance of the naive Bayes classifier through attribute weighting; it proposes a new algorithm named WAODE. The abstract is as follows:

Averaged One-Dependence Estimators (AODE) is a supervised learning algorithm that relaxes the conditional independence assumption underlying the standard naive Bayes learner. AODE has shown a reasonable improvement in classification accuracy without learning the structure of a Bayesian network (BN). However, AODE does not consider the relationships between the super-parent attribute (called the one-dependence attribute in AODE) and the other, ordinary attributes. We believe that for any pair of attributes Ai, Aj with a stronger dependency on each other, we can place more confidence in the probability estimates P(Aj | Ai) or P(Ai | Aj). Hence we propose an algorithm based on the AODE framework, expecting to improve the performance of AODE by taking the relationships among attributes in real-world data sets into account. Based on our work, expert experience can also be incorporated by tuning the parameters that reflect the relationships among pairs of attributes in our algorithm.

Zheng and Webb (2006) proposed a method for improving AODE called lazy elimination (LE for short). LE is an interesting method that is simple and efficient, and it can be applied to any Bayesian classifier.
LE defines a specialization-generalization relationship over pairs of attribute values and points out that deleting all generalization attribute-values before training a classifier can improve the performance of a Bayesian classifier. Zheng and Webb (2006) proved that no information is lost in prediction after the generalization attribute-values are deleted from the data set. However, keeping the generalization attribute-values (useless data) in the training data can cause prediction bias under the independence assumption, because the independence assumption is itself biased (it is usually violated in real-world problems).

Jiang and Zhang (2006) proposed an algorithm named weightily averaged one-dependence estimators, which assigns weights to the super-parent nodes according to the relationship between each super-parent node and the class. Their method differs from ours in that the weight values in our paper indicate the relationships between the super-parent node and the other, ordinary attributes given the class.

Jiang et al. (2009) proposed an algorithm called hidden naive Bayes (HNB), which extends the structure of naive Bayes. In HNB, each feature has one hidden parent, which is a mixture of the weighted influences of all the other attributes. HNB and our proposed algorithm apply the same attribute weighting method, measuring the relationship between a pair of attributes by the conditional mutual information metric. Both HNB and WAODE also avoid learning the structure of a BN. However, our proposed algorithm WAODE differs from HNB in two respects. First, in HNB the probability estimator of every attribute relies on that attribute's hidden parent, whereas in WAODE the probability estimator of an attribute depends on the training data sampling; this difference stems from the difference in BN structure between HNB and AODE. Second, the attribute weighting form is different: the weight acts as a multiplier in the HNB learner, but as an exponent in our WAODE.

Figure 1: An example of the naive Bayes learner. Y is the class attribute; Ai is the i-th attribute in a data set.
Figure 2: An example of AODE.
Figure 3: An example of TAN. Y is the class attribute; Ai is the i-th attribute in a data set.
Figure 4: An example of HNB. Y is the class attribute; Ai is the i-th attribute in a data set, and Hi (dashed line) represents the hidden parent node of Ai.

The contributions of this paper are two-fold:
- We briefly survey ways to improve AODE.
- We propose a novel attribute weighting for AODE called Weighted Averaged One-Dependence Estimators (WAODE for short). WAODE not only keeps the advantages of AODE but also considers the dependency among attributes via the conditional mutual information measure. Our experimental results show that the WAODE learning algorithm achieves a clear improvement over standard AODE.

The rest of the paper is organized as follows: we briefly discuss AODE and attribute weighting forms in Section 2. In Section 3, weighted AODE is proposed. In Section 4, we describe the experiments and results in detail. Lastly, we draw conclusions and describe future work in Section 5.

Figure 5: The Bayesian net structures of naive Bayes, AODE, TAN and HNB: (a) naive Bayes, (b) AODE, (c) TAN, (d) HNB. Y is the class attribute; Ai is the i-th attribute in a data set, and Hi (dashed line) represents the hidden parent node of Ai.

2. AODE and Attribute Weighting Forms

In this section, we give a brief review of AODE. AODE is a supervised learning algorithm based on the naive Bayes learner; it extends naive Bayes by partially relaxing the conditional independence assumption.
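The conditional mutual information weighting shared by HNB and our WAODE can be illustrated with a short sketch. This is not the paper's code: the function name and the data layout (discrete attribute columns as Python lists, one entry per instance) are assumptions for illustration only.

```python
import math
from collections import Counter

def conditional_mutual_information(ai, aj, y):
    """Estimate I(Ai; Aj | Y) from frequency counts:
    sum over (vi, vj, vy) of p(vi,vj,vy) * log( p(vi,vj|vy) / (p(vi|vy) p(vj|vy)) ).
    """
    n = len(y)
    c_y = Counter(y)                      # counts of each class value
    c_iy = Counter(zip(ai, y))            # joint counts of (Ai, Y)
    c_jy = Counter(zip(aj, y))            # joint counts of (Aj, Y)
    c_ijy = Counter(zip(ai, aj, y))       # joint counts of (Ai, Aj, Y)
    total = 0.0
    for (vi, vj, vy), n_ijy in c_ijy.items():
        # p(vi,vj|vy) / (p(vi|vy) p(vj|vy)) reduces to n_ijy * n_y / (n_iy * n_jy)
        ratio = n_ijy * c_y[vy] / (c_iy[(vi, vy)] * c_jy[(vj, vy)])
        total += (n_ijy / n) * math.log(ratio)
    return total
```

An attribute paired with itself yields its conditional entropy (maximal dependence), while two conditionally independent attributes yield a value near zero, which is the behavior a pairwise dependence weight needs.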
Averaged one-attribute-dependence is the main feature of AODE. Since AODE extends the naive Bayes learner, we first briefly introduce naive Bayes. At the end of this section, we discuss the forms of attribute weighting.

2.1. Naive Bayes Classifier

In supervised learning, consider a training data set D = {x_1, ..., x_n} composed of n instances, where each instance x = (a_1, ..., a_m) (an m-dimensional vector) is labeled with a class label y ∈ Y. For the posterior probability of y given x, we have

    p(y | x) = p(y) p(x | y) / p(x) ∝ p(y) p(x | y).    (1)

However, the likelihood p(x | y) cannot be estimated directly from D because of insufficient data in practice. Naive Bayes uses the attribute independence assumption to alleviate this problem; under that assumption, p(x | y) is written as

    p(x | y) = p(a_1, ..., a_m | y) = ∏_{i=1}^{m} p(a_i | y).    (2)

Then the classifier of naive Bayes is

    c(x) = argmax_{y ∈ Y} p(y) ∏_{i=1}^{m} p(a_i | y).    (3)

2.2. AODE

AODE structurally extends naive Bayes by the averaged one-attribute-dependence method. The AODE learning algorithm relies on the test instances. The details of AODE are as follows. Given a test instance x = (a_1, ..., a_i, ..., a_m), we have

    p(y, x) = p(y | x) p(x),    (4)

and therefore

    p(y | x) = p(y, x) / p(x).

Assume that the training data set contains sufficient data, so that a test instance x should be included in that training data set. Since a_i is in x,

    p(y, x) = p(x, a_i, y) = p(y, a_i) p(x | y, a_i).    (5)

Combining equation (5) with equation (4),

    p(y | x) = p(y, a_i) p(x | y, a_i) / p(x).    (6)

The AODE learning algorithm uses equation (6) together with the attribute independence assumption, so we have

    p(x | y, a_i) ≈ ∏_{j=1}^{m} p(a_j | y, a_i),    (7)

where both a_j and a_i are attribute values in the test sample x. Hence, the classifier of AODE can be described as

    c(x) = argmax_{y ∈ Y} ∑_{i=1}^{m} p(y, a_i) ∏_{j=1}^{m} p(a_j | y, a_i).    (8)
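The AODE classifier above (argmax over y of the sum, over super-parents a_i, of p(y, a_i) times the product of p(a_j | y, a_i)) can be sketched with raw frequency estimates. This is not the paper's implementation: smoothing and the minimum-frequency threshold used in practice are omitted, and all names are illustrative.

```python
def aode_predict(train_X, train_y, x, classes):
    """Predict the class of discrete instance x by averaging over all
    one-dependence (super-parent) estimators, using frequency estimates."""
    n = len(train_y)
    m = len(x)
    best_y, best_score = None, float("-inf")
    for y in classes:
        score = 0.0
        for i in range(m):  # each attribute value x[i] acts as a super-parent
            n_yi = sum(1 for row, c in zip(train_X, train_y)
                       if c == y and row[i] == x[i])
            if n_yi == 0:
                continue  # (y, a_i) never observed: skip this super-parent
            term = n_yi / n  # estimate of p(y, a_i)
            for j in range(m):
                n_jyi = sum(1 for row, c in zip(train_X, train_y)
                            if c == y and row[i] == x[i] and row[j] == x[j])
                term *= n_jyi / n_yi  # estimate of p(a_j | y, a_i)
            score += term
        if score > best_score:
            best_y, best_score = y, score
    return best_y
```

Following the weighting form described earlier, a WAODE-style variant would raise each conditional p(a_j | y, a_i) to an exponent derived from the conditional mutual information between the attribute pair.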
