
Artificial Intelligence Downloads: Deep Learning Download List, Page 657

« 1 2 ... 652 653 654 655 656 657 658 659 660 661 662 ... 1775 »

[Deep Learning] MATLAB实现CNN.zip

Description: The code includes four ways to implement a CNN in MATLAB: regression, classification, calling a built-in network, and fine-tuning a built-in network. It can be used directly as a template.
Uploaded by <qq_42178256> | Size: 6KB
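The upload itself is MATLAB code; as a rough illustration of the head-level difference between its regression and classification modes, here is a minimal NumPy sketch (these helper names are ours, not taken from the zip):

```python
import numpy as np

# Illustrative only: at the output of a CNN, the regression variant typically
# ends in a linear layer trained with mean squared error, while the
# classification variant ends in a softmax trained with cross-entropy.

def mse_loss(pred, target):
    """Regression head: mean squared error on real-valued outputs."""
    return np.mean((pred - target) ** 2)

def softmax_cross_entropy(logits, label):
    """Classification head: softmax over class scores, negative log-likelihood."""
    z = logits - logits.max()            # subtract max for numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return -np.log(p[label])
```

The rest of the network can be identical between the two variants; only the output layer and loss change.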

[Deep Learning] 【13】Achieving Human Parity in Conversational Speech Recognition.pdf

Description: Conversational speech recognition has served as a flagship speech recognition task since the release of the DARPA Switchboard corpus in the 1990s. In this paper, we measure the human error rate on the widely used NIST 2000 test set, and find that our…
Uploaded by <xinghaoyan> | Size: 249KB

[Deep Learning] 【12】Deep speech 2 End-to-end speech recognition in english and mandarin.pdf

Description: We show that an end-to-end deep learning approach can be used to recognize either English or Mandarin Chinese speech—two vastly different languages. Because it replaces entire pipelines of hand-engineered components with neural networks, end-to-end…
Uploaded by <xinghaoyan> | Size: 782KB

[Deep Learning] 【10】Towards End-to-End Speech Recognition with Recurrent Neural Networks.pdf

Description: This paper presents a speech recognition system that directly transcribes audio data with text, without requiring an intermediate phonetic representation. The system is based on a combination of the deep bidirectional LSTM recurrent neural network…
Uploaded by <xinghaoyan> | Size: 465KB

[Deep Learning] 【9】Speech recognition with deep recurrent neural networks.pdf

Description: Recurrent neural networks (RNNs) are a powerful model for sequential data. End-to-end training methods such as Connectionist Temporal Classification make it possible to train RNNs for sequence labelling problems where the input-output alignment is unknown…
Uploaded by <xinghaoyan> | Size: 413KB
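The Connectionist Temporal Classification objective mentioned in this abstract decodes a frame-level label path by merging repeated labels and then dropping blanks; a minimal sketch of that collapse step (our own helper, not code from the paper):

```python
def ctc_collapse(path, blank=0):
    """Collapse a frame-level CTC path: merge consecutive repeats, drop blanks."""
    out, prev = [], None
    for s in path:
        # keep a symbol only when it differs from the previous frame
        # and is not the blank label
        if s != prev and s != blank:
            out.append(s)
        prev = s
    return out

# e.g. the frame labels [1, 1, 0, 1, 2, 2, 0] collapse to [1, 1, 2]
```

The blank label is what lets CTC emit the same symbol twice in a row (as in the example above), since a blank between repeats prevents them from being merged.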

[Deep Learning] 【8】Deep neural networks

Description: Most current speech recognition systems use hidden Markov models (HMMs) to deal with the temporal variability of speech and Gaussian mixture models (GMMs) to determine how well each state of each HMM fits a frame or a short window of frames of coefficients…
Uploaded by <xinghaoyan> | Size: 593KB

[Deep Learning] 【7】Deep residual learning for image recognition.pdf

Description: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions…
Uploaded by <xinghaoyan> | Size: 755KB
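The residual reformulation this abstract describes computes F(x) + x through an identity shortcut, rather than asking stacked layers to fit the desired mapping directly; a toy NumPy sketch (shapes and weights are hypothetical, not from the paper's models):

```python
import numpy as np

def residual_block(x, w1, w2):
    """y = x + F(x): a two-layer residual function with an identity shortcut."""
    h = np.maximum(0.0, x @ w1)   # first layer + ReLU
    return x + h @ w2             # add the shortcut connection

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
w1 = rng.normal(size=(8, 16)) * 0.01
w2 = rng.normal(size=(16, 8)) * 0.01
y = residual_block(x, w1, w2)
# With near-zero weights, F(x) is close to zero and the block is close to the
# identity mapping, which is part of why very deep stacks remain trainable.
```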

[Deep Learning] 【6】Going deeper with convolutions.pdf

Description: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this…
Uploaded by <xinghaoyan> | Size: 1MB

[Deep Learning] 【5】Very deep convolutional networks for large-scale image recognition.pdf

Description: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small…
Uploaded by <xinghaoyan> | Size: 185KB

[Deep Learning] 【4】Imagenet classification with deep convolutional neural networks.pdf

Description: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%…
Uploaded by <xinghaoyan> | Size: 1MB

[Deep Learning] 【3】Reducing the dimensionality of data with neural networks.pdf

Description: High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors…
Uploaded by <xinghaoyan> | Size: 357KB

[Deep Learning] 【2】A fast learning algorithm for deep belief nets.pdf

Description: We show how to use “complementary priors” to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn…
Uploaded by <xinghaoyan> | Size: 764KB