Show simple item record

dc.contributor.advisor: Marjanović-Jakovljević, Marina
dc.contributor.other: Veinović, Mladen
dc.contributor.other: Kovačević, Branko
dc.creator: Nasef, Ashrf Ali Abraheem
dc.date.accessioned: 2017-12-29T13:09:27Z
dc.date.available: 2017-12-29T13:09:27Z
dc.date.issued: 2017-12-06
dc.identifier.uri: https://singipedia.singidunum.ac.rs/izdanje/42831-speech-recognition-in-noisy-environment-using-deep-learning-neural-networks
dc.identifier.uri: http://nardus.mpn.gov.rs/123456789/9085
dc.description.abstract: Recent research in the field of automatic speaker recognition has shown that methods based on deep learning neural networks outperform other statistical classifiers. On the other hand, these methods usually require the adjustment of a significant number of parameters. The goal of this thesis is to show that selecting appropriate parameter values can significantly improve the speaker recognition performance of methods based on deep learning neural networks. The reported study introduces an approach to automatic speaker recognition based on deep neural networks and the stochastic gradient descent algorithm. It focuses in particular on three parameters of the stochastic gradient descent algorithm: the learning rate and the hidden and input layer dropout rates. Additional attention was devoted to the question of speaker recognition under noisy conditions, so two experiments were conducted within the scope of this thesis. The first experiment was intended to demonstrate that optimizing the observed parameters of the stochastic gradient descent algorithm can improve speaker recognition performance in the absence of noise. This experiment was conducted in two phases. In the first phase, the recognition rate was observed while the hidden layer dropout rate and the learning rate were varied and the input layer dropout rate was held constant. In the second phase, the recognition rate was observed while the input layer dropout rate and the learning rate were varied and the hidden layer dropout rate was held constant. The second experiment was intended to show that optimizing the observed parameters of the stochastic gradient descent algorithm can improve speaker recognition performance even under noisy conditions; to that end, different noise levels were artificially applied to the original speech signal.
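The three tuned parameters named in the abstract (learning rate, input layer dropout rate, hidden layer dropout rate) and the artificial noise conditions can be illustrated with a minimal sketch. The toy one-hidden-layer softmax classifier below is trained with gradient descent and inverted dropout on the input and hidden layers; a helper adds white Gaussian noise at a target SNR. The architecture, function names, parameter defaults, and the white-noise assumption are illustrative choices, not details taken from the thesis itself.

```python
import numpy as np

def train_speaker_classifier(X, y, n_hidden=32, lr=0.1,
                             input_dropout=0.2, hidden_dropout=0.5,
                             epochs=200, seed=0):
    """Toy 1-hidden-layer softmax classifier with dropout applied to
    the input and hidden layers, trained by gradient descent.
    Returns the learned weights and the per-epoch training loss."""
    rng = np.random.default_rng(seed)
    n_in, n_cls = X.shape[1], int(y.max()) + 1
    W1 = rng.normal(0, 0.1, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.1, (n_hidden, n_cls)); b2 = np.zeros(n_cls)
    Y = np.eye(n_cls)[y]                      # one-hot targets
    losses = []
    for _ in range(epochs):
        # inverted dropout: mask and rescale, so no change at test time
        m_in = (rng.random(X.shape) >= input_dropout) / (1 - input_dropout)
        Xd = X * m_in
        h = np.maximum(0, Xd @ W1 + b1)       # ReLU hidden layer
        m_h = (rng.random(h.shape) >= hidden_dropout) / (1 - hidden_dropout)
        hd = h * m_h
        z = hd @ W2 + b2
        p = np.exp(z - z.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)     # softmax probabilities
        losses.append(float(-np.log(p[np.arange(len(y)), y] + 1e-12).mean()))
        # backpropagation through the same dropout masks
        dz = (p - Y) / len(y)
        dW2 = hd.T @ dz; db2 = dz.sum(0)
        dh = (dz @ W2.T) * m_h * (h > 0)
        dW1 = Xd.T @ dh; db1 = dh.sum(0)
        W1 -= lr * dW1; b1 -= lr * db1        # gradient descent step
        W2 -= lr * dW2; b2 -= lr * db2
    return (W1, b1, W2, b2), losses

def add_noise(signal, snr_db, rng):
    """Add white Gaussian noise at a target SNR in dB, mimicking the
    artificial noise conditions of the second experiment (the thesis's
    actual noise types and levels are not assumed here)."""
    p_sig = np.mean(signal ** 2)
    p_noise = p_sig / (10 ** (snr_db / 10))
    return signal + rng.normal(0, np.sqrt(p_noise), signal.shape)
```

Varying `lr`, `input_dropout`, and `hidden_dropout` here plays the role of the two-phase parameter sweep described in the abstract, and evaluating on features perturbed by `add_noise` at several SNR values mirrors the noisy-condition experiment.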
dc.language.iso: eng
dc.publisher: Univerzitet Singidunum (Singidunum University), Studies at the University
dc.rights: Attribution (CC BY)
dc.source: Singidunum University
dc.subject.classification: Electrical Engineering and Computer Science
dc.title: Speech Recognition in noisy environment using Deep Learning Neural Networks
dc.type: Thesis
dcterms.abstract: Marjanović-Jakovljević, Marina; Veinović, Mladen; Kovačević, Branko; Nasef, Ashrf Ali Abraheem

