Bibliography | Rodriguez, Daniel del Hoyo: Optimization of Back-propagation Learning Algorithm on MLP Networks. University of Stuttgart, Faculty of Computer Science, Electrical Engineering, and Information Technology, Diploma Thesis No. 3353 (2012). 95 pages, English.
|
CR-Schema | D.1.3 (Concurrent Programming), I.2.6 (Artificial Intelligence: Learning)
|
Abstract | In order to generate more efficient neural networks, the configuration of the ANN itself has to be optimized, especially with regard to its parameters and architecture. To do so, the problem will be approached from the point of view of the learning and training process by running a series of tests. These evaluations will make it possible to determine the optimal parameters for this process. At the same time, the importance of the input patterns and of the data used will be studied, observing how they influence the learning process, not only from a runtime point of view but also by measuring the error obtained in the trained network (a minimal parameter-sweep sketch is given below).
In addition, the implementation itself will be optimized by executing the learning algorithm in parallel across different nodes, measuring the time needed to complete the training and comparing it with the time needed for a sequential execution.
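The parameter study described in the abstract's first paragraph can be illustrated with a minimal sketch (assumed setup, not the thesis code): a one-hidden-layer MLP trained with plain batch back-propagation, where the learning rate and the hidden-layer size are swept and the final training error is compared. The XOR data set, the sigmoid activation, and all parameter values are illustrative assumptions.

```python
# Minimal parameter-sweep sketch (illustrative assumptions, not the thesis code):
# a one-hidden-layer MLP trained with batch back-propagation on XOR, reporting
# the final mean squared error for several learning rates and hidden-layer sizes.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_mlp(X, y, hidden=4, lr=1.0, epochs=10000, seed=0):
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    for _ in range(epochs):
        # forward pass
        h = sigmoid(X @ W1)
        out = sigmoid(h @ W2)
        # backward pass: delta terms for the sigmoid output and hidden layers
        d_out = (out - y) * out * (1.0 - out)
        d_h = (d_out @ W2.T) * h * (1.0 - h)
        # gradient-descent weight updates
        W2 -= lr * (h.T @ d_out) / len(X)
        W1 -= lr * (X.T @ d_h) / len(X)
    return np.mean((sigmoid(sigmoid(X @ W1) @ W2) - y) ** 2)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Sweep two of the studied parameters and report the resulting training error
for hidden in (2, 4, 8):
    for lr in (0.5, 1.0, 2.0):
        mse = train_mlp(X, y, hidden=hidden, lr=lr)
        print(f"hidden={hidden:2d}  lr={lr:.1f}  final MSE={mse:.4f}")
```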
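The parallel training described in the second paragraph can likewise be sketched, under assumptions, as data-parallel back-propagation: each worker computes the gradient on its own slice of the training patterns, the partial gradients are summed, and the wall-clock time is compared with that of a sequential pass over the full data set. The multiprocessing backend, the network size, and the random data stand in for the thesis's actual node-level setup.

```python
# Data-parallel gradient sketch (illustrative assumptions, not the thesis code):
# split the training patterns across worker processes, sum the partial
# gradients, and compare the wall-clock time against one sequential pass.
import time
import numpy as np
from multiprocessing import Pool

def partial_gradient(args):
    """Unnormalised gradient of the squared error on one slice of the data."""
    X, y, W1, W2 = args
    h = np.tanh(X @ W1)           # hidden activations
    d_out = (h @ W2) - y          # output error (linear output layer)
    dW2 = h.T @ d_out
    dW1 = X.T @ ((d_out @ W2.T) * (1.0 - h ** 2))
    return dW1, dW2

def main():
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200_000, 32))
    y = rng.normal(size=(200_000, 1))
    W1 = rng.normal(scale=0.1, size=(32, 64))
    W2 = rng.normal(scale=0.1, size=(64, 1))

    # Sequential reference run over the full pattern set
    t0 = time.perf_counter()
    g_seq = partial_gradient((X, y, W1, W2))
    t_seq = time.perf_counter() - t0

    # Parallel run: each worker handles one slice, partial gradients are summed
    workers = 4
    chunks = [(Xc, yc, W1, W2)
              for Xc, yc in zip(np.array_split(X, workers),
                                np.array_split(y, workers))]
    t0 = time.perf_counter()
    with Pool(workers) as pool:
        parts = pool.map(partial_gradient, chunks)
    g_par = tuple(sum(p) for p in zip(*parts))
    t_par = time.perf_counter() - t0

    # For a single small batch, process start-up and data transfer can outweigh
    # the parallel gain; that trade-off is what the timing comparison exposes.
    print(f"sequential: {t_seq:.3f} s   parallel ({workers} workers): {t_par:.3f} s")
    print("gradients match:",
          all(np.allclose(a, b) for a, b in zip(g_seq, g_par)))

if __name__ == "__main__":
    main()
```

In a full training loop the summed gradient would be scaled by the learning rate and the number of patterns before updating the weights; the point here is only the sequential-versus-parallel timing methodology.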
|
Full text and other links | PDF (1464008 Bytes)
|
Department(s) | University of Stuttgart, Institute of Parallel and Distributed Systems, Distributed Systems
|
Supervisor(s) | Zweigle Oliver; Glass Colin W.
Entry date | December 5, 2012