
Artificial Neural Networks with Java: Tools for Building Neural Network Applications
Igor Livshin
Summary
This book discusses the practical aspects of using Java for neural network processing. You will learn how to use the Encog Java framework to process large-scale neural network applications. Also covered is the use of neural networks for the approximation of non-continuous functions. In addition to using neural networks for regression, this second edition shows you how to use them for computer vision. It focuses on image recognition, such as the classification of handwritten digits, input data preparation and conversion, and building the conversion program. You will also learn about topics related to the classification of handwritten digits, such as network architecture, program code, programming logic, and execution.
The step-by-step approach taken in the book includes plenty of examples, diagrams, and screenshots to help you grasp the concepts quickly and easily.
What You Will Learn
- Use Java for the development of neural network applications
- Prepare data for many different tasks
- Carry out some unusual neural network processing
- Use a neural network to process non-continuous functions
- Develop a program that recognizes handwritten digits
Who This Book Is For
Intermediate machine learning and deep learning developers who are interested in switching to Java
Sub-Topics
- Biological and artificial neurons
- Activation functions
- Summary
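As a quick reference for the activation-function topic above, the two activations that recur throughout the book's examples can be written in standard notation (generic textbook forms, not quoted from the book):

    \sigma(z) = \frac{1}{1 + e^{-z}}, \qquad
    \tanh(z) = \frac{e^{z} - e^{-z}}{e^{z} + e^{-z}}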
Chapter 2. Internal Mechanism of Neural Network Processing
Chapter Goal: The chapter explores the inner machinery of neural network processing.
Sub-Topics
- Function to be approximated
- Network architecture
- Forward pass calculations
- Back-propagation pass calculations
- Function derivative and function divergent
- Table of most commonly used function derivatives
- Summary
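For orientation, the forward-pass and weight-update steps this chapter works through can be summarized in standard notation (generic symbols, not the book's own; the error shown is the usual squared error with learning rate \eta):

    a^{(l)} = \varphi\!\left(W^{(l)} a^{(l-1)} + b^{(l)}\right), \qquad
    E = \tfrac{1}{2}\sum_{k}\bigl(a^{(L)}_{k} - y_{k}\bigr)^{2}

    W^{(l)} \leftarrow W^{(l)} - \eta\,\frac{\partial E}{\partial W^{(l)}}, \qquad
    \frac{d\sigma}{dz} = \sigma(z)\bigl(1 - \sigma(z)\bigr)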
Chapter 3. Manual Neural Network Processing
Chapter Goal: Manual neural network processing.
Sub-Topics
- Example 1. Manual approximation of a function at a single point
- Building the neural network
- Forward pass calculation
- Backward pass calculation
- Calculating weight adjustments for the output layer neurons
- Calculating weight adjustments for the hidden layer neurons
- Updating network biases
- Back to the forward pass
- Matrix form of network calculation
- Digging deeper
- Mini-batches and stochastic gradient
- Summary
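The hand calculation outlined above can be sketched in a few lines of Java: one forward pass and one gradient-descent update of a single sigmoid neuron's weight and bias. The input, target, initial weight, and learning rate below are made-up illustrative values, not the numbers used in the book's Example 1:

    // Minimal sketch of a manual forward/backward pass for one sigmoid neuron.
    // All numeric values are illustrative placeholders.
    public class ManualPassSketch {
        static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }

        public static void main(String[] args) {
            double x = 0.5, target = 0.8;   // single training point (made-up values)
            double w = 0.3, b = 0.1;        // initial weight and bias (made-up values)
            double eta = 0.1;               // learning rate

            // Forward pass
            double z = w * x + b;
            double a = sigmoid(z);

            // Backward pass for E = 0.5 * (a - target)^2:
            // delta = dE/dz, then adjust weight and bias by gradient descent
            double delta = (a - target) * a * (1.0 - a);
            w -= eta * delta * x;
            b -= eta * delta;

            System.out.printf("output=%.4f, updated w=%.4f, b=%.4f%n", a, w, b);
        }
    }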
Part Two. Neural Network Java Development Environment
Chapter 4. Configuring Your Development Environment
Chapter Goal: Explain how to download and install a set of tools necessary for building, debugging, testing, and executing neural network applications.
Sub-Topics
- Installing the Java 8 environment on your Windows machine
- Installing the NetBeans IDE
- Installing the Encog Java framework
- Installing the XChart package
- Summary
Chapter 5. Neural Network Development Using the Java Encog Framework
Chapter Goal: Using the Java Encog framework.
Sub-Topics
- Example 2. Function approximation using the Java environment
- Network architecture
- Normalizing the input datasets
- Building the Java program that normalizes both datasets
- Program code
- Debugging and executing the program
- Processing results for the training method
- Testing the network
- Testing results
- Digging deeper
- Summary
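A minimal sketch of the Encog training-and-testing workflow this chapter builds up, assuming Encog 3.x is on the classpath; the training points, hidden-layer size, and error target are illustrative placeholders, not the book's Example 2:

    import org.encog.Encog;
    import org.encog.engine.network.activation.ActivationTanh;
    import org.encog.ml.data.MLData;
    import org.encog.ml.data.MLDataPair;
    import org.encog.ml.data.MLDataSet;
    import org.encog.ml.data.basic.BasicMLDataSet;
    import org.encog.neural.networks.BasicNetwork;
    import org.encog.neural.networks.layers.BasicLayer;
    import org.encog.neural.networks.training.propagation.resilient.ResilientPropagation;

    public class EncogApproximationSketch {
        public static void main(String[] args) {
            // Illustrative training points for y = x*x, already normalized to [-1, 1]
            double[][] input = { {-1.0}, {-0.5}, {0.0}, {0.5}, {1.0} };
            double[][] ideal = { { 1.0}, { 0.25}, {0.0}, {0.25}, {1.0} };
            MLDataSet trainingSet = new BasicMLDataSet(input, ideal);

            // One input neuron, one tanh hidden layer, one output neuron (sizes are illustrative)
            BasicNetwork network = new BasicNetwork();
            network.addLayer(new BasicLayer(null, true, 1));
            network.addLayer(new BasicLayer(new ActivationTanh(), true, 7));
            network.addLayer(new BasicLayer(new ActivationTanh(), false, 1));
            network.getStructure().finalizeStructure();
            network.reset();

            // Train with resilient propagation until an error target or epoch limit is reached
            ResilientPropagation train = new ResilientPropagation(network, trainingSet);
            int epoch = 0;
            do {
                train.iteration();
                epoch++;
            } while (train.getError() > 0.0001 && epoch < 5000);
            train.finishTraining();

            // Test: compare network output with the ideal value for each training pair
            for (MLDataPair pair : trainingSet) {
                MLData out = network.compute(pair.getInput());
                System.out.printf("x=%6.2f  predicted=%7.4f  ideal=%7.4f%n",
                        pair.getInput().getData(0), out.getData(0), pair.getIdeal().getData(0));
            }
            Encog.getInstance().shutdown();
        }
    }

ResilientPropagation is one of several trainers Encog provides; the loop above stops on either the error target or the epoch limit so it cannot run indefinitely.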
Chapter 6. Neural Network Prediction Outside of the Training Range
Chapter Goal: A neural network is not a function extrapolation mechanism.
Sub-Topics
- Example 3a. Approximating periodic functions outside of the training range
- Network architecture for example 3a
- Program code for example 3a
- Testing the network
- Example 3b. Correct way of approximating periodic functions outside of the training range
- Preparing the training data
- Network architecture for example 3b
- Program code for example 3b
- Training results for example 3b
- Testing results for example 3b
- Summary
Chapter 7. Processing Complex Periodic Functions
Chapter Goal: Approximation of a complex periodic function.
Sub-Topics
- Example 4. Approximation of a complex periodic function
- Data preparation
- Reflecting function topology in data
- Network architecture
- Program code
- Testing the network
- Digging deeper
- Summary
Chapter 8. Approximating Non-Continuous Functions
Chapter Goal: This chapter introduces the micro-batch method, which can approximate any non-continuous function with high precision.
Sub-Topics
- Example 5. Approximating non-continuous functions
- Approximating a non-continuous function using the conventional network process
- Network architecture
- Program code
- Code fragments for the training process
- Unsatisfactory training results
- Approximating the non-continuous function using the micro-batch method
- Program code for micro-batch processing
- Program code for the getChart() method
- Code fragment 1 of the training method
- Code fragment 2 of the training method
- Training results for the micro-batch method
- Test processing logic
- Testing results for the micro-batch method
- Digging deeper
- Summary
Chapter 9. Approximating Continuous Functions with Complex Topology
Chapter Goal: Neural networks have difficulty approximating continuous functions with complex topology, and it is very hard to obtain a good-quality approximation of such functions. This chapter shows that the micro-batch method can approximate such functions with high precision.
Sub-Topics
- Example 5a. Approximation of a continuous function with complex topology
- Network architecture for example 5a
- Program code for example 5a
- Training processing results for example 5a
- Approximation of a continuous function with complex topology using the micro-batch method
- Program code for example 5a using the micro-batch method
- Example 5b. Approximation of spiral-like functions
- Network architecture for example 5b
- Program code for example 5b
- Approximation of the same functions using the micro-batch method
- Summary
Chapter 10. Using Neural Networks for Classification of Objects
Chapter Goal: Show how to use neural networks for the classification of objects.
Sub-Topics
- Example 6. Classification of records
- Training dataset
- Network architecture
- Testing dataset
- Program code for data normalization
- Program code for classification
- Training results
- Testing results
- Summary
Chapter 11. Importance of Selecting the Correct Model
Chapter Goal: Explain the importance of selecting a correct working model.
Sub-Topics
- Example 7. Predicting next month's stock market price
- Data preparation
- Including function topology in the dataset
- Building micro-batch files
- Network architecture
- Program code
- Training process
- Training results
- Testing process
- Test processing logic
- Testing results
- Analyzing testing results
- Summary
Chapter 12. Approximation of Functions in 3-D Space
Chapter Goal: Using a neural network for the approximation of functions in 3-D space.
Sub-Topics
- Example 8. Approximation of functions in 3-D space
- Data preparation
- Network architecture
- Program code
- Processing results
- Summary
Part Three. Introduction to Computer Vision
Chapter 13. Image Recognition
Chapter Goal: Introduction to computer vision, a branch of artificial intelligence.
Sub-Topics
- Classification of handwritten digits
- Input data preparation
- Input data conversion
- Building the conversion program
- Summary
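One way the input-data preparation and conversion listed above can look in Java, assuming MNIST-style 28x28 grayscale digit images; the file name and the [0, 1] scaling are illustrative choices, and the book's own conversion program may differ:

    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.io.IOException;
    import javax.imageio.ImageIO;

    public class DigitConversionSketch {
        // Reads a grayscale digit image and flattens it into a normalized input vector.
        public static double[] imageToInput(File file) throws IOException {
            BufferedImage img = ImageIO.read(file);
            int w = img.getWidth(), h = img.getHeight();
            double[] input = new double[w * h];
            for (int y = 0; y < h; y++) {
                for (int x = 0; x < w; x++) {
                    int gray = (img.getRGB(x, y) >> 16) & 0xFF; // red channel equals the gray level for grayscale images
                    input[y * w + x] = gray / 255.0;            // scale each pixel to [0, 1]
                }
            }
            return input;
        }

        public static void main(String[] args) throws IOException {
            double[] vector = imageToInput(new File("digit.png")); // hypothetical file name
            System.out.println("Input vector length: " + vector.length);
        }
    }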
Chapter 14. Classification of Handwritten Digits
Chapter Goal: Develop a program able to recognize (classify) handwritten digits.
Sub-Topics
- Network architecture
- Program code
- Programming logic
- Execution
- Summary
Technical Specifications (print edition)
- Publisher: Apress
- Author: Igor Livshin
- Publication date: October 18, 2021
- Number of pages: 631
- EAN13: 9781484273678