Foundations of Deep Reinforcement Learning: Theory and Practice in Python

Laura Graesser / Wah Loon Keng

360 pages, published December 1, 2019

Summary

In just a few years, deep reinforcement learning (DRL) systems such as DeepMind's DQN have yielded remarkable results. This hybrid approach to machine learning shares many similarities with human learning: unsupervised self-learning, self-discovery of strategies, use of memory, a balance of exploration and exploitation, and exceptional flexibility. Exciting in its own right, DRL may presage even more remarkable advances in general artificial intelligence.
Deep Reinforcement Learning in Python: A Hands-On Introduction is the fastest and most accessible way to get started with DRL. The authors teach through practical hands-on examples presented with their advanced OpenAI Lab framework. While providing a solid theoretical overview, they emphasize building intuition for the theory, rather than a deep mathematical treatment of results. Coverage includes:
  • Components of an RL system, including environment and agents
  • Value-based algorithms: SARSA, Q-learning and extensions, offline learning (a minimal Q-learning sketch follows the contents list below)
  • Policy-based algorithms: REINFORCE and extensions; comparisons with value-based techniques
  • Combined methods: Actor-Critic and extensions; scalability through async methods
  • Agent evaluation
  • Advanced and experimental techniques, and more
  • Chapter 1: Introduction to Reinforcement Learning
  • Part I: Policy-Based and Value-Based Algorithms
  • Chapter 2: Policy Gradient
  • Chapter 3: State Action Reward State Action
  • Chapter 4: Deep Q-Networks
  • Chapter 5: Improving Deep Q-Networks
  • Part II: Combined Methods
  • Chapter 6: Advantage Actor-Critic
  • Chapter 7: Proximal Policy Optimization
  • Chapter 8: Parallelization Methods
  • Chapter 9: Algorithm Summary
  • Part III: Practical Tips
  • Chapter 10: Getting Reinforcement Learning to Work
  • Chapter 11: SLM Lab
  • Chapter 12: Network Architectures
  • Chapter 13: Hardware
  • Chapter 14: Environment Design
  • Epilogue
  • Appendix A: Deep Reinforcement Learning Timeline
  • Appendix B: Example Environments
  • References
  • Index
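To make the value-based family listed above concrete, here is a minimal, hypothetical sketch of tabular Q-learning in Python. It is not code from the book or from the authors' framework; the env interface (reset, step, actions) and the hyperparameter values are illustrative assumptions.

# Minimal tabular Q-learning sketch (illustrative only; the `env`
# interface and hyperparameters below are assumptions, not the book's code).
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Learn Q(s, a) with the one-step Q-learning update."""
    Q = defaultdict(float)  # (state, action) -> value estimate, defaults to 0.0

    for _ in range(episodes):
        state = env.reset()              # assumed: returns the initial state
        done = False
        while not done:
            actions = env.actions(state)  # assumed: legal actions in `state`
            # Epsilon-greedy balance of exploration and exploitation
            if random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: Q[(state, a)])
            # assumed: env.step returns (next_state, reward, done)
            next_state, reward, done = env.step(action)
            # Q-learning target: r + gamma * max_a' Q(s', a')
            best_next = 0.0 if done else max(
                Q[(next_state, a)] for a in env.actions(next_state))
            Q[(state, action)] += alpha * (reward + gamma * best_next
                                           - Q[(state, action)])
            state = next_state
    return Q

SARSA, covered in Chapter 3, differs only in that the update target uses the action actually selected in the next state rather than the greedy maximum.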
Laura Graesser enjoys experimenting with, and writing about, machine learning techniques. She is currently studying for an MS in Computer Science. She is particularly interested in deep learning algorithms and their application to reinforcement learning, computer vision, and NLP. Most recently, she has been interested in combining reinforcement learning with supervised learning and knowledge distillation, and in tackling multi-modal and multi-task learning.
Wah Loon Keng likes building software for the research and application of theories in computer science and AI. He is an active open source contributor and the creator of the data science platform at Eligible Inc. As a student, he did research on quantum foundations, computer science, and mathematics. He has a long-standing interest in theories of intelligence, especially reinforcement learning, semantics, and intuitive theories of mind. With his engineering skills, he builds experiment frameworks to test these theories; OpenAI Lab is one of them.

Technical details

  PRINT
Publisher(s) Prentice
Author(s) Laura Graesser / Wah Loon Keng
Publication date December 1, 2019
Number of pages 360
EAN13 9780135172384
