New Book: Understanding Deep Learning

By Simon Prince, professor of computer science at the University of Alberta. To be published by MIT Press in December 2023. The author shares the associated Jupyter notebooks on his website, here. The book is already very popular: the author's announcement gathered over 5,000 likes on LinkedIn. I pre-ordered my copy.


An authoritative, accessible, and up-to-date treatment of deep learning that strikes a pragmatic middle ground between theory and practice.

Deep learning is a fast-moving field with sweeping relevance in today’s increasingly digital world. Understanding Deep Learning provides an authoritative, accessible, and up-to-date treatment of the subject, covering all the key topics along with recent advances and cutting-edge concepts. Many deep learning texts are crowded with technical details that obscure fundamentals, but Simon Prince ruthlessly curates only the most important ideas to provide a high density of critical information in an intuitive and digestible form. From machine learning basics to advanced models, each concept is presented in lay terms and then detailed precisely in mathematical form and illustrated visually. The result is a lucid, self-contained textbook suitable for anyone with a basic background in applied mathematics.

  • Up-to-date treatment of deep learning covers cutting-edge topics not found in existing texts, such as transformers and diffusion models
  • Short, focused chapters progress in complexity, easing students into difficult concepts
  • Pragmatic approach straddling theory and practice gives readers the level of detail required to implement naive versions of models
  • Streamlined presentation separates critical ideas from background context and extraneous detail
  • Minimal mathematical prerequisites, extensive illustrations, and practice problems make challenging material widely accessible
  • Programming exercises offered in accompanying Python Notebooks


Each topic below corresponds to a chapter in the book.

  • Introduction
  • Supervised learning
  • Shallow neural networks
  • Deep neural networks
  • Loss functions
  • Training models
  • Gradients and initialization
  • Measuring performance
  • Regularization
  • Convolutional networks
  • Residual networks
  • Transformers
  • Graph neural networks
  • Unsupervised learning
  • Generative adversarial networks
  • Normalizing flows
  • Variational autoencoders
  • Diffusion models
  • Deep reinforcement learning
  • Why does deep learning work?
  • Deep learning and ethics

How to Get Your Copy

You can pre-order the book on Amazon, here. If you are also looking for state-of-the-art new developments (rather than a solid, modern introduction to deep learning), feel free to check out my own books. They cover upcoming trends and new open-source Python libraries, such as the full multivariate KS evaluation metric, and new methods such as NoGAN, which is not based on neural networks, along with the classic GAN (and how to make it more generic). NoGAN and its sister methods run 1000x faster, consistently deliver better and explainable results, and include auto-tuning, saving substantial GPU and cloud time and, as a result, cost. My books are available here.
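The full multivariate KS metric above refers to my own library, but the underlying idea is standard. As a generic illustration only (function names are hypothetical, and this is not the library's implementation), here is a minimal sketch of a per-column two-sample Kolmogorov–Smirnov score for comparing a real table with a synthesized one:

```python
import numpy as np

def ks_statistic(x, y):
    """Two-sample KS statistic: largest gap between the empirical CDFs."""
    x, y = np.sort(x), np.sort(y)
    grid = np.concatenate([x, y])
    cdf_x = np.searchsorted(x, grid, side="right") / len(x)
    cdf_y = np.searchsorted(y, grid, side="right") / len(y)
    return np.max(np.abs(cdf_x - cdf_y))

def tabular_ks_score(real, synth):
    """Average per-column KS statistic between two tables (lower = closer)."""
    return float(np.mean([ks_statistic(real[:, j], synth[:, j])
                          for j in range(real.shape[1])]))

# Toy check: a lightly perturbed copy of the data scores far better
# than data drawn from a shifted distribution.
rng = np.random.default_rng(0)
real = rng.normal(size=(1000, 3))
close = real + rng.normal(scale=0.01, size=real.shape)
far = rng.normal(loc=2.0, size=(1000, 3))
print(tabular_ks_score(real, close) < tabular_ks_score(real, far))  # True
```

Averaging marginal KS statistics ignores cross-column dependencies; a full multivariate version also has to account for joint structure, which is what distinguishes it from this one-dimensional sketch.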

To avoid missing future articles and to access members-only content, sign up for my free newsletter, here.
