I began part 1 of the Fastai course sometime in Q4 2019. After an inordinate amount of time for something I could and should have finished in about 3-4 months, I
git pushed my last notebook a couple of weeks ago. I was excited, but then, just two days later, Fastai launched a newer and possibly improved version of their course!
The new course is better for a few reasons.
- It incorporates some aspects of traditional Machine Learning (as opposed to the previous one, which was only about Deep Learning).
- It is built on top of a newer version of their library, which was rewritten from scratch.
- It is accompanied by a book that can act as a full replacement for the course in case you want to go faster.
I initially thought I'd finish the new course (and redo some parts I had already covered) but changed my mind shortly after.
I am now going through the Dive into Deep Learning book. It too takes a (mostly) code-first approach to teaching but differs from Fastai in the following respects.
- It sticks closer to standard libraries (such as PyTorch, MXNet and TensorFlow) and allows you to choose whichever you want. (I chose PyTorch.) The Fastai course, on the other hand, leans heavily on the Fastai library, which I don’t think is widely adopted in the industry, and only occasionally on PyTorch.
- It takes a more bottom-up approach, building up from the basics. This is the opposite of Fastai's top-down approach, which starts with the big picture. Don't let this deter you, though, because the D2L folks stay pragmatic about which basics they cover and in how much detail.