My Notes Aboard The Hype Train: A Journey Through A Machine Learning Course

I have two contradictory traits that don’t always sit well together. The first is that I am extremely motivated to finish what I start, especially if the finish line is visible at some point. It doesn’t come from a healthy, Zen-like attitude, but rather from what I call “the infantry attitude”: that (soft?) skill you learn when you need to complete a 45 km hike with full gear, and, by the way, you are the one navigating, and if you get lost you add at least 2-3 km extra.

The second one, however, is that I have a real learning problem. I have had this discussion too many times before, but although I eventually completed two degrees, one with honors, I failed my high school math finals. And many, many other tests, exams and assignments along the way that I had to take twice or more. My mindset is set to fail, and the only thing that helps me overcome that is the aforementioned “put your head down and go” mentality. The knowledge that every journey comes to an end, or at least has a resting point. Maybe this is due to an undiagnosed learning disability; maybe I am just built to chop down trees and break rocks rather than hit the books. That doesn’t matter at this point in life, especially because I occasionally still break stuff (DIY home renovations, you sick, sick puppies).

So, at some point in the last few months I decided to see what the hype is about, and started Coursera’s (Stanford’s?) “Machine Learning” course. Before I continue, a disclaimer: I have no idea how profound this course is, or how practical actual practitioners of these techniques find it. I am looking at it purely from the angle of what I learned versus my own prior knowledge of optimization and numerical methods.

The first two lectures were a breeze. Gradient methods (and numerical methods in general) are my bread and butter, and I have implemented linear regression in the past. Although I am always ready to crack my knuckles at some Python, this course requires computer exercises in Octave/Matlab, which are my actual comfort zone. I’ll give the people who built this course props: they do give useful tips on efficient M-scripts along the way. I found the quiz interface a bit annoying, but they give you multiple tries (take that, academia!), so it came out fine in the end.
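To give a flavor of the vectorization style those tips encourage, here is a minimal sketch of batch gradient descent for linear regression. It is my own illustration, written in Python rather than the course’s Octave, so the function name and parameters are mine, not the course’s:

```python
import numpy as np

def gradient_descent(X, y, alpha=0.1, iters=1000):
    """Batch gradient descent for linear regression, vectorized.

    X: (m, n) design matrix with a leading column of ones.
    y: (m,) target vector. Returns the fitted parameters theta.
    """
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(iters):
        # One matrix expression per update -- no per-example loop,
        # which is the "efficient M-script" style in Octave terms.
        theta -= alpha / m * X.T @ (X @ theta - y)
    return theta

# Fit y = 1 + 2x on a tiny synthetic data set.
x = np.array([0.0, 1.0, 2.0, 3.0])
X = np.column_stack([np.ones_like(x), x])
y = 1.0 + 2.0 * x
theta = gradient_descent(X, y)
```

The single-expression update translates almost line for line into an Octave M-script, which is exactly the kind of loop-free code the course nudges you toward.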

In general, the course is explicitly built for people without a strong mathematical background. This became apparent when error back-propagation came up in the neural networks lecture. I do want to give a special positive mention to the 6th lecture, which showed excellent methodology with intuitive explanations of how to actually train your system. I know, I know, hard to implement in practice. I’m a first-timer, dude…

The Support Vector Machine (SVM) lecture was a comfortable jump back to known territory. I would really like to know how commonly this kind of technique is actually used. After that, the unsupervised learning lecture was terrific. I’ve wanted some sort of introduction to clustering for a while, and this was a nice entry point. In the dimensionality reduction part of the week, the lack of mathematical explanation was tangible, or at least that’s how it felt to me. By this point in the course, the concept of error minimization is simple enough to grasp that excluding it entirely seemed unnecessary. Maybe I’m just looking at it from a deductive point of view.
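For anyone who, like me, wanted a first taste of clustering, a bare-bones k-means loop captures the idea the lecture builds on. This sketch is mine (again in Python, not the course’s Octave), and the data and names are invented for illustration:

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: alternate between assigning points to their
    nearest centroid and moving each centroid to its cluster mean."""
    rng = np.random.default_rng(seed)
    # Initialize centroids as k distinct points from the data.
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Distance of every point to every centroid, then assign.
        dists = np.linalg.norm(points[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned points.
        centroids = np.array([points[labels == j].mean(axis=0)
                              for j in range(k)])
    return labels, centroids

# Two obvious blobs, around (0, 0) and (10, 10).
pts = np.array([[0, 0], [0.5, 0.2], [0.2, 0.6],
                [10, 10], [10.3, 9.8], [9.7, 10.2]], dtype=float)
labels, cents = kmeans(pts, k=2)
```

Each iteration is exactly the two-step dance from the lecture: assign, then update, repeated until the centroids settle.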

When the course got to recommender systems, I found the lecture rather superficial. It felt like it was trying to cover a subject a bit too large, only for us to come out of the course and say, “Hey! I think I know how Netflix works now!”. However, at that point in the course, the other thing kicked in, and I just really wanted to finish. Is it excuse time now? I’ve got toddler twins who only allow me to do this course late at night. But for real… it’s mostly the other thing. So when I review this lecture and the next ones, take my haste to finish into consideration.

The last two lectures are a bit of bonus material to whet your appetite for learning more, and there are no more computer assignments at this point. Everything is nice to know, but, like any good student, if you don’t need it for the test, you only half-listen…

In general, I found the course to be very success-oriented. The programming assignments were more fun than work, and the fact that the quizzes and tests can be retaken… well, let’s say that gives my type an easy out. Apart from neural networks, I was familiar with most of the material, just under the heading of optimization methods.

One subject I did find important, and which was not discussed at all in this course, is reinforcement learning. I find it a very cool approach, significantly different from the ones shown in the course. Luckily, I was already familiar with it from way back, when I reviewed Genetic Algorithms for my Master’s thesis.

So, did you do the course? Are you currently practicing the field? Tell me your thoughts!
