Machine Learning is beyond a buzzword now: its use has grown rapidly over the last few years. It powers your phone's autocorrect, your GPS traffic predictions, and it even suggested that you add that weird guy from college as a Facebook friend. Transfer Learning lacks that buzzword status, but its importance to the progression of AI is huge. Put simply, transfer learning means leveraging knowledge from an AI model already trained on one task, combining it with your own small set of data, and applying that existing solution to a new problem.
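To make the idea concrete, here is a minimal sketch of what that looks like in code. It assumes TensorFlow/Keras (any framework would do) and a hypothetical small labelled image dataset of your own: we take a network pre-trained on ImageNet, freeze the knowledge it already has, and train only a tiny new layer on top.

```python
# A minimal transfer-learning sketch using TensorFlow/Keras.
# The pre-trained network supplies the "knowledge learned elsewhere";
# only the small new head is trained on your own data.
import tensorflow as tf

# Load a model pre-trained on ImageNet, minus its final classification layer.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # keep the pre-trained knowledge frozen

# Stack a tiny new classifier head on top; only these weights get trained.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),  # e.g. 5 of your own classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# `train_ds` is a placeholder for your own (small) labelled dataset:
# model.fit(train_ds, epochs=5)
```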
An issue you may quickly come across when attempting your own AI implementations is that they often need training data, and lots of it. For example, to train a neural network to recognise handwritten text you need to give it many examples: the MNIST database of handwritten digits contains 70,000 28x28px images. That is the scale of data you need to train an AI, and that is just for single digits!
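You can see that scale for yourself. Loading MNIST through Keras (one of several ways to fetch it) returns its standard split of 60,000 training and 10,000 test images:

```python
# Fetch the MNIST handwritten-digit dataset and inspect its size.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
print(x_train.shape)  # (60000, 28, 28) - 60,000 training images
print(x_test.shape)   # (10000, 28, 28) - plus 10,000 test images
```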
The process of putting transfer learning into practice varies depending on your type of data, which means some research of your own may be needed to make use of the concept. If your data is text based, you are in luck: Global Vectors for Word Representation (GloVe) or Facebook's Wikipedia-trained word vectors are great starting points that you can use as a base to train your own data against.
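As a sketch of what "using them as a base" means: the GloVe files from https://nlp.stanford.edu/projects/glove/ are plain text, one word and its vector per line, so loading them is a few lines of Python. The file name below assumes you have downloaded the 100-dimensional variant locally.

```python
# Load pre-trained GloVe word vectors into a dictionary.
# Each word maps to a dense 100-dimensional vector capturing meaning
# learned from a huge corpus - knowledge you did not have to train yourself.
import numpy as np

embeddings = {}
with open("glove.6B.100d.txt", encoding="utf-8") as f:
    for line in f:
        parts = line.split()
        embeddings[parts[0]] = np.asarray(parts[1:], dtype="float32")

print(embeddings["learning"][:5])  # first few dimensions of one word's vector
```

From here, a typical move is to use these vectors to initialise the embedding layer of your own model, so your small dataset only has to teach the task, not the language.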
Transfer learning is a powerful and sometimes necessary tool in the world of machine learning, especially as the desire for the benefits of ML grows across many fields. For newcomers, it is often either overlooked or unheard of. Below is a Google Trends graph showing the difference in search popularity of Machine Learning (in red) versus Transfer Learning (in blue). This post is intended more to raise awareness of the concept than to be a guide to its practical implementations, which are too vast and deep to cover here. Hopefully this concept can help others break through with their own machine learning projects and put their own data to great use.