AI, Deep Learning, Machine Learning.
You’re constantly bombarded with articles on new breakthroughs and discoveries. All the hype around AI may leave you with the lingering question, how can I make my application smarter?
Machine learning is not a new phenomenon. Many of the algorithms used for machine learning today were discovered in the 18th and 19th centuries. The foundations of deep learning and neural networks, which power your Amazon Alexa or Google Home, were already in place by the 1950s. Over the years, many people have offered their own definition of what machine learning truly is. Arthur Samuel, a prominent figure in the field in the 1950s, defined machine learning as a
Field of study that gives computers the ability to learn without being explicitly programmed.
Why did it take until around 2010–2012 for AI and machine learning to truly take off? It comes down to three factors: better hardware, more data, and better algorithms.
Training machine learning models and deep neural networks requires a lot of computational power. Between 1990 and 2019, CPUs became about 5000 times faster. During the same period, the gaming industry took off, yielding faster and faster GPUs, whose highly parallel architecture is well suited to machine learning computations.
With the rise of the internet and increased storage capabilities (goodbye, floppy disks), we have experienced a boom in data. It is estimated that 2.5 quintillion bytes of new data are created each day. A large volume of data is essential for creating and refining accurate models.
Up until around 2010, we weren’t able to reliably train deep neural networks. The gradient in the backpropagation step, which is how neural networks learn, faded quickly the deeper the network became (the vanishing gradient problem). This took a turn for the better around 2010 with the discovery of better and more robust algorithms.
Python and R have for many years been the go-to languages for machine learning. There are numerous open-source libraries available, such as scikit-learn, NumPy, pandas, Keras, and TensorFlow, that simplify the process of training your model. Despite the support of the open-source community, machine learning as a domain can be daunting, and the learning curve is steep. For many .NET developers, that steep learning curve, combined with the need to pick up a new programming language such as Python, has kept a large community from getting heavily involved in this fascinating field.
But what if we could train a machine learning model in C#? Let’s take a look at ML.NET.
ML.NET is a cross-platform, open-source library, released in preview by Microsoft at MS Build 2018 and made generally available at MS Build 2019. The library has long been used internally at Microsoft, and you can find it integrated in PowerPoint and Excel (the slide and chart recommenders). The library bridges the gap between data science and software engineering and enables .NET developers to integrate custom machine learning models into their applications.
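To give a feel for what this looks like in practice, here is a minimal sketch of training and using a model with ML.NET, loosely modeled on Microsoft's getting-started sample: a linear regression predicting a house price from its size. The `HouseData` schema and the tiny in-memory data set are made up purely for illustration.

```csharp
using Microsoft.ML;
using Microsoft.ML.Data;

// Hypothetical input schema: house size (thousands of sq ft) and price (hundreds of thousands)
public class HouseData
{
    public float Size { get; set; }
    public float Price { get; set; }
}

public class PricePrediction
{
    // ML.NET regression trainers write their output to the "Score" column
    [ColumnName("Score")]
    public float Price { get; set; }
}

class Program
{
    static void Main()
    {
        var mlContext = new MLContext(seed: 0);

        // Tiny in-memory training set, purely illustrative
        var data = new[]
        {
            new HouseData { Size = 1.1f, Price = 1.2f },
            new HouseData { Size = 1.9f, Price = 2.3f },
            new HouseData { Size = 2.8f, Price = 3.0f },
            new HouseData { Size = 3.4f, Price = 3.7f },
        };
        IDataView trainingData = mlContext.Data.LoadFromEnumerable(data);

        // Pipeline: gather input columns into "Features", then fit a linear regression (SDCA)
        var pipeline = mlContext.Transforms
            .Concatenate("Features", nameof(HouseData.Size))
            .Append(mlContext.Regression.Trainers.Sdca(
                labelColumnName: nameof(HouseData.Price),
                maximumNumberOfIterations: 100));

        var model = pipeline.Fit(trainingData);

        // Predict the price of a 2.5 (thousand sq ft) house
        var engine = mlContext.Model
            .CreatePredictionEngine<HouseData, PricePrediction>(model);
        var prediction = engine.Predict(new HouseData { Size = 2.5f });
        System.Console.WriteLine($"Predicted price: {prediction.Price:0.##}");
    }
}
```

The shape is the same for more realistic scenarios: build an `MLContext`, load data into an `IDataView`, compose a transformation-plus-trainer pipeline, call `Fit`, and wrap the trained model in a prediction engine. Running it requires the `Microsoft.ML` NuGet package.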
In this blog series, we will walk through how to build our own machine learning model in C#, and how the model can be deployed to an Azure Function or ASP.NET Core app. We will also take a look at Deep Learning, and how we can train our own neural networks in Visual Studio or Visual Studio Code.