[sebastianraschka] BigDataFr recommends: #datascientist #MachineLearning – Artificial Neurons and Single-Layer Neural Networks

BigDataFr recommends: Artificial Neurons and Single-Layer Neural Networks
http://sebastianraschka.com/Articles/2015_singlelayer_neurons.html

‘This article offers a brief glimpse of the history and basic concepts of machine learning. We will take a look at the first algorithmically described neural network and the gradient descent algorithm in the context of adaptive linear neurons, which will not only introduce the principles of machine learning but also serve as the basis for modern multilayer neural networks in future articles.’

– How Machine Learning Algorithms Work Part 1

Introduction

Machine learning is one of the hottest and most exciting fields in the modern age of technology. Thanks to machine learning, we enjoy robust email spam filters, convenient text and voice recognition, reliable web search engines, challenging chess players, and, hopefully soon, safe and efficient self-driving cars.
Without any doubt, machine learning has become a big and popular field, and sometimes it may be challenging to see the (random) forest for the (decision) trees. Thus, I thought that it might be worthwhile to explore different machine learning algorithms in more detail by not only discussing the theory but also by implementing them step by step.
To briefly summarize what machine learning is all about: « [Machine learning is the] field of study that gives computers the ability to learn without being explicitly programmed » (Arthur Samuel, 1959). Machine learning is about the development and use of algorithms that can recognize patterns in data in order to make decisions based on statistics, probability theory, combinatorics, and optimization.

The first article in this series will introduce perceptrons and the adaline (ADAptive LINear NEuron), which fall into the category of single-layer neural networks. The perceptron is not only the first algorithmically described learning algorithm [1], but it is also intuitive, easy to implement, and a good entry point to the (re-discovered) modern state-of-the-art machine learning algorithms: artificial neural networks (or « deep learning », if you like). As we will see later, the adaline is a subsequent improvement on the perceptron algorithm and offers a good opportunity to learn about a popular optimization algorithm in machine learning: gradient descent.
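To make the two learning rules concrete before reading the full article, here is a minimal NumPy sketch of the perceptron update and the adaline update via batch gradient descent. This is not Raschka's implementation; the function names, hyperparameters, and toy data below are illustrative assumptions.

```python
import numpy as np

def train_perceptron(X, y, eta=0.1, n_iter=10):
    """Perceptron rule: update weights only when a sample is misclassified.
    X has shape (n_samples, n_features); y contains labels in {-1, 1}."""
    w = np.zeros(X.shape[1] + 1)                 # w[0] is the bias unit
    for _ in range(n_iter):
        for xi, target in zip(X, y):
            prediction = np.where(np.dot(xi, w[1:]) + w[0] >= 0.0, 1, -1)
            update = eta * (target - prediction)  # zero if correctly classified
            w[1:] += update * xi
            w[0] += update
    return w

def train_adaline(X, y, eta=0.01, n_iter=50):
    """Adaline: minimize the sum-of-squared-errors cost with batch gradient
    descent on the continuous linear activation (no threshold in the update)."""
    w = np.zeros(X.shape[1] + 1)
    for _ in range(n_iter):
        output = np.dot(X, w[1:]) + w[0]          # linear activation
        errors = y - output
        w[1:] += eta * X.T.dot(errors)            # one gradient descent step
        w[0] += eta * errors.sum()
    return w

# Tiny usage example on a linearly separable toy set (hypothetical data):
X = np.array([[2.0, 1.0], [3.0, 4.0], [-1.0, -2.0], [-3.0, -1.0]])
y = np.array([1, 1, -1, -1])
w_perceptron = train_perceptron(X, y)
w_adaline = train_adaline(X, y)
```

The difference to notice: the perceptron corrects its weights sample by sample using the thresholded prediction, while the adaline computes its error from the continuous linear output and adjusts all weights in a single batch gradient descent step.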

Read more
Written by Sebastian Raschka
Source: http://sebastianraschka.com
