
3 December 2018

Deep Learning and Machine Learning – some practical applications

 

Machine Learning (ML) and its Deep Learning (DL) subset are evolving rapidly and becoming more accurate all the time. But what problems do they address, and could they help your organisation? In this article we’ll explore some practical uses of both technologies and discuss emerging trends. Let’s look at Deep Learning first.

Deep Learning

Deep Learning is at the cutting edge of Machine Learning, and is delivering new levels of accuracy for many real-world problems such as:

  • Speech recognition, natural language processing (NLP), natural language understanding (NLU) and natural language generation (NLG)
  • Image recognition, face recognition and face detection
  • Image and video colourisation
  • Medical applications including cancer detection and epileptic seizure prediction
  • Handwriting generation and recognition

DL uses deep neural network (NN) algorithms to solve these problems. These computational models are inspired by the human brain and involve a network of connected nodes – analogous to neurons – arranged in layers. Different types of NNs include convolutional neural networks (CNNs), long short-term memory (LSTM) networks and capsule networks.

So how does it work? Let’s take image recognition using a CNN as an example. The image is fed into the network as a 2D array of pixels, with each pixel having an RGB or greyscale value. Nodes in the input layer receive this data and transmit it to nodes they’re connected to in hidden layers. In a CNN, the hidden layers alternate between convolutional layers and pooling (subsampling) layers. The result is a deep, abstract representation of the image at the output layer. Google Photos, for example, uses a large-scale CNN that was developed using TensorFlow and runs on Google servers with powerful tensor processing units (TPUs).
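
The convolution and pooling steps described above can be sketched in a few lines of NumPy. This is a deliberately tiny illustration – a single hand-picked edge-detecting filter on a toy 6×6 "image" – not the learned, many-layered architecture a real system like Google Photos uses:

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a kernel over a 2D greyscale image (valid padding, stride 1)."""
    h, w = kernel.shape
    out_h = image.shape[0] - h + 1
    out_w = image.shape[1] - w + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i+h, j:j+w] * kernel)
    return out

def max_pool(feature_map, size=2):
    """Downsample by taking the maximum of each size x size block."""
    h, w = feature_map.shape[0] // size, feature_map.shape[1] // size
    return feature_map[:h*size, :w*size].reshape(h, size, w, size).max(axis=(1, 3))

# A toy 6x6 greyscale "image": dark on the left, bright on the right
image = np.array([[0, 0, 0, 1, 1, 1]] * 6, dtype=float)
# A 3x3 filter that responds to a dark-to-bright vertical edge
edge_filter = np.array([[-1, 0, 1]] * 3, dtype=float)

features = np.maximum(conv2d(image, edge_filter), 0)  # convolution + ReLU
pooled = max_pool(features)                           # pooling / subsampling
print(pooled.shape)  # (2, 2)
```

In a trained CNN the filter values are learned rather than hand-picked, and many such convolution–pooling pairs are stacked, which is what produces the progressively more abstract representation at the output layer.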

CNNs have a limitation, though: they don’t take into account spatial hierarchies between simple and complex objects, which can lead to errors. That has been addressed by capsule networks (CapsNets), a newer type of NN for processing visual information. The original CapsNet architecture comprises two convolutional layers and one fully connected layer, and the network uses layers of capsules instead of layers of neurons, each capsule being a small group of neurons. Because capsules can preserve these hierarchical relationships, CapsNets have an advantage over conventional CNNs.

LSTM is another important Deep Learning technology: a recurrent neural network with a gradient-based learning algorithm. It’s used in speech recognition, where the sound waves are represented as a spectrogram – a representation of how the signal’s frequency content varies over time. Speech recognition needs a neural network that can recognise and remember sequences of inputs, which LSTMs can do: the network learns to map spectrogram frames to words. Google Assistant and Amazon Echo use deep neural networks for speech recognition.
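
The "remembering" happens inside the LSTM cell, whose gates decide at each time step what to forget, what to store and what to output. The following NumPy sketch runs one untrained cell over a made-up five-frame "spectrogram" (the weights are random and the sizes illustrative – in practice they would be learned):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step: gates decide what to forget, store and output."""
    z = W @ np.concatenate([x, h_prev]) + b
    n = len(h_prev)
    f = sigmoid(z[:n])          # forget gate: how much old memory to keep
    i = sigmoid(z[n:2*n])       # input gate: how much new input to store
    o = sigmoid(z[2*n:3*n])     # output gate: how much memory to expose
    g = np.tanh(z[3*n:])        # candidate values for the memory cell
    c = f * c_prev + i * g      # updated memory cell
    h = o * np.tanh(c)          # new hidden state
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid = 4, 3                       # e.g. 4 spectrogram features per frame
W = rng.normal(0, 0.1, (4 * n_hid, n_in + n_hid))
b = np.zeros(4 * n_hid)

h = np.zeros(n_hid)
c = np.zeros(n_hid)
for frame in rng.normal(0, 1, (5, n_in)):   # a 5-frame toy "spectrogram"
    h, c = lstm_step(frame, h, c, W, b)
print(h.shape)  # (3,)
```

Because the memory cell `c` is carried forward from frame to frame, information from early frames can influence the output many steps later – exactly the property speech recognition needs.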

Machine Learning

Machine Learning also uses computational models to solve problems and predict outcomes. Unlike DL, however, classical Machine Learning algorithms tend to be shallow – many can be represented as a single transformation from input to output rather than a deep stack of layers. In the simplest form, you have input variables (x) and output variables (y), and use algorithms to learn the mapping function between them: y = f(x). The aim is to approximate the mapping function so well that you can predict an outcome y from new input data x.
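
In its simplest form – a linear fit – the y = f(x) idea looks like this. The training data below is invented for illustration: we learn f from example (x, y) pairs, then use it to predict y for an unseen x:

```python
import numpy as np

# Invented training examples of the true mapping y = 2x + 1
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2 * x + 1

# Learn f(x) = slope * x + intercept by least-squares fitting
slope, intercept = np.polyfit(x, y, deg=1)

# Predict an outcome y for new input data x
x_new = 10.0
y_pred = slope * x_new + intercept
print(round(y_pred, 2))  # 21.0
```

Real problems differ only in scale: more input variables, noisier data and more flexible mapping functions, but the goal – approximating f well enough to generalise – is the same.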

With supervised learning, the input data (x) is labelled and the problems can involve binary classification (will this customer buy this product or not?), multiclass classification (which of several categories does this piece of text belong to?) and regression (what will the temperature in London be tomorrow?). Supervised ML algorithms include random forest, decision trees, support vector machines, logistic regression and linear regression.
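
A binary classification question like "will this customer buy or not?" can be sketched with logistic regression, one of the supervised algorithms listed above. The customer data and the single feature (hours browsing) are made up for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy labelled data: hours on site (x) vs. bought the product (1) or not (0)
X = np.array([[0.5], [1.0], [1.5], [3.0], [3.5], [4.0]])
y = np.array([0, 0, 0, 1, 1, 1])

# Add a bias column and fit the weights by gradient descent on the log loss
Xb = np.hstack([np.ones((len(X), 1)), X])
w = np.zeros(2)
for _ in range(5000):
    p = sigmoid(Xb @ w)                  # predicted buy probabilities
    w -= 0.1 * Xb.T @ (p - y) / len(y)   # gradient step

# Predict: will a customer who browsed for 3.2 hours buy?
prob = sigmoid(np.array([1.0, 3.2]) @ w)
print(prob > 0.5)  # True
```

In practice you would reach for a library such as scikit-learn rather than hand-rolling the training loop, but the principle – learning weights from labelled examples, then thresholding a predicted probability – is the same.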

With unsupervised learning, the input data (x) is not labelled and it may not be clear what the outputs (y) will look like. The aim instead is to look for relationships between variables in the data, in order to model the underlying structure and discover more about it. Problems addressed by unsupervised learning can involve clustering (such as grouping customers by purchasing behaviour) and association (such as customers who buy product A also tend to buy product B). Unsupervised ML algorithms include K-means for clustering problems and the Apriori algorithm for association problems.
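
The clustering case – grouping customers by purchasing behaviour with K-means, as mentioned above – can be sketched in NumPy. The "customers" and their two features are invented for illustration:

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Minimal K-means: assign points to the nearest centroid, then recompute."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its closest centroid
        dists = np.linalg.norm(points[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned points
        centroids = np.array([points[labels == j].mean(axis=0) for j in range(k)])
    return labels, centroids

# Toy customers: (monthly spend, visits) with two obvious behaviour groups
customers = np.array([[10, 1], [12, 2], [11, 1],
                      [90, 9], [95, 10], [92, 8]], dtype=float)
labels, centroids = kmeans(customers, k=2)
print(labels)
```

Note that no labels were supplied: the algorithm discovers the two purchasing-behaviour groups purely from the structure of the data, which is exactly what distinguishes unsupervised from supervised learning.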

Here at JBI Training, we provide a range of training courses covering Deep Learning and Machine Learning.

About the author: Craig Hartzel
Craig is a self-confessed geek who loves to play with and write about technology. Craig's especially interested in systems relating to e-commerce, automation, AI and Analytics.

CONTACT
+44 (0)20 8446 7555

[email protected]


Copyright © 2023 JBI Training. All Rights Reserved.
JB International Training Ltd  -  Company Registration Number: 08458005
Registered Address: Wohl Enterprise Hub, 2B Redbourne Avenue, London, N3 2BS

