Introduction to Machine Learning


Machine learning is one of the hottest topics in technology today. But why do we need machine learning? What is machine learning, and how is it related to sci-fi’s favorite term, Artificial intelligence? Let’s take a look.

Need for Machine Learning


The term Machine Learning was coined in 1959 by Arthur Samuel, an American pioneer in the field of computer gaming and artificial intelligence, who described it as giving “computers the ability to learn without being explicitly programmed.”

Since the internet became popular, the amount of data generated worldwide has grown enormously. According to Forbes, every minute Americans use 4,416,720 GB of internet data, send 188,000,000 emails and 18,100,000 texts, and make 4,497,420 Google searches.

With so much data available and computational processing getting cheaper and much more powerful, a way was needed to make sense of it all. That’s where machine learning comes in: it gives humans a way to understand critical aspects of this vast amount of data.

Most top-tier companies build machine learning models to identify profitable opportunities and avoid risks.

These machine learning models improve decision-making by performing tasks such as predicting whether a stock will go up or down, forecasting company sales, and so on. They also help uncover patterns and trends in datasets and solve highly complex problems.

Artificial Intelligence and Machine Learning


Artificial intelligence (AI) is the simulation of human intelligence in machines, allowing them to imitate human actions and to learn and solve problems. Some popular examples of AI are Siri, Alexa, and self-driving cars.

Machine Learning (ML) is a subset of AI that allows machines to learn from past data without being explicitly programmed. Tasks such as making predictions and classifying things into categories are part of ML. Some examples of ML seen all around us are:

Netflix’s Recommendation System: Netflix uses a dataset of users who watched similar movies and other factors like genre, actors, etc., to figure out which movies to recommend.

Facebook auto friend tagging: Facebook uses machine learning and neural networks to perform facial recognition and identify who is in a photo.

Google’s spam filter: Google uses machine learning and natural language processing to scan emails and classify them as spam or not spam.

ML is also used in self-driving cars, cyber fraud detection, customer support chatbots, etc.

How does Machine Learning work?


A machine learning model takes historical data and uses it to make predictions.

Here are some terms that should be known before moving forward:

Algorithm: A set of rules and techniques used to find patterns and extract information from the dataset. This is the logic part of the machine learning model.

Model: The main component in machine learning; it is trained using the algorithm. The model takes an input, runs the algorithm on it, and produces an output.

Training dataset: The dataset used to identify trends and patterns and to learn how to predict the desired output.

Testing dataset: After the model is trained, its accuracy is evaluated using the testing dataset.

Now using these terms:

Using historical data, also referred to as the training dataset, a machine learning algorithm builds a mathematical model that makes predictions without being conventionally programmed. These models are a practical application of statistics.

The model’s performance will increase as more information is provided.

Block diagram of how a machine learning algorithm works (Source: https://www.javatpoint.com/machine-learning)

When working with datasets on a machine learning model, the following are some broad steps that can be followed:

Step 1 Importing Data

There are four major formats in which data is typically available: CSV, JSON, SQLite, and BigQuery.

The following code can be used when working with .csv files in Python:

import pandas as pd

# Read the CSV file into a DataFrame (replace the path with the actual file location)
df = pd.read_csv(r'Path where the CSV file is stored\File name.csv')
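JSON and SQLite data can be loaded in a similar way. Here is a rough sketch (the file names and table name below are placeholders, not part of any real dataset):

import sqlite3
import pandas as pd

# JSON: read a JSON file straight into a DataFrame
df_json = pd.read_json('data.json')

# SQLite: run a SQL query against a local database file
conn = sqlite3.connect('data.db')
df_sql = pd.read_sql_query('SELECT * FROM my_table', conn)
conn.close()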

 

Step 2 Data Preprocessing

Depending on the dataset, this step may vary. Cleaning some datasets may be minimal, such as filling in missing entries with NULL values, while other datasets may require a lot of effort.

A good understanding of the data is required for this step. Graphs and charts can help us better understand the data so that we can recognize trends at a glance. Plotly, Matplotlib, and Seaborn are examples of Python libraries suitable for this purpose.
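As a minimal sketch (it assumes the DataFrame df loaded in Step 1 and a hypothetical numeric column called 'price'), missing values can be inspected and filled, and a quick chart drawn, like this:

import matplotlib.pyplot as plt

# Count the missing values in each column to see how much cleaning is needed
print(df.isnull().sum())

# Fill missing values in a numeric column with that column's median ('price' is a hypothetical column)
df['price'] = df['price'].fillna(df['price'].median())

# A quick histogram shows the distribution of the column at a glance
df['price'].plot(kind='hist', bins=30)
plt.xlabel('price')
plt.show()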

Step 3 Split the Data into Training/Testing Sets

Train-test splits are a way to evaluate the performance of machine learning algorithms. The procedure involves dividing a dataset into two subsets.

Here is sample code showing how it can be achieved using scikit-learn:

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(df, y, test_size=0.2)

 

Here the dataset df gets split in an 80/20 ratio (as set by test_size=0.2 at the end of the call). df holds the independent variables (X), and y holds the dependent variable.

To avoid overfitting or underfitting, cross-validation methods such as K-Fold Cross-Validation and Leave-One-Out Cross-Validation are used.
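As a rough sketch of K-Fold cross-validation with scikit-learn (LogisticRegression is used purely as a placeholder model, and df and y are assumed from the split step above):

from sklearn.model_selection import KFold, cross_val_score
from sklearn.linear_model import LogisticRegression

# Score the model on 5 different train/test splits instead of a single one
model = LogisticRegression(max_iter=1000)
cv = KFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(model, df, y, cv=cv)

print(scores.mean())  # average score across the 5 folds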

Step 4 Creating a Model

The data is now ready to be used with a machine learning model. Depending on the dataset and the problem, regression, classification, clustering, or deep learning algorithms will be used.

A detailed discussion of this step will be covered in future lessons.
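As a small preview (scikit-learn is assumed here, and these particular classes are only examples; the right choice depends on the dataset and the problem), creating a model is often a single line:

from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

# Picking an algorithm creates an (untrained) model object
regression_model = LinearRegression()
clustering_model = KMeans(n_clusters=3)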

Step 5 Training a Model

Training data is used in this step to fit the model. The result is checked and the weights and biases are adjusted accordingly (an ideal model would have no loss). Thus, training a model aims to find weights and biases that, on average, have low loss across all examples.

This process is called empirical risk minimization.
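To make this concrete, here is a tiny illustrative sketch in plain NumPy (the numbers are made up, and this is not tied to any particular library's training routine): a single weight and bias are repeatedly nudged to lower the average squared loss over all examples.

import numpy as np

# Made-up data where the true relationship is y = 2x + 1
X = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.0, 5.0, 7.0, 9.0])

w, b = 0.0, 0.0            # start from an arbitrary weight and bias
learning_rate = 0.01

for _ in range(5000):
    error = (w * X + b) - y                        # prediction error on every example
    loss = np.mean(error ** 2)                     # average loss over all examples (the empirical risk)
    w -= learning_rate * 2 * np.mean(error * X)    # nudge the weight downhill
    b -= learning_rate * 2 * np.mean(error)        # nudge the bias downhill

print(w, b)  # approaches 2 and 1, the weight and bias with the lowest average loss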

Step 6 Make Predictions

After the model is trained, it can provide predictions. The word prediction is a little misleading, because only time-series algorithms actually predict the future; the other algorithms classify the data or estimate the value of a dependent variable.
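As a self-contained sketch with scikit-learn and made-up numbers (LinearRegression is again only an example):

from sklearn.linear_model import LinearRegression

# Tiny made-up dataset: one feature, target follows y = 2x + 1
X_train = [[1], [2], [3], [4]]
y_train = [3, 5, 7, 9]

model = LinearRegression()
model.fit(X_train, y_train)

# "Prediction" here means estimating y for inputs the model has never seen
print(model.predict([[5], [6]]))  # roughly [11. 13.]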

Step 7 Evaluate and Improve

Now the testing dataset is sent to the model, and its accuracy is measured; the higher the accuracy, the better the algorithm has been trained. Various evaluation methods can be used, such as the F1 score, a confusion matrix, or a ROC curve.
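Here is a small self-contained sketch of these checks using scikit-learn's metrics module (the labels and predictions are made up for illustration):

from sklearn.metrics import accuracy_score, f1_score, confusion_matrix

# Made-up true labels and model predictions for a binary classifier
y_test = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

print(accuracy_score(y_test, y_pred))    # fraction of predictions that were correct
print(f1_score(y_test, y_pred))          # F1 score balances precision and recall
print(confusion_matrix(y_test, y_pred))  # rows are actual classes, columns are predicted classes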

Once all this information is available, we repeat whichever of the above steps is necessary to bring our model closer to the desired outcome.

Note: No machine learning model will provide 100% accuracy; if one does, it might be a case of overfitting.
