TensorFlow – A Complete Guide

Machine learning and deep learning are complex disciplines that have become a major trend in the development community. Yet implementing such learning models is far less daunting today than it was a few years ago, thanks to frameworks such as Google’s TensorFlow, which ease the process of acquiring data, training models, serving predictions, and refining future results. TensorFlow is an open-source library for numerical computation and large-scale machine learning, created by the Google Brain team, a core team at Google that develops artificial intelligence and machine learning technologies.

TensorFlow bundles a slew of machine learning and deep learning (neural network) models and algorithms. It uses Python to provide a convenient front-end API for building applications with the framework, while executing those applications in high-performance C++. TensorFlow can train and run deep neural networks for handwritten digit classification, image recognition, word embeddings, recurrent neural networks, natural language processing, and PDE-based simulations.

Best of all, TensorFlow supports prediction in production at scale, with the same models used for training. It is currently one of the most widely used deep learning libraries in the world. Google applies machine learning in almost all of its products to improve its search engine, translation, image captioning, and recommendations, so that users experience faster and more refined search with AI. For example, when a user types a keyword into the search bar, Google suggests what the next word might be. Google relies on machine learning to analyze its massive datasets and give users the best possible experience.

TensorFlow was built to let researchers and developers work together on better, future-ready AI models, since such advances, once developed and scaled, can be used by many people. The library was first made public in late 2015, and the first stable version was released in 2017. It is open source and developed under the Apache license: anyone can use it, modify it, and redistribute the modified version, even for a fee, without paying anything to Google. Before such high-level libraries existed, coding for machine learning and deep learning was much more complicated. The TensorFlow library provides a high-level API, so no complex coding is required to build a neural network or to configure and program its neurons; the library is self-sufficient and handles these tasks.

TensorFlow also integrates with Java and R. Deep learning applications can become very complicated, and the training process demands a lot of computational power; it can take a long time on large datasets, since it involves many iterative steps, mathematical calculations, matrix multiplications, and so on. One of TensorFlow's major advantages is that it supports GPUs as well as CPUs, and it generally compiles faster than other deep learning libraries such as Torch. TensorFlow is typically used to create large-scale neural networks with many layers. It can be applied to deep learning or machine learning problems such as classification, perception, understanding, discovery, prediction, and creation. It also plays a role in text-based applications, image recognition, voice search, and much more.

How TensorFlow Works

Arrays of data with varying dimensions and ranks that are fed as input to the neural network are called tensors. TensorFlow lets developers create dataflow graphs: structures that describe how data moves through a graph, or a series of processing nodes. Each node in the graph represents a mathematical operation, and each connection, or edge, between nodes is a multidimensional data array, or tensor. TensorFlow exposes all of this to the programmer through Python, which is easy to learn and work with and provides convenient ways to express how these high-level abstractions can be coupled together. Nodes and tensors in TensorFlow are Python objects, and TensorFlow applications are themselves Python applications.
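To make tensors and operation nodes concrete, here is a minimal sketch, assuming TensorFlow 2.x is installed; the names a, b and affine are purely illustrative.

#a minimal sketch of tensors and operations (assumes TensorFlow 2.x)
import tensorflow as tf

#tensors are multidimensional arrays with a shape and a dtype
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])   #rank-2 tensor (2x2 matrix)
b = tf.constant([[5.0], [6.0]])             #rank-2 tensor (2x1 matrix)

#each operation (node) consumes tensors (edges) and produces new tensors
c = tf.matmul(a, b)
print(c.shape, c.dtype)

#tf.function traces ordinary Python code into a dataflow graph
@tf.function
def affine(x, w, bias):
    return tf.matmul(x, w) + bias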

The actual math operations, however, are not performed in Python. The libraries of transformations available through TensorFlow are written as high-performance C++ binaries; Python merely directs traffic between the pieces and provides high-level programming abstractions to hook them together. TensorFlow applications can run on almost any convenient target: a local machine, a cluster in the cloud, iOS or Android devices, CPUs or GPUs. If you use Google’s cloud, you can run TensorFlow on Google’s custom Tensor Processing Unit (TPU) silicon for further acceleration. The resulting models can be deployed on almost any device to serve predictions.
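As a quick illustration (again assuming TensorFlow 2.x), you can ask TensorFlow which physical devices it has detected on the current machine:

#listing the devices TensorFlow can see
import tensorflow as tf

print(tf.config.list_physical_devices('CPU'))
print(tf.config.list_physical_devices('GPU'))   #empty list if no GPU is available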

Getting Started With Code

In this article, we will implement a simple artificial neural network model using components provided by the TensorFlow library, measure its prediction accuracy, and see how TensorFlow enables faster development and processing of neural networks. The following code is partially inspired by TensorFlow’s official documentation.

Installing the Library

Our first step is to install the library needed to build our ANN model. To do so, run the following command.

!pip install tensorflow
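
Optionally, you can verify the installation by importing the library and printing its version; this is a small sketch assuming a TensorFlow 2.x install.

#verifying the installation (optional)
import tensorflow as tf
print(tf.__version__)
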
Importing the Dependencies

Next, we import the required dependencies:

#importing dependencies
import pandas as pd
from sklearn.model_selection import train_test_split
Processing the Data

Let us now load our data and process it before loading it into our model.

#Loading the data
df = pd.read_csv('/content/Churn.csv')

#creating X and Y Variables
X = pd.get_dummies(df.drop(['Churn', 'Customer ID'], axis=1))
y = df['Churn'].apply(lambda x: 1 if x=='Yes' else 0)

#splitting the data into train and test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.2)


#printing the head for train
y_train.head()

1004    0
2695    0
4202    0
340     1
2770    1
Name: Churn, dtype: int64
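
As an optional sanity check, you can confirm the 80/20 split by inspecting the shapes of the resulting frames; the exact numbers will depend on your copy of the dataset.

#checking the shapes of the train and test splits
print(X_train.shape, X_test.shape)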

Creating the Neural Network

With our data processed, we will create our neural network model by defining its parameters with TensorFlow components. We will then train the model and derive predictions and an accuracy score from it. You can create a similar model for any dataset.

#importing model dependencies
 
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.layers import Dense
from sklearn.metrics import accuracy_score

Let us now define the parameters for the network layers. We are using the sigmoid function in the output layer to predict a binary output from our dataset.

#setting parameters for network layers
 
model = Sequential()
model.add(Dense(units=32, activation='relu', input_dim=len(X_train.columns)))
model.add(Dense(units=64, activation='relu'))
model.add(Dense(units=1, activation='sigmoid'))
 
#setting up the model compiler
model.compile(loss='binary_crossentropy', optimizer='sgd', metrics=['accuracy'])
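
Optionally, Keras can print a summary of the architecture we just defined, showing each layer and its parameter count.

#inspecting the network architecture
model.summary()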

Let us now fit and train the model we created; we will train it for 500 epochs, aiming for good accuracy.

#fitting and training our model
 
model.fit(X_train, y_train, epochs=500, batch_size=32)

Output:

……..

Epoch 497/500
177/177 [==============================] - 1s 3ms/step - loss: 0.4246 - accuracy: 0.8956
Epoch 498/500
177/177 [==============================] - 1s 3ms/step - loss: 0.4247 - accuracy: 0.9002
Epoch 499/500
177/177 [==============================] - 1s 3ms/step - loss: 0.4228 - accuracy: 0.9182
Epoch 500/500
177/177 [==============================] - 1s 3ms/step - loss: 0.4222 - accuracy: 0.9388
#predicting the outcome
 
y_hat = model.predict(X_test)
y_hat = [0 if val < 0.5 else 1 for val in y_hat]


#printing the accuracy score
 
accuracy_score(y_test, y_hat)
 
0.946330731014905

We can observe that the model gives a fairly good accuracy score. However, we could improve this score further through hyperparameter tuning.
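As one illustrative, entirely optional tuning step, not part of the original walkthrough, you could swap the SGD optimizer for Adam with an explicit learning rate and retrain; the values below are example settings, not recommendations.

#example of hyperparameter tuning: a different optimizer and learning rate
from tensorflow.keras.optimizers import Adam

model.compile(loss='binary_crossentropy',
              optimizer=Adam(learning_rate=0.001),
              metrics=['accuracy'])
model.fit(X_train, y_train, epochs=200, batch_size=64)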

We can also save the created model and reload it later if it is needed for further use.

#saving the model
 
model.save('tfmodel')

#reloading model
 
model = load_model('tfmodel')
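
To confirm the round trip worked, the reloaded model can score the test set again; the accuracy should match what we obtained before saving.

#verifying the reloaded model
y_hat_reloaded = model.predict(X_test)
y_hat_reloaded = [0 if val < 0.5 else 1 for val in y_hat_reloaded]
accuracy_score(y_test, y_hat_reloaded)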

End Notes

In this article we looked at what TensorFlow is and how it is being used today and for future development. We also got a taste of what it takes to build a neural network model with the TensorFlow library and how its components ease development. I would encourage readers to explore the library further for its enormous and wide spectrum of use cases.
