PyTorch vs TensorFlow – Explained | What is the difference between PyTorch and TensorFlow?
Contributed by: Arindam Sarkar
LinkedIn Profile: www.linkedin.com/in/arindamxsarkar
When starting out on the journey of Deep Learning, the study of neural networks, one finds a host of frameworks and libraries in Python. Thus the obvious dilemma arises: where to start, and which one to pursue? This article is for those who have started, or are about to start, their journey with Deep Learning.
Amongst the many Deep Learning libraries and frameworks (TensorFlow, Keras, Caffe, PyTorch, Microsoft Cognitive Toolkit, Apache Spark Deep Learning Pipelines, Chainer, etc.), two really stand out from the crowd on several parameters: TensorFlow and PyTorch. But before we delve deeper into them, let us have a glimpse of their evolution.
PyTorch was developed by Facebook's AI Research laboratory (FAIR); its first public release (0.1.12) came in May 2017, and the latest stable release (1.4) in January 2020. The name derives from Torch, a popular Deep Learning framework written in the Lua programming language: this is its Python implementation, hence the name PyTorch.
Also Read: What is TensorFlow? The ML Library Explained
1. Both TensorFlow and PyTorch are open source, but they were developed by two different giants of technology innovation, Google and Facebook respectively, and both are used for Machine Learning applications such as Neural Networks. PyTorch, however, has an advantage over TensorFlow on two distinct counts:
- Imperative programming: the programmer specifies what to compute and how, step by step. Declarative programming, by contrast, specifies what to compute but not how.
- Dynamic computation graphs: the graph is built at runtime, which lets us use standard Python statements; for imperative programs there is no separation between defining the computation graph and compiling it.
Why this matters: static graphs work well for fixed-size networks, whereas dynamic graphs work well for networks whose structure varies at runtime.
In Artificial Intelligence research, this rigidity of pre-defined Neural Networks was a major hindrance. Researchers wanted network structure to be determined at runtime rather than fixed beforehand, and the tools that existed then did not really allow that.
This is where PyTorch wins over TensorFlow: it is a deep learning framework that brings dynamic Neural Networks, i.e., define-by-run (the graph is created on the fly). This is especially useful for variable-length inputs in RNNs (Recurrent Neural Networks).
Because dynamic computation graphs are built at runtime, standard Python statements can be used and there is no separate graph-compilation step; this also makes debugging easy. TensorFlow (1.x), on the other hand, first assembles a static graph and then uses a session to execute it.
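As a minimal sketch of define-by-run (the module and the sizes below are made-up for illustration), the Python loop itself determines the graph, so every sequence length yields a different graph:

```python
import torch
import torch.nn as nn

# A single recurrent cell; input_size and hidden_size are arbitrary here.
rnn = nn.RNNCell(input_size=4, hidden_size=8)

def run_sequence(seq):
    # seq has shape (seq_len, 4). The loop length depends on the input,
    # so the computation graph is rebuilt for every sequence length.
    h = torch.zeros(1, 8)
    for step in seq:
        h = rnn(step.unsqueeze(0), h)
    return h

h_short = run_sequence(torch.randn(3, 4))   # graph with 3 steps
h_long = run_sequence(torch.randn(10, 4))   # graph with 10 steps
```

Ordinary Python control flow (loops, conditionals) decides the structure at runtime; in graph-mode TensorFlow 1.x the same variability would require special constructs such as `tf.while_loop`.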
2. The learning curve is steeper for TensorFlow than for PyTorch, which:
- has GPU capabilities like NumPy, with explicit CPU and GPU control
- is more Pythonic in nature
- is easy to debug
Admittedly, TensorFlow 2.0 has improved quite a lot: with the Keras integration and Eager Execution enabled by default, 2.0 is all about ease of use and simplicity. It gives both new and experienced developers the tools and APIs needed to build and deploy machine learning models quickly and precisely: tf.keras as the high-level API, removal of duplicate functionality, consistent and intuitive syntax across APIs, and a full lower-level API with inheritable interfaces for variables, checkpoints, and layers. Even so, the user has to learn a bit more than with PyTorch, which is more Pythonic; hence the steeper learning curve for TensorFlow.
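A quick illustration of Eager Execution in TensorFlow 2.x (the values are chosen arbitrarily): operations return concrete results immediately, with no separate graph assembly or session step as in 1.x:

```python
import tensorflow as tf

# Eager execution is the default in TF 2.x: ops run immediately.
a = tf.constant([[1.0, 2.0]])
b = tf.constant([[3.0], [4.0]])
c = tf.matmul(a, b)   # evaluated right away, no Session needed

print(c.numpy())      # [[11.]]
```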
3. TensorFlow has the best documentation and a bigger community, which makes it good for beginners. It is also best suited for production, as it was built with distributed computing in mind.
Following are a few links for reference:
* TensorFlow Community: https://www.tensorflow.org/community/
* TensorFlow Community project: https://github.com/tensorflow/community
More resources, tutorials, and MOOCs are available for TensorFlow than for PyTorch; one reason could be that PyTorch is relatively new. PyTorch, meanwhile, is best suited for researchers because of its dynamic graphs.
4. TensorFlow’s TensorBoard provides the visualization and tooling required for machine learning:
- Tracking and visualizing metrics such as loss and accuracy
- Visualizing the model graph (ops and layers)
- Viewing histograms of weights, biases, and other tensors as they change over time
- Projecting embeddings to a lower-dimensional space
- Displaying images, text, and audio data
- Profiling TensorFlow programs
- PyTorch’s Torchvision library, on the other hand, contains the important datasets, models, and transformation operations generally used in the field of computer vision. TensorBoard support is not native to PyTorch, but TensorBoard can still be used to visualize the results of Neural Network training runs (via torch.utils.tensorboard).
5. TorchScript, a subset of Python, helps deploy PyTorch applications to production at scale, but as per popular user experience TensorFlow remains better suited for scaling production models. When it comes to building prototypes at a fast pace, however, PyTorch is the better choice, as it is lighter to work with.
Also Read: Using PyTorch in Computer Vision
Delving into Model Creation using PyTorch vs TensorFlow
In general, a simple Neural Network model consists of three layers: an Embedding Layer, a Global Average Pooling Layer, and a Dense Layer. The input is fed to the Embedding Layer, and the predictions are the output of the Dense Layer.
TensorFlow models are generally created with the help of Keras.
Note about Keras: it is an open-source library that provides only high-level APIs, unlike TensorFlow, which provides both high- and low-level APIs. But as Keras is written in Python, developing models is more user-friendly in Keras than in raw TensorFlow.
Any of three approaches can be adopted to develop models in Keras: Subclassing, the Functional API, and the Sequential model API.
- Subclassing – the class tf.keras.Model can be used to develop fully customizable models: the layers are defined in the __init__() method and the forward-pass logic is implemented in the call() method. Most importantly, this object-oriented approach helps in reusing layers multiple times and in defining extremely complex forward passes.
- Functional API – a very user-friendly approach to developing a Neural Network model compared to Subclassing, and the one recommended by the developer community. It requires a bit less code, as the output of the previous layer is passed on immediately as each layer is defined; the model is then instantiated from the input and output tensor(s).
- Sequential model API – a sort of shortcut to a trainable model consisting of only a few common layers, and thus a very compact way to define a model. It works extremely well in terms of performance for simple Neural Networks, but implementing a complicated Neural Network becomes very challenging; in other words, it is inflexible.
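To make the three approaches concrete, here is a hedged sketch building the simple Embedding, Global Average Pooling, and Dense model described earlier in each style (the vocabulary size, embedding dimension, and sequence length are made-up for illustration):

```python
import tensorflow as tf
from tensorflow import keras

VOCAB, EMBED_DIM, SEQ_LEN = 1000, 16, 20

# 1) Sequential model API: a linear stack of layers, the most compact form.
seq_model = keras.Sequential([
    keras.layers.Embedding(VOCAB, EMBED_DIM),
    keras.layers.GlobalAveragePooling1D(),
    keras.layers.Dense(1, activation="sigmoid"),
])

# 2) Functional API: each layer is called on the previous layer's output,
#    and the model is instantiated from the input and output tensors.
inputs = keras.Input(shape=(SEQ_LEN,), dtype="int32")
x = keras.layers.Embedding(VOCAB, EMBED_DIM)(inputs)
x = keras.layers.GlobalAveragePooling1D()(x)
outputs = keras.layers.Dense(1, activation="sigmoid")(x)
func_model = keras.Model(inputs, outputs)

# 3) Subclassing: layers defined in __init__(), forward pass in call().
class MyModel(keras.Model):
    def __init__(self):
        super().__init__()
        self.embed = keras.layers.Embedding(VOCAB, EMBED_DIM)
        self.pool = keras.layers.GlobalAveragePooling1D()
        self.dense = keras.layers.Dense(1, activation="sigmoid")

    def call(self, inputs):
        return self.dense(self.pool(self.embed(inputs)))

sub_model = MyModel()

# All three accept a batch of token ids and produce one prediction each.
batch = tf.random.uniform((2, SEQ_LEN), maxval=VOCAB, dtype=tf.int32)
out_seq, out_fun, out_sub = seq_model(batch), func_model(batch), sub_model(batch)
```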
In contrast to TensorFlow (Keras), there are only two approaches to developing Neural Network models in PyTorch: Subclassing and Sequential.
- Subclassing – the implementation is very similar to TensorFlow (Keras). Here the model subclasses the nn.Module class; the layers are defined in the __init__() method, but the forward pass is implemented in a method named forward instead of call as in TensorFlow (Keras). For global average pooling in PyTorch, the exact kernel size must be supplied to the average-pooling layer, since there is no dedicated global average-pooling layer.
- Sequential – this approach is also very similar to TensorFlow (Keras) and uses the nn.Sequential module.
Note: the Subclassing approach is widely recommended over Sequential, as many recurrent layers, viz. RNNs (Recurrent Neural Networks) and LSTMs (Long Short-Term Memory), do not work with nn.Sequential in PyTorch.
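A corresponding PyTorch sketch of the same toy model (the sizes are made-up); here global average pooling is expressed as a mean over the time dimension, since PyTorch offers no dedicated global-pooling layer (an nn.AdaptiveAvgPool1d(1) layer would work too):

```python
import torch
import torch.nn as nn

VOCAB, EMBED_DIM = 1000, 16

# Subclassing: layers in __init__(), forward pass in forward()
# (PyTorch's counterpart to Keras's call()).
class SimpleModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMBED_DIM)
        self.dense = nn.Linear(EMBED_DIM, 1)

    def forward(self, x):            # x: (batch, seq_len) of token ids
        h = self.embed(x)            # (batch, seq_len, EMBED_DIM)
        h = h.mean(dim=1)            # global average pooling over time
        return torch.sigmoid(self.dense(h))

model = SimpleModel()
tokens = torch.randint(0, VOCAB, (2, 20))
out = model(tokens)                  # shape: (2, 1)
```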
Training Neural Network in TensorFlow (Keras) vs PyTorch
- TensorFlow (Keras) – the created model must be compiled before training via model.compile(), where the loss function and the optimizer are specified. The model.fit() function then trains the model, handling batch processing as well; it can also evaluate the model if so specified.
- PyTorch – there is no predefined training function in PyTorch, so the training loop is written from scratch. For each batch, the loss is calculated and loss.backward() is called to propagate the gradients back through the layers; optimizer.step() is then called to update the parameters, with optimizer.zero_grad() clearing the old gradients before each step.
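As a minimal sketch of such a hand-written loop (the data, model, and hyperparameters below are made-up for illustration; in Keras the same work is done by model.compile() and model.fit()):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic regression data: the target is a known linear function of the
# inputs, so the loss should fall as training progresses.
X = torch.randn(64, 10)
y = X.sum(dim=1, keepdim=True)

model = nn.Linear(10, 1)
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

losses = []
for epoch in range(5):
    for xb, yb in zip(X.split(16), y.split(16)):   # manual mini-batching
        optimizer.zero_grad()            # clear gradients from the last step
        loss = loss_fn(model(xb), yb)    # loss for this batch
        loss.backward()                  # propagate gradients across layers
        optimizer.step()                 # update the parameters
        losses.append(loss.item())
```

Writing the loop by hand is more work than model.fit(), but it exposes every step, which is part of why debugging is considered easier in PyTorch.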
TensorFlow Vs. PyTorch
|TensorFlow|PyTorch|
|Open source library for dataflow programming across a range of tasks; used for Machine Learning applications, e.g., Neural Networks; developed by Google.|Open source Machine Learning library for Python, based on Torch; used for applications such as NLP (Natural Language Processing); developed by FAIR (Facebook’s AI Research Laboratory).|
Parameters to Compare:
It is evident from the graph above that PyTorch has bridged the gap with TensorFlow immensely over the past three years.
|Backed by a very large community of technology companies|Strong community support, though smaller than TensorFlow’s as it is newer, but growing at a very fast pace|
But a recent study has shown that the number of research papers published in various forums and conferences has shifted in favour of PyTorch over TensorFlow.
Note: in the above graph, anything over 50% means more mentions of PyTorch than TensorFlow for that conference.
At the recent NeurIPS conference, there were 166 papers using PyTorch and 74 using TensorFlow.
In 2018, the number of papers using PyTorch was negligible compared to TensorFlow, but it went on to more than double TensorFlow’s number in 2019.
|Used for large datasets and high-performance models|Also used for large datasets and high-performance models, with better training duration|
|Provides both high- and low-level APIs|Provides only low-level APIs|
|Complicated and may not be very helpful for beginners|Complex, with lower readability|
|Provides a reduced-size model with high accuracy, as fewer lines of code are needed compared to PyTorch|Requires more lines of code and is not as simple as TensorFlow|
|Debugging is difficult|Better debugging capabilities|
Let us now discuss the final point regarding the choice of PyTorch vs TensorFlow.
The choice should be determined with respect to your technical background and requirements.
TensorFlow should be preferred when the dataset is large and high performance is mandatory. It provides advanced operations and all the general-purpose functionality needed for building deep learning models.
PyTorch, on the other hand, should be preferred when the needs are diverse, as almost anything can be implemented in it: it offers flexibility along with better training duration and debugging capabilities. It is the tool of choice for research work and for building prototype models at a fast pace. I hope you have enjoyed this blog on PyTorch vs TensorFlow.
You can upskill with Great Learning’s PGP in Artificial Intelligence and Machine Learning.
Source : https://www.mygreatlearning.com/blog/artificial-intelligence/