Intel® and Facebook* Collaborate to Boost PyTorch* CPU Performance | AI News | Intel Software


I’m David Shaw, and
this is “AI News.” Did you know that Facebook
makes 200 trillion predictions and over 6 billion language
translations per day? As the world generates so much
data– text, pictures, videos, and more– advances in deep learning have
helped us better understand this information. Applying deep learning to
develop new models consists of model training and then
deploying these models for inference, which makes new predictions
within an application. While the computing
intensity of inference is much lower than
that of training, inference is often done
on a much larger data set. This means that
the total computing resources spent on inference
dwarf those spent on training. Intel recently launched the
second-generation Intel Xeon Scalable processors,
adding Intel Deep Learning Boost technology. It allows end users
to take advantage of this technology with
minimum changes to their code. The optimizations are
abstracted and integrated into deep learning
frameworks, such as PyTorch. Intel and Facebook
are collaborating to bring software optimizations
to the PyTorch community to take advantage
of these changes. In this article,
get more information on the hardware advancements,
software advancements, and the results they give. Intel and Facebook continue to
accelerate PyTorch for CPUs. This benefits the overall
PyTorch ecosystem. Intel MKL-DNN is
included in PyTorch as the default math kernel
library for deep learning. Find it at pytorch.org. Make sure you check out
the other links provided to get a closer look yourself. Thanks for watching, and
I’ll see you next week. [MUSIC PLAYING]
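
As a quick illustration of how these optimizations surface with minimal code changes, here is a minimal sketch. It assumes a recent PyTorch build that ships with MKL-DNN and that torchvision is installed; the ResNet-50 model is just an example choice, any CPU model would do.

    # Minimal sketch: verify the MKL-DNN backend and run a CPU inference pass.
    # Assumes PyTorch built with MKL-DNN support and torchvision installed.
    import torch
    import torchvision.models as models

    print(torch.backends.mkldnn.is_available())  # True when PyTorch was built with MKL-DNN
    print(torch.__config__.show())               # build details, including the MKL-DNN version

    model = models.resnet50(pretrained=True).eval()  # example model; any CPU model works
    x = torch.randn(1, 3, 224, 224)                  # dummy input batch

    with torch.no_grad():
        y = model(x)  # the default CPU path dispatches to MKL-DNN-optimized kernels where applicable
    print(y.shape)    # torch.Size([1, 1000])

In recent releases a model can also be converted to the MKL-DNN layout explicitly (for example via torch.utils.mkldnn.to_mkldnn), but the point of the integration is that the default CPU path already picks up the optimized kernels without code changes.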
