Organizations implementing AI applications must weigh several factors when choosing the proper infrastructure, but one critical consideration is the distinction between the training portion of AI and inferencing.
That is the view of Michael Lang, solutions architecture manager at NVIDIA, speaking during a panel discussion on implementing AI at the recent NexGen Connectivity Forum. The forum brought together both industry participants and solution providers.
Keep training and inference apart
The training and learning piece of AI, said Lang, often requires a different infrastructure environment from the one used for inferencing.
“The training and learning piece is about HPC and data-intensive needs,” said Lang. “That means big data centers and infrastructure and big capability.”
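A minimal sketch of that split, assuming a PyTorch-style workflow (the framework, model shape, and deployment path here are illustrative, not something Lang or NVIDIA specified): training loops repeatedly over large datasets on data-center accelerators, while inference is a single lightweight forward pass that can run elsewhere.

```python
import torch
import torch.nn as nn

# Hypothetical model purely for illustration.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# --- Training: data- and compute-intensive, typically on data-center GPUs ---
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for _ in range(100):  # many passes over large datasets in practice
    x = torch.randn(256, 128, device=device)        # placeholder batch
    y = torch.randint(0, 10, (256,), device=device)  # placeholder labels
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()  # backpropagation: the expensive part
    optimizer.step()

# --- Inference: a single forward pass, often on CPU or edge hardware ---
model.to("cpu").eval()
scripted = torch.jit.script(model)  # freeze for lightweight deployment
with torch.no_grad():
    prediction = scripted(torch.randn(1, 128)).argmax(dim=1)
```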
The inferencing piece, however, can be completely different. Often, this…