As Machine Learning and AI technologies continue to advance, the need for efficient and secure methods to store, share, and deploy trained models becomes increasingly critical. Model weights file ...
AUTOSAR C++ compliant deep learning inference with TensorRT
If you are unfamiliar with safety compliance and are debating whether to read this blog post, let’s answer the most important question first. Why should you read this blog post? What can we ...
NVIDIA GTC 2022 Day 4 Highlights: Meet the new Jetson Orin
Today is the final day of our coverage of the NVIDIA GTC conference. The first day of GTC was all about professional training, and the second day was about Big Bang announcements. On the ...
NVIDIA GTC 2022 Day 3 Highlights: Deep Dive into Hopper architecture
Welcome to Day 3 of our coverage of the NVIDIA GTC conference. Yesterday, NVIDIA announced the next-generation H100 data center GPU. The keynote did not go into much detail about some ...
Building Industrial embedded deep learning inference pipelines with TensorRT
You can scarcely find a good article on deploying computer vision systems in industrial scenarios, so we decided to write a blog post series on the topic. The topics we will cover in this ...
How To Run Inference Using TensorRT C++ API
In this post, we continue to look at how to speed up inference quickly and painlessly when you already have a trained model in PyTorch. In the previous post, we discussed what ONNX and TensorRT are ...