Which of the following claims is correct about TensorRT and ONNX?
A. TensorRT is used for model deployment and ONNX is used for model interchange.
B. TensorRT is used for model deployment and ONNX is used for model creation.
C. TensorRT is used for model creation and ONNX is used for model interchange.
D. TensorRT is used for model creation and ONNX is used for model deployment.
The Answer Is: A
Explanation:
NVIDIA TensorRT is a deep learning inference library used to optimize and deploy models for high-performance inference, while ONNX (Open Neural Network Exchange) is a format for model interchange that lets models be shared across different frameworks, as covered in NVIDIA's Generative AI and LLMs course. TensorRT optimizes models (e.g., via layer fusion and quantization) for deployment on NVIDIA GPUs, while ONNX ensures portability by providing a standardized model representation. Option B is incorrect: ONNX is used for model interchange, not model creation. Option C is wrong: TensorRT performs optimization and deployment, not model creation. Option D is inaccurate: ONNX is for model sharing, not deployment. The course notes: “TensorRT optimizes and deploys deep learning models for inference, while ONNX enables model interchange across frameworks for portability.”
[References: NVIDIA Building Transformer-Based Natural Language Processing Applications course; NVIDIA Introduction to Transformer-Based Natural Language Processing course]