Open GPU data science
High-performance platform for deep learning inference
Optimized primitives for collective multi-GPU communication (see the all-reduce sketch below)
Inference microservice for data center production that maximizes GPU utilization
Data pre-processing in deep learning applications
A PyTorch Extension: Tools for easy mixed precision and distributed training in PyTorch
Deep Learning GPU Training System
Examples that show how to use TF-TRT
TensorRT backend for ONNX
Open source Deep Learning Inference Accelerator
Build and run Docker containers leveraging NVIDIA GPUs
Kubernetes on NVIDIA GPUs
Enable GPU support in Kubernetes with the NVIDIA device plugin
Support multiple Linux container runtimes via the NVIDIA Container Runtime
Automatically configure GNU/Linux containers leveraging NVIDIA hardware
Bindings and utilities for monitoring NVIDIA GPUs on Linux
NVIDIA Material Definition Language SDK
Universal Scene Description
Real-time rendering framework
Physics simulation engine
Image classification from TensorFlow models accelerated with NVIDIA TensorRT
Perception and AI components for autonomous mobile robotics
CUDA Templates for Linear Algebra Subroutines (see the GEMM sketch below)
Parallel algorithms library (see the sort-and-reduce sketch below)
Distributed multigrid linear solver library on GPU
Toolkit for efficient experimentation with various sequence-to-sequence models
High-resolution photorealistic video-to-video translation
Deep learning for recommender systems
Tool for automating hyper-parameter search for your models on a backend of your choice
Tacotron 2 text-to-speech in PyTorch with faster-than-realtime inference
Tensor Core optimized code samples (see the WMMA sketch below)
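
A few of the entries above lend themselves to short, hedged usage sketches. First, the collective multi-GPU communication primitives: a minimal single-process all-reduce across two GPUs, assuming the library in question is NCCL. The device count, buffer size, and initialization are illustrative and error checking is omitted.

    // Illustrative sketch: single-process all-reduce across two GPUs with NCCL.
    #include <cuda_runtime.h>
    #include <nccl.h>

    int main() {
      const int nDev = 2;                 // assumes two visible GPUs
      int devs[nDev] = {0, 1};
      const size_t count = 1 << 20;       // elements per device

      float* sendbuff[nDev];
      float* recvbuff[nDev];
      cudaStream_t streams[nDev];
      ncclComm_t comms[nDev];

      // One communicator per device, all owned by this process.
      ncclCommInitAll(comms, nDev, devs);

      for (int i = 0; i < nDev; ++i) {
        cudaSetDevice(devs[i]);
        cudaMalloc(&sendbuff[i], count * sizeof(float));
        cudaMalloc(&recvbuff[i], count * sizeof(float));
        cudaMemset(sendbuff[i], 0, count * sizeof(float));  // real code would fill with data
        cudaStreamCreate(&streams[i]);
      }

      // Group the per-device calls so NCCL issues them as one collective.
      ncclGroupStart();
      for (int i = 0; i < nDev; ++i) {
        ncclAllReduce(sendbuff[i], recvbuff[i], count, ncclFloat, ncclSum,
                      comms[i], streams[i]);
      }
      ncclGroupEnd();

      for (int i = 0; i < nDev; ++i) {
        cudaSetDevice(devs[i]);
        cudaStreamSynchronize(streams[i]);
        cudaFree(sendbuff[i]);
        cudaFree(recvbuff[i]);
        ncclCommDestroy(comms[i]);
      }
      return 0;
    }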
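Next, the CUDA Templates for Linear Algebra Subroutines: a single-precision device-level GEMM in the style of the CUTLASS basic example. The wrapper run_sgemm is a hypothetical helper; the column-major layouts and default tile shapes are assumptions, and handling of the returned status is left to the caller.

    // Illustrative sketch: device-level SGEMM through CUTLASS templates.
    #include <cutlass/gemm/device/gemm.h>

    cutlass::Status run_sgemm(int M, int N, int K,
                              float alpha, float const* A, int lda,
                              float const* B, int ldb,
                              float beta, float* C, int ldc) {
      // Instantiate a GEMM for column-major float operands; other data types,
      // layouts, and tile shapes are further template arguments.
      using ColumnMajor = cutlass::layout::ColumnMajor;
      using Gemm = cutlass::gemm::device::Gemm<float, ColumnMajor,   // A
                                               float, ColumnMajor,   // B
                                               float, ColumnMajor>;  // C

      Gemm gemm_op;
      Gemm::Arguments args({M, N, K},     // problem size
                           {A, lda},      // tensor A
                           {B, ldb},      // tensor B
                           {C, ldc},      // tensor C (source for beta * C)
                           {C, ldc},      // tensor D (output)
                           {alpha, beta});

      // Launches the GEMM kernel on the default stream.
      return gemm_op(args);
    }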
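The parallel algorithms library exposes STL-style containers and algorithms; a small sketch, assuming Thrust, that fills, sorts, and reduces a device vector.

    // Illustrative sketch: sort and reduce on the GPU with Thrust.
    #include <thrust/device_vector.h>
    #include <thrust/functional.h>
    #include <thrust/reduce.h>
    #include <thrust/sequence.h>
    #include <thrust/sort.h>
    #include <cstdio>

    int main() {
      thrust::device_vector<int> d(1 << 20);

      // Fill with 0, 1, 2, ... directly in device memory.
      thrust::sequence(d.begin(), d.end());

      // Parallel sort in descending order.
      thrust::sort(d.begin(), d.end(), thrust::greater<int>());

      // Parallel reduction; the sum is returned to the host.
      long long sum = thrust::reduce(d.begin(), d.end(), 0LL,
                                     thrust::plus<long long>());

      std::printf("sum = %lld\n", sum);
      return 0;
    }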
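Finally, the Tensor Core samples: a single-warp sketch using the CUDA WMMA API to multiply one 16x16 half-precision tile into a float accumulator. The kernel name tile_mma is illustrative; it assumes an sm_70-or-newer GPU and 16x16 row/column-major input tiles already resident in device memory.

    // Illustrative sketch: one warp multiplies a 16x16x16 half tile on Tensor Cores.
    #include <cuda_fp16.h>
    #include <mma.h>

    using namespace nvcuda;

    __global__ void tile_mma(const half* a, const half* b, float* c) {
      // Fragments hold each thread's share of the 16x16 tiles.
      wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
      wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
      wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

      wmma::fill_fragment(c_frag, 0.0f);

      // Cooperative loads across the warp; 16 is the leading dimension of each tile.
      wmma::load_matrix_sync(a_frag, a, 16);
      wmma::load_matrix_sync(b_frag, b, 16);

      // c_frag = a_frag * b_frag + c_frag, executed on Tensor Cores.
      wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);

      wmma::store_matrix_sync(c, c_frag, 16, wmma::mem_row_major);
    }

    // Launch with a single warp, e.g. tile_mma<<<1, 32>>>(d_a, d_b, d_c);
    // compile for sm_70 or newer: nvcc -arch=sm_70 ...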