Module: cuda
Version: 9.1.85.1
Description

CUDA is a parallel computing platform and application programming interface (API) model created by Nvidia. It allows software developers to use a CUDA-enabled graphics processing unit (GPU) for general-purpose processing, an approach termed GPGPU (general-purpose computing on graphics processing units). The CUDA platform is a software layer that gives direct access to the GPU's virtual instruction set and parallel computational elements, for the execution of compute kernels.
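
The last point is easiest to see with a tiny compute kernel. The sketch below uses PyCUDA purely as an illustration; PyCUDA is an assumption here and is not part of the stacks listed on this page. It shows how a kernel written in CUDA C is compiled and launched over a grid of GPU threads.

    # A minimal sketch of a CUDA compute kernel, launched from Python via PyCUDA.
    # PyCUDA is assumed for illustration only; it is not part of the stacks below.
    import numpy as np
    import pycuda.autoinit                      # creates a CUDA context on the first GPU
    import pycuda.driver as cuda
    from pycuda.compiler import SourceModule

    # The kernel itself is plain CUDA C: each thread adds one pair of elements.
    mod = SourceModule("""
    __global__ void vector_add(const float *a, const float *b, float *out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = a[i] + b[i];
    }
    """)
    vector_add = mod.get_function("vector_add")

    n = 1 << 20
    a = np.random.randn(n).astype(np.float32)
    b = np.random.randn(n).astype(np.float32)
    out = np.empty_like(a)

    # Launch enough 256-thread blocks to cover all n elements.
    threads = 256
    blocks = (n + threads - 1) // threads
    vector_add(cuda.In(a), cuda.In(b), cuda.Out(out), np.int32(n),
               block=(threads, 1, 1), grid=(blocks, 1))

    assert np.allclose(out, a + b)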

Role: Machine learning education
Description: Constructs the runtime environment for machine learning and deep learning tutorials and courses.

Role: Machine learning
Description: A pre-configured and fully integrated software stack with TensorFlow, an open-source software library for machine learning, and the Python programming language. It provides a stable and tested execution environment for training, inference, or running as an API service. The stack can be easily integrated into continuous integration and deployment workflows. It is designed for short- and long-running high-performance tasks and optimized for running on NVIDIA GPUs. A minimal GPU check for these stacks is sketched after the list below.
Stacks:
  tensorflow:1.6.0, python:2.7.14, cuda:9.1.85.1, cudnn:7.0.5, cuda_only-nvidia_drivers:390.25, development_preset:1
  tensorflow:1.6.0, python:3.6.3, cuda:9.1.85.1, cudnn:7.0.5, cuda_only-nvidia_drivers:390.25, development_preset:1
  tensorflow:1.5.0, python:3.6.3, cuda:9.1.85.1, cudnn:7.0.5, cuda_only-nvidia_drivers:390.25, development_preset:1
  tensorflow:1.5.0, python:2.7.14, cuda:9.1.85.1, cudnn:7.0.5, cuda_only-nvidia_drivers:390.25, development_preset:1
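
A minimal sketch, assuming the TensorFlow 1.5/1.6-era API shipped in the stacks above, of how to confirm that the framework was built against CUDA and can place work on the GPU:

    # Verify that TensorFlow 1.x in one of the stacks above can see the CUDA GPU.
    import tensorflow as tf

    print(tf.VERSION)                      # e.g. "1.6.0"
    print(tf.test.is_built_with_cuda())    # True if this build links against CUDA
    print(tf.test.is_gpu_available())      # True if a CUDA device is usable

    # Run a small matrix multiplication explicitly on the first GPU.
    with tf.device("/gpu:0"):
        a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
        b = tf.constant([[5.0, 6.0], [7.0, 8.0]])
        product = tf.matmul(a, b)

    with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
        print(sess.run(product))           # [[19. 22.] [43. 50.]]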

Role: Machine learning
Description: A pre-configured and fully integrated software stack with PyTorch, an open-source machine learning library, and the Python programming language. It provides a stable and tested execution environment for training, inference, or running as an API service. The stack can be easily integrated into continuous integration and deployment workflows. It is designed for short- and long-running high-performance tasks and optimized for running on NVIDIA GPUs.
Stack:
  pytorch:0.3.0, python:3.6.3, cuda:9.1.85.1, cudnn:7.0.5, cuda_only-nvidia_drivers:390.25, development_preset:1
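
A minimal sketch, assuming the PyTorch 0.3-era API shipped in this stack, of how to confirm that the CUDA device is usable and run a computation on it:

    # Verify that PyTorch in the stack above can use the CUDA device.
    import torch

    print(torch.__version__)            # e.g. "0.3.0"
    print(torch.cuda.is_available())    # True if the CUDA runtime and driver work

    if torch.cuda.is_available():
        x = torch.randn(1024, 1024).cuda()   # move a tensor to the GPU
        y = torch.randn(1024, 1024).cuda()
        z = torch.mm(x, y)                   # matrix multiply on the GPU
        print(z.is_cuda, z.size())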