Role
Appliances
A pre-configured and fully integrated software stack with TensorFlow, an open source software library for machine learning, and the Python programming language. It provides a stable and tested execution environment for training, inference, or running as an API service. The stack can be easily integrated into continuous integration and deployment workflows. It is designed for both short- and long-running high-performance tasks and is optimized for running on NVIDIA GPUs.
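As a quick sanity check on any of the TensorFlow appliances, a minimal sketch such as the one below lists the devices TensorFlow can see and runs a trivial op on the GPU. It assumes a TensorFlow 1.x image; device listing and session handling differ in later versions.

```python
# Minimal sketch: verify the TensorFlow stack can see and use the NVIDIA GPU.
# Assumes a TensorFlow 1.x image; the API differs in TensorFlow 2.x.
import tensorflow as tf
from tensorflow.python.client import device_lib

# A GPU entry in this list confirms CUDA/cuDNN are wired up correctly.
print([d.name for d in device_lib.list_local_devices()])

# Run a trivial op pinned to the first GPU.
with tf.device('/gpu:0'):
    a = tf.constant([1.0, 2.0, 3.0])
    b = tf.constant([4.0, 5.0, 6.0])
    c = a + b

with tf.Session() as sess:
    print(sess.run(c))
```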
A pre-configured and fully integrated software stack with PyTorch, an open source machine learning library, and the Python programming language. It provides a stable and tested execution environment for training, inference, or running as an API service. The stack can be easily integrated into continuous integration and deployment workflows. It is designed for both short- and long-running high-performance tasks and is optimized for running on NVIDIA GPUs.
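For the PyTorch appliances, a similarly minimal sketch confirms that the GPU is reachable; it assumes nothing beyond a CUDA-enabled PyTorch build being present in the image.

```python
# Minimal sketch: confirm the PyTorch stack can reach the NVIDIA GPU
# and run a small computation on it.
import torch

print(torch.__version__)
print(torch.cuda.is_available())      # True when the driver and CUDA runtime are usable

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
x = torch.randn(3, 3, device=device)  # allocate directly on the GPU when available
y = x @ x.t()                         # a small matmul exercises the CUDA libraries
print(y.device, y.shape)
```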
A pre-configured and fully integrated software stack with PyTorch, an open source machine learning library, and Python 2.7. It provides a stable and tested execution environment for training, inference, or running as an API service. The stack can be easily integrated into continuous integration and deployment workflows. It is designed for both short- and long-running high-performance tasks and is optimized for running on NVIDIA GPUs.
A pre-configured and fully integrated software stack with TensorFlow, an open source software library for machine learning, and Python 3.6. It provides a stable and tested execution environment for training, inference, or running as an API service. The stack can be easily integrated into continuous integration and deployment workflows. It is designed for both short- and long-running high-performance tasks and is optimized for running on NVIDIA GPUs.
A pre-configured and fully integrated software stack with TensorFlow, an open source software library for machine learning, and Python 2.7. It provides a stable and tested execution environment for training, inference, or running as an API service. The stack can be easily integrated into continuous integration and deployment workflows. It is designed for both short- and long-running high-performance tasks and is optimized for running on NVIDIA GPUs.
A pre-configured and fully integrated software stack with PyTorch, an open source machine learning library, and Python 3.6. It provides a stable and tested execution environment for training, inference, or running as an API service. The stack can be easily integrated into continuous integration and deployment workflows. It is designed for both short- and long-running high-performance tasks and is optimized for running on NVIDIA GPUs.
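All of the appliances above mention running as an API service. One hypothetical way to do that with the PyTorch stack is to wrap a model in a small Flask endpoint, as in the sketch below; Flask and the placeholder model are assumptions here, since the images only guarantee the framework and the Python runtime.

```python
# Hypothetical sketch of the "API service" use case: a tiny Flask endpoint
# serving predictions from a placeholder PyTorch model. Flask is assumed to
# be installed separately; any web framework would do.
import torch
from flask import Flask, jsonify, request

app = Flask(__name__)

# Placeholder model; a real service would load trained weights here.
model = torch.nn.Linear(4, 2)
model.eval()

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]   # e.g. [0.1, 0.2, 0.3, 0.4]
    with torch.no_grad():
        scores = model(torch.tensor(features, dtype=torch.float32))
    return jsonify({"scores": scores.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```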
The pre-configured and ready-to-use runtime environment for Udacity's Deep Learning Nanodegree Foundation program (nd101). It includes Python 3.5, TensorFlow 1.0.0, and tflearn 0.3.0. The stack also includes CUDA and cuDNN and is optimized for running on NVIDIA GPUs.
The pre-configured and ready-to-use runtime environment for Udacity's Machine Learning Engineer Nanodegree program (nd009t). It includes Python 3.5, TensorFlow 1.0.0, and Keras 2.0.2. The stack also includes CUDA and cuDNN and is optimized for running on NVIDIA GPUs.
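A quick way to confirm that one of the Udacity images matches the versions listed above is to print them from inside the environment. This is only a sketch: the imports are wrapped because only one of tflearn (nd101) or Keras (nd009t) is expected to be present per image.

```python
# Minimal sketch: report the versions shipped in the Udacity images so they
# can be checked against the nanodegree requirements listed above.
import sys
import tensorflow as tf

print("Python     :", sys.version.split()[0])  # expected 3.5.x
print("TensorFlow :", tf.__version__)          # expected 1.0.0

try:
    import tflearn                             # nd101 image
    print("tflearn    :", tflearn.__version__)
except ImportError:
    pass

try:
    import keras                               # nd009t image
    print("Keras      :", keras.__version__)
except ImportError:
    pass
```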