Role
Appliances
A pre-configured, ready-to-use runtime environment for the CS231n course (Convolutional Neural Networks for Visual Recognition, Stanford University, Spring 2017). It includes the original (older) versions of Python, TensorFlow, and PyTorch used in the course. The stack also includes CUDA and cuDNN and is optimized for running on NVIDIA GPUs.
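As a quick sanity check before working through the assignments, a sketch like the one below (course-era PyTorch and TensorFlow 1.x APIs assumed) confirms that both frameworks can see the CUDA device:

```python
# A minimal sketch (course-era PyTorch and TensorFlow 1.x APIs assumed):
# confirm that both frameworks can see the CUDA device.
from __future__ import print_function

import torch
from tensorflow.python.client import device_lib

print("PyTorch CUDA available:", torch.cuda.is_available())
print("TensorFlow devices:", [d.name for d in device_lib.list_local_devices()])
```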
A pre-configured, ready-to-use runtime environment for the Open Machine Learning Course, 2018. It includes Python 3.6, TensorFlow 1.4, Keras 2, XGBoost, LightGBM, and Vowpal Wabbit. The stack also includes CUDA and cuDNN and is optimized for running on NVIDIA GPUs.
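As an illustration, the sketch below (toy data; scikit-learn-style wrappers assumed to be available in this environment) verifies that the gradient-boosting libraries in this stack import correctly and can fit a small model:

```python
# A minimal sketch with toy data: check that XGBoost and LightGBM
# are importable and can fit a tiny model in this environment.
import numpy as np
import xgboost as xgb
import lightgbm as lgb

X = np.random.rand(200, 5)
y = (X[:, 0] > 0.5).astype(int)

# scikit-learn-style wrappers (assumed available in this stack)
xgb_model = xgb.XGBClassifier(n_estimators=10).fit(X, y)
lgb_model = lgb.LGBMClassifier(n_estimators=10).fit(X, y)

print("XGBoost train accuracy:", xgb_model.score(X, y))
print("LightGBM train accuracy:", lgb_model.score(X, y))
```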
A pre-configured and fully integrated software stack with TensorFlow, an open-source software library for machine learning, Python 2.7, and Jupyter Notebook, a browser-based interactive notebook for programming, mathematics, and data science. The stack is designed for research and development tasks and is optimized for running on NVIDIA GPUs.
A pre-configured and fully integrated software stack with TensorFlow, an open-source software library for machine learning, Python 3.6, and Jupyter Notebook, a browser-based interactive notebook for programming, mathematics, and data science. The stack is designed for research and development tasks and is optimized for running on NVIDIA GPUs.
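For either Python variant of this stack, a quick notebook-cell check (TensorFlow 1.x graph API assumed) runs a small matrix multiplication with device-placement logging to confirm the GPU is being used; a minimal sketch:

```python
# A minimal notebook-cell sketch (TensorFlow 1.x graph API assumed): run a small
# matrix multiplication with device-placement logging to confirm the GPU is used.
import tensorflow as tf

with tf.device("/gpu:0"):
    a = tf.random_normal([1000, 1000])
    b = tf.random_normal([1000, 1000])
    c = tf.matmul(a, b)

# allow_soft_placement lets the graph fall back to CPU if no GPU is present.
config = tf.ConfigProto(log_device_placement=True, allow_soft_placement=True)
with tf.Session(config=config) as sess:
    print(sess.run(tf.reduce_sum(c)))
```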
A pre-configured and fully integrated software stack with Theano, a numerical computation library for Python, and Python 3.6. It provides a stable and tested execution environment for training, inference, or running as an API service. The stack can be easily integrated into continuous integration and deployment workflows. It is designed for short- and long-running high-performance tasks and is optimized for running on NVIDIA GPUs.
A pre-configured and fully integrated software stack with Theano, a numerical computation library for Python, and Python 2.7. It provides a stable and tested execution environment for training, inference, or running as an API service. The stack can be easily integrated into continuous integration and deployment workflows. It is designed for short- and long-running high-performance tasks and is optimized for running on NVIDIA GPUs.
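For either Theano appliance, a minimal sketch using the standard Theano API (the GPU backend is selected through Theano's own configuration, e.g. its device flag) compiles and runs a small function:

```python
# A minimal sketch using the standard Theano API: compile and run a small function.
# The GPU backend is chosen via Theano's configuration (device flag), not in code.
import numpy as np
import theano
import theano.tensor as T

x = T.matrix("x")
y = T.matrix("y")
dot = theano.function([x, y], T.dot(x, y))

a = np.random.rand(4, 3).astype(theano.config.floatX)
b = np.random.rand(3, 2).astype(theano.config.floatX)
print(dot(a, b).shape)  # (4, 2)
```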
A pre-configured and fully integrated software stack with TensorFlow, an open-source software library for machine learning, and Python 3.6. It provides a stable and tested execution environment for training, inference, or running as an API service. The stack can be easily integrated into continuous integration and deployment workflows. It is designed for short- and long-running high-performance tasks and is optimized for running on NVIDIA GPUs.
A pre-configured and fully integrated software stack with TensorFlow, an open-source software library for machine learning, and Python 2.7. It provides a stable and tested execution environment for training, inference, or running as an API service. The stack can be easily integrated into continuous integration and deployment workflows. It is designed for short- and long-running high-performance tasks and is optimized for running on NVIDIA GPUs.
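As an illustration of the train-then-serve workflow these TensorFlow appliances target, the sketch below (TensorFlow 1.x graph API assumed; the checkpoint path is hypothetical) fits a toy linear model and saves a checkpoint that a separate inference or API process could restore later:

```python
# A minimal train-then-serve sketch (TensorFlow 1.x graph API assumed):
# fit a toy linear model and save a checkpoint for later inference.
import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 1])
y = tf.placeholder(tf.float32, [None, 1])
w = tf.Variable(tf.zeros([1, 1]))
b = tf.Variable(tf.zeros([1]))
pred = tf.matmul(x, w) + b
loss = tf.reduce_mean(tf.square(pred - y))
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

data_x = np.random.rand(64, 1).astype(np.float32)
data_y = 3.0 * data_x + 1.0

saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(200):
        sess.run(train_op, feed_dict={x: data_x, y: data_y})
    saver.save(sess, "/tmp/model.ckpt")  # hypothetical checkpoint path
```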
A pre-configured and fully integrated software stack with MXNet, an open-source deep learning framework, and Python 3.6. It provides a stable and tested execution environment for training, inference, or running as an API service. The stack can be easily integrated into continuous integration and deployment workflows. It is designed for short- and long-running high-performance tasks and is optimized for running on NVIDIA GPUs.
A pre-configured and fully integrated software stack with MXNet, an open-source deep learning framework, and Python 2.7. It provides a stable and tested execution environment for training, inference, or running as an API service. The stack can be easily integrated into continuous integration and deployment workflows. It is designed for short- and long-running high-performance tasks and is optimized for running on NVIDIA GPUs.
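For the MXNet appliances, a minimal GPU check (MXNet 1.x NDArray API assumed) runs a small matrix multiplication on the GPU context to confirm the CUDA build is active:

```python
# A minimal GPU check (MXNet 1.x NDArray API assumed): run a small matrix
# multiplication on the GPU context to confirm the CUDA build is active.
from __future__ import print_function

import mxnet as mx

ctx = mx.gpu(0)
a = mx.nd.random.uniform(shape=(512, 512), ctx=ctx)
b = mx.nd.random.uniform(shape=(512, 512), ctx=ctx)
c = mx.nd.dot(a, b)
print(c.shape, c.context)
```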