User documentation
Quickstart Guide
Run a module locally
Run the container
Access the module via API
Train a module on DEEP Pilot Infrastructure
Develop and share your own module
Overview
DEEP architecture
The marketplace
The API
The data storage resources
The dashboards
User roles and workflows
The basic user
The intermediate user
The advanced user
Data Science template
<your_project> repo
<DEEP-OC-your_project>
Step-by-step guide
DEEPaaS API
Integrate your model with the API
Running the API
DEEP Dashboard
Selecting the modules
Making a deployment
Managing the deployments
HowTo’s
Develop a model
1. Prepare DEEP Data Science environment
2. Improve the initial code of the model
3. Connect with a remote storage
4. Create a python installable package
5. Create a docker container for your model
Train a model locally
1. Choose your module
2. Store your data
3. Train the model
Train a model remotely
1. Choose a model
2. Upload your files to Nextcloud
3. Deploy with the Training Dashboard
4. Go to the API, train the model
5. Testing the training
Perform inference locally
1. Choose your module
2. Launch the API and predict
Add module to the DEEP marketplace
Creating the Github repositories
Making the Pull Request (PR)
Use rclone
Installation of rclone in a Docker image
Nextcloud configuration for rclone
Creating rclone.conf for your local host
Example code on usage of rclone from Python
Install and configure oidc-agent
Deploy with CLI via Orchent
Prepare your TOSCA file (optional)
Orchent submission script
Submit your deployment
Video demos
Modules
Technical documentation
Mesos
Introduction
Testbed Setup
Nodes characteristics
Tested Components Versions
Prepare the agent (slave) node
Verify the nvidia-driver installation
Mesos slave configuration
Testing GPU support in Mesos
Testing Chronos patch for GPU support
Patch compilation
Testing
Testing GPU support in Marathon
Running a TensorFlow Docker container
References
Enabling open-id connect authentication
Kubernetes
DEEP: Installing and testing GPU Node in Kubernetes - CentOS7
Introduction
Cluster Status
Tests
Access PODs from outside the cluster
References
Installing GPU node and adding it to Kubernetes cluster
Step-by-step guide
OpenStack nova-lxd
OpenStack nova-lxd installation via Ansible
Comparison between OpenStack Ansible and Juju/conjure-up
Installing an All-in-One OpenStack site with nova-lxd via OpenStack Ansible
Notes:
References
Deploying OpenStack environment with nova-lxd via DevStack
Installation steps
Handy commands:
Notes:
References
Installing nova-lxd with Juju
Installation
Notes
OpenStack nova-lxd testing configuration
Testing of nova-lxd with different software configurations
Working configuration
uDocker
uDocker new GPU implementation
Test and evaluation of new implementation
References
Miscellaneous
GPU sharing with MPS
How to use MPS service
Testing environment
Test 1. Test with CUDA native sample nbody, without nvidia-cuda-mps service
Test 2. Test with CUDA native sample nbody, with nvidia-cuda-mps service
Test 3. Test with Docker using mariojmdavid/tensorflow-1.5.0-gpu image, without nvidia-cuda-mps service
Test 4. Test with Docker using mariojmdavid/tensorflow-1.5.0-gpu image, with nvidia-cuda-mps service
Test 5. Test with Docker using vykozlov/tf-benchmarks:181004-tf180-gpu image, without and with nvidia-cuda-mps service
Identified reasons why TensorFlow does not work correctly with MPS
Final remarks:
References
HowTo’s
Develop a model
Train a model locally
Train a model remotely
Perform inference locally
Add module to the DEEP marketplace
Use rclone
Install and configure oidc-agent
Deploy with CLI via Orchent
Video demos