DEEP-Hybrid-DataCloud
User documentation
Quickstart Guide
Download module from the marketplace
Run a module locally
Docker Hub way (easy)
GitHub way (pro)
Run a module on DEEP Pilot Infrastructure
Integrate your model with the API
Overview
Architecture overview
The marketplace
The API
The storage resources
Our different user roles
The basic user
The intermediate user
The advanced user
DEEP Data Science template
your_project repo
DEEP-OC-your_project
DEEPaaS API
Integrate your model with the API
Methods
HowTo’s
Develop a model
1. Prepare DEEP DS environment
2. Improve the initial code of the model
Train a model locally
1. Get Docker
2. Search for a model in the marketplace
3. Get the model
4. Upload your data to storage resources
5. Train the model
6. Test the training
Train a model remotely
1. Choose a model
2. Prerequisites
3. Upload your files to Nextcloud
4. Orchent submission script
5. The rclone configuration file
6. Prepare your TOSCA file
7. Create the orchent deployment
8. Go to the API, train the model
9. Test the training
Test a service locally
1. Get Docker
2. Search for a model in the marketplace
3. Get the model
4. Run the model
5. Go to the API, get the results
Use rclone
Installation of rclone in a Docker image (pro)
Nextcloud configuration for rclone
Creating rclone.conf
Example code on usage of rclone from Python
Install and configure oidc-agent
1. Installing oidc-agent
2. Configuring oidc-agent with DEEP-IAM
Video demos
Modules
Toy example: dog’s breed detection
Description
Local Workflow
DEEP Pilot Infrastructure submission
Examples
DEEP Open Catalogue: Image classification on TensorFlow
Workflow
Launching the full DEEPaaS API
DEEP Open Catalogue: Massive Online Data Streams
Description
Workflow
Launching the full DEEPaaS API
Technical documentation
Mesos
Introduction
Testbed Setup
Nodes characteristics
Tested Component Versions
Prepare the agent (slave) node
Verify the nvidia-driver installation
Mesos slave configuration
Testing GPU support in Mesos
Testing Chronos patch for GPU support
Patch compilation
Testing
Testing GPU support in Marathon
Running a TensorFlow Docker container
References
Enabling OpenID Connect authentication
Kubernetes
DEEP: Installing and testing a GPU node in Kubernetes on CentOS 7
Introduction
Cluster Status
Tests
Access Pods from outside the cluster
References
Installing a GPU node and adding it to a Kubernetes cluster
Step-by-step guide
OpenStack nova-lxd
OpenStack nova-lxd installation via Ansible
Comparison between OpenStack-Ansible and Juju/conjure-up
Installing an All-in-One OpenStack site with nova-lxd via OpenStack-Ansible
Notes
References
Deploying OpenStack environment with nova-lxd via DevStack
Installation steps
Handy commands
Notes
References
Installing nova-lxd with Juju
Installation
Notes
OpenStack nova-lxd testing configuration
Testing of nova-lxd with different software configurations
Working configuration
uDocker
uDocker new GPU implementation
Test and evaluation of new implementation
References
Miscellaneous
GPU sharing with MPS
How to use MPS service
Testing environment
Test 1. CUDA native sample nbody, without the nvidia-cuda-mps service
Test 2. CUDA native sample nbody, with the nvidia-cuda-mps service
Test 3. Docker with the mariojmdavid/tensorflow-1.5.0-gpu image, without the nvidia-cuda-mps service
Test 4. Docker with the mariojmdavid/tensorflow-1.5.0-gpu image, with the nvidia-cuda-mps service
Test 5. Docker with the vykozlov/tf-benchmarks:181004-tf180-gpu image, without and with the nvidia-cuda-mps service
Identified reasons why TensorFlow does not work correctly with MPS
Final remarks
References