User documentation
Quickstart
Run a module locally
Run the container
Access the module via API
Train a module on DEEP Dashboard
Develop and share your own module
Overview
DEEP architecture
The Marketplace
The DEEPaaS API
The data storage resources
The Dashboard
User roles and workflows
The basic user
The intermediate user
The advanced user
DEEP Modules
CI/CD pipeline
DEEP Modules Template
Create your project based on the template
Project structure
DEEPaaS API
Integrate your model with the API
Running the API
DEEP Dashboard
Selecting the modules
Making a deployment
Managing the deployments
How-to’s
Use a model (basic user)
Perform inference locally
Train a model (intermediate user)
Train a model locally
Train a model remotely
Use rclone
Develop a model (advanced user)
Develop a model
Others
Useful Machine Learning resources
Tutorials
Datasets
Models
Video demos
Technical documentation
How-to’s (developers)
Module integration workflow for external (non-deephdc) users
1. Name check
2. Fork creation
3. Keep the forks updated
4. Update <branchname> in .gitmodules
Develop Dashboard
Configure oidc-agent
Deployment with CLI (orchent)
Prepare your TOSCA file (optional)
Orchent submission script
Submit your deployment
Using the OpenStack API with OIDC tokens
Create file for OIDC
Using the OpenStack CLI
Mesos
Introduction
Testbed Setup
Nodes characteristics
Tested Components Versions
Prepare the agent (slave) node
Verify the nvidia-driver installation
Mesos slave configuration
Testing GPU support in Mesos
Testing Chronos patch for GPU support
Patch compilation
Testing
Testing GPU support in Marathon
Running a TensorFlow Docker container
References
Enabling open-id connect authentication
Kubernetes
DEEP: Installing and testing a GPU node in Kubernetes - CentOS 7
Introduction
Cluster Status
Tests
Access Pods from outside the cluster
References
Installing GPU node and adding it to Kubernetes cluster
Step-by-step guide
OpenStack nova-lxd
OpenStack nova-lxd installation via Ansible
Comparison between OpenStack Ansible and Juju/conjure-up
Installing an All-in-One OpenStack site with nova-lxd via OpenStack Ansible
Notes:
References
Deploying OpenStack environment with nova-lxd via DevStack
Installation steps
Handy commands:
Notes:
References
Installing nova-lxd with Juju
Installation
Notes
OpenStack nova-lxd testing configuration
Testing of nova-lxd with different software configurations
Working configuration
uDocker
uDocker new GPU implementation
Test and evaluation of new implementation
References
Miscellaneous
GPU sharing with MPS
How to use the MPS service
Testing environment
Test 1: CUDA native sample nbody, without the nvidia-cuda-mps service
Test 2: CUDA native sample nbody, with the nvidia-cuda-mps service
Test 3: Docker with the mariojmdavid/tensorflow-1.5.0-gpu image, without the nvidia-cuda-mps service
Test 4: Docker with the mariojmdavid/tensorflow-1.5.0-gpu image, with the nvidia-cuda-mps service
Test 5: Docker with the vykozlov/tf-benchmarks:181004-tf180-gpu image, without and with the nvidia-cuda-mps service
Identified reasons why TensorFlow does not work correctly with MPS
Final remarks:
References
DEEP-Hybrid-DataCloud