Quickstart Guide

  1. Go to the DEEP Marketplace.
  2. Browse the available modules.
  3. Find the module you are interested in and get it.

Let’s explore what we can do with it!

Run a module locally

Requirements

  • docker

  • If GPU support is needed:

    • install nvidia-docker along with docker, OR
    • install udocker instead of docker. udocker is entirely a user-space tool, i.e. it can be installed and used without any root privileges, e.g. in a user environment on an HPC cluster.
N.B.: Starting from version 19.03, docker supports NVIDIA GPUs natively, i.e. nvidia-docker is no longer needed (see the Release notes and moby/moby#38828).
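
You can check your docker version to see which of the options above applies and, on docker 19.03 or later, verify that the GPU is visible from a container. A minimal check, assuming an NVIDIA GPU and the NVIDIA container runtime installed (the CUDA image tag below is only an example):

$ docker --version
$ docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi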

Run the container

Run the Docker container directly from Docker Hub:

Via docker command:

$ docker run -ti -p 5000:5000 -p 6006:6006 deephdc/deep-oc-module_of_interest

Via udocker:

$ udocker run -p 5000:5000 -p 6006:6006 deephdc/deep-oc-module_of_interest

With GPU support:

$ nvidia-docker run -ti -p 5000:5000 -p 6006:6006 deephdc/deep-oc-module_of_interest

If your docker version is 19.03 or later:

$ docker run -ti --gpus all -p 5000:5000 -p 6006:6006 deephdc/deep-oc-module_of_interest

Via udocker with GPU support:

$ udocker pull deephdc/deep-oc-module_of_interest
$ udocker create --name=module_of_interest deephdc/deep-oc-module_of_interest
$ udocker setup --nvidia module_of_interest
$ udocker run -p 5000:5000 -p 6006:6006 module_of_interest
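
If the module needs access to data on your machine, you can additionally mount a host directory into the container with -v. A minimal sketch (the container path /srv/data is only an illustration; check the module's documentation for the paths it actually expects):

$ docker run -ti -p 5000:5000 -p 6006:6006 -v $HOME/data:/srv/data deephdc/deep-oc-module_of_interest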

Access the module via API

To access the downloaded module via the DEEPaaS API, direct your web browser to http://0.0.0.0:5000/ui. If you are training a model, you can go to http://0.0.0.0:6006 to monitor the training progress (if such monitoring is available for the model).

For more details on particular models, please read the module’s documentation.
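
You can also call the module's REST endpoints directly, e.g. with curl. A minimal sketch, assuming the DEEPaaS V2 route layout (the model name and the predict parameter, here data, vary per module; check the interactive docs at http://0.0.0.0:5000/ui for the exact routes and fields):

$ curl http://0.0.0.0:5000/v2/models/
$ curl -X POST "http://0.0.0.0:5000/v2/models/module_of_interest/predict/" -F "data=@input_file"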

[Figure: the DEEPaaS API web interface (deepaas2.png)]

Related HowTos:

Train a module on DEEP Pilot Infrastructure

Requirements

Sometimes running a module locally is not enough: you may need more powerful computing resources (such as GPUs) to train a module faster. In that case you can request DEEP-IAM registration and then use the DEEP Pilot Infrastructure to deploy the module. For that you can use the DEEP Dashboard, where you select the module you want to run and the computing resources you need. Once your module is deployed, you will be able to train it and view the training history:

[Figure: training history in the DEEP Dashboard (dashboard-history2.png)]

Related HowTos:

Develop and share your own module

The best way to develop a module is to start from the DEEP Data Science template. It creates the project structure and the files necessary for an easy integration with the DEEPaaS API. The DEEPaaS API enables user-friendly interaction with the underlying Deep Learning modules and can be used both for training models and for doing inference with the services. The integration with the API is based on the definition of entrypoints to the model and the creation of standard API methods (e.g. train, predict, etc.).
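
While developing, you can test the integration locally by installing the DEEPaaS API in your environment and launching it against your module. A minimal sketch, assuming your module is installed in the same environment (see the template's documentation for the recommended workflow):

$ pip install deepaas
$ deepaas-run --listen-ip 0.0.0.0

You can then open http://0.0.0.0:5000/ui and check that your train and predict methods are exposed as API endpoints.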

Related HowTos: