Quickstart Guide

Download module from the marketplace

  1. Go to the DEEP Open Catalog
  2. Browse the available modules
  3. Find the module you are interested in and get it either from Docker Hub (easy) or GitHub (pro)

Run a module locally

Docker Hub way (easy)

Prerequisites

  • docker
  • If you want GPU support, you can install nvidia-docker along with docker, or install udocker instead of docker. udocker is entirely a user tool, i.e. it can be installed and used without any root privileges, e.g. in a user environment on an HPC cluster.
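
For instance, a user-space installation of udocker might look like this (a sketch assuming pip is available; check the udocker documentation for the currently recommended method):

$ pip install --user udocker
$ udocker install
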
  1. Run the container

To run the Docker container directly from Docker Hub and start using the API simply run the following:

Via docker command:

$ docker run -ti -p 5000:5000 -p 6006:6006 deephdc/deep-oc-module_of_interest

With GPU support:

$ nvidia-docker run -ti -p 5000:5000 -p 6006:6006 deephdc/deep-oc-module_of_interest

Via udocker:

$ udocker run -p 5000:5000 -p 6006:6006 deephdc/deep-oc-module_of_interest

Via udocker with GPU support:

$ udocker pull deephdc/deep-oc-module_of_interest
$ udocker create --name=module_of_interest deephdc/deep-oc-module_of_interest
$ udocker setup --nvidia module_of_interest
$ udocker run -p 5000:5000 -p 6006:6006 module_of_interest
  2. Access the module via API

To access the downloaded module via API, direct your web browser to http://127.0.0.1:5000. If you are training a model, you can go to http://127.0.0.1:6006 to monitor the training progress (if such monitoring is available for the model).
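
You can also query the API from the command line. A minimal sketch (the exact endpoint paths depend on the DEEPaaS version, so check the interactive documentation served at the root URL):

$ curl http://127.0.0.1:5000/v2/models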

For more details on a particular model, please read the model's documentation.

GitHub way (pro)

Prerequisites

Using the GitHub way allows you to modify the Dockerfile, for example to include additional packages (see the Dockerfile sketch after the steps below).

  1. Clone the DEEP-OC-module_of_interest GitHub repository:

    $ git clone https://github.com/indigo-dc/DEEP-OC-module_of_interest
    
  2. Build the container:

    $ cd DEEP-OC-module_of_interest
    $ docker build -t deephdc/deep-oc-module_of_interest .
    
  3. Run the container and access the module via API as described above
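
As an illustration of step 2, a hypothetical addition to the cloned Dockerfile could install an extra system package before rebuilding the image (assuming a Debian-based base image; the package is chosen for illustration only):

RUN apt-get update && \
    apt-get install -y --no-install-recommends graphviz && \
    rm -rf /var/lib/apt/lists/*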

Note

One can also clone the source code of the module, usually located in the ‘module_of_interest’ repository.
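
For instance (the exact organization and repository name vary from module to module):

$ git clone https://github.com/indigo-dc/module_of_interest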

Run a module on DEEP Pilot Infrastructure

Prerequisites

If you are going to use DEEP-Nextcloud for storing your data, you also have to:

In order to submit your job to the DEEP Pilot Infrastructure, you have to create a TOSCA YAML file.
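
A minimal skeleton of such a file might look like the following (a sketch only; take the actual node types and properties from the ready-made templates in the indigo-dc tosca-templates repository):

tosca_definitions_version: tosca_simple_yaml_1_0

imports:
  # assumption: the DEEP templates import the INDIGO custom node types
  - indigo_custom_types: https://raw.githubusercontent.com/indigo-dc/tosca-types/master/custom_types.yaml

description: Deploy a DEEP-OC module on the DEEP Pilot Infrastructure

topology_template:
  node_templates:
    # placeholder node; copy the exact "type" and "properties" from the
    # template you start from
    deep_module:
      type: tosca.nodes.indigo.Container.Application.Docker.Marathon
      artifacts:
        image:
          file: deephdc/deep-oc-module_of_interest
          type: tosca.artifacts.Deployment.Image.Container.Docker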

The submission is then done via:

$ orchent depcreate ./topology-orchent.yml '{}'

If you also want to access DEEP-Nextcloud from your container via rclone, you can create the following bash script for job submission:

#!/bin/bash

orchent depcreate ./topology-orchent.yml '{ "rclone_url": "https://nc.deep-hybrid-datacloud.eu/remote.php/webdav/",
                                            "rclone_vendor": "nextcloud",
                                            "rclone_user": "<your_nextcloud_username>",
                                            "rclone_pass": "<your_nextcloud_password>" }'

To check the status of your job:

$ orchent depshow <Deployment ID>

Integrate your model with the API

[Figure: DEEPaaS API overview]

The DEEPaaS API enables a user-friendly interaction with the underlying Deep Learning modules and can be used both for training models and for doing inference with the services. Check the full API guide for detailed information.

The integration with the API is based on the definition of entrypoints to the model and the creation of standard API methods (e.g. train, predict, etc.). An easy way to integrate your model with the API and create the Dockerfiles for building the Docker image is to use our DEEP DS template when developing your model.
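
To give an idea of what such entrypoints look like, below is a minimal sketch of an entrypoint module (a hypothetical my_model/api.py; the names and signatures are illustrative only, so check the API guide and the template for the exact interface DEEPaaS expects):

# my_model/api.py - hypothetical module that DEEPaaS discovers through a
# setuptools entry point declared in the package metadata

def get_metadata():
    """Return basic information about the model to the API."""
    return {
        "name": "module_of_interest",
        "description": "Example model served through the DEEPaaS API",
    }

def predict(**kwargs):
    """Run inference with the arguments parsed by the API."""
    # load the trained model and compute a prediction here
    raise NotImplementedError

def train(**kwargs):
    """Launch a training run with the given arguments."""
    raise NotImplementedError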