diff --git a/additional-config/index.html b/additional-config/index.html
new file mode 100644
index 00000000..94a2db33
--- /dev/null
+++ b/additional-config/index.html
@@ -0,0 +1,1306 @@
    Additional configuration


To give the administrator a more personalized cluster configuration, the OSCAR manager searches for a ConfigMap on the cluster with the additional properties to apply. Since this is still a work in progress, the only property configurable at the moment is the origin of the container images. As seen in the following ConfigMap definition, you can set a list of "prefixes" that you consider secure repositories, so that images that do not come from one of them are restricted.

+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: additional-oscar-config
+  namespace: oscar-svc
+data:
+  config.yaml: |
+    images:
+      allowed_prefixes:
+      - ghcr.io
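If you manage the cluster directly, you can also create this ConfigMap by hand. A minimal sketch, assuming kubectl access to the cluster and that the manifest above is saved as additional-config.yaml (an illustrative file name):

kubectl apply -f additional-config.yaml
# verify that the OSCAR manager will find the expected ConfigMap
kubectl get configmap additional-oscar-config -n oscar-svc -o yaml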

    Additionally, this property can be added when creating an OSCAR cluster through the IM, which will automatically create the ConfigMap.


allowed-prefixes (screenshot: images/im-dashboard/im-additional-config.png)

diff --git a/exposed-services/index.html b/exposed-services/index.html
index 451d3a82..abd57e65 100644
--- a/exposed-services/index.html
+++ b/exposed-services/index.html
@@ -1262,7 +1304,6 @@ How to define an exposed OSCAR s

    Once the service is deployed, you can check if it was created correctly by making an HTTP request to the exposed endpoint:

    https://{oscar_endpoint}/system/services/{service_name}/exposed/{path_resource} 
    -
     

    Notice that if you get a 502 Bad Gateway error, it is most likely because the specified port on the service does not match the API port.

    Additional options can be defined in the "expose" section of the FDL (some previously mentioned), such as:

@@ -1335,7 +1376,7 @@ How to define an exposed OSCAR s
 expose:
   min_scale: 2
   max_scale: 10
-  port: 80
+  api_port: 80
   cpu_threshold: 50

In case you use the NGINX example above in your local OSCAR cluster, you will see the nginx welcome page in: http://localhost/system/services/nginx/exposed/.

diff --git a/fdl/index.html b/fdl/index.html
index 17065926..0feb1976 100644
--- a/fdl/index.html
+++ b/fdl/index.html
@@ -1466,7 +1508,7 @@

    Service

    vo
 string
-Virtual Organization (VO) in which the user creating the service is enrolled. Optional (default: "")
+Virtual Organization (VO) in which the user creating the service is enrolled. (Required for multitenancy)
 allowed_users
string array

diff --git a/images/im-dashboard/im-additional-config.png b/images/im-dashboard/im-additional-config.png
new file mode 100644
index 00000000..5bfa7df8
Binary files /dev/null and b/images/im-dashboard/im-additional-config.png differ
diff --git a/images/mount.png b/images/mount.png
new file mode 100644
index 00000000..bc635c36
Binary files /dev/null and b/images/mount.png differ
diff --git a/mount/index.html b/mount/index.html
new file mode 100644
index 00000000..03089350
--- /dev/null
+++ b/mount/index.html
@@ -0,0 +1,1305 @@

    Mounting external storage on service volumes


    This feature enables the mounting of a folder from a storage provider, such as MinIO or dCache, into the service container. As illustrated in the following diagram, the folder is placed inside the /mnt directory on the container volume, thereby making it accessible to the service. This functionality can be utilized with exposed services, such as those using a Jupyter Notebook, to make the content of the storage bucket accessible directly within the Notebook.


mount-diagram (diagram: images/mount.png)


As OSCAR internally holds the credentials of the default MinIO instance, if you want to use a different instance or another storage provider, you need to set those credentials in the service FDL. Currently, the storage providers supported by this functionality are:

- MinIO
- dCache (via WebDAV)

    Let's explore these with an FDL example:

+mount:
+  storage_provider: minio.default
+  path: /body-pose-detection-async

The example above means that OSCAR mounts the body-pose-detection-async bucket of the default MinIO instance inside the OSCAR service. So, the content of the body-pose-detection-async bucket will be found in the /mnt/body-pose-detection-async folder during the execution of the OSCAR service.
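To mount a folder from a provider other than the default MinIO, the FDL combines the mount block with the matching credentials under storage_providers. A sketch using a dCache instance via WebDAV (hostnames, names and credentials are placeholders):

functions:
  oscar:
  - oscar-cluster:
      name: my-service
      image: ghcr.io/example/my-image
      script: script.sh
      mount:
        storage_provider: webdav.dcache
        path: /my-folder

storage_providers:
  webdav:
    dcache:
      hostname: my_dcache.com
      login: my_username
      password: my_password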

diff --git a/search/search_index.json b/search/search_index.json
index bb98fb7a..8290a831 100644
--- a/search/search_index.json
+++ b/search/search_index.json
@@ -1 +1 @@
+{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Introduction","text":"

    OSCAR is an open-source platform to support the event-driven serverless computing model for data-processing applications. It can be automatically deployed on multi-Clouds, and even on low-powered devices, to create highly-parallel event-driven data-processing serverless applications along the computing continuum. These applications execute on customized runtime environments provided by Docker containers that run on elastic Kubernetes clusters. It is also integrated with the SCAR framework, which supports a High Throughput Computing Programming Model to create highly-parallel event-driven data-processing serverless applications that execute on customized runtime environments provided by Docker containers run on AWS Lambda and AWS Batch.

    "},{"location":"#concepts","title":"Concepts","text":""},{"location":"#rationale","title":"Rationale","text":"

    Users create OSCAR services to:

An admin user can deploy an OSCAR cluster on a Cloud platform so that other users belonging to a Virtual Organization (VO) can create OSCAR services. A VO is a group of people (e.g. scientists, researchers) with common interests and requirements, who need to work collaboratively and/or share resources (e.g. data, software, expertise, CPU, storage space) regardless of geographical location. OSCAR supports the VOs defined in EGI, which are listed in the 'Operations Portal'. EGI is Europe's largest federation of computing and storage resource providers united by a mission of delivering advanced computing and data analytics services for research and innovation.

    "},{"location":"#architecture-components","title":"Architecture & Components","text":"

    OSCAR runs on an elastic Kubernetes cluster that is deployed using:

    The following components are deployed inside the Kubernetes cluster in order to support the OSCAR platform:

    As external storage providers, the following services can be used:

    Note: All of the mentioned storage providers can be used as output, but only MinIO can be used as input.

    An OSCAR cluster can be easily deployed via the IM Dashboard on any major public and on-premises Cloud provider, including the EGI Federated Cloud.

    A summary of the components used:

    An OSCAR cluster can be accessed via its REST API, the web-based OSCAR UI and the command-line interface provided by OSCAR CLI.

    "},{"location":"about/","title":"Acknowledgements","text":"

    OSCAR has been developed by the Grid and High Performance Computing Group (GRyCAP) at the Instituto de Instrumentaci\u00f3n para Imagen Molecular (I3M) from the Universitat Polit\u00e8cnica de Val\u00e8ncia (UPV).

    OSCAR has been supported by the following projects:

    "},{"location":"about/#contact","title":"Contact","text":"

    If you have any trouble please open an issue or email us.

    "},{"location":"api/","title":"OSCAR API","text":"

    OSCAR exposes a secure REST API available at the Kubernetes master's node IP through an Ingress Controller. This API has been described following the OpenAPI Specification and it is available below.

    "},{"location":"deploy-ec3/","title":"Deployment with EC3","text":"

    \u2757\ufe0f The deployment of OSCAR with EC3 is deprecated. Please, consider using the IM Dashboard.

    In order to deploy an elastic Kubernetes cluster with the OSCAR platform, it is preferable to use the IM Dashboard. Alternatively, you can also use EC3, a tool that deploys elastic virtual clusters. EC3 uses the Infrastructure Manager (IM) to deploy such clusters on multiple Cloud back-ends. The installation details can be found here, though this section includes the relevant information to get you started.

    "},{"location":"deploy-ec3/#prepare-ec3","title":"Prepare EC3","text":"

    Clone the EC3 repository:

    git clone https://github.com/grycap/ec3\n

    Download the OSCAR template into the ec3/templates folder:

    cd ec3\nwget -P templates https://raw.githubusercontent.com/grycap/oscar/master/templates/oscar.radl\n

    Create an auth.txt authorization file with valid credentials to access your Cloud provider. As an example, to deploy on an OpenNebula-based Cloud site the contents of the file would be:

type = OpenNebula; host = opennebula-host:2633;\nusername = your-user;\npassword = your-password\n

    Modify the corresponding RADL template in order to determine the appropriate configuration for your deployment:

    As an example, to deploy in OpenNebula, one would modify the ubuntu-opennebula.radl (or create a new one).

    "},{"location":"deploy-ec3/#deploy-the-cluster","title":"Deploy the cluster","text":"

    To deploy the cluster, execute:

    ./ec3 launch oscar-cluster oscar ubuntu-opennebula -a auth.txt\n

This will take several minutes until the Kubernetes cluster and all the required services have been deployed. You will obtain the IP of the front-end of the cluster and a confirmation message that the front-end is ready. Notice that it will still take a few minutes before the services in the Kubernetes cluster are up & running.

    "},{"location":"deploy-ec3/#check-the-cluster-state","title":"Check the cluster state","text":"

    The cluster will be fully configured when all the Kubernetes pods are in the Running state.

     ./ec3 ssh oscar-cluster\n sudo kubectl get pods --all-namespaces\n

    Notice that initially only the front-end node of the cluster is deployed. As soon as the OSCAR framework is deployed, together with its services, the CLUES elasticity manager powers on a new (working) node on which these services will be run.

    You can see the status of the provisioned node(s) by issuing:

     clues status\n

    which obtains:

    | node            |state| enabled |time stable|(cpu,mem) used |(cpu,mem) total|\n|-----------------|-----|---------|-----------|---------------|---------------|\n| wn1.localdomain | used| enabled | 00h00'49\" | 0.0,825229312 | 1,1992404992  |\n| wn2.localdomain | off | enabled | 00h06'43\" | 0,0           | 1,1073741824  |\n| wn3.localdomain | off | enabled | 00h06'43\" | 0,0           | 1,1073741824  |\n| wn4.localdomain | off | enabled | 00h06'43\" | 0,0           | 1,1073741824  |\n| wn5.localdomain | off | enabled | 00h06'43\" | 0,0           | 1,1073741824  |\n

    The working nodes transition from off to powon and, finally, to the used status.

    "},{"location":"deploy-ec3/#default-service-endpoints","title":"Default Service Endpoints","text":"

    Once the OSCAR framework is running on the Kubernetes cluster, the endpoints described in the following table should be available. Most of the passwords/tokens are dynamically generated at deployment time and made available in the /var/tmp folder of the front-end node of the cluster.

Service          Endpoint                     Default User   Password File
OSCAR            https://{FRONT_NODE}         oscar          oscar_password
MinIO            https://{FRONT_NODE}:30300   minio          minio_secret_key
OpenFaaS         http://{FRONT_NODE}:31112    admin          gw_password
Kubernetes API   https://{FRONT_NODE}:6443    -              tokenpass
Kube. Dashboard  https://{FRONT_NODE}:30443   -              dashboard_token

    Note that {FRONT_NODE} refers to the public IP of the front-end of the Kubernetes cluster.

    For example, to get the OSCAR password, you can execute:

    ./ec3 ssh oscar-cluster cat /var/tmp/oscar_password\n
    "},{"location":"deploy-helm/","title":"Deployment with Helm","text":"

    OSCAR can also be deployed on any existing Kubernetes cluster through its helm chart. However, to make the platform work properly, the following dependencies must be satisfied.
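As a rough sketch, a chart-based installation could look like the following; the repository URL, release name and namespace are assumptions, so check the chart's documentation for the actual values and required settings:

# add the chart repository and install OSCAR in its own namespace
helm repo add grycap https://grycap.github.io/helm-charts
helm repo update
helm install oscar grycap/oscar --namespace oscar-svc --create-namespace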

    "},{"location":"deploy-im-dashboard/","title":"Deployment with IM","text":"

    An OSCAR cluster can be easily deployed on multiple Cloud platforms via the Infrastructure Manager's Dashboard (IM Dashboard). This is a managed service provided by EGI and operated by the GRyCAP research group at the Universitat Polit\u00e8cnica de Val\u00e8ncia to deploy customized virtual infrastructures across many Cloud providers.

    Using the IM Dashboard is the easiest and most convenient approach to deploy an OSCAR cluster. It also automatically allocates a DNS entry and TLS certificates to support HTTPS-based access to the OSCAR cluster and companion services (e.g. MinIO).

This example shows how to deploy an OSCAR cluster on Amazon Web Services (AWS) with two nodes. Thanks to the IM, the very same procedure makes it possible to deploy the OSCAR cluster on an on-premises Cloud (such as OpenStack) or any other Cloud provider supported by the IM.

    These are the steps:

    1. Access the IM Dashboard

You will need to authenticate via EGI Check-In, which supports multiple Identity Providers (IdP). There is no need to register and the service is provided free of charge.

    2. Configure the Cloud Credentials

      Once logged in, you need to define the access credentials to the Cloud on which the OSCAR cluster will be deployed. These should be temporary credentials under the principle of least privilege (PoLP).

      In our case, we indicate an identifier for the set of credentials, the Access Key ID and the Secret Access Key for an IAM user that has privileges to deploy Virtual Machines in Amazon EC2. With the default values indicated in this tutorial, you will need privileges to deploy the following instance types: t3a.xlarge for the front-end node and t3a.medium for the working node.

    3. Select the OSCAR template

There are optional features that can be included in the OSCAR cluster to fit particular user needs. We'll skip them.

4. Customize and deploy the OSCAR cluster

      In this panel you can specify the number of Working Nodes (WNs) of the cluster together with the computational requirements for each node. We leave the default values.

      In the following panel, specify the passwords to be employed to access the Kubernetes Web UI (Dashboard), to access OSCAR and to access the MinIO dashboard. These passwords/tokens can also be used for programmatic access to the respective services.

      Now, choose the Cloud provider. The ID specified when creating the Cloud credentials will be shown. You will also need to specify the Amazon Machine Image (AMI) identifier. We chose an AMI based on Ubuntu 20.04 provided by Canonical whose identifier for the us-east-1 region is: ami-09e67e426f25ce0d7

      NOTE: You should obtain the AMI identifier for the latest version of the OS. This way, security patches will be already installed. You can obtain this AMI identifier from the AWS Marketplace or the Amazon EC2 service.

      Give the infrastructure a name and press \"Submit\".

5. Check the status of the OSCAR cluster deployment

      You will see that the OSCAR cluster is being deployed and the infrastructure reaches the status \"running\". The process will finish when it reaches the state \"configured\".

      If you are interested in understanding what is happening under the hood you can see the logs:

6. Accessing the OSCAR cluster

Once the \"configured\" state is reached, see the \"Outputs\" to obtain the different endpoints:

      The OSCAR UI can be accessed with the username oscar and the password you specified at deployment time.

      The MinIO UI can be accessed with the username minio and the password you specified at deployment time.

      The Kubernetes Dashboard can be accessed with the token you specified at deployment time.

      You can obtain statistics about the Kubernetes cluster:

7. Terminating the OSCAR cluster

      You can terminate the OSCAR cluster from the IM Dashboard:

    "},{"location":"deploy-k3s-ansible/","title":"Deployment on K3s with Ansible","text":"

The folder deploy/ansible contains all the necessary files to deploy a K3s cluster together with the OSCAR platform using Ansible. This way, a lightweight Kubernetes distribution can be used to configure OSCAR on IoT devices located at the Edge, such as Raspberry Pis. Note that this playbook can also be applied to quickly deploy the OSCAR platform on top of any machine or already running cloud instance, since the playbook is compatible with GNU/Linux on ARM64 and AMD64 architectures.

    "},{"location":"deploy-k3s-ansible/#requirements","title":"Requirements","text":"

    In order to use the playbook, you must install the following components:

    "},{"location":"deploy-k3s-ansible/#usage","title":"Usage","text":""},{"location":"deploy-k3s-ansible/#clone-the-folder","title":"Clone the folder","text":"

    First of all, you must clone the OSCAR repo:

    git clone https://github.com/grycap/oscar.git\n

And move into the ansible directory:

    cd oscar/deploy/ansible\n
    "},{"location":"deploy-k3s-ansible/#ssh-configuration","title":"SSH configuration","text":"

    As Ansible is an agentless automation tool, you must configure the ~/.ssh/config file for granting access to the hosts to be configured via the SSH protocol. This playbook will use the Host field from SSH configuration to set the hostnames of the nodes, so please take care of naming them properly.

Below you can find an example of a configuration file for four nodes, where front is the only one with a public IP, so it is used as a proxy for the SSH connection to the working nodes (ProxyJump option) via its internal network.

    Host front\n  HostName <PUBLIC_IP>\n  User ubuntu\n  IdentityFile ~/.ssh/my_private_key\n\nHost wn1\n  HostName <PRIVATE_IP>\n  User ubuntu\n  IdentityFile ~/.ssh/my_private_key\n  ProxyJump front\n\nHost wn2\n  HostName <PRIVATE_IP>\n  User ubuntu\n  IdentityFile ~/.ssh/my_private_key\n  ProxyJump front\n\nHost wn3\n  HostName <PRIVATE_IP>\n  User ubuntu\n  IdentityFile ~/.ssh/my_private_key\n  ProxyJump front\n
    "},{"location":"deploy-k3s-ansible/#configuration-of-the-inventory-file","title":"Configuration of the inventory file","text":"

    Now, you have to edit the hosts file and add the hosts to be configured. Note that only one node must be set in the [front] section, while one or more nodes can be configured as working nodes of the cluster in the [wn] section. For example, for the previous SSH configuration the hosts inventory file should look like this:

    [front]\n; Put here the frontend node as defined in .ssh/config (Host)\nfront\n\n[wn]\n; Put here the working nodes (one per line) as defined in the .ssh/config (Host)\nwn1\nwn2\nwn3\n
    "},{"location":"deploy-k3s-ansible/#setting-up-the-playbook-variables","title":"Setting up the playbook variables","text":"

    You also need to set up some parameters for the configuration of the cluster and OSCAR components, like OSCAR and MinIO credentials and DNS endpoints to configure the Kubernetes Ingress and cert-manager to securely expose the services. To do it, please edit the vars.yaml file and update the variables:

    ---\n# K3s version to be installed\nkube_version: v1.22.3+k3s1\n# Token to login in K3s and the Kubernetes Dashboard\nkube_admin_token: kube-token123\n# Password for OSCAR\noscar_password: oscar123\n# DNS name for the OSCAR Ingress and Kubernetes Dashboard (path \"/dashboard/\")\ndns_host: oscar-cluster.example.com\n# Password for MinIO\nminio_password: minio123\n# DNS name for the MinIO API Ingress\nminio_dns_host: minio.oscar-cluster.example.com\n# DNS name for the MinIO Console Ingress\nminio_dns_host_console: minio-console.oscar-cluster.example.com\n
    "},{"location":"deploy-k3s-ansible/#installation-of-the-required-ansible-roles","title":"Installation of the required ansible roles","text":"

    To install the required roles you only have to run:

    ansible-galaxy install -r install_roles.yaml --force\n

    The --force argument ensures you have the latest version of the roles.

    "},{"location":"deploy-k3s-ansible/#running-the-playbook","title":"Running the playbook","text":"

    Finally, with the following command the ansible playbook will be executed, configuring the nodes set in the hosts inventory file:

    ansible-playbook -i hosts oscar-k3s.yaml\n
    "},{"location":"devel-docs/","title":"Documentation development","text":"

OSCAR uses MkDocs for the documentation. In particular, Material for MkDocs.

    Install the following dependencies:

    pip install mkdocs mkdocs-material mkdocs-render-swagger-plugin\n

Then, from the main oscar folder, run:

    mkdocs serve\n

The documentation will be available at http://127.0.0.1:8000

    "},{"location":"exposed-services/","title":"Exposed Services","text":"

    OSCAR supports the deployment and elasticity management of long-running stateless services whose internal API or web-based UI must be directly reachable outside the cluster.

    \u2139\ufe0f

    This functionality can be used to support the fast inference of pre-trained AI models that require close to real-time processing with high throughput. In a traditional serverless approach, the AI model weights would be loaded in memory for each service invocation. Exposed services are also helpful when stateless services created out of large containers require too much time to start processing a service invocation. By exposing an OSCAR service, the AI model weights could be loaded just once, and the service would perform the AI model inference for each subsequent request.

    An auto-scaled load-balanced approach for these stateless services is supported. When the average CPU exceeds a certain user-defined threshold, additional service pods are dynamically created (and removed when no longer necessary) within the user-defined boundaries. The user can also define the minimum and maximum replicas of the service to be present on the cluster (see the parameters min_scale and max_scale in ExposeSettings).

    "},{"location":"exposed-services/#prerequisites-in-the-container-image","title":"Prerequisites in the container image","text":"

    The container image needs to have an HTTP server that binds to a specific port (see the parameter port in ExposeSettings). If developing a service from scratch in Python, you can use FastAPI or Flask to create an API. In Go, you can use Gin. For Ruby, you can use Sinatra.

    \u26a0\ufe0f

If the service exposes a web-based UI, you must ensure that the content is not served only from the root path ('/'), since the service will be exposed under a certain subpath.
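As a reference, here is a minimal sketch of such a server in Python with FastAPI (one of the frameworks mentioned above); the port and route are illustrative, and any HTTP server that binds to the port declared in the FDL works:

# app.py - minimal HTTP server for an exposed OSCAR service (sketch)
from fastapi import FastAPI
import uvicorn

app = FastAPI()

@app.get("/health")
def health():
    # a simple endpoint, reachable under the service's exposed subpath
    return {"status": "ok"}

if __name__ == "__main__":
    # bind to all interfaces on the port declared as api_port in the FDL
    uvicorn.run(app, host="0.0.0.0", port=5000)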

    "},{"location":"exposed-services/#how-to-define-an-exposed-oscar-service","title":"How to define an exposed OSCAR service","text":"

    The minimum definition to expose an OSCAR service is to indicate in the corresponding FDL the port inside the container where the service will be listening.

    expose:\n  api_port: 5000\n

    Once the service is deployed, you can check if it was created correctly by making an HTTP request to the exposed endpoint:

    https://{oscar_endpoint}/system/services/{service_name}/exposed/{path_resource} \n\n

    Notice that if you get a 502 Bad Gateway error, it is most likely because the specified port on the service does not match the API port.

    Additional options can be defined in the \"expose\" section of the FDL (some previously mentioned), such as:

Below is an example of the expose section of the FDL, showing that there will be between 5 and 15 active pods and that the service will expose an API on port 4578. The number of active pods will grow when CPU usage rises above 50% and decrease when it falls below that threshold.

    expose:\n  min_scale: 5 \n  max_scale: 15 \n  api_port: 4578  \n  cpu_threshold: 50\n  set_auth: true\n  rewrite_target: true\n  default_command: true\n

    In addition, you can see below a full example of a recipe to expose a service from the AI4EOSC Marketplace:

    functions:\n  oscar:\n  - oscar-cluster:\n     name: body-pose-detection\n     memory: 2Gi\n     cpu: '1.0'\n     image: deephdc/deep-oc-posenet-tf\n     script: script.sh\n     environment:\n        Variables:\n          INPUT_TYPE: json  \n     expose:\n      min_scale: 1 \n      max_scale: 10 \n      api_port: 5000  \n      cpu_threshold: 20 \n      set_auth: true\n     input:\n     - storage_provider: minio.default\n       path: body-pose-detection/input\n     output:\n     - storage_provider: minio.default\n       path: body-pose-detection/output\n

    So, to invoke the API of this example the request will need the following information,

    1. OSCAR endpoint. localhost or https://{OSCAR_endpoint}
    2. Path resource. In this case, it is v2/models/posenetclas/predict/. Please do not forget the final /
3. Use -k or --insecure if the SSL certificate is not valid.
    4. Input image with the name people.jpeg
    5. Output. It will create a .zip file that has the outputs

    and will end up looking like this:

    curl {-k} -X POST https://{oscar_endpoint}/system/services/body-pose-detection-async/exposed/{path resource} -H  \"accept: */*\" -H  \"Content-Type: multipart/form-data\" -F \"data=@{input image};type=image/png\" --output {output file}\n

Finally, here is the complete command, which works in Local Testing with an image called people.jpeg as input and output_posenet.zip as output.

curl -X POST https://localhost/system/services/body-pose-detection-async/exposed/v2/models/posenetclas/predict/ -H  \"accept: */*\" -H  \"Content-Type: multipart/form-data\" -F \"data=@people.jpeg;type=image/png\" --output output_posenet.zip\n

    Another FDL example shows how to expose a simple NGINX server as an OSCAR service:

functions:\n  oscar:\n  - oscar-cluster:\n     name: nginx\n     memory: 2Gi\n     cpu: '1.0'\n     image: nginx\n     script: script.sh\n     expose:\n      min_scale: 2 \n      max_scale: 10 \n      api_port: 80  \n      cpu_threshold: 50 \n

    In case you use the NGINX example above in your local OSCAR cluster, you will see the nginx welcome page in: http://localhost/system/services/nginx/exposed/. Two active pods of the deployment will be shown with the command kubectl get pods -n oscar-svc

    oscar-svc            nginx-dlp-6b9ddddbd7-cm6c9                         1/1     Running     0             2m1s\noscar-svc            nginx-dlp-6b9ddddbd7-f4ml6                         1/1     Running     0             2m1s\n
    "},{"location":"faq/","title":"Frequently Asked Questions (FAQ)","text":""},{"location":"faq/#troubleshooting","title":"Troubleshooting","text":"

You may have a server, such as Apache, running on port 80 while the deployment is trying to use that port for the OSCAR UI. Stopping the conflicting server would solve this problem.
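To find the conflicting server, a quick check like the following can help (a sketch; it assumes a systemd-based host and Apache as the conflicting server):

# see which process is listening on port 80
sudo ss -tlnp | grep ':80'
# stop the conflicting server, e.g. Apache
sudo systemctl stop apache2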

    When using oscar-cli, you can get this error if you try to run a service that is not present on the cluster set as default. You can check if you are using the correct default cluster with the following command,

    oscar-cli cluster default

    and set a new default cluster with the following command:

    oscar-cli cluster default -s CLUSTER_ID

In case it is required to use private images, you should create a secret with the Docker login configuration, with a structure like this:

    apiVersion: v1\nkind: Secret\nmetadata:\n  name: dockersecret\n  namespace: oscar-svc\ndata:\n  .dockerconfigjson: {base64 .docker/config.json}\ntype: kubernetes.io/dockerconfigjson\n

Apply the file through kubectl into the Kubernetes OSCAR cluster to create the secret. To use it in OSCAR services, you must add the secret name (dockersecret in this example) in the definition of the service, using the API or an FDL, under the image_pull_secrets parameter, or through the \"Docker secret\" field in OSCAR-UI.
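For example, the secret can be created and then referenced from the service definition as follows (a sketch; the file name is illustrative and the base64 value must come from your own Docker configuration):

# generate the base64 value for .dockerconfigjson
base64 -w0 ~/.docker/config.json
# create the secret in the cluster
kubectl apply -f dockersecret.yaml

# and reference it in the FDL definition of the service
image_pull_secrets:
- dockersecret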

This can happen when an OSCAR cluster is deployed from an IM recipe that does not include certificates, or when the Let's Encrypt limit has been reached. Only 50 certificates per week can be issued. Those certificates have a 90-day expiration lifetime. The certificates issued can be seen at https://crt.sh/?q=im.grycap.net.

If the OSCAR cluster has no certificate, the OSCAR UI will not show the buckets.

You can fix this by visiting the MinIO endpoint (minio.<OSCAR-endpoint>). The browser will block the page because it is flagged as unsafe. Once you accept the risk, you will enter the MinIO page. It is not necessary to log in.

Return to the OSCAR UI and you will then see the buckets. The buckets will be shown only in the browser where you performed this process. The results may vary depending on the browser. For example, they will show up in Firefox but not in Chrome.

    "},{"location":"fdl-composer/","title":"FDL Composer","text":"

OSCAR Services can be aggregated into data-driven workflows where the output data of one service is stored in the object store that triggers another service, potentially in a different OSCAR cluster. This makes it possible to execute the different phases of the workflow on disparate computing infrastructures.

    However, writing an entire workflow in an FDL file can be a difficult task for some users.

    To simplify the process you can use FDL Composer, a web-based application to facilitate the definition of FDL YAML files for OSCAR and SCAR.

    "},{"location":"fdl-composer/#how-to-access-fdl-composer","title":"How to access FDL Composer","text":"

Just access FDL Composer, which is a Single Page Application (SPA) running entirely in your browser. If you prefer to execute it on your computer instead of using the web, clone the git repository by using the following command:

    git clone https://github.com/grycap/fdl-composer\n

And then run the app with npm:

    npm start\n
    "},{"location":"fdl-composer/#basic-elements","title":"Basic elements","text":"

    Workflows are composed of OSCAR services and Storage providers:

    "},{"location":"fdl-composer/#oscar-services","title":"OSCAR services","text":"

    OSCAR services are responsible for processing the data uploaded to Storage providers.

    Defining a new OSCAR service requires filling at least the name, image, and script fields.

    To define environment variables you must add them as a comma separated string of key=value entries. For example, to create a variable with the name firstName and the value John, the \"Environment variables\" field should look like firstName=John. If you want to assign more than one variable, for example, firstName and lastName with the values John and Keats, the input field should include them all separated by commas (e.g., firstName=John,lastName=Keats).
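In the exported FDL, those entries become a map of environment variables. For the firstName/lastName example above, the generated section would look roughly like this (a sketch based on the EnvVarsMap format):

environment:
  Variables:
    firstName: John
    lastName: Keats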

    "},{"location":"fdl-composer/#storage-providers-and-bucketsfolders","title":"Storage providers and buckets/folders","text":"

    Storage providers are object storage systems responsible for storing both the input files to be processed by OSCAR services and the output files generated as a result of the processing.

    Three types of storage providers can be used in OSCAR FDLs: MinIO, Amazon S3, and OneData.

To configure them, drag the storage provider from the menu to the canvas and double-click on the item created. A window with a single input will appear. Then, insert the folder path. To edit one of the storage providers, move the mouse over the item and select the edit option.

    Remember that only MinIO can be used as input storage provider for OSCAR services.

    "},{"location":"fdl-composer/#download-and-load-state","title":"Download and load state","text":"

    The defined workflow can be saved in a file using the \"Download state\" button. OSCAR services, Storage Providers, and Buckets are kept in the file. The graphic workflow can be edited later by loading it with the \"Load state\" button.

    "},{"location":"fdl-composer/#create-a-yaml-file","title":"Create a YAML file","text":"

    You can easily download the workflow's FDL file (in YAML) through the \"Export YAML\" button.

    "},{"location":"fdl-composer/#connecting-components","title":"Connecting components","text":"

    All components have four ports: The up and left ones are input ports while the right and down ports are used as output. OSCAR Services can only be connected with Storage providers, always linked in the same direction (the output of one element with the input of the other).

When two services are connected, both will be declared in the FDL file, but they will work separately, and there will be no workflow between them. If two storage providers are connected to each other, the connection has no effect, but both storage providers will be declared.

    "},{"location":"fdl-composer/#scar-options","title":"SCAR options","text":"

FDL Composer can also create FDL files for SCAR. This makes it possible to define workflows that can be executed on the Edge or in on-premises Clouds through OSCAR, and on the public Cloud (AWS Lambda and/or AWS Batch) through SCAR.

    "},{"location":"fdl-composer/#example","title":"Example","text":"

    There is an example of FDL Composer implementing the video-process use case in our blog.

    "},{"location":"fdl/","title":"Functions Definition Language (FDL)","text":"

    OSCAR services are typically defined via the Functions Definition Language (FDL) to be deployed via the OSCAR CLI. Alternative approaches are using the web-based wizard in the OSCAR UI or, for a programmatic integration, via the OSCAR API.

    \u2139\ufe0f

    It is called Functions Definition Language instead of Services Definition Language, because the definition was initially designed for SCAR, which supports Lambda functions.

    Example:

    functions:\n  oscar:\n  - oscar-test:\n      name: plants\n      memory: 2Gi\n      cpu: '1.0'\n      image: grycap/oscar-theano-plants\n      script: plants.sh\n      input:\n      - storage_provider: minio.default\n        path: example-workflow/in\n      output:\n      - storage_provider: minio.default\n        path: example-workflow/med\n  - oscar-test:\n      name: grayify\n      memory: 1Gi\n      cpu: '1.0'\n      image: grycap/imagemagick\n      script: grayify.sh\n      interlink_node_name: vega-new-vk\n      expose:\n        min_scale: 3 \n        max_scale: 7 \n        port: 5000  \n        cpu_threshold: 70 \n        nodePort: 30500\n        set_auth: true\n        rewrite_target: true\n        default_command: true\n      input:\n      - storage_provider: minio.default\n        path: example-workflow/med\n      output:\n      - storage_provider: minio.default\n        path: example-workflow/res\n      - storage_provider: onedata.my_onedata\n        path: result-example-workflow\n      - storage_provider: webdav.dcache\n        path: example-workflow/res\n\nstorage_providers:\n  onedata:\n    my_onedata:\n      oneprovider_host: my_provider.com\n      token: my_very_secret_token\n      space: my_onedata_space\n  webdav:\n    dcache:\n      hostname: my_dcache.com\n      login: my_username\n      password: my_password\n
    "},{"location":"fdl/#top-level-parameters","title":"Top level parameters","text":"Field Description functions Functions Mandatory parameter to define a Functions Definition Language file. Note that \"functions\" instead of \"services\" has been used in order to keep compatibility with SCAR storage_providers StorageProviders Parameter to define the credentials for the storage providers to be used in the services clusters map[string]Cluster Configuration for the OSCAR clusters that can be used as service's replicas, being the key the user-defined identifier for the cluster. Optional"},{"location":"fdl/#functions","title":"Functions","text":"Field Description oscar map[string]Service array Main object with the definition of the OSCAR services to be deployed. The components of the array are Service maps, where the key of every service is the identifier of the cluster where the service (defined as the value of the entry on the map) will be deployed."},{"location":"fdl/#service","title":"Service","text":"Field Description name string The name of the service cluster_id string Identifier for the current cluster, used to specify the cluster's StorageProvider in job delegations. OSCAR-CLI sets it using the cluster_id from the FDL. Optional. (default: \"\") image string Docker image for the service vo string Virtual Organization (VO) in which the user creating the service is enrolled. Optional (default: \"\") allowed_users string array Array of EGI UIDs to grant specific user permissions on the service. If empty, the service is considered as accesible to all the users with access to the OSCAR cluster. (Enabled since OSCAR version v3.0.0). alpine boolean Set if the Docker image is based on Alpine. If true, a custom release of the faas-supervisor will be used. Optional (default: false) script string Local path to the user script to be executed inside the container created out of the service invocation file_stage_in bool Skip the download of the input files by the faas-supervisor (default: false) image_pull_secrets string array Array of Kubernetes secrets. Only needed to use private images located on private registries. memory string Memory limit for the service following the kubernetes format. Optional (default: 256Mi) cpu string CPU limit for the service following the kubernetes format. Optional (default: 0.2) enable_gpu bool Enable the use of GPU. Requires a device plugin deployed on the cluster (More info: Kubernetes device plugins). Optional (default: false) enable_sgx bool Enable the use of SGX plugin on the cluster containers. (More info: SGX plugin documentation). Optional (default: false) image_prefetch bool Enable the use of image prefetching (retrieve the container image in the nodes when creating the service). Optional (default: false) total_memory string Limit for the memory used by all the service's jobs running simultaneously. Apache YuniKorn's' scheduler is required to work. Same format as Memory, but internally translated to MB (integer). Optional (default: \"\") total_cpu string Limit for the virtual CPUs used by all the service's jobs running simultaneously. Apache YuniKorn's' scheduler is required to work. Same format as CPU, but internally translated to millicores (integer). Optional (default: \"\") synchronous SynchronousSettings Struct to configure specific sync parameters. This settings are only applied on Knative ServerlessBackend. Optional. expose ExposeSettings Allows to expose the API or UI of the application run in the OSCAR service outside of the Kubernetes cluster. 
Optional. replicas Replica array List of replicas to delegate jobs. Optional. rescheduler_threshold string Time (in seconds) that a job (with replicas) can be queued before delegating it. Optional. log_level string Log level for the faas-supervisor. Available levels: NOTSET, DEBUG, INFO, WARNING, ERROR and CRITICAL. Optional (default: INFO) input StorageIOConfig array Array with the input configuration for the service. Optional output StorageIOConfig array Array with the output configuration for the service. Optional environment EnvVarsMap The user-defined environment variables assigned to the service. Optional annotations map[string]string User-defined Kubernetes annotations to be set in job's definition. Optional labels map[string]string User-defined Kubernetes labels to be set in job's definition. Optional interlink_node_name string Name of the virtual kubelet node (if you are using InterLink nodes) Optional"},{"location":"fdl/#synchronoussettings","title":"SynchronousSettings","text":"Field Description min_scale integer Minimum number of active replicas (pods) for the service. Optional. (default: 0) max_scale integer Maximum number of active replicas (pods) for the service. Optional. (default: 0 (Unlimited))"},{"location":"fdl/#exposesettings","title":"ExposeSettings","text":"Field Description min_scale integer Minimum number of active replicas (pods) for the service. Optional. (default: 1) max_scale integer Maximum number of active replicas (pods) for the service. Optional. (default: 10 (Unlimited)) port integer Port inside the container where the API is exposed. (value: 0 , the service wont be exposed.) cpu_threshold integer Percent of use of CPU before creating other pod (default: 80 max:100). Optional. nodePort integer Change the access method from the domain name to the public ip. Optional. set_auth bool Create credentials for the service, composed of the service name as the user and the service token as the password. (default: false). Optional. rewrite_target bool Target the URI where the traffic is redirected. (default: false). Optional. default_command bool Select between executing the container's default command and executing the script inside the container. (default: false). Optional."},{"location":"fdl/#replica","title":"Replica","text":"Field Description type string Type of the replica to re-send events (can be oscar or endpoint) cluster_id string Identifier of the cluster as defined in the \"clusters\" FDL field. Only used if Type is oscar service_name string Name of the service in the replica cluster. Only used if Type is oscar url string URL of the endpoint to re-send events (HTTP POST). Only used if Type is endpoint ssl_verify boolean Parameter to enable or disable the verification of SSL certificates. Only used if Type is endpoint. Optional. (default: true) priority integer Priority value to define delegation priority. Highest priority is defined as 0. If a delegation fails, OSCAR will try to delegate to another replica with lower priority. Optional. (default: 0) headers map[string]string Headers to send in delegation requests. Optional"},{"location":"fdl/#storageioconfig","title":"StorageIOConfig","text":"Field Description storage_provider string Reference to the storage provider defined in storage_providers. This string is composed by the provider's name (minio, s3, onedata) and the identifier (defined by the user), separated by a point (e.g. \"minio.myidentifier\") path string Path in the storage provider. 
In MinIO and S3 the first directory of the specified path is translated into the bucket's name (e.g. \"bucket/folder/subfolder\") suffix string array Array of suffixes for filtering the files to be uploaded. Only used in the output field. Optional prefix string array Array of prefixes for filtering the files to be uploaded. Only used in the output field. Optional"},{"location":"fdl/#envvarsmap","title":"EnvVarsMap","text":"Field Description Variables map[string]string Map to define the environment variables that will be available in the service container"},{"location":"fdl/#storageproviders","title":"StorageProviders","text":"Field Description minio map[string]MinIOProvider Map to define the credentials for a MinIO storage provider, being the key the user-defined identifier for the provider s3 map[string]S3Provider Map to define the credentials for an Amazon S3 storage provider, being the key the user-defined identifier for the provider onedata map[string]OnedataProvider Map to define the credentials for a Onedata storage provider, being the key the user-defined identifier for the provider webdav map[string]WebDavProvider Map to define the credentials for a storage provider accesible via WebDAV protocol, being the key the user-defined identifier for the provider"},{"location":"fdl/#cluster","title":"Cluster","text":"Field Description endpointstring Endpoint of the OSCAR cluster API auth_userstring Username to connect to the cluster (basic auth) auth_passwordstring Password to connect to the cluster (basic auth) ssl_verifyboolean Parameter to enable or disable the verification of SSL certificates"},{"location":"fdl/#minioprovider","title":"MinIOProvider","text":"Field Description endpoint string MinIO endpoint verify bool Verify MinIO's TLS certificates for HTTPS connections access_key string Access key of the MinIO server secret_key string Secret key of the MinIO server region string Region of the MinIO server"},{"location":"fdl/#s3provider","title":"S3Provider","text":"Field Description access_key string Access key of the AWS S3 service secret_key string Secret key of the AWS S3 service region string Region of the AWS S3 service"},{"location":"fdl/#onedataprovider","title":"OnedataProvider","text":"Field Description oneprovider_host string Endpoint of the Oneprovider token string Onedata access token space string Name of the Onedata space"},{"location":"fdl/#webdavprovider","title":"WebDAVProvider","text":"Field Description hostname string Provider hostname login string Provider account username password string Provider account password"},{"location":"integration-compss/","title":"Integration with COMPSs","text":"

    COMPSs is a task-based programming model which aims to ease the development of applications for distributed infrastructures, such as large High-Performance clusters (HPC), clouds and container managed clusters. COMPSs provides a programming interface for the development of the applications and a runtime system that exploits the inherent parallelism of applications at execution time.

COMPSs support was introduced in OSCAR for the AI-SPRINT project to tackle the Personalized Healthcare use case, in which OSCAR is employed to perform the inference phase of pre-trained models out of sensitive data captured from wearable devices. COMPSs, in particular its Python binding named PyCOMPSs, was integrated to exploit parallelism across the multiple virtual CPUs of each pod resulting from each OSCAR service asynchronous invocation. This use case was coordinated by the Barcelona Supercomputing Center (BSC).

    There are several examples that showcase the COMPSs integration with OSCAR in the examples/compss folder in GitHub.

    "},{"location":"integration-egi/","title":"Integration with EGI","text":"

    EGI is a federation of many cloud providers and hundreds of data centres, spread across Europe and worldwide that delivers advanced computing services to support scientists, multinational projects and research infrastructures.

    "},{"location":"integration-egi/#deployment-on-the-egi-federated-cloud","title":"Deployment on the EGI Federated Cloud","text":"

    The EGI Federated Cloud is an IaaS-type cloud, made of academic private clouds and virtualised resources and built around open standards. Its development is driven by requirements of the scientific communities.

    The OSCAR platform can be deployed on the EGI Federated Cloud resources through the IM Dashboard.

You can follow EGI's IM Dashboard documentation or OSCAR's IM Dashboard documentation.

    "},{"location":"integration-egi/#integration-with-egi-datahub-onedata","title":"Integration with EGI Datahub (Onedata)","text":"

    EGI DataHub, based on Onedata, provides a global data access solution for science. Integrated with the EGI AAI, it allows users to have Onedata spaces supported by providers across Europe for replicated storage and on-demand caching.

EGI DataHub can be used as an output storage provider for OSCAR, allowing users to store the resulting files of their OSCAR services on a Onedata space. This is possible thanks to the FaaS Supervisor, the component used in OSCAR and SCAR that is responsible for managing the data input/output and the user code execution.

    To deploy a function with Onedata as output storage provider you only have to specify an identifier, the URL of the Oneprovider host, your access token and the name of your Onedata space in the \"Storage\" tab of the service creation wizard:

    And the path where you want to store the files in the \"OUTPUTS\" tab:

    This means that scientists can store their output files on their Onedata space in the EGI DataHub for long-time persistence and easy sharing of experimental results between researchers.
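
    For reference, the following is a minimal, hypothetical FDL sketch of such a configuration, applied through OSCAR CLI (the service name, image, provider identifier, Oneprovider host, token and space are placeholders to adapt to your case):

    cat <<EOF > onedata-service.yaml\nfunctions:\n  oscar:\n  - oscar-cluster:\n      name: onedata-service\n      memory: 1Gi\n      cpu: '0.5'\n      image: ghcr.io/grycap/imagemagick\n      script: script.sh\n      output:\n      - storage_provider: onedata.my_onedata\n        path: output\nstorage_providers:\n  onedata:\n    my_onedata:\n      oneprovider_host: <ONEPROVIDER_HOST>\n      token: <ONEDATA_TOKEN>\n      space: <ONEDATA_SPACE>\nEOF\noscar-cli apply onedata-service.yaml\n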

    "},{"location":"integration-egi/#integration-with-egi-check-in-oidc","title":"Integration with EGI Check-In (OIDC)","text":"

    OSCAR API supports OIDC (OpenID Connect) access tokens to authorize users since release v2.5.0. By default, OSCAR clusters deployed via the IM Dashboard are configured to allow authorization via basic auth and OIDC tokens using the EGI Check-in issuer. From the IM Dashboard deployment window, users can add one EGI Virtual Organization to grant access for all users from that VO.

    "},{"location":"integration-egi/#accessing-from-oscar-ui","title":"Accessing from OSCAR UI","text":"

    The static web interface of OSCAR has been integrated with EGI Check-in and published at ui.oscar.grycap.net to facilitate the authorization of users. To log in through EGI Check-in using OIDC tokens, users only have to enter the endpoint of their OSCAR cluster and click on the \"EGI CHECK-IN\" button.

    "},{"location":"integration-egi/#integration-with-oscar-cli-via-oidc-agent","title":"Integration with OSCAR-CLI via OIDC Agent","text":"

    Since version v1.4.0, OSCAR CLI supports API authorization via OIDC tokens thanks to the integration with oidc-agent.

    Users must install oidc-agent following its instructions and create a new account configuration for the https://aai.egi.eu/auth/realms/egi/ issuer.

    After that, clusters can be added with the command oscar-cli cluster add specifying the oidc-agent account name with the --oidc-account-name flag.
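
    For example (the cluster identifier, endpoint and account name below are placeholders):

    oscar-cli cluster add my-egi-cluster https://my-oscar.example.com \\\n  --oidc-account-name egi\n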

    "},{"location":"integration-interlink/","title":"Integration with interLink","text":"

    interLink is an open-source development that aims to provide an abstraction for executing a Kubernetes pod on any remote resource capable of managing a Container execution lifecycle.

    OSCAR uses the Kubernetes Virtual Node to translate a job request from the Kubernetes pod into a remote call. We have been using interLink to interact with an HPC cluster. For more information, check the interLink landing page.

    "},{"location":"integration-interlink/#installation-and-use-of-interlink-node-in-oscar-cluster","title":"Installation and use of Interlink Node in OSCAR cluster","text":"

    The Kubernetes cluster must have at least one virtual kubelet node, tagged as type=virtual-kubelet. Follow these steps to add the virtual node to the Kubernetes cluster; OSCAR detects these nodes by itself.

    Once the virtual node and OSCAR are installed correctly, you can use this node by setting the name of the virtual node in the InterLinkNodeName variable. Otherwise, to use a regular node of the Kubernetes cluster, leave it blank (\"\").
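
    As a quick check, assuming kubectl access to the cluster, the nodes that OSCAR will detect as virtual can be listed by their label:

    kubectl get nodes -l type=virtual-kubelet\n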

    "},{"location":"integration-interlink/#annotations-restrictions-and-other-things-to-keep-in-mind","title":"Annotations, restrictions, and other things to keep in mind","text":"

    Please note that interLink uses Singularity to run a container with these characteristics:

    The support for interLink was integrated in the context of the interTwin project, with support from Istituto Nazionale di Fisica Nucleare - INFN, who developed interLink, and CERN, who provided the development of itwinai, used as a platform for advanced AI/ML workflows in digital twin applications and a use case. Special thanks to the IZUM Center in Slovenia for providing access to the HPC Vega supercomputing facility to perform the testing.

    "},{"location":"integration-scone/","title":"Integration with SCONE","text":"

    SCONE is a tool that allows confidential computing on the cloud thus protecting the data, code and application secrets on a Kubernetes cluster. By leveraging hardware-based security features such as Intel SGX (Software Guard Extensions), SCONE ensures that sensitive data and computations remain protected even in potentially untrusted environments. This end-to-end encryption secures data both at rest and in transit, significantly reducing the risk of data breaches. Additionally, SCONE simplifies the development and deployment of secure applications by providing a seamless integration layer for existing software, thus enhancing security without requiring major code changes.

    \u26a0\ufe0f

    Please note that the usage of SCONE introduces a non-negligible overhead when executing the container for the OSCAR service.

    More info about SCONE and Kubernetes here.

    To use SCONE on a Kubernetes cluster, Intel SGX has to be enabled on the machines and, for this, the SGX Kubernetes plugin needs to be present on the cluster. Once the plugin is installed, you only need to specify the parameter enable_sgx in the FDL of the services that are going to use a secured container image, as in the following example.

    functions:\n  oscar:\n  - oscar-cluster:\n      name: sgx-service\n      memory: 1Gi\n      cpu: '0.6'\n      image: your_image\n      enable_sgx: true\n      script: script.sh\n

    SCONE support was introduced in OSCAR for the AI-SPRINT project to tackle the Personalized Healthcare use case, in which OSCAR is employed to perform the inference phase of pre-trained models on sensitive data captured from wearable devices. This use case was coordinated by the Barcelona Supercomputing Center (BSC), and Technische Universit\u00e4t Dresden (TU Dresden) was involved in the technical activities regarding SCONE.

    "},{"location":"invoking-async/","title":"Asynchronous invocations","text":"

    For event-driven file processing, OSCAR automatically manages the creation and notification system of MinIO buckets in order to allow the event-driven invocation of services using asynchronous requests, generating a Kubernetes job for every file to be processed.
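
    For instance, an asynchronous execution can be triggered by uploading a file to the service's input bucket through OSCAR CLI (the service, bucket and file names below are placeholders; see the put-file command reference):

    oscar-cli service put-file my-service minio.default ./image.jpg \\\n my-service/input/image.jpg\n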

    "},{"location":"invoking-sync/","title":"Synchronous invocations","text":"

    Synchronous invocations allow obtaining the execution output as the response to the HTTP call to the /run/<SERVICE_NAME> path of the OSCAR API. For this, OSCAR delegates the execution to a serverless back-end (e.g. Knative) which uses an auto-scaled set of pods to process the requests.

    \u2139\ufe0f

    You may find references in the documentation or examples to OpenFaaS, which was used in older versions of OSCAR. Recent versions of OSCAR use Knative as the serverless back-end for synchronous invocations, which provides several benefits, such as scale-to-zero or a load-balanced auto-scaled set of pods.

    Synchronous invocations can be made through OSCAR CLI, using the command oscar-cli service run:

    oscar-cli service run [SERVICE_NAME] {--input | --text-input} {-o | --output}\n

    You can check these examples:

    The input can be sent as a file via the --input flag, and the result of the execution will be displayed directly in the terminal:

    oscar-cli service run plant-classification-sync --input images/image3.jpg\n

    Alternatively, it can be sent as plain text using the --text-input flag and the result stored in a file using the --output flag:

    oscar-cli service run text-to-speech --text-input \"Hello everyone\"  --output output.mp3\n
    "},{"location":"invoking-sync/#synchronous-invocations-via-oscar-cli","title":"Synchronous Invocations via OSCAR CLI","text":"

    OSCAR CLI simplifies the execution of services synchronously via the oscar-cli service run command. This command requires the input to be passed either as text through the --text-input flag or as a file by passing its path through the --input flag. Both input types are automatically encoded in Base64.

    It also allows setting the --output flag to indicate a path for storing (and decoding if needed) the output body in a file; otherwise, the output will be shown in stdout.

    An illustration of triggering a service synchronously through OSCAR-CLI can be found in the cowsay example.

    oscar-cli service run cowsay --text-input '{\"message\":\"Hello World\"}'\n
    "},{"location":"invoking-sync/#synchronous-invocations-via-oscar-api","title":"Synchronous Invocations via OSCAR API","text":"

    OSCAR services can also be invoked via traditional HTTP clients such as cURL, using the path /run/<SERVICE_NAME> defined in the OSCAR API. However, you must take care to properly format the input in one of the two supported formats (JSON or Base64-encoded) and include the service access token in the request.

    An illustration of triggering a service synchronously through cURL can be found in the cowsay example.

    To send an input file through cURL, you must encode it in Base64 or JSON. To avoid issues with the output in synchronous invocations, remember to set the service's log_level to CRITICAL. The output, which is encoded in Base64 or JSON, should be decoded as well and saved in the format expected by the use case.

    base64 input.png | curl -X POST -H \"Authorization: Bearer <TOKEN>\" \\\n -d @- https://<CLUSTER_ENDPOINT>/run/<OSCAR_SERVICE> | base64 -d > result.png\n
    "},{"location":"invoking-sync/#service-access-tokens","title":"Service access tokens","text":"

    As detailed in the API specification, invocation paths require the service access token in the request header for authentication. Service access tokens are auto-generated on service creation and update, and the MinIO eventing system is automatically configured to use them for event-driven file processing. Tokens can be obtained through the API, using the oscar-cli service get command, or directly from the web interface.
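
    As a sketch, assuming a configured default cluster and a hypothetical service named my-service, the token can be retrieved and then used to invoke the service:

    # Show the service definition, which includes the auto-generated access token\noscar-cli service get my-service\n\n# Use the retrieved token to invoke the service synchronously\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" \\\n -d '{\"message\":\"hi\"}' https://<CLUSTER_ENDPOINT>/run/my-service\n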

    "},{"location":"invoking-sync/#limitations","title":"Limitations","text":"

    Although the use of the Knative Serverless Backend for synchronous invocations provides elasticity similar to that provided by its counterparts in public clouds, such as AWS Lambda, synchronous invocations are still not the best option for running long-running, resource-demanding applications, like deep learning inference or video processing.

    The synchronous invocation of long-running resource-demanding applications may lead to timeouts on Knative pods. Therefore, we consider asynchronous invocations (which generate Kubernetes jobs) as the optimal approach to handle event-driven file processing.

    "},{"location":"invoking/","title":"Service Execution Types","text":"

    OSCAR services can be executed:

    "},{"location":"license/","title":"License","text":"
                                     Apache License\n\n                           Version 2.0, January 2004\n\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      form, that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. 
For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. 
Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"{}\"\n      replaced with your own identifying information. (Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright 2018 GRyCAP - I3M - UPV\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n
    "},{"location":"local-testing/","title":"Local Deployment","text":"

    \u2757\ufe0f

    The local deployment of OSCAR is only recommended for testing. Please, consider using the IM to deploy a fully-featured OSCAR cluster on a Cloud platform.

    The easiest way to test the OSCAR platform locally is using kind. Kind allows the deployment of Kubernetes clusters inside Docker containers and automatically configures kubectl to access them.

    "},{"location":"local-testing/#prerequisites","title":"Prerequisites","text":"

    \u26a0\ufe0f

    Although the use of local Docker images has yet to be implemented as a feature on OSCAR clusters, the local deployment for testing allows you to use a local Docker registry for this kind of image. The registry uses port 5001 by default, so each image you want to use must be tagged as localhost:5001/[image_name] and pushed to the repository through the docker push localhost:5001/[image_name] command.
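
    For example, assuming a locally built image with the hypothetical name my-image:

    docker tag my-image:latest localhost:5001/my-image\ndocker push localhost:5001/my-image\n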

    Also, port 80 must be available to avoid errors during the deployment since OSCAR-UI uses it. Check Frequently Asked Questions (FAQ) for more info.

    "},{"location":"local-testing/#automated-local-testing","title":"Automated local testing","text":"

    To set up the environment for platform testing, you can run the following command. This script automatically executes all the necessary steps to deploy the local cluster and the OSCAR platform along with all the required tools.

    curl -sSL http://go.oscar.grycap.net | bash\n
    "},{"location":"local-testing/#steps-for-manual-local-testing","title":"Steps for manual local testing","text":"

    If you want to do it manually, you can follow the steps listed below.

    "},{"location":"local-testing/#create-the-cluster","title":"Create the cluster","text":"

    To create a single node cluster with MinIO and Ingress controller ports locally accessible, run:

    cat <<EOF | kind create cluster --config=-\nkind: Cluster\napiVersion: kind.x-k8s.io/v1alpha4\nnodes:\n- role: control-plane\n  kubeadmConfigPatches:\n  - |\n    kind: InitConfiguration\n    nodeRegistration:\n      kubeletExtraArgs:\n        node-labels: \"ingress-ready=true\"\n  extraPortMappings:\n  - containerPort: 80\n    hostPort: 80\n    protocol: TCP\n  - containerPort: 443\n    hostPort: 443\n    protocol: TCP\n  - containerPort: 30300\n    hostPort: 30300\n    protocol: TCP\n  - containerPort: 30301\n    hostPort: 30301\n    protocol: TCP\nEOF\n
    "},{"location":"local-testing/#deploy-nginx-ingress","title":"Deploy NGINX Ingress","text":"

    To enable Ingress support for accessing the OSCAR server, we must deploy the NGINX Ingress:

    kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/kind/deploy.yaml\n
    "},{"location":"local-testing/#deploy-minio","title":"Deploy MinIO","text":"

    OSCAR depends on MinIO as a storage provider and function trigger. The easiest way to run MinIO in a Kubernetes cluster is by installing its Helm chart. To add the MinIO Helm repo and install the chart, run the following commands, replacing <MINIO_PASSWORD> with a password of at least 8 characters:

    helm repo add minio https://charts.min.io\nhelm install minio minio/minio --namespace minio --set rootUser=minio,\\\nrootPassword=<MINIO_PASSWORD>,service.type=NodePort,service.nodePort=30300,\\\nconsoleService.type=NodePort,consoleService.nodePort=30301,mode=standalone,\\\nresources.requests.memory=512Mi,\\\nenvironment.MINIO_BROWSER_REDIRECT_URL=http://localhost:30301 \\\n --create-namespace\n

    Note that the deployment has been configured to use the rootUser minio and the specified password as rootPassword. The NodePort service type has been used in order to allow access from http://localhost:30300 (API) and http://localhost:30301 (Console).

    "},{"location":"local-testing/#deploy-nfs-server-provisioner","title":"Deploy NFS server provisioner","text":"

    NFS server provisioner is required for the creation of ReadWriteMany PersistentVolumes in the kind cluster. This is needed by the OSCAR services to mount the volume with the FaaS Supervisor inside the job containers.

    To deploy it, you can use this chart by executing:

    helm repo add nfs-ganesha-server-and-external-provisioner https://kubernetes-sigs.github.io/nfs-ganesha-server-and-external-provisioner/\nhelm install nfs-server-provisioner nfs-ganesha-server-and-external-provisioner/nfs-server-provisioner\n

    Some Linux distributions may have problems using the NFS server provisioner with kind due to their default configuration of kernel-limit file descriptors. To work around it, please run sudo sysctl -w fs.nr_open=1048576.

    "},{"location":"local-testing/#deploy-knative-serving-as-serverless-backend-optional","title":"Deploy Knative Serving as Serverless Backend (OPTIONAL)","text":"

    OSCAR supports Knative Serving as a Serverless Backend to process synchronous invocations. If you want to deploy it in the kind cluster, first you must deploy the Knative Operator:

    kubectl apply -f https://github.com/knative/operator/releases/download/knative-v1.3.1/operator.yaml\n

    Note that the above command deploys the version v1.3.1 of the Operator. You can check if there are new versions here.

    Once the Operator has been successfully deployed, you can install the Knative Serving stack with the following command:

    cat <<EOF | kubectl apply -f -\n---\napiVersion: v1\nkind: Namespace\nmetadata:\n  name: knative-serving\n---\napiVersion: operator.knative.dev/v1beta1\nkind: KnativeServing\nmetadata:\n  name: knative-serving\n  namespace: knative-serving\nspec:\n  version: 1.3.0\n  ingress:\n    kourier:\n      enabled: true\n      service-type: ClusterIP\n  config:\n    config-features:\n      kubernetes.podspec-persistent-volume-claim: enabled\n      kubernetes.podspec-persistent-volume-write: enabled\n    network:\n      ingress-class: \"kourier.ingress.networking.knative.dev\"\nEOF\n
    "},{"location":"local-testing/#deploy-oscar","title":"Deploy OSCAR","text":"

    First, create the oscar and oscar-svc namespaces by executing:

    kubectl apply -f https://raw.githubusercontent.com/grycap/oscar/master/deploy/yaml/oscar-namespaces.yaml\n

    Then, add the grycap Helm repo and deploy by running the following commands, replacing <OSCAR_PASSWORD> with a password of your choice and <MINIO_PASSWORD> with the MinIO rootPassword. Remember to add the flag --set serverlessBackend=knative if you deployed Knative in the previous step:

    helm repo add grycap https://grycap.github.io/helm-charts/\nhelm install --namespace=oscar oscar grycap/oscar \\\n --set authPass=<OSCAR_PASSWORD> --set service.type=ClusterIP \\\n --set ingress.create=true --set volume.storageClassName=nfs \\\n --set minIO.endpoint=http://minio.minio:9000 --set minIO.TLSVerify=false \\\n --set minIO.accessKey=minio --set minIO.secretKey=<MINIO_PASSWORD>\n

    Now you can access the OSCAR web interface through https://localhost with the user oscar and the specified password.

    Note that the OSCAR server has been configured to use the ClusterIP service of MinIO for internal communication. This blocks the MinIO section in the OSCAR web interface, so to download and upload files you must connect directly to MinIO (http://localhost:30300).

    "},{"location":"local-testing/#delete-the-cluster","title":"Delete the cluster","text":"

    Once you have finished testing the platform, you can remove the local kind cluster by executing:

    kind delete cluster\n

    Remember that if you have more than one cluster created, it may be required to set the --name flag to specify the name of the cluster to be deleted.
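
    For example (the cluster name below is a placeholder):

    kind get clusters\nkind delete cluster --name <CLUSTER_NAME>\n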

    "},{"location":"local-testing/#using-oscar-cli","title":"Using OSCAR-CLI","text":"

    To use OSCAR-CLI in a local deployment, you should set the --disable-ssl flag to disable verification of the self-signed certificates:

    oscar-cli cluster add oscar-cluster https://localhost oscar <OSCAR_PASSWORD> --disable-ssl\n
    "},{"location":"minio-bucket-replication/","title":"MinIO bucket replication","text":"

    In scenarios where you have two linked OSCAR clusters as part of the same workflow defined in FDL, temporary network disconnections cause the data generated on the first cluster during the disconnection to be lost as well.

    To resolve this scenario we propose the use of replicated buckets on MinIO. With this approach, you can have two buckets synchronized on different OSCAR clusters so that, if the connection is lost, they will be re-synchronized when the connection is restored.

    An example of this scenario is shown on the following diagram, where there are two MinIO instances (each one on a different OSCAR cluster), and the output of the execution of service_x on the source serves as input for the service_y on the remote cluster.

    Here is the data flow between the buckets in more detail:

    MinIO instance source

    MinIO instance remote

    "},{"location":"minio-bucket-replication/#considerations","title":"Considerations","text":"

    When you create the service on the remote OSCAR cluster, the intermediate bucket, which is both the replica and the input of the OSCAR service, will have the webhook event for PUT actions enabled so it can trigger the OSCAR service.

    Because, as explained below in Event handling on replication events, there are some specific events for replicated buckets, it is important to delete this event webhook to avoid getting both events every time.

    mc event remove originminio/intermediate arn:aws:sqs::intermediate:webhook --event put\n
    "},{"location":"minio-bucket-replication/#helm-installation","title":"Helm installation","text":"

    To be able to use replication, each MinIO instance deployed with Helm has to be configured in distributed mode. This is done by adding the parameters mode=distributed,replicas=NUM_REPLICAS.

    Here is an example of a local MinIO replicated deployment with Helm:

    helm install minio minio/minio --namespace minio --set rootUser=minio,rootPassword=minio123,service.type=NodePort,service.nodePort=30300,consoleService.type=NodePort,consoleService.nodePort=30301,mode=distributed,replicas=2,resources.requests.memory=512Mi,environment.MINIO_BROWSER_REDIRECT_URL=http://localhost:30301 --create-namespace\n
    "},{"location":"minio-bucket-replication/#minio-setup","title":"MinIO setup","text":"

    To use the replication service, it is necessary to manually set up both the requirements and the replication, either via the command line or via the MinIO console. We created a test environment with replication via the command line as follows.

    First, we define our MinIO instances (originminio and remoteminio) on the MinIO client.

    mc alias set originminio https://localminio minioadminuser minioadminpassword\n\nmc alias set remoteminio https://remoteminio minioadminuser minioadminpassword\n

    A prerequisite for replication is to enable versioning on the buckets that will serve as origin and replica. When we create a service through OSCAR and the MinIO buckets are created, versioning is not enabled by default, so we have to enable it manually.

    mc version enable originminio/intermediate\n\nmc version enable remoteminio/intermediate\n

    Then, you can create the replication remote target:

    mc admin bucket remote add originminio/intermediate \\\n  https://RemoteUser:Password@HOSTNAME/intermediate \\\n  --service \"replication\"\n

    and add the bucket replication rule so the actions on the origin bucket get synchronized on the replica.

    mc replicate add originminio/intermediate \\\n   --remote-bucket 'arn:minio:replication::<UUID>:intermediate' \\\n   --replicate \"delete,delete-marker,existing-objects\"\n
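
    The ARN of the remote target, including the <UUID> used above, can be listed as follows (a sketch, assuming the aliases defined previously):

    mc admin bucket remote ls originminio/intermediate --service \"replication\"\n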
    "},{"location":"minio-bucket-replication/#event-handling-on-replication-events","title":"Event handling on replication events","text":"

    Once you have replica instances, you can add a specific event webhook for the replica-related events.

    mc event add originminio/intermediate arn:minio:sqs::intermediate:webhook --event replica\n

    The replication events sometimes arrive duplicated. Although this is not yet implemented, a solution to the duplicated events would be to filter them by the userMetadata, which is marked as \"PENDING\" on the events to be discarded.

      \"userMetadata\": {\n    \"X-Amz-Replication-Status\": \"PENDING\"\n  }\n

    MinIO documentation used

    "},{"location":"multitenancy/","title":"Multitenancy support in OSCAR","text":"

    In the context of OSCAR, multi-tenancy support refers to the platform's ability to enable multiple users or organizations (tenants) to deploy and run their applications or functions on the same underlying infrastructure. Support for multitenancy in OSCAR has been available since version v3.0.0. To use this functionality, there are some requisites that the cluster and the users have to fulfill:

    functions:\n  oscar:\n  - oscar-cluster:\n      name: grayify_multitenant\n      memory: 1Gi\n      cpu: '0.6'\n      image: ghcr.io/grycap/imagemagick\n      script: script.sh\n      vo: \"vo.example.eu\" # Needed to create services on OIDC enabled clusters\n      allowed_users: \n      - \"62bb11b40398f73778b66f344d282242debb8ee3ebb106717a123ca213162926@egi.eu\"\n      - \"5e14d33ac4abc96272cc163da6a200c2e18591bfb3b0f32a4c9c867f5e938463@egi.eu\"\n      input:\n      - storage_provider: minio.default\n        path: grayify_multitenant/input\n      output:\n      - storage_provider: minio.default\n        path: grayify_multitenant/output\n

    NOTE: A user can obtain their EGI User Id by logging into https://aai.egi.eu/ (for the production instance of EGI Check-In) or https://aai-demo.egi.eu (for the demo instance of EGI Check-In).

    Since OSCAR uses MinIO as the main storage provider, MinIO users are created on the fly for each EGI UID so that users only have access to their designated service buckets. Consequently, each user accessing the cluster will have a MinIO user with their UID as AccessKey and an autogenerated SecretKey.

    "},{"location":"oscar-cli/","title":"OSCAR CLI","text":"

    OSCAR CLI provides a command line interface to interact with OSCAR. It supports cluster registration, service management, workflow definition from FDL files and the ability to manage files from OSCAR's compatible storage providers (MinIO, AWS S3 and Onedata). The folder example-workflow contains all the necessary files to create a simple workflow to test the tool.

    "},{"location":"oscar-cli/#download","title":"Download","text":""},{"location":"oscar-cli/#releases","title":"Releases","text":"

    The easiest way to download OSCAR-CLI is through the GitHub releases page. There are binaries for multiple platforms and operating systems. If you need a binary for another platform, please open an issue.

    "},{"location":"oscar-cli/#install-from-source","title":"Install from source","text":"

    If you have the Go programming language installed and configured, you can get it directly from the source by executing:

    go install github.com/grycap/oscar-cli@latest\n
    "},{"location":"oscar-cli/#oidc-openid-connect","title":"OIDC (OpenID Connect)","text":"

    If your cluster has OIDC available, follow these steps to use oscar-cli to interact with it using OpenID Connect.

    oscar-cli cluster add IDENTIFIER ENDPOINT --oidc-account-name SHORTNAME\n
    "},{"location":"oscar-cli/#available-commands","title":"Available commands","text":""},{"location":"oscar-cli/#apply","title":"apply","text":"

    Apply an FDL file to create or edit services in clusters.

    Usage:\n  oscar-cli apply FDL_FILE [flags]\n\nAliases:\n  apply, a\n\nFlags:\n      --config string   set the location of the config file (YAML or JSON)\n  -h, --help            help for apply\n
    "},{"location":"oscar-cli/#cluster","title":"cluster","text":"

    Manages the configuration of clusters.

    "},{"location":"oscar-cli/#subcommands","title":"Subcommands","text":""},{"location":"oscar-cli/#add","title":"add","text":"

    Add a new existing cluster to oscar-cli.

    Usage:\n  oscar-cli cluster add IDENTIFIER ENDPOINT {USERNAME {PASSWORD | \\\n  --password-stdin} | --oidc-account-name ACCOUNT} [flags]\n\nAliases:\n  add, a\n\nFlags:\n      --disable-ssl               disable verification of ssl certificates for the\n                                  added cluster\n  -h, --help                      help for add\n  -o, --oidc-account-name string  OIDC account name to authenticate using\n                                  oidc-agent. Note that oidc-agent must be\n                                  started and properly configured\n                                  (See:https://indigo-dc.gitbook.io/oidc-agent/)\n      --password-stdin            take the password from stdin\n\nGlobal Flags:\n      --config string   set the location of the config file (YAML or JSON)\n
    "},{"location":"oscar-cli/#default","title":"default","text":"

    Show or set the default cluster.

    Usage:\n  oscar-cli cluster default [flags]\n\nAliases:\n  default, d\n\nFlags:\n  -h, --help         help for default\n  -s, --set string   set a default cluster by passing its IDENTIFIER\n\nGlobal Flags:\n      --config string   set the location of the config file (YAML or JSON)\n
    "},{"location":"oscar-cli/#info","title":"info","text":"

    Show information of an OSCAR cluster.

    Usage:\n  oscar-cli cluster info [flags]\n\nAliases:\n  info, i\n\nFlags:\n  -c, --cluster string   set the cluster\n  -h, --help             help for info\n\nGlobal Flags:\n      --config string   set the location of the config file (YAML or JSON)\n
    "},{"location":"oscar-cli/#list","title":"list","text":"

    List the configured OSCAR clusters.

    Usage:\n  oscar-cli cluster list [flags]\n\nAliases:\n  list, ls\n\nFlags:\n  -h, --help   help for list\n\nGlobal Flags:\n      --config string   set the location of the config file (YAML or JSON)\n
    "},{"location":"oscar-cli/#remove","title":"remove","text":"

    Remove a cluster from the configuration file.

    Usage:\n  oscar-cli cluster remove IDENTIFIER [flags]\n\nAliases:\n  remove, rm\n\nFlags:\n  -h, --help   help for remove\n\nGlobal Flags:\n      --config string   set the location of the config file (YAML or JSON)\n
    "},{"location":"oscar-cli/#service","title":"service","text":"

    Manages the services within a cluster.

    "},{"location":"oscar-cli/#subcommands-of-services","title":"Subcommands of services","text":""},{"location":"oscar-cli/#get","title":"get","text":"

    Get the definition of a service.

    Usage:\n  oscar-cli service get SERVICE_NAME [flags]\n\nAliases:\n  get, g\n\nFlags:\n  -c, --cluster string   set the cluster\n  -h, --help             help for get\n\nGlobal Flags:\n      --config string   set the location of the config file (YAML or JSON)\n
    "},{"location":"oscar-cli/#list-services","title":"list services","text":"

    List the available services in a cluster.

    Usage:\n  oscar-cli service list [flags]\n\nAliases:\n  list, ls\n\nFlags:\n  -c, --cluster string   set the cluster\n  -h, --help             help for list\n\nGlobal Flags:\n      --config string   set the location of the config file (YAML or JSON)\n
    "},{"location":"oscar-cli/#remove-services","title":"remove services","text":"

    Remove a service from the cluster.

    Usage:\n  oscar-cli service remove SERVICE_NAME... [flags]\n\nAliases:\n  remove, rm\n\nFlags:\n  -c, --cluster string   set the cluster\n  -h, --help             help for remove\n\nGlobal Flags:\n      --config string   set the location of the config file (YAML or JSON)\n
    "},{"location":"oscar-cli/#run","title":"run","text":"

    Invoke a service synchronously (a Serverless backend in the cluster is required).

    Usage:\n  oscar-cli service run SERVICE_NAME {--input | --text-input} [flags]\n\nAliases:\n  run, invoke, r\n\nFlags:\n  -c, --cluster string      set the cluster\n  -h, --help                help for run\n  -i, --input string        input file for the request\n  -o, --output string       file path to store the output\n  -t, --text-input string   text input string for the request\n\nGlobal Flags:\n      --config string   set the location of the config file (YAML or JSON)\n
    "},{"location":"oscar-cli/#logs-list","title":"logs list","text":"

    List the logs from a service.

    Usage:\n  oscar-cli service logs list SERVICE_NAME [flags]\n\nAliases:\n  list, ls\n\nFlags:\n  -h, --help             help for list\n  -s, --status strings   filter by status (Pending, Running, Succeeded or\n                         Failed), multiple values can be specified by a\n                         comma-separated string\n\nGlobal Flags:\n  -c, --cluster string   set the cluster\n      --config string    set the location of the config file (YAML or JSON)\n
    "},{"location":"oscar-cli/#logs-get","title":"logs get","text":"

    Get the logs from a service's job.

    Usage:\n  oscar-cli service logs get SERVICE_NAME JOB_NAME [flags]\n\nAliases:\n  get, g\n\nFlags:\n  -h, --help              help for get\n  -t, --show-timestamps   show timestamps in the logs\n\nGlobal Flags:\n  -c, --cluster string   set the cluster\n      --config string    set the location of the config file (YAML or JSON)\n
    "},{"location":"oscar-cli/#logs-remove","title":"logs remove","text":"

    Remove a service's job along with its logs.

    Usage:\n  oscar-cli service logs remove SERVICE_NAME \\\n   {JOB_NAME... | --succeeded | --all} [flags]\n\nAliases:\n  remove, rm\n\nFlags:\n  -a, --all         remove all logs from the service\n  -h, --help        help for remove\n  -s, --succeeded   remove succeeded logs from the service\n\nGlobal Flags:\n  -c, --cluster string   set the cluster\n      --config string    set the location of the config file (YAML or JSON)\n

    Note: The following subcommands will not work with MinIO if you use a local deployment due to DNS resolution issues, so if you want to put/get/list files from your buckets via the command line, you can use the MinIO client. Once you have the client installed, you can define the cluster with the mc alias command as follows: mc alias set myminio https://localhost:30300 minioadminuser minioadminpassword So, instead of the next subcommands, you would use: - mc cp to put/get files from a bucket. - mc ls to list files from a bucket.
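
    For instance, assuming the myminio alias defined above and a hypothetical bucket named my-bucket:

    # Upload a file to the input folder of the bucket\nmc cp ./input.jpg myminio/my-bucket/input/\n\n# List and download the generated results\nmc ls myminio/my-bucket/output/\nmc cp myminio/my-bucket/output/result.jpg .\n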

    "},{"location":"oscar-cli/#get-file","title":"get-file","text":"

    Get a file from a service's storage provider.

    The STORAGE_PROVIDER argument follows the format STORAGE_PROVIDER_TYPE.STORAGE_PROVIDER_NAME, where STORAGE_PROVIDER_TYPE is one of the three supported storage providers (MinIO, S3 or Onedata) and STORAGE_PROVIDER_NAME is the identifier for the provider set in the service's definition.

    Usage:\n  oscar-cli service get-file SERVICE_NAME STORAGE_PROVIDER REMOTE_FILE \\\n   LOCAL_FILE [flags]\n\nAliases:\n  get-file, gf\n\nFlags:\n  -c, --cluster string   set the cluster\n  -h, --help             help for get-file\n\nGlobal Flags:\n      --config string   set the location of the config file (YAML or JSON)\n
    "},{"location":"oscar-cli/#put-file","title":"put-file","text":"

    Put a file in a service's storage provider.

    The STORAGE_PROVIDER argument follows the format STORAGE_PROVIDER_TYPE.STORAGE_PROVIDER_NAME, where STORAGE_PROVIDER_TYPE is one of the three supported storage providers (MinIO, S3 or Onedata) and STORAGE_PROVIDER_NAME is the identifier for the provider set in the service's definition.

    NOTE: This command cannot be used in a local testing deployment.

    Usage:\n  oscar-cli service put-file SERVICE_NAME STORAGE_PROVIDER LOCAL_FILE \\\n   REMOTE_FILE [flags]\n\nAliases:\n  put-file, pf\n\nFlags:\n  -c, --cluster string   set the cluster\n  -h, --help             help for put-file\n\nGlobal Flags:\n      --config string   set the location of the config file (YAML or JSON)\n
    "},{"location":"oscar-cli/#list-files","title":"list-files","text":"

    List files from a service's storage provider path.

    The STORAGE_PROVIDER argument follows the format STORAGE_PROVIDER_TYPE.STORAGE_PROVIDER_NAME, where STORAGE_PROVIDER_TYPE is one of the three supported storage providers (MinIO, S3 or Onedata) and STORAGE_PROVIDER_NAME is the identifier for the provider set in the service's definition.

    Usage:\n  oscar-cli service list-files SERVICE_NAME STORAGE_PROVIDER REMOTE_PATH [flags]\n\nAliases:\n  list-files, list-file, lsf\n\nFlags:\n  -c, --cluster string   set the cluster\n  -h, --help             help for list-files\n\nGlobal Flags:\n      --config string   set the location of the config file (YAML or JSON)\n
    "},{"location":"oscar-cli/#version","title":"version","text":"

    Print the version.

    Usage:\n  oscar-cli version [flags]\n\nAliases:\n  version, v\n\nFlags:\n  -h, --help   help for version\n
    "},{"location":"oscar-cli/#help","title":"help","text":"

    Help provides help for any command in the application. Simply type oscar-cli help [path to command] for full details.

    Usage:\n  oscar-cli help [command] [flags]\n\nFlags:\n  -h, --help   help for help\n
    "},{"location":"oscar-service/","title":"OSCAR Service","text":"

    OSCAR allows the creation of serverless file-processing services based on container images. These services require a user-defined script with the commands responsible for the processing. The platform automatically mounts a volume on the containers with the FaaS Supervisor component, which is in charge of:

    "},{"location":"oscar-service/#inputoutput","title":"Input/Output","text":"

    FaaS Supervisor, the component in charge of managing the input and output of services, allows a JSON or Base64-encoded body in service requests. The body of these requests will be automatically decoded into the invocation's input file, available from the script through the $INPUT_FILE_PATH environment variable.

    The output of synchronous invocations will depend on the application itself:

    1. If the script generates a file inside the output dir available through the $TMP_OUTPUT_DIR environment variable, the result will be the file encoded in base64.
    2. If the script generates more than one file inside $TMP_OUTPUT_DIR, the result will be a zip archive containing all files encoded in base64.
    3. If there are no files in $TMP_OUTPUT_DIR, FaaS Supervisor will return its logs, including the stdout of the user script run. To avoid FaaS Supervisor's logs, you must set the service's log_level to CRITICAL.

    This way users can adapt OSCAR's services to their own needs.
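
    As an illustration, here is a minimal sketch of a service script honoring this contract, which simply copies the input file to the output directory:

    #!/bin/bash\n\n# The invocation's input file is available at $INPUT_FILE_PATH\nFILE_NAME=`basename \"$INPUT_FILE_PATH\"`\n\n# Any file written to $TMP_OUTPUT_DIR is returned as the result\ncp \"$INPUT_FILE_PATH\" \"$TMP_OUTPUT_DIR/$FILE_NAME\"\n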

    You can follow one of the examples in order to test the OSCAR framework for specific applications. We recommend you to start with the plant classification example.

    "},{"location":"training/","title":"Presentations and Webinars","text":""},{"location":"training/#deploy-your-ai-based-service-for-inference-using-oscar","title":"Deploy your AI-based service for inference using OSCAR","text":"

    Delivered for the AI4EOSC and iMagine projects in March 2024.

    "},{"location":"usage-ui/","title":"OSCAR UI","text":"

    \u2757\ufe0f

    For simple OSCAR services you may use the UI, but its features may not be on par with the latest changes in the FDL. Therefore, it is recommended to use OSCAR CLI to deploy an OSCAR service.

    This section details the usage of the OSCAR UI with the plant classification example, from the OSCAR examples.

    "},{"location":"usage-ui/#login","title":"Login","text":"

    OSCAR UI is exposed via a Kubernetes ingress and it is accessible via the Kubernetes master node IP.

    After a correct login, you should see the main view:

    "},{"location":"usage-ui/#deploying-services","title":"Deploying services","text":"

    In order to create a new service, you must click on the \"DEPLOY NEW SERVICE\" button and follow the wizard. For an OSCAR Service, a script must be provided for the processing of files. This script must use the environment variables INPUT_FILE_PATH and TMP_OUTPUT_DIR to refer to the input file and the folder where to save the results, respectively:

    #!/bin/bash\n\necho \"SCRIPT: Invoked classify_image.py. File available in $INPUT_FILE_PATH\"\nFILE_NAME=`basename \"$INPUT_FILE_PATH\"`\nOUTPUT_FILE=\"$TMP_OUTPUT_DIR/$FILE_NAME\"\npython2 /opt/plant-classification-theano/classify_image.py \\\n \"$INPUT_FILE_PATH\" -o \"$OUTPUT_FILE\"\n

    You must fill in the fields indicating the container image to use, the name of the service and the script file. In addition, you can add environment variables, specify the resources (RAM and CPUs) and choose the log level of the service.

    Note that specifying a tag in the container image used can be convenient to avoid problems with quotas in certain container registries, such as Docker Hub. This is because Kubernetes defaults the imagePullPolicy of pods to Always when no tag or the latest tag is set, which checks the version of the image in the registry every time a job is launched.

    Next, the credentials of the storage providers to be used must be introduced. As the platform already has a MinIO deployment ready to operate, it is not necessary to enter its credentials to use it.

    Multiple MinIO, Onedata and Amazon S3 storage providers can be used. Remember to click the \"ADD\" button after completing each one.

    Then, click the \"NEXT\" button to go to the last section of the wizard.

    In this section, you must first choose the paths of the storage provider to be used as source of events, i.e. the input bucket and/or folder that will trigger the service.

    Only the minio.default provider can be used as input storage provider.

    After filling in each path, remember to click on the \"ADD INPUT\" button.

    Finally, the same must be done to indicate the output paths to be used in the desired storage providers. You can also indicate suffixes and/or prefixes to filter the files uploaded to each path by name.

    The resulting files can be stored in several storage providers, like in the following example, where they are stored in the MinIO server of the platform and in a Onedata space provided by the user.

    After clicking the \"SUBMIT\" button the new service will appear in the main view after a few seconds.

    "},{"location":"usage-ui/#triggering-the-service","title":"Triggering the service","text":""},{"location":"usage-ui/#http-endpoints","title":"HTTP endpoints","text":"

    OSCAR services can be invoked through auto-generated HTTP endpoints. Requests to these endpoints can be made in two ways:

    The content of the HTTP request body will be stored as a file that will be available via the INPUT_FILE_PATH environment variable to process it.

    A detailed specification of the OSCAR's API and its different paths can be found here.

    "},{"location":"usage-ui/#minio-storage-tab","title":"MinIO Storage Tab","text":"

    The MinIO Storage tab is intended to manage buckets without using the MinIO UI, simplifying the process. From the MinIO Storage tab, buckets, and the folders inside them, can be created or removed. Furthermore, files can be uploaded to the buckets and downloaded from them. Each time a service is created or edited, the buckets that do not exist yet will be created.

    "},{"location":"usage-ui/#uploading-files","title":"Uploading files","text":"

    Once a service has been created, it can be invoked by uploading files to its input bucket/folder. This can be done through the MinIO web interface (accessible from the Kubernetes frontend IP, on port 30300) or from the \"Minio Storage\" section in the side menu of the OSCAR web interface. Expanding that menu will list the created buckets and, by clicking on their name, you will be able to see their content, and upload and download files.

    To upload files, first click on the \"SELECT FILES\" button and choose the files you want to upload from your computer.

    Once you have chosen the files to upload, simply click on the \"UPLOAD\" button and the file will be uploaded, raising an event that will trigger the service.

    Note that the web interface includes a preview button for some file formats, such as images.

    "},{"location":"usage-ui/#service-status-and-logs","title":"Service status and logs","text":"

    When files are being processed by a service, it is important to know their status, as well as to observe the execution logs for testing. For this purpose, OSCAR includes a log view, accessible by clicking on the \"LOGS\" button in a service from the main view.

    In this view you can see all the jobs created for a service, as well as their status (\"Pending\", \"Running\", \"Succeeded\" or \"Failed\") and their creation, start and finish time.

    To view the logs generated by a job, simply click on the drop-down button located on the right.

    The view also features options to refresh the status of one or all jobs, as well as to delete them.

    "},{"location":"usage-ui/#downloading-files-from-minio","title":"Downloading files from MinIO","text":"

    Downloading files from the platform's MinIO storage provider can also be done using the OSCAR web interface. To do it, simply select one or more files and click on the button \"DOWNLOAD OBJECT\" (or \"DOWNLOAD ALL AS A ZIP\" if several files have been selected).

    In the following picture you can see the preview of the resulting file after the execution triggered in the previous step.

    "},{"location":"usage-ui/#deleting-services","title":"Deleting services","text":"

    Services can be deleted by clicking on the trash can icon from the main view.

    Once you have accepted the message shown in the image above, the service will be deleted after a few seconds.

    "}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Introduction","text":"

    OSCAR is an open-source platform to support the event-driven serverless computing model for data-processing applications. It can be automatically deployed on multi-Clouds, and even on low-powered devices, to create highly-parallel event-driven data-processing serverless applications along the computing continuum. These applications execute on customized runtime environments provided by Docker containers that run on elastic Kubernetes clusters. It is also integrated with the SCAR framework, which supports a High Throughput Computing Programming Model to create highly-parallel event-driven data-processing serverless applications that execute on customized runtime environments provided by Docker containers running on AWS Lambda and AWS Batch.

    "},{"location":"#concepts","title":"Concepts","text":""},{"location":"#rationale","title":"Rationale","text":"

    Users create OSCAR services to:

An admin user can deploy an OSCAR cluster on a Cloud platform so that other users belonging to a Virtual Organization (VO) can create OSCAR services. A VO is a group of people (e.g. scientists, researchers) with common interests and requirements, who need to work collaboratively and/or share resources (e.g. data, software, expertise, CPU, storage space) regardless of geographical location. OSCAR supports the VOs defined in EGI, which are listed in the 'Operations Portal'. EGI is Europe's largest federation of computing and storage resource providers united by a mission of delivering advanced computing and data analytics services for research and innovation.

    "},{"location":"#architecture-components","title":"Architecture & Components","text":"

    OSCAR runs on an elastic Kubernetes cluster that is deployed using:

    The following components are deployed inside the Kubernetes cluster in order to support the OSCAR platform:

    As external storage providers, the following services can be used:

    Note: All of the mentioned storage providers can be used as output, but only MinIO can be used as input.

    An OSCAR cluster can be easily deployed via the IM Dashboard on any major public and on-premises Cloud provider, including the EGI Federated Cloud.

    A summary of the components used:

    An OSCAR cluster can be accessed via its REST API, the web-based OSCAR UI and the command-line interface provided by OSCAR CLI.

    "},{"location":"about/","title":"Acknowledgements","text":"

    OSCAR has been developed by the Grid and High Performance Computing Group (GRyCAP) at the Instituto de Instrumentaci\u00f3n para Imagen Molecular (I3M) from the Universitat Polit\u00e8cnica de Val\u00e8ncia (UPV).

    OSCAR has been supported by the following projects:

    "},{"location":"about/#contact","title":"Contact","text":"

    If you have any trouble please open an issue or email us.

    "},{"location":"additional-config/","title":"Additional configuration","text":"

To give the administrator a more personalized cluster configuration, the OSCAR manager searches for a ConfigMap on the cluster with the additional properties to apply. Since this is still a work in progress, the only property that can currently be configured is the origin of the container images. As seen in the following ConfigMap definition, you can set a list of \"prefixes\" that you consider secure repositories, so images that do not come from one of them are restricted.

    apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: additional-oscar-config\n  namespace: oscar-svc\ndata:\n  config.yaml: |\n    images:\n      allowed_prefixes:\n      - ghcr.io\n

    Additionally, this property can be added when creating an OSCAR cluster through the IM, which will automatically create the ConfigMap.
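As a minimal sketch, assuming the definition above is saved in a file named additional-oscar-config.yaml (a hypothetical filename), the ConfigMap could be applied and verified with kubectl:

kubectl apply -f additional-oscar-config.yaml\n# Check that the ConfigMap is present in the oscar-svc namespace\nkubectl get configmap additional-oscar-config -n oscar-svc\n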

    "},{"location":"api/","title":"OSCAR API","text":"

OSCAR exposes a secure REST API available at the Kubernetes master's node IP through an Ingress Controller. This API is described following the OpenAPI Specification and is available below.

    "},{"location":"deploy-ec3/","title":"Deployment with EC3","text":"

    \u2757\ufe0f The deployment of OSCAR with EC3 is deprecated. Please, consider using the IM Dashboard.

    In order to deploy an elastic Kubernetes cluster with the OSCAR platform, it is preferable to use the IM Dashboard. Alternatively, you can also use EC3, a tool that deploys elastic virtual clusters. EC3 uses the Infrastructure Manager (IM) to deploy such clusters on multiple Cloud back-ends. The installation details can be found here, though this section includes the relevant information to get you started.

    "},{"location":"deploy-ec3/#prepare-ec3","title":"Prepare EC3","text":"

    Clone the EC3 repository:

    git clone https://github.com/grycap/ec3\n

    Download the OSCAR template into the ec3/templates folder:

    cd ec3\nwget -P templates https://raw.githubusercontent.com/grycap/oscar/master/templates/oscar.radl\n

    Create an auth.txt authorization file with valid credentials to access your Cloud provider. As an example, to deploy on an OpenNebula-based Cloud site the contents of the file would be:

type = OpenNebula; host = opennebula-host:2633;\nusername = your-user;\npassword = your-password\n

    Modify the corresponding RADL template in order to determine the appropriate configuration for your deployment:

    As an example, to deploy in OpenNebula, one would modify the ubuntu-opennebula.radl (or create a new one).

    "},{"location":"deploy-ec3/#deploy-the-cluster","title":"Deploy the cluster","text":"

    To deploy the cluster, execute:

    ./ec3 launch oscar-cluster oscar ubuntu-opennebula -a auth.txt\n

This will take several minutes until the Kubernetes cluster and all the required services have been deployed. You will obtain the IP of the front-end of the cluster and a confirmation message that the front-end is ready. Notice that it will still take a few minutes before the services in the Kubernetes cluster are up & running.

    "},{"location":"deploy-ec3/#check-the-cluster-state","title":"Check the cluster state","text":"

    The cluster will be fully configured when all the Kubernetes pods are in the Running state.

     ./ec3 ssh oscar-cluster\n sudo kubectl get pods --all-namespaces\n

    Notice that initially only the front-end node of the cluster is deployed. As soon as the OSCAR framework is deployed, together with its services, the CLUES elasticity manager powers on a new (working) node on which these services will be run.

    You can see the status of the provisioned node(s) by issuing:

     clues status\n

    which obtains:

    | node            |state| enabled |time stable|(cpu,mem) used |(cpu,mem) total|\n|-----------------|-----|---------|-----------|---------------|---------------|\n| wn1.localdomain | used| enabled | 00h00'49\" | 0.0,825229312 | 1,1992404992  |\n| wn2.localdomain | off | enabled | 00h06'43\" | 0,0           | 1,1073741824  |\n| wn3.localdomain | off | enabled | 00h06'43\" | 0,0           | 1,1073741824  |\n| wn4.localdomain | off | enabled | 00h06'43\" | 0,0           | 1,1073741824  |\n| wn5.localdomain | off | enabled | 00h06'43\" | 0,0           | 1,1073741824  |\n

The working nodes transition from the off state to powon (powering on) and, finally, to the used state.

    "},{"location":"deploy-ec3/#default-service-endpoints","title":"Default Service Endpoints","text":"

    Once the OSCAR framework is running on the Kubernetes cluster, the endpoints described in the following table should be available. Most of the passwords/tokens are dynamically generated at deployment time and made available in the /var/tmp folder of the front-end node of the cluster.

| Service         | Endpoint                    | Default User | Password File     |\n|-----------------|-----------------------------|--------------|-------------------|\n| OSCAR           | https://{FRONT_NODE}        | oscar        | oscar_password    |\n| MinIO           | https://{FRONT_NODE}:30300  | minio        | minio_secret_key  |\n| OpenFaaS        | http://{FRONT_NODE}:31112   | admin        | gw_password       |\n| Kubernetes API  | https://{FRONT_NODE}:6443   |              | tokenpass         |\n| Kube. Dashboard | https://{FRONT_NODE}:30443  |              | dashboard_token   |

    Note that {FRONT_NODE} refers to the public IP of the front-end of the Kubernetes cluster.

    For example, to get the OSCAR password, you can execute:

    ./ec3 ssh oscar-cluster cat /var/tmp/oscar_password\n
    "},{"location":"deploy-helm/","title":"Deployment with Helm","text":"

OSCAR can also be deployed on any existing Kubernetes cluster through its Helm chart. However, to make the platform work properly, the following dependencies must be satisfied.
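As a sketch of the procedure, assuming the GRyCAP Helm charts repository and the default chart values (please refer to the chart's documentation for the exact configurable values and required dependencies), the deployment would look like this:

# Add the GRyCAP repository and deploy the OSCAR chart\nhelm repo add grycap https://grycap.github.io/helm-charts/\nhelm repo update\nhelm install --namespace=oscar-svc --create-namespace oscar grycap/oscar\n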

    "},{"location":"deploy-im-dashboard/","title":"Deployment with IM","text":"

    An OSCAR cluster can be easily deployed on multiple Cloud platforms via the Infrastructure Manager's Dashboard (IM Dashboard). This is a managed service provided by EGI and operated by the GRyCAP research group at the Universitat Polit\u00e8cnica de Val\u00e8ncia to deploy customized virtual infrastructures across many Cloud providers.

    Using the IM Dashboard is the easiest and most convenient approach to deploy an OSCAR cluster. It also automatically allocates a DNS entry and TLS certificates to support HTTPS-based access to the OSCAR cluster and companion services (e.g. MinIO).

This example shows how to deploy an OSCAR cluster on Amazon Web Services (AWS) with two nodes. Thanks to the IM, the very same procedure allows deploying the OSCAR cluster on an on-premises Cloud (such as OpenStack) or any other Cloud provider supported by the IM.

    These are the steps:

    1. Access the IM Dashboard

You will need to authenticate via EGI Check-In, which supports multiple Identity Providers (IdP). There is no need to register and the service is provided free of charge.

    2. Configure the Cloud Credentials

      Once logged in, you need to define the access credentials to the Cloud on which the OSCAR cluster will be deployed. These should be temporary credentials under the principle of least privilege (PoLP).

      In our case, we indicate an identifier for the set of credentials, the Access Key ID and the Secret Access Key for an IAM user that has privileges to deploy Virtual Machines in Amazon EC2. With the default values indicated in this tutorial, you will need privileges to deploy the following instance types: t3a.xlarge for the front-end node and t3a.medium for the working node.

    3. Select the OSCAR template

There are optional features that can be included in the OSCAR cluster to fit particular user needs. We'll skip them.

4. Customize and deploy the OSCAR cluster

      In this panel you can specify the number of Working Nodes (WNs) of the cluster together with the computational requirements for each node. We leave the default values.

      In the following panel, specify the passwords to be employed to access the Kubernetes Web UI (Dashboard), to access OSCAR and to access the MinIO dashboard. These passwords/tokens can also be used for programmatic access to the respective services.

      Now, choose the Cloud provider. The ID specified when creating the Cloud credentials will be shown. You will also need to specify the Amazon Machine Image (AMI) identifier. We chose an AMI based on Ubuntu 20.04 provided by Canonical whose identifier for the us-east-1 region is: ami-09e67e426f25ce0d7

      NOTE: You should obtain the AMI identifier for the latest version of the OS. This way, security patches will be already installed. You can obtain this AMI identifier from the AWS Marketplace or the Amazon EC2 service.
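As an illustrative sketch, the latest AMI can also be looked up with the AWS CLI, assuming it is installed and configured and that 099720109477 is Canonical's AWS account ID:

aws ec2 describe-images --owners 099720109477 --filters \"Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*\" --query 'sort_by(Images, &CreationDate)[-1].ImageId' --region us-east-1 --output text\n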

      Give the infrastructure a name and press \"Submit\".

5. Check the status of the OSCAR cluster deployment

      You will see that the OSCAR cluster is being deployed and the infrastructure reaches the status \"running\". The process will finish when it reaches the state \"configured\".

      If you are interested in understanding what is happening under the hood you can see the logs:

6. Accessing the OSCAR cluster

Once the \"configured\" state is reached, see the \"Outputs\" to obtain the different endpoints:

      The OSCAR UI can be accessed with the username oscar and the password you specified at deployment time.

      The MinIO UI can be accessed with the username minio and the password you specified at deployment time.

      The Kubernetes Dashboard can be accessed with the token you specified at deployment time.

      You can obtain statistics about the Kubernetes cluster:

7. Terminating the OSCAR cluster

      You can terminate the OSCAR cluster from the IM Dashboard:

    "},{"location":"deploy-k3s-ansible/","title":"Deployment on K3s with Ansible","text":"

The folder deploy/ansible contains all the necessary files to deploy a K3s cluster together with the OSCAR platform using Ansible. This way, a lightweight Kubernetes distribution can be used to configure OSCAR on IoT devices located at the Edge, such as Raspberry Pis. Note that this playbook can also be applied to quickly deploy the OSCAR platform on top of any machine or already started cloud instance, since the playbook is compatible with GNU/Linux on ARM64 and AMD64 architectures.

    "},{"location":"deploy-k3s-ansible/#requirements","title":"Requirements","text":"

    In order to use the playbook, you must install the following components:

    "},{"location":"deploy-k3s-ansible/#usage","title":"Usage","text":""},{"location":"deploy-k3s-ansible/#clone-the-folder","title":"Clone the folder","text":"

    First of all, you must clone the OSCAR repo:

    git clone https://github.com/grycap/oscar.git\n

And change into the ansible directory:

    cd oscar/deploy/ansible\n
    "},{"location":"deploy-k3s-ansible/#ssh-configuration","title":"SSH configuration","text":"

As Ansible is an agentless automation tool, you must configure the ~/.ssh/config file to grant access to the hosts to be configured via the SSH protocol. This playbook will use the Host field from the SSH configuration to set the hostnames of the nodes, so please take care to name them properly.

Below you can find an example of a configuration file for four nodes, where front is the only one with a public IP, so it will be used as a proxy for the SSH connection to the working nodes (ProxyJump option) via its internal network.

    Host front\n  HostName <PUBLIC_IP>\n  User ubuntu\n  IdentityFile ~/.ssh/my_private_key\n\nHost wn1\n  HostName <PRIVATE_IP>\n  User ubuntu\n  IdentityFile ~/.ssh/my_private_key\n  ProxyJump front\n\nHost wn2\n  HostName <PRIVATE_IP>\n  User ubuntu\n  IdentityFile ~/.ssh/my_private_key\n  ProxyJump front\n\nHost wn3\n  HostName <PRIVATE_IP>\n  User ubuntu\n  IdentityFile ~/.ssh/my_private_key\n  ProxyJump front\n
    "},{"location":"deploy-k3s-ansible/#configuration-of-the-inventory-file","title":"Configuration of the inventory file","text":"

    Now, you have to edit the hosts file and add the hosts to be configured. Note that only one node must be set in the [front] section, while one or more nodes can be configured as working nodes of the cluster in the [wn] section. For example, for the previous SSH configuration the hosts inventory file should look like this:

    [front]\n; Put here the frontend node as defined in .ssh/config (Host)\nfront\n\n[wn]\n; Put here the working nodes (one per line) as defined in the .ssh/config (Host)\nwn1\nwn2\nwn3\n
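Before running the playbook, you can optionally check that Ansible reaches every node defined in the inventory (a quick connectivity test, not part of the original procedure):

# Every node should reply with \"pong\"\nansible -i hosts all -m ping\n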
    "},{"location":"deploy-k3s-ansible/#setting-up-the-playbook-variables","title":"Setting up the playbook variables","text":"

    You also need to set up some parameters for the configuration of the cluster and OSCAR components, like OSCAR and MinIO credentials and DNS endpoints to configure the Kubernetes Ingress and cert-manager to securely expose the services. To do it, please edit the vars.yaml file and update the variables:

    ---\n# K3s version to be installed\nkube_version: v1.22.3+k3s1\n# Token to login in K3s and the Kubernetes Dashboard\nkube_admin_token: kube-token123\n# Password for OSCAR\noscar_password: oscar123\n# DNS name for the OSCAR Ingress and Kubernetes Dashboard (path \"/dashboard/\")\ndns_host: oscar-cluster.example.com\n# Password for MinIO\nminio_password: minio123\n# DNS name for the MinIO API Ingress\nminio_dns_host: minio.oscar-cluster.example.com\n# DNS name for the MinIO Console Ingress\nminio_dns_host_console: minio-console.oscar-cluster.example.com\n
    "},{"location":"deploy-k3s-ansible/#installation-of-the-required-ansible-roles","title":"Installation of the required ansible roles","text":"

    To install the required roles you only have to run:

    ansible-galaxy install -r install_roles.yaml --force\n

    The --force argument ensures you have the latest version of the roles.

    "},{"location":"deploy-k3s-ansible/#running-the-playbook","title":"Running the playbook","text":"

Finally, the following command will execute the Ansible playbook, configuring the nodes set in the hosts inventory file:

    ansible-playbook -i hosts oscar-k3s.yaml\n
    "},{"location":"devel-docs/","title":"Documentation development","text":"

OSCAR uses MkDocs for the documentation. In particular, Material for MkDocs.

    Install the following dependencies:

    pip install mkdocs mkdocs-material mkdocs-render-swagger-plugin\n

Then, from the main folder oscar, run:

    mkdocs serve\n

The documentation will be available at http://127.0.0.1:8000

    "},{"location":"exposed-services/","title":"Exposed Services","text":"

    OSCAR supports the deployment and elasticity management of long-running stateless services whose internal API or web-based UI must be directly reachable outside the cluster.

    \u2139\ufe0f

    This functionality can be used to support the fast inference of pre-trained AI models that require close to real-time processing with high throughput. In a traditional serverless approach, the AI model weights would be loaded in memory for each service invocation. Exposed services are also helpful when stateless services created out of large containers require too much time to start processing a service invocation. By exposing an OSCAR service, the AI model weights could be loaded just once, and the service would perform the AI model inference for each subsequent request.

    An auto-scaled load-balanced approach for these stateless services is supported. When the average CPU exceeds a certain user-defined threshold, additional service pods are dynamically created (and removed when no longer necessary) within the user-defined boundaries. The user can also define the minimum and maximum replicas of the service to be present on the cluster (see the parameters min_scale and max_scale in ExposeSettings).

    "},{"location":"exposed-services/#prerequisites-in-the-container-image","title":"Prerequisites in the container image","text":"

    The container image needs to have an HTTP server that binds to a specific port (see the parameter port in ExposeSettings). If developing a service from scratch in Python, you can use FastAPI or Flask to create an API. In Go, you can use Gin. For Ruby, you can use Sinatra.

    \u26a0\ufe0f

If the service exposes a web-based UI, you must ensure that the content is not served only from the root document ('/'), since the service will be exposed at a certain subpath.

    "},{"location":"exposed-services/#how-to-define-an-exposed-oscar-service","title":"How to define an exposed OSCAR service","text":"

    The minimum definition to expose an OSCAR service is to indicate in the corresponding FDL the port inside the container where the service will be listening.

    expose:\n  api_port: 5000\n

    Once the service is deployed, you can check if it was created correctly by making an HTTP request to the exposed endpoint:

    https://{oscar_endpoint}/system/services/{service_name}/exposed/{path_resource} \n

    Notice that if you get a 502 Bad Gateway error, it is most likely because the specified port on the service does not match the API port.
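For instance, a sketch of such a check with cURL, assuming a service named my-service deployed with set_auth: true (hypothetical names), where the service name and its access token act as the basic-auth credentials:

# A 200 response means the exposed endpoint is reachable; a 502 suggests the api_port does not match\ncurl -u my-service:<SERVICE_TOKEN> https://{oscar_endpoint}/system/services/my-service/exposed/\n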

    Additional options can be defined in the \"expose\" section of the FDL (some previously mentioned), such as:

Below is an example of the expose section of the FDL, showing that there will be between 5 and 15 active pods and that the service will expose an API on port 4578. The number of active pods will grow when the CPU use exceeds 50% and will decrease when the CPU use falls below that threshold.

    expose:\n  min_scale: 5 \n  max_scale: 15 \n  api_port: 4578  \n  cpu_threshold: 50\n  set_auth: true\n  rewrite_target: true\n  default_command: true\n

    In addition, you can see below a full example of a recipe to expose a service from the AI4EOSC Marketplace:

    functions:\n  oscar:\n  - oscar-cluster:\n     name: body-pose-detection\n     memory: 2Gi\n     cpu: '1.0'\n     image: deephdc/deep-oc-posenet-tf\n     script: script.sh\n     environment:\n        Variables:\n          INPUT_TYPE: json  \n     expose:\n      min_scale: 1 \n      max_scale: 10 \n      api_port: 5000  \n      cpu_threshold: 20 \n      set_auth: true\n     input:\n     - storage_provider: minio.default\n       path: body-pose-detection/input\n     output:\n     - storage_provider: minio.default\n       path: body-pose-detection/output\n

So, to invoke the API of this example, the request will need the following information:

    1. OSCAR endpoint. localhost or https://{OSCAR_endpoint}
    2. Path resource. In this case, it is v2/models/posenetclas/predict/. Please do not forget the final /
3. Use -k or --insecure if the cluster does not have a valid SSL certificate.
    4. Input image with the name people.jpeg
5. Output. It will create a .zip file that contains the outputs.

    and will end up looking like this:

    curl {-k} -X POST https://{oscar_endpoint}/system/services/body-pose-detection-async/exposed/{path resource} -H  \"accept: */*\" -H  \"Content-Type: multipart/form-data\" -F \"data=@{input image};type=image/png\" --output {output file}\n

Finally, here is the complete command, which works in a local testing deployment with an image called people.jpeg as input and output_posenet.zip as output.

curl -X POST https://localhost/system/services/body-pose-detection-async/exposed/v2/models/posenetclas/predict/ -H  \"accept: */*\" -H  \"Content-Type: multipart/form-data\" -F \"data=@people.jpeg;type=image/png\" --output output_posenet.zip\n

    Another FDL example shows how to expose a simple NGINX server as an OSCAR service:

    functions:\n  oscar:\n  - oscar-cluster:\n     name: nginx\n     memory: 2Gi\n     cpu: '1.0'\n     image: nginx\n     script: script.sh\n     expose:\n      min_scale: 2 \n      max_scale: 10 \n      api_port: 80  \n      cpu_threshold: 50 \n

In case you use the NGINX example above in your local OSCAR cluster, you will see the NGINX welcome page at http://localhost/system/services/nginx/exposed/. The two active pods of the deployment will be shown with the command kubectl get pods -n oscar-svc:

    oscar-svc            nginx-dlp-6b9ddddbd7-cm6c9                         1/1     Running     0             2m1s\noscar-svc            nginx-dlp-6b9ddddbd7-f4ml6                         1/1     Running     0             2m1s\n
    "},{"location":"faq/","title":"Frequently Asked Questions (FAQ)","text":""},{"location":"faq/#troubleshooting","title":"Troubleshooting","text":"

You may have a server running on port 80, such as Apache, while the deployment is trying to use that port for the OSCAR UI. Restarting it would solve this problem.

When using oscar-cli, you can get this error if you try to run a service that is not present on the cluster set as default. You can check whether you are using the correct default cluster with the following command:

    oscar-cli cluster default

    and set a new default cluster with the following command:

    oscar-cli cluster default -s CLUSTER_ID

If the use of private images is required, you should create a secret with the Docker login configuration, with a structure like this:

    apiVersion: v1\nkind: Secret\nmetadata:\n  name: dockersecret\n  namespace: oscar-svc\ndata:\n  .dockerconfigjson: {base64 .docker/config.json}\ntype: kubernetes.io/dockerconfigjson\n

Apply the file through kubectl into the Kubernetes OSCAR cluster to create the secret. To use it in OSCAR services, you must add the secret name (dockersecret in this example) in the definition of the service, using the API or an FDL, under the image_pull_secrets parameter, or through the \"Docker secret\" field in OSCAR-UI.
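As a short sketch, assuming the definition above is saved as dockersecret.yaml (a hypothetical filename):

kubectl apply -f dockersecret.yaml\n# Verify that the secret exists in the oscar-svc namespace\nkubectl get secret dockersecret -n oscar-svc\n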

It could happen when an OSCAR cluster is deployed from an IM recipe that does not have certificates or when the Let's Encrypt limit has been reached. Only 50 certificates per registered domain can be issued per week. Those certificates have a 90-day expiration lifetime. The certificates issued can be seen at https://crt.sh/?q=im.grycap.net.

If the OSCAR cluster has no certificate, the OSCAR UI will not show the buckets.

You can fix this by entering the MinIO endpoint minio.<OSCAR-endpoint> in your browser. The browser will block the page because it is considered unsafe. Once you accept the risk, you will enter the MinIO page. It is not necessary to log in.

Return to the OSCAR UI and, then, you will see the buckets. The buckets will be shown only in the browser where you performed this process. The results may vary depending on the browser. For example, they will show up in Firefox but not in Chrome.

    "},{"location":"fdl-composer/","title":"FDL Composer","text":"

OSCAR Services can be aggregated into data-driven workflows where the output data of one service is stored in the object store that triggers another service, potentially in a different OSCAR cluster. This allows executing the different phases of the workflow on disparate computing infrastructures.

    However, writing an entire workflow in an FDL file can be a difficult task for some users.

    To simplify the process you can use FDL Composer, a web-based application to facilitate the definition of FDL YAML files for OSCAR and SCAR.

    "},{"location":"fdl-composer/#how-to-access-fdl-composer","title":"How to access FDL Composer","text":"

Just access FDL Composer, which is a Single Page Application (SPA) running entirely in your browser. If you prefer to execute it on your computer instead of using the web, clone the Git repository using the following command:

    git clone https://github.com/grycap/fdl-composer\n

And then run the app with npm:

    npm start\n
    "},{"location":"fdl-composer/#basic-elements","title":"Basic elements","text":"

    Workflows are composed of OSCAR services and Storage providers:

    "},{"location":"fdl-composer/#oscar-services","title":"OSCAR services","text":"

    OSCAR services are responsible for processing the data uploaded to Storage providers.

    Defining a new OSCAR service requires filling at least the name, image, and script fields.

    To define environment variables you must add them as a comma separated string of key=value entries. For example, to create a variable with the name firstName and the value John, the \"Environment variables\" field should look like firstName=John. If you want to assign more than one variable, for example, firstName and lastName with the values John and Keats, the input field should include them all separated by commas (e.g., firstName=John,lastName=Keats).

    "},{"location":"fdl-composer/#storage-providers-and-bucketsfolders","title":"Storage providers and buckets/folders","text":"

    Storage providers are object storage systems responsible for storing both the input files to be processed by OSCAR services and the output files generated as a result of the processing.

    Three types of storage providers can be used in OSCAR FDLs: MinIO, Amazon S3, and OneData.

To configure them, drag the storage provider from the menu to the canvas and double-click on the created item. A window with a single input will appear; insert there the path of the bucket/folder. To edit one of the storage providers, move the mouse over the item and select the edit option.

    Remember that only MinIO can be used as input storage provider for OSCAR services.

    "},{"location":"fdl-composer/#download-and-load-state","title":"Download and load state","text":"

    The defined workflow can be saved in a file using the \"Download state\" button. OSCAR services, Storage Providers, and Buckets are kept in the file. The graphic workflow can be edited later by loading it with the \"Load state\" button.

    "},{"location":"fdl-composer/#create-a-yaml-file","title":"Create a YAML file","text":"

    You can easily download the workflow's FDL file (in YAML) through the \"Export YAML\" button.

    "},{"location":"fdl-composer/#connecting-components","title":"Connecting components","text":"

All components have four ports: the top and left ones are input ports, while the right and bottom ports are used as output. OSCAR Services can only be connected with Storage providers, always linked in the same direction (the output of one element with the input of the other).

When two services are connected, both will be declared in the FDL file, but they will work separately and there will be no workflow between them. If two storage providers are connected to each other, it will have no effect, but both storages will be declared.

    "},{"location":"fdl-composer/#scar-options","title":"SCAR options","text":"

FDL Composer can also create FDL files for SCAR. This allows defining workflows that can be executed on the Edge or in on-premises Clouds through OSCAR, and on the public Cloud (AWS Lambda and/or AWS Batch) through SCAR.

    "},{"location":"fdl-composer/#example","title":"Example","text":"

    There is an example of FDL Composer implementing the video-process use case in our blog.

    "},{"location":"fdl/","title":"Functions Definition Language (FDL)","text":"

    OSCAR services are typically defined via the Functions Definition Language (FDL) to be deployed via the OSCAR CLI. Alternative approaches are using the web-based wizard in the OSCAR UI or, for a programmatic integration, via the OSCAR API.

    \u2139\ufe0f

    It is called Functions Definition Language instead of Services Definition Language, because the definition was initially designed for SCAR, which supports Lambda functions.

    Example:

    functions:\n  oscar:\n  - oscar-test:\n      name: plants\n      memory: 2Gi\n      cpu: '1.0'\n      image: grycap/oscar-theano-plants\n      script: plants.sh\n      input:\n      - storage_provider: minio.default\n        path: example-workflow/in\n      output:\n      - storage_provider: minio.default\n        path: example-workflow/med\n  - oscar-test:\n      name: grayify\n      memory: 1Gi\n      cpu: '1.0'\n      image: grycap/imagemagick\n      script: grayify.sh\n      interlink_node_name: vega-new-vk\n      expose:\n        min_scale: 3 \n        max_scale: 7 \n        port: 5000  \n        cpu_threshold: 70 \n        nodePort: 30500\n        set_auth: true\n        rewrite_target: true\n        default_command: true\n      input:\n      - storage_provider: minio.default\n        path: example-workflow/med\n      output:\n      - storage_provider: minio.default\n        path: example-workflow/res\n      - storage_provider: onedata.my_onedata\n        path: result-example-workflow\n      - storage_provider: webdav.dcache\n        path: example-workflow/res\n\nstorage_providers:\n  onedata:\n    my_onedata:\n      oneprovider_host: my_provider.com\n      token: my_very_secret_token\n      space: my_onedata_space\n  webdav:\n    dcache:\n      hostname: my_dcache.com\n      login: my_username\n      password: my_password\n
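For instance, assuming the FDL above is saved as example-workflow.yaml (a hypothetical filename), it could be deployed on the default cluster with OSCAR CLI:

oscar-cli apply example-workflow.yaml\n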
    "},{"location":"fdl/#top-level-parameters","title":"Top level parameters","text":"Field Description functions Functions Mandatory parameter to define a Functions Definition Language file. Note that \"functions\" instead of \"services\" has been used in order to keep compatibility with SCAR storage_providers StorageProviders Parameter to define the credentials for the storage providers to be used in the services clusters map[string]Cluster Configuration for the OSCAR clusters that can be used as service's replicas, being the key the user-defined identifier for the cluster. Optional"},{"location":"fdl/#functions","title":"Functions","text":"Field Description oscar map[string]Service array Main object with the definition of the OSCAR services to be deployed. The components of the array are Service maps, where the key of every service is the identifier of the cluster where the service (defined as the value of the entry on the map) will be deployed."},{"location":"fdl/#service","title":"Service","text":"Field Description name string The name of the service cluster_id string Identifier for the current cluster, used to specify the cluster's StorageProvider in job delegations. OSCAR-CLI sets it using the cluster_id from the FDL. Optional. (default: \"\") image string Docker image for the service vo string Virtual Organization (VO) in which the user creating the service is enrolled. (Required for multitenancy) allowed_users string array Array of EGI UIDs to grant specific user permissions on the service. If empty, the service is considered as accesible to all the users with access to the OSCAR cluster. (Enabled since OSCAR version v3.0.0). alpine boolean Set if the Docker image is based on Alpine. If true, a custom release of the faas-supervisor will be used. Optional (default: false) script string Local path to the user script to be executed inside the container created out of the service invocation file_stage_in bool Skip the download of the input files by the faas-supervisor (default: false) image_pull_secrets string array Array of Kubernetes secrets. Only needed to use private images located on private registries. memory string Memory limit for the service following the kubernetes format. Optional (default: 256Mi) cpu string CPU limit for the service following the kubernetes format. Optional (default: 0.2) enable_gpu bool Enable the use of GPU. Requires a device plugin deployed on the cluster (More info: Kubernetes device plugins). Optional (default: false) enable_sgx bool Enable the use of SGX plugin on the cluster containers. (More info: SGX plugin documentation). Optional (default: false) image_prefetch bool Enable the use of image prefetching (retrieve the container image in the nodes when creating the service). Optional (default: false) total_memory string Limit for the memory used by all the service's jobs running simultaneously. Apache YuniKorn's' scheduler is required to work. Same format as Memory, but internally translated to MB (integer). Optional (default: \"\") total_cpu string Limit for the virtual CPUs used by all the service's jobs running simultaneously. Apache YuniKorn's' scheduler is required to work. Same format as CPU, but internally translated to millicores (integer). Optional (default: \"\") synchronous SynchronousSettings Struct to configure specific sync parameters. This settings are only applied on Knative ServerlessBackend. Optional. expose ExposeSettings Allows to expose the API or UI of the application run in the OSCAR service outside of the Kubernetes cluster. 
Optional. replicas Replica array List of replicas to delegate jobs. Optional. rescheduler_threshold string Time (in seconds) that a job (with replicas) can be queued before delegating it. Optional. log_level string Log level for the faas-supervisor. Available levels: NOTSET, DEBUG, INFO, WARNING, ERROR and CRITICAL. Optional (default: INFO) input StorageIOConfig array Array with the input configuration for the service. Optional output StorageIOConfig array Array with the output configuration for the service. Optional environment EnvVarsMap The user-defined environment variables assigned to the service. Optional annotations map[string]string User-defined Kubernetes annotations to be set in job's definition. Optional labels map[string]string User-defined Kubernetes labels to be set in job's definition. Optional interlink_node_name string Name of the virtual kubelet node (if you are using InterLink nodes) Optional"},{"location":"fdl/#synchronoussettings","title":"SynchronousSettings","text":"Field Description min_scale integer Minimum number of active replicas (pods) for the service. Optional. (default: 0) max_scale integer Maximum number of active replicas (pods) for the service. Optional. (default: 0 (Unlimited))"},{"location":"fdl/#exposesettings","title":"ExposeSettings","text":"Field Description min_scale integer Minimum number of active replicas (pods) for the service. Optional. (default: 1) max_scale integer Maximum number of active replicas (pods) for the service. Optional. (default: 10 (Unlimited)) port integer Port inside the container where the API is exposed. (value: 0 , the service wont be exposed.) cpu_threshold integer Percent of use of CPU before creating other pod (default: 80 max:100). Optional. nodePort integer Change the access method from the domain name to the public ip. Optional. set_auth bool Create credentials for the service, composed of the service name as the user and the service token as the password. (default: false). Optional. rewrite_target bool Target the URI where the traffic is redirected. (default: false). Optional. default_command bool Select between executing the container's default command and executing the script inside the container. (default: false). Optional."},{"location":"fdl/#replica","title":"Replica","text":"Field Description type string Type of the replica to re-send events (can be oscar or endpoint) cluster_id string Identifier of the cluster as defined in the \"clusters\" FDL field. Only used if Type is oscar service_name string Name of the service in the replica cluster. Only used if Type is oscar url string URL of the endpoint to re-send events (HTTP POST). Only used if Type is endpoint ssl_verify boolean Parameter to enable or disable the verification of SSL certificates. Only used if Type is endpoint. Optional. (default: true) priority integer Priority value to define delegation priority. Highest priority is defined as 0. If a delegation fails, OSCAR will try to delegate to another replica with lower priority. Optional. (default: 0) headers map[string]string Headers to send in delegation requests. Optional"},{"location":"fdl/#storageioconfig","title":"StorageIOConfig","text":"Field Description storage_provider string Reference to the storage provider defined in storage_providers. This string is composed by the provider's name (minio, s3, onedata) and the identifier (defined by the user), separated by a point (e.g. \"minio.myidentifier\") path string Path in the storage provider. 
In MinIO and S3 the first directory of the specified path is translated into the bucket's name (e.g. \"bucket/folder/subfolder\") suffix string array Array of suffixes for filtering the files to be uploaded. Only used in the output field. Optional prefix string array Array of prefixes for filtering the files to be uploaded. Only used in the output field. Optional"},{"location":"fdl/#envvarsmap","title":"EnvVarsMap","text":"Field Description Variables map[string]string Map to define the environment variables that will be available in the service container"},{"location":"fdl/#storageproviders","title":"StorageProviders","text":"Field Description minio map[string]MinIOProvider Map to define the credentials for a MinIO storage provider, being the key the user-defined identifier for the provider s3 map[string]S3Provider Map to define the credentials for an Amazon S3 storage provider, being the key the user-defined identifier for the provider onedata map[string]OnedataProvider Map to define the credentials for a Onedata storage provider, being the key the user-defined identifier for the provider webdav map[string]WebDavProvider Map to define the credentials for a storage provider accesible via WebDAV protocol, being the key the user-defined identifier for the provider"},{"location":"fdl/#cluster","title":"Cluster","text":"Field Description endpointstring Endpoint of the OSCAR cluster API auth_userstring Username to connect to the cluster (basic auth) auth_passwordstring Password to connect to the cluster (basic auth) ssl_verifyboolean Parameter to enable or disable the verification of SSL certificates"},{"location":"fdl/#minioprovider","title":"MinIOProvider","text":"Field Description endpoint string MinIO endpoint verify bool Verify MinIO's TLS certificates for HTTPS connections access_key string Access key of the MinIO server secret_key string Secret key of the MinIO server region string Region of the MinIO server"},{"location":"fdl/#s3provider","title":"S3Provider","text":"Field Description access_key string Access key of the AWS S3 service secret_key string Secret key of the AWS S3 service region string Region of the AWS S3 service"},{"location":"fdl/#onedataprovider","title":"OnedataProvider","text":"Field Description oneprovider_host string Endpoint of the Oneprovider token string Onedata access token space string Name of the Onedata space"},{"location":"fdl/#webdavprovider","title":"WebDAVProvider","text":"Field Description hostname string Provider hostname login string Provider account username password string Provider account password"},{"location":"integration-compss/","title":"Integration with COMPSs","text":"

    COMPSs is a task-based programming model which aims to ease the development of applications for distributed infrastructures, such as large High-Performance clusters (HPC), clouds and container managed clusters. COMPSs provides a programming interface for the development of the applications and a runtime system that exploits the inherent parallelism of applications at execution time.

COMPSs support was introduced in OSCAR for the AI-SPRINT project to tackle the Personalized Healthcare use case, in which OSCAR is employed to perform the inference phase of pre-trained models out of sensitive data captured from wearable devices. COMPSs, in particular its Python binding named PyCOMPSs, was integrated to exploit parallelism across the multiple virtual CPUs of each pod resulting from each OSCAR service asynchronous invocation. This use case was coordinated by the Barcelona Supercomputing Center (BSC).

    There are several examples that showcase the COMPSs integration with OSCAR in the examples/compss folder in GitHub.

    "},{"location":"integration-egi/","title":"Integration with EGI","text":"

    EGI is a federation of many cloud providers and hundreds of data centres, spread across Europe and worldwide that delivers advanced computing services to support scientists, multinational projects and research infrastructures.

    "},{"location":"integration-egi/#deployment-on-the-egi-federated-cloud","title":"Deployment on the EGI Federated Cloud","text":"

    The EGI Federated Cloud is an IaaS-type cloud, made of academic private clouds and virtualised resources and built around open standards. Its development is driven by requirements of the scientific communities.

    The OSCAR platform can be deployed on the EGI Federated Cloud resources through the IM Dashboard.

You can follow EGI's IM Dashboard documentation or OSCAR's IM Dashboard documentation.

    "},{"location":"integration-egi/#integration-with-egi-datahub-onedata","title":"Integration with EGI Datahub (Onedata)","text":"

    EGI DataHub, based on Onedata, provides a global data access solution for science. Integrated with the EGI AAI, it allows users to have Onedata spaces supported by providers across Europe for replicated storage and on-demand caching.

EGI DataHub can be used as an output storage provider for OSCAR, allowing users to store the resulting files of their OSCAR services on a Onedata space. This is possible thanks to the FaaS Supervisor, the component used in OSCAR and SCAR that is responsible for managing the data input/output and the user code execution.

    To deploy a function with Onedata as output storage provider you only have to specify an identifier, the URL of the Oneprovider host, your access token and the name of your Onedata space in the \"Storage\" tab of the service creation wizard:

    And the path where you want to store the files in the \"OUTPUTS\" tab:

    This means that scientists can store their output files on their Onedata space in the EGI DataHub for long-time persistence and easy sharing of experimental results between researchers.

    "},{"location":"integration-egi/#integration-with-egi-check-in-oidc","title":"Integration with EGI Check-In (OIDC)","text":"

    OSCAR API supports OIDC (OpenID Connect) access tokens to authorize users since release v2.5.0. By default, OSCAR clusters deployed via the IM Dashboard are configured to allow authorization via basic auth and OIDC tokens using the EGI Check-in issuer. From the IM Dashboard deployment window, users can add one EGI Virtual Organization to grant access for all users from that VO.

    "},{"location":"integration-egi/#accessing-from-oscar-ui","title":"Accessing from OSCAR UI","text":"

The static web interface of OSCAR has been integrated with EGI Check-in and published at ui.oscar.grycap.net to facilitate the authorization of users. To log in through EGI Check-in using OIDC tokens, users only have to enter the endpoint of their OSCAR cluster and click on the \"EGI CHECK-IN\" button.

    "},{"location":"integration-egi/#integration-with-oscar-cli-via-oidc-agent","title":"Integration with OSCAR-CLI via OIDC Agent","text":"

    Since version v1.4.0 OSCAR CLI supports API authorization via OIDC tokens thanks to the integration with oidc-agent.

    Users must install oidc-agent following its instructions and create a new account configuration for the https://aai.egi.eu/auth/realms/egi/ issuer.

    After that, clusters can be added with the command oscar-cli cluster add specifying the oidc-agent account name with the --oidc-account-name flag.
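As an illustrative sketch (the account and cluster names are hypothetical, and the exact flags should be checked against the oidc-agent and OSCAR CLI documentation):

# Create an oidc-agent account configuration for the EGI Check-in issuer\noidc-gen --issuer https://aai.egi.eu/auth/realms/egi/ egi-checkin\n# Register the cluster in OSCAR CLI using that account\noscar-cli cluster add my-cluster https://my-oscar.example.com --oidc-account-name egi-checkin\n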

    "},{"location":"integration-interlink/","title":"Integration with interLink","text":"

    interLink is an open-source development that aims to provide an abstraction for executing a Kubernetes pod on any remote resource capable of managing a Container execution lifecycle.

OSCAR uses a Kubernetes Virtual Node to translate a job request from the Kubernetes pod into a remote call. We have been using interLink to interact with an HPC cluster. For more information, check the interLink landing page.

    "},{"location":"integration-interlink/#installation-and-use-of-interlink-node-in-oscar-cluster","title":"Installation and use of Interlink Node in OSCAR cluster","text":"

The Kubernetes cluster must have at least one virtual kubelet node, tagged as type=virtual-kubelet. So, follow these steps to add the virtual node to the Kubernetes cluster. OSCAR detects these nodes by itself.
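Since the nodes carry that label, you can check that OSCAR will find them with a standard kubectl query:

kubectl get nodes -l type=virtual-kubelet\n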

Once the virtual node and OSCAR are installed correctly, you can use this node by adding the name of the virtual node in the interlink_node_name variable. Otherwise, to use a normal node of the Kubernetes cluster, leave it blank (\"\").

    "},{"location":"integration-interlink/#annotations-restrictions-and-other-things-to-keep-in-mind","title":"Annotations, restrictions, and other things to keep in mind","text":"

    Please note that interLink uses singularity to run a container with these characteristics:

    The support for interLink was integrated in the context of the interTwin project, with support from Istituto Nazionale di Fisica Nucleare - INFN, who developed interLink, and CERN, who provided the development of itwinai, used as a platform for advanced AI/ML workflows in digital twin applications and a use case. Special thanks to the IZUM Center in Slovenia for providing access to the HPC Vega supercomputing facility to perform the testing.

    "},{"location":"integration-scone/","title":"Integration with SCONE","text":"

    SCONE is a tool that allows confidential computing on the cloud thus protecting the data, code and application secrets on a Kubernetes cluster. By leveraging hardware-based security features such as Intel SGX (Software Guard Extensions), SCONE ensures that sensitive data and computations remain protected even in potentially untrusted environments. This end-to-end encryption secures data both at rest and in transit, significantly reducing the risk of data breaches. Additionally, SCONE simplifies the development and deployment of secure applications by providing a seamless integration layer for existing software, thus enhancing security without requiring major code changes.

    \u26a0\ufe0f

    Please note that the usage of SCONE introduces a non-negligible overhead when executing the container for the OSCAR service.

    More info about SCONE and Kubernetes here.

    To use SCONE on a Kubernetes cluster, Intel SGX has to be enabled on the machines, and for these, the SGX Kubernetes plugin needs to be present on the cluster. Once the plugin is installed you only need to specify the parameter enable_sgx on the FDL of the services that are going to use a secured container image like in the following example.

    functions:\n  oscar:\n  - oscar-cluster:\n      name: sgx-service\n      memory: 1Gi\n      cpu: '0.6'\n      image: your_image\n      enable_sgx: true\n      script: script.sh\n

SCONE support was introduced in OSCAR for the AI-SPRINT project to tackle the Personalized Healthcare use case, in which OSCAR is employed to perform the inference phase of pre-trained models out of sensitive data captured from wearable devices. This use case was coordinated by the Barcelona Supercomputing Center (BSC), and Technische Universit\u00e4t Dresden \u2014 TU Dresden was involved in the technical activities regarding SCONE.

    "},{"location":"invoking-async/","title":"Asynchronous invocations","text":"

For event-driven file processing, OSCAR automatically manages the creation of MinIO buckets and their event notification system in order to allow the event-driven invocation of services using asynchronous requests, generating a Kubernetes job for every file to be processed.
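As an illustration, an asynchronous invocation is just a file upload to the service's input bucket; for example, with the MinIO client (mc), assuming an alias oscar-minio already configured against the cluster's MinIO and a service with input bucket plants/input (all hypothetical names):

# Uploading a file raises a MinIO event that triggers a Kubernetes job for the service\nmc cp image3.jpg oscar-minio/plants/input/\n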

    "},{"location":"invoking-sync/","title":"Synchronous invocations","text":"

    Synchronous invocations allow obtaining the execution output as the response to the HTTP call to the /run/<SERVICE_NAME> path of the OSCAR API. For this, OSCAR delegates the execution to a serverless back-end (e.g. Knative) which uses an auto-scaled set of pods to process the requests.

    \u2139\ufe0f

    You may find references in the documentation or examples to OpenFaaS, which was used in older versions of OSCAR. Recent versions of OSCAR use Knative as the serverless back-end for synchronous invocations, which provides several benefits such as scale-to-zero or load-balanced auto-scaled set of pods.

    Synchronous invocations can be made through OSCAR CLI, using the command oscar-cli service run:

oscar-cli service run [SERVICE_NAME] {--input | --text-input} {-o | --output}\n

    You can check these examples:

    The input can be sent as a file via the --input flag, and the result of the execution will be displayed directly in the terminal:

    oscar-cli service run plant-classification-sync --input images/image3.jpg\n

    Alternatively, it can be sent as plain text using the --text-input flag and the result stored in a file using the --output flag:

    oscar-cli service run text-to-speech --text-input \"Hello everyone\"  --output output.mp3\n
    "},{"location":"invoking-sync/#synchronous-invocations-via-oscar-cli","title":"Synchronous Invocations via OSCAR CLI","text":"

OSCAR CLI simplifies the execution of services synchronously via the oscar-cli service run command. This command requires the input to be passed either as text through the --text-input flag or directly as a file by passing its path through the --input flag. Both input types are automatically encoded in Base64.

It also allows setting the --output flag to indicate a path for storing (and decoding, if needed) the output body in a file; otherwise, the output will be shown in stdout.

    An illustration of triggering a service synchronously through OSCAR-CLI can be found in the cowsay example.

    oscar-cli service run cowsay --text-input '{\"message\":\"Hello World\"}'\n
    "},{"location":"invoking-sync/#synchronous-invocations-via-oscar-api","title":"Synchronous Invocations via OSCAR API","text":"

OSCAR services can also be invoked via traditional HTTP clients such as cURL using the path /run/<SERVICE_NAME> defined in the OSCAR API. However, you must take care to properly format the input to one of the two supported formats (JSON or Base64-encoded) and include the service access token in the request.

    An illustration of triggering a service synchronously through cURL can be found in the cowsay example.

To send an input file through cURL, you must encode it in Base64 or JSON. To avoid issues with the output in synchronous invocations, remember to set the log_level to CRITICAL. The output, which is encoded in Base64 or JSON, should be decoded as well and saved in the format expected by the use case.

    base64 input.png | curl -X POST -H \"Authorization: Bearer <TOKEN>\" \\\n -d @- https://<CLUSTER_ENDPOINT>/run/<OSCAR_SERVICE> | base64 -d > result.png\n
    "},{"location":"invoking-sync/#service-access-tokens","title":"Service access tokens","text":"

    As detailed in the API specification, invocation paths require the service access token in the request header for authentication. Service access tokens are auto-generated in service creation and update, and MinIO eventing system is automatically configured to use them for event-driven file processing. Tokens can be obtained through the API, using the oscar-cli service get command or directly from the web interface.
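For instance, here is a sketch of retrieving a service's definition (which includes its access token) and using the token in a synchronous call, assuming a service named cowsay:

# The access token is part of the service definition returned by this command\noscar-cli service get cowsay\n# Use the token as a Bearer credential against the /run path\ncurl -X POST -H \"Authorization: Bearer <TOKEN>\" -d '{\"message\":\"Hello\"}' https://<CLUSTER_ENDPOINT>/run/cowsay\n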

    "},{"location":"invoking-sync/#limitations","title":"Limitations","text":"

Although the use of the Knative Serverless Backend for synchronous invocations provides elasticity similar to that provided by its counterparts in public clouds, such as AWS Lambda, synchronous invocations are still not the best option for running long-running, resource-demanding applications, like deep learning inference or video processing.

    The synchronous invocation of long-running resource-demanding applications may lead to timeouts on Knative pods. Therefore, we consider asynchronous invocations (which generate Kubernetes jobs) as the optimal approach to handle event-driven file processing.

    "},{"location":"invoking/","title":"Service Execution Types","text":"

    OSCAR services can be executed:

    "},{"location":"license/","title":"License","text":"
                                     Apache License\n\n                           Version 2.0, January 2004\n\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      form, that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. 
For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. 
Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"{}\"\n      replaced with your own identifying information. (Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright 2018 GRyCAP - I3M - UPV\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n
    "},{"location":"local-testing/","title":"Local Deployment","text":"

    \u2757\ufe0f

    The local deployment of OSCAR is recommended for testing only. Please consider using the IM to deploy a fully-featured OSCAR cluster on a Cloud platform.

    The easiest way to test the OSCAR platform locally is using kind. Kind allows the deployment of Kubernetes clusters inside Docker containers and automatically configures kubectl to access them.

    "},{"location":"local-testing/#prerequisites","title":"Prerequisites","text":"

    \u26a0\ufe0f

    Although the use of local Docker images has yet to be implemented as a feature on OSCAR clusters, the local deployment for testing lets you use a local Docker registry for such images. The registry uses port 5001 by default, so each image you want to use must be tagged as localhost:5001/[image_name] and pushed to the registry through the docker push localhost:5001/[image_name] command.
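
    For example, assuming a locally built image with the hypothetical name my-image, the tag-and-push sequence would be:

    docker tag my-image:latest localhost:5001/my-image:latest\ndocker push localhost:5001/my-image:latest\n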

    Also, port 80 must be available, since OSCAR-UI uses it; otherwise, errors will occur during the deployment. Check the Frequently Asked Questions (FAQ) for more info.

    "},{"location":"local-testing/#automated-local-testing","title":"Automated local testing","text":"

    To set up the environment for platform testing, you can run the following command. This script automatically executes all the necessary steps to deploy the local cluster and the OSCAR platform, along with all the required tools.

    curl -sSL http://go.oscar.grycap.net | bash\n
    "},{"location":"local-testing/#steps-for-manual-local-testing","title":"Steps for manual local testing","text":"

    If you prefer to do it manually, you can follow the steps listed below.

    "},{"location":"local-testing/#create-the-cluster","title":"Create the cluster","text":"

    To create a single-node cluster with the MinIO and Ingress controller ports locally accessible, run:

    cat <<EOF | kind create cluster --config=-\nkind: Cluster\napiVersion: kind.x-k8s.io/v1alpha4\nnodes:\n- role: control-plane\n  kubeadmConfigPatches:\n  - |\n    kind: InitConfiguration\n    nodeRegistration:\n      kubeletExtraArgs:\n        node-labels: \"ingress-ready=true\"\n  extraPortMappings:\n  - containerPort: 80\n    hostPort: 80\n    protocol: TCP\n  - containerPort: 443\n    hostPort: 443\n    protocol: TCP\n  - containerPort: 30300\n    hostPort: 30300\n    protocol: TCP\n  - containerPort: 30301\n    hostPort: 30301\n    protocol: TCP\nEOF\n
    "},{"location":"local-testing/#deploy-nginx-ingress","title":"Deploy NGINX Ingress","text":"

    To enable Ingress support for accessing the OSCAR server, we must deploy the NGINX Ingress:

    kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/kind/deploy.yaml\n
    "},{"location":"local-testing/#deploy-minio","title":"Deploy MinIO","text":"

    OSCAR depends on MinIO as a storage provider and function trigger. The easiest way to run MinIO in a Kubernetes cluster is by installing its helm chart. To add the MinIO helm repo and install the chart, run the following commands, replacing <MINIO_PASSWORD> with a password of at least 8 characters:

    helm repo add minio https://charts.min.io\nhelm install minio minio/minio --namespace minio --set rootUser=minio,\\\nrootPassword=<MINIO_PASSWORD>,service.type=NodePort,service.nodePort=30300,\\\nconsoleService.type=NodePort,consoleService.nodePort=30301,mode=standalone,\\\nresources.requests.memory=512Mi,\\\nenvironment.MINIO_BROWSER_REDIRECT_URL=http://localhost:30301 \\\n --create-namespace\n

    Note that the deployment has been configured to use the rootUser minio and the specified password as rootPassword. The NodePort service type has been used in order to allow access from http://localhost:30300 (API) and http://localhost:30301 (Console).

    "},{"location":"local-testing/#deploy-nfs-server-provisioner","title":"Deploy NFS server provisioner","text":"

    NFS server provisioner is required for the creation of ReadWriteMany PersistentVolumes in the kind cluster. This is needed by the OSCAR services to mount the volume with the FaaS Supervisor inside the job containers.

    To deploy it, you can use this chart by executing:

    helm repo add nfs-ganesha-server-and-external-provisioner https://kubernetes-sigs.github.io/nfs-ganesha-server-and-external-provisioner/\nhelm install nfs-server-provisioner nfs-ganesha-server-and-external-provisioner/nfs-server-provisioner\n

    Some Linux distributions may have problems using the NFS server provisioner with kind due to their default kernel limit on file descriptors. To work around it, please run sudo sysctl -w fs.nr_open=1048576.

    "},{"location":"local-testing/#deploy-knative-serving-as-serverless-backend-optional","title":"Deploy Knative Serving as Serverless Backend (OPTIONAL)","text":"

    OSCAR supports Knative Serving as a Serverless Backend to process synchronous invocations. If you want to deploy it in the kind cluster, you must first deploy the Knative Operator:

    kubectl apply -f https://github.com/knative/operator/releases/download/knative-v1.3.1/operator.yaml\n

    Note that the above command deploys version v1.3.1 of the Operator. You can check for newer versions here.

    Once the Operator has been successfully deployed, you can install the Knative Serving stack with the following command:

    cat <<EOF | kubectl apply -f -\n---\napiVersion: v1\nkind: Namespace\nmetadata:\n  name: knative-serving\n---\napiVersion: operator.knative.dev/v1beta1\nkind: KnativeServing\nmetadata:\n  name: knative-serving\n  namespace: knative-serving\nspec:\n  version: 1.3.0\n  ingress:\n    kourier:\n      enabled: true\n      service-type: ClusterIP\n  config:\n    config-features:\n      kubernetes.podspec-persistent-volume-claim: enabled\n      kubernetes.podspec-persistent-volume-write: enabled\n    network:\n      ingress-class: \"kourier.ingress.networking.knative.dev\"\nEOF\n
    "},{"location":"local-testing/#deploy-oscar","title":"Deploy OSCAR","text":"

    First, create the oscar and oscar-svc namespaces by executing:

    kubectl apply -f https://raw.githubusercontent.com/grycap/oscar/master/deploy/yaml/oscar-namespaces.yaml\n

    Then, add the grycap helm repo and deploy by running the following commands, replacing <OSCAR_PASSWORD> with a password of your choice and <MINIO_PASSWORD> with the MinIO rootPassword. Remember to add the flag --set serverlessBackend=knative if you deployed Knative in the previous step:

    helm repo add grycap https://grycap.github.io/helm-charts/\nhelm install --namespace=oscar oscar grycap/oscar \\\n --set authPass=<OSCAR_PASSWORD> --set service.type=ClusterIP \\\n --set ingress.create=true --set volume.storageClassName=nfs \\\n --set minIO.endpoint=http://minio.minio:9000 --set minIO.TLSVerify=false \\\n --set minIO.accessKey=minio --set minIO.secretKey=<MINIO_PASSWORD>\n

    Now you can access the OSCAR web interface at https://localhost with the user oscar and the specified password.

    Note that the OSCAR server has been configured to use the ClusterIP service of MinIO for internal communication. This blocks the MinIO section in the OSCAR web interface, so to download and upload files you must connect directly to MinIO (http://localhost:30300).

    "},{"location":"local-testing/#delete-the-cluster","title":"Delete the cluster","text":"

    Once you have finished testing the platform, you can remove the local kind cluster by executing:

    kind delete cluster\n

    Remember that if you have created more than one cluster, you may need to set the --name flag to specify which cluster to delete.
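
    For example, you can list the existing clusters and then delete one by name (the name kind is the default used by kind):

    kind get clusters\nkind delete cluster --name kind\n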

    "},{"location":"local-testing/#using-oscar-cli","title":"Using OSCAR-CLI","text":"

    To use OSCAR-CLI in a local deployment, you should set the --disable-ssl flag to disable verification of the self-signed certificates:

    oscar-cli cluster add oscar-cluster https://localhost oscar <OSCAR_PASSWORD> --disable-ssl\n
    "},{"location":"minio-bucket-replication/","title":"MinIO bucket replication","text":"

    In scenarios where two linked OSCAR clusters are part of the same workflow defined in FDL, a temporary network disconnection causes the data generated on the first cluster during the disconnection to be lost.

    To resolve this scenario, we propose the use of replicated buckets on MinIO. With this approach, two buckets on different OSCAR clusters are kept synchronized, so that if the connection is lost, they will be re-synchronized once the connection is restored.

    An example of this scenario is shown in the following diagram, where there are two MinIO instances (each on a different OSCAR cluster), and the output of the execution of service_x on the source cluster serves as input for service_y on the remote cluster.

    Here is the data flow between the buckets in more detail:

    MinIO instance source

    MinIO instance remote

    "},{"location":"minio-bucket-replication/#considerations","title":"Considerations","text":"

    When you create the service on the remote OSCAR cluster, the intermediate bucket, which is both the replica and the input of the OSCAR service, will have the webhook event for PUT actions enabled so that it can trigger the OSCAR service.

    Because, as explained below in Event handling on replication events, there are specific events for replicated buckets, it is important to delete this event webhook to avoid receiving both events every time.

    mc event remove originminio/intermediate arn:aws:sqs::intermediate:webhook --event put\n
    "},{"location":"minio-bucket-replication/#helm-installation","title":"Helm installation","text":"

    To be able to use replication, each MinIO instance deployed with Helm has to be configured in distributed mode. This is done by adding the parameters mode=distributed,replicas=NUM_REPLICAS.

    Here is an example of a local MinIO replicated deployment with Helm:

    helm install minio minio/minio --namespace minio --set rootUser=minio,rootPassword=minio123,service.type=NodePort,service.nodePort=30300,consoleService.type=NodePort,consoleService.nodePort=30301,mode=distributed,replicas=2,resources.requests.memory=512Mi,environment.MINIO_BROWSER_REDIRECT_URL=http://localhost:30301 --create-namespace\n
    "},{"location":"minio-bucket-replication/#minio-setup","title":"MinIO setup","text":"

    To use the replication service, it is necessary to manually set up both the requirements and the replication itself, either from the command line or via the MinIO console. We created a test environment with replication via the command line as follows.

    First, we define our MinIO instances (originminio and remoteminio) on the MinIO client.

    mc alias set originminio https://localminio minioadminuser minioadminpassword\n\nmc alias set remoteminio https://remoteminio minioadminuser minioadminpassword\n

    A prerequisite for replication is enabling versioning on the buckets that will serve as origin and replica. When a service is created through OSCAR and the MinIO buckets are created, versioning is not enabled by default, so we have to enable it manually.

    mc version enable originminio/intermediate\n\nmc version enable remoteminio/intermediate\n

    Then, you can create the replication remote target

    mc admin bucket remote add originminio/intermediate \\\n  https://RemoteUser:Password@HOSTNAME/intermediate \\\n  --service \"replication\"\n

    and add the bucket replication rule so the actions on the origin bucket get synchronized on the replica.

    mc replicate add originminio/intermediate \\\n   --remote-bucket 'arn:minio:replication::<UUID>:intermediate' \\\n   --replicate \"delete,delete-marker,existing-objects\"\n
    "},{"location":"minio-bucket-replication/#event-handling-on-replication-events","title":"Event handling on replication events","text":"

    Once you have replica instances, you can add a specific event webhook for replica-related events.

    mc event add originminio/intermediate arn:minio:sqs::intermediate:webhook --event replica\n

    Replication events sometimes arrive duplicated. Although this is not yet implemented, a solution to the duplicated events would be to filter them by the userMetadata, which is marked as \"PENDING\" on the events to be discarded.

      \"userMetadata\": {\n    \"X-Amz-Replication-Status\": \"PENDING\"\n  }\n

    MinIO documentation used

    "},{"location":"mount/","title":"Mounting external storage on service volumes","text":"

    This feature enables the mounting of a folder from a storage provider, such as MinIO or dCache, into the service container. As illustrated in the following diagram, the folder is placed inside the /mnt directory on the container volume, thereby making it accessible to the service. This functionality can be utilized with exposed services, such as those using a Jupyter Notebook, to make the content of the storage bucket accessible directly within the Notebook.

    OSCAR internally holds the credentials of the default MinIO instance, so if you want to use a different instance or another storage provider, you need to set its credentials in the service FDL. Currently, the storage providers supported by this functionality are MinIO and dCache.

    Let's explore these with an FDL example:

    mount:\n  storage_provider: minio.default\n  path: /body-pose-detection-async\n

    The example above means that OSCAR mounts the body-pose-detection-async bucket of the default MinIO instance inside the OSCAR service, so the content of the body-pose-detection-async bucket will be found in the /mnt/body-pose-detection-async folder during the execution of the OSCAR service.
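
    As a sketch (the provider identifier, endpoint and credentials below are hypothetical), an external MinIO instance would be declared in the FDL's storage_providers section and then referenced from mount:

    storage_providers:\n  minio:\n    my-external:\n      endpoint: https://minio.example.com\n      access_key: <ACCESS_KEY>\n      secret_key: <SECRET_KEY>\n\nmount:\n  storage_provider: minio.my-external\n  path: /my-bucket\n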

    "},{"location":"multitenancy/","title":"Multitenancy support in OSCAR","text":"

    In the context of OSCAR, multi-tenancy support refers to the platform's ability to enable multiple users or organizations (tenants) to deploy and run their applications or functions on the same underlying infrastructure. Support for multitenancy in OSCAR has been available since version v3.0.0. To use this functionality, there are some requirements that the cluster and the users have to fulfill, as in the following FDL example:

    functions:\n  oscar:\n  - oscar-cluster:\n      name: grayify_multitenant\n      memory: 1Gi\n      cpu: '0.6'\n      image: ghcr.io/grycap/imagemagick\n      script: script.sh\n      vo: \"vo.example.eu\" # Needed to create services on OIDC enabled clusters\n      allowed_users: \n      - \"62bb11b40398f73778b66f344d282242debb8ee3ebb106717a123ca213162926@egi.eu\"\n      - \"5e14d33ac4abc96272cc163da6a200c2e18591bfb3b0f32a4c9c867f5e938463@egi.eu\"\n      input:\n      - storage_provider: minio.default\n        path: grayify_multitenant/input\n      output:\n      - storage_provider: minio.default\n        path: grayify_multitenant/output\n

    NOTE: A user can obtain their EGI User Id by logging into https://aai.egi.eu/ (for the production instance of EGI Check-In) or https://aai-demo.egi.eu (for the demo instance of EGI Check-In).

    Since OSCAR uses MinIO as the main storage provider, MinIO users are created on the fly for each EGI UID so that users only have access to their designated service buckets. Consequently, each user accessing the cluster will have a MinIO user with their UID as AccessKey and an autogenerated SecretKey.
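
    Conceptually, this is equivalent to running something like the following MinIO client command for each new user (illustrative only; OSCAR performs this step internally and the alias name is hypothetical):

    mc admin user add originminio <EGI_UID> <AUTOGENERATED_SECRET_KEY>\n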

    "},{"location":"oscar-cli/","title":"OSCAR CLI","text":"

    OSCAR CLI provides a command-line interface to interact with OSCAR. It supports cluster registration, service management, workflow definition from FDL files, and managing files in OSCAR's compatible storage providers (MinIO, AWS S3 and Onedata). The folder example-workflow contains all the necessary files to create a simple workflow to test the tool.

    "},{"location":"oscar-cli/#download","title":"Download","text":""},{"location":"oscar-cli/#releases","title":"Releases","text":"

    The easiest way to download OSCAR-CLI is through the GitHub releases page. There are binaries for multiple platforms and operating systems. If you need a binary for another platform, please open an issue.

    "},{"location":"oscar-cli/#install-from-source","title":"Install from source","text":"

    If you have the Go programming language installed and configured, you can get it directly from source by executing:

    go install github.com/grycap/oscar-cli@latest\n
    "},{"location":"oscar-cli/#oidc-openid-connect","title":"OIDC (OpenID Connect)","text":"

    If your cluster has OIDC available, follow these steps to use oscar-cli to interact with it via OpenID Connect.

    oscar-cli cluster add IDENTIFIER ENDPOINT --oidc-account-name SHORTNAME\n
    "},{"location":"oscar-cli/#available-commands","title":"Available commands","text":""},{"location":"oscar-cli/#apply","title":"apply","text":"

    Apply an FDL file to create or edit services in clusters.

    Usage:\n  oscar-cli apply FDL_FILE [flags]\n\nAliases:\n  apply, a\n\nFlags:\n      --config string   set the location of the config file (YAML or JSON)\n  -h, --help            help for apply\n
    "},{"location":"oscar-cli/#cluster","title":"cluster","text":"

    Manages the configuration of clusters.

    "},{"location":"oscar-cli/#subcommands","title":"Subcommands","text":""},{"location":"oscar-cli/#add","title":"add","text":"

    Add an existing cluster to oscar-cli.

    Usage:\n  oscar-cli cluster add IDENTIFIER ENDPOINT {USERNAME {PASSWORD | \\\n  --password-stdin} | --oidc-account-name ACCOUNT} [flags]\n\nAliases:\n  add, a\n\nFlags:\n      --disable-ssl               disable verification of ssl certificates for the\n                                  added cluster\n  -h, --help                      help for add\n  -o, --oidc-account-name string  OIDC account name to authenticate using\n                                  oidc-agent. Note that oidc-agent must be\n                                  started and properly configured\n                                  (See:https://indigo-dc.gitbook.io/oidc-agent/)\n      --password-stdin            take the password from stdin\n\nGlobal Flags:\n      --config string   set the location of the config file (YAML or JSON)\n
    "},{"location":"oscar-cli/#default","title":"default","text":"

    Show or set the default cluster.

    Usage:\n  oscar-cli cluster default [flags]\n\nAliases:\n  default, d\n\nFlags:\n  -h, --help         help for default\n  -s, --set string   set a default cluster by passing its IDENTIFIER\n\nGlobal Flags:\n      --config string   set the location of the config file (YAML or JSON)\n
    "},{"location":"oscar-cli/#info","title":"info","text":"

    Show information of an OSCAR cluster.

    Usage:\n  oscar-cli cluster info [flags]\n\nAliases:\n  info, i\n\nFlags:\n  -c, --cluster string   set the cluster\n  -h, --help             help for info\n\nGlobal Flags:\n      --config string   set the location of the config file (YAML or JSON)\n
    "},{"location":"oscar-cli/#list","title":"list","text":"

    List the configured OSCAR clusters.

    Usage:\n  oscar-cli cluster list [flags]\n\nAliases:\n  list, ls\n\nFlags:\n  -h, --help   help for list\n\nGlobal Flags:\n      --config string   set the location of the config file (YAML or JSON)\n
    "},{"location":"oscar-cli/#remove","title":"remove","text":"

    Remove a cluster from the configuration file.

    Usage:\n  oscar-cli cluster remove IDENTIFIER [flags]\n\nAliases:\n  remove, rm\n\nFlags:\n  -h, --help   help for remove\n\nGlobal Flags:\n      --config string   set the location of the config file (YAML or JSON)\n
    "},{"location":"oscar-cli/#service","title":"service","text":"

    Manages the services within a cluster.

    "},{"location":"oscar-cli/#subcommands-of-services","title":"Subcommands of services","text":""},{"location":"oscar-cli/#get","title":"get","text":"

    Get the definition of a service.

    Usage:\n  oscar-cli service get SERVICE_NAME [flags]\n\nAliases:\n  get, g\n\nFlags:\n  -c, --cluster string   set the cluster\n  -h, --help             help for get\n\nGlobal Flags:\n      --config string   set the location of the config file (YAML or JSON)\n
    "},{"location":"oscar-cli/#list-services","title":"list services","text":"

    List the available services in a cluster.

    Usage:\n  oscar-cli service list [flags]\n\nAliases:\n  list, ls\n\nFlags:\n  -c, --cluster string   set the cluster\n  -h, --help             help for list\n\nGlobal Flags:\n      --config string   set the location of the config file (YAML or JSON)\n
    "},{"location":"oscar-cli/#remove-services","title":"remove services","text":"

    Remove a service from the cluster.

    Usage:\n  oscar-cli service remove SERVICE_NAME... [flags]\n\nAliases:\n  remove, rm\n\nFlags:\n  -c, --cluster string   set the cluster\n  -h, --help             help for remove\n\nGlobal Flags:\n      --config string   set the location of the config file (YAML or JSON)\n
    "},{"location":"oscar-cli/#run","title":"run","text":"

    Invoke a service synchronously (a Serverless backend in the cluster is required).

    Usage:\n  oscar-cli service run SERVICE_NAME {--input | --text-input} [flags]\n\nAliases:\n  run, invoke, r\n\nFlags:\n  -c, --cluster string      set the cluster\n  -h, --help                help for run\n  -i, --input string        input file for the request\n  -o, --output string       file path to store the output\n  -t, --text-input string   text input string for the request\n\nGlobal Flags:\n      --config string   set the location of the config file (YAML or JSON)\n
    "},{"location":"oscar-cli/#logs-list","title":"logs list","text":"

    List the logs from a service.

    Usage:\n  oscar-cli service logs list SERVICE_NAME [flags]\n\nAliases:\n  list, ls\n\nFlags:\n  -h, --help             help for list\n  -s, --status strings   filter by status (Pending, Running, Succeeded or\n                         Failed), multiple values can be specified by a\n                         comma-separated string\n\nGlobal Flags:\n  -c, --cluster string   set the cluster\n      --config string    set the location of the config file (YAML or JSON)\n
    "},{"location":"oscar-cli/#logs-get","title":"logs get","text":"

    Get the logs from a service's job.

    Usage:\n  oscar-cli service logs get SERVICE_NAME JOB_NAME [flags]\n\nAliases:\n  get, g\n\nFlags:\n  -h, --help              help for get\n  -t, --show-timestamps   show timestamps in the logs\n\nGlobal Flags:\n  -c, --cluster string   set the cluster\n      --config string    set the location of the config file (YAML or JSON)\n
    "},{"location":"oscar-cli/#logs-remove","title":"logs remove","text":"

    Remove a service's job along with its logs.

    Usage:\n  oscar-cli service logs remove SERVICE_NAME \\\n   {JOB_NAME... | --succeeded | --all} [flags]\n\nAliases:\n  remove, rm\n\nFlags:\n  -a, --all         remove all logs from the service\n  -h, --help        help for remove\n  -s, --succeeded   remove succeeded logs from the service\n\nGlobal Flags:\n  -c, --cluster string   set the cluster\n      --config string    set the location of the config file (YAML or JSON)\n

    Note: The following subcommands will not work with MinIO in a local deployment due to DNS resolution issues. If you want to put, get or list files from your buckets via the command line, you can use the MinIO client instead. Once you have the client installed, you can define the cluster with the mc alias command as follows: mc alias set myminio https://localhost:30000 minioadminuser minioadminpassword. So, instead of the next subcommands, you would use: mc cp to put/get files from a bucket, and mc ls to list files from a bucket.

    "},{"location":"oscar-cli/#get-file","title":"get-file","text":"

    Get a file from a service's storage provider.

    The STORAGE_PROVIDER argument follows the format STORAGE_PROVIDER_TYPE.STORAGE_PROVIDER_NAME, where STORAGE_PROVIDER_TYPE is one of the three supported storage providers (MinIO, S3 or Onedata) and STORAGE_PROVIDER_NAME is the identifier for the provider set in the service's definition.

    Usage:\n  oscar-cli service get-file SERVICE_NAME STORAGE_PROVIDER REMOTE_FILE \\\n   LOCAL_FILE [flags]\n\nAliases:\n  get-file, gf\n\nFlags:\n  -c, --cluster string   set the cluster\n  -h, --help             help for get-file\n\nGlobal Flags:\n      --config string   set the location of the config file (YAML or JSON)\n
    "},{"location":"oscar-cli/#put-file","title":"put-file","text":"

    Put a file in a service's storage provider.

    The STORAGE_PROVIDER argument follows the format STORAGE_PROVIDER_TYPE.STORAGE_PROVIDER_NAME, where STORAGE_PROVIDER_TYPE is one of the three supported storage providers (MinIO, S3 or Onedata) and STORAGE_PROVIDER_NAME is the identifier for the provider set in the service's definition.

    NOTE: This command cannot be used in a local testing deployment.

    Usage:\n  oscar-cli service put-file SERVICE_NAME STORAGE_PROVIDER LOCAL_FILE \\\n   REMOTE_FILE [flags]\n\nAliases:\n  put-file, pf\n\nFlags:\n  -c, --cluster string   set the cluster\n  -h, --help             help for put-file\n\nGlobal Flags:\n      --config string   set the location of the config file (YAML or JSON)\n
    "},{"location":"oscar-cli/#list-files","title":"list-files","text":"

    List files from a service's storage provider path.

    The STORAGE_PROVIDER argument follows the format STORAGE_PROVIDER_TYPE.STORAGE_PROVIDER_NAME, where STORAGE_PROVIDER_TYPE is one of the three supported storage providers (MinIO, S3 or Onedata) and STORAGE_PROVIDER_NAME is the identifier for the provider set in the service's definition.

    Usage:\n  oscar-cli service list-files SERVICE_NAME STORAGE_PROVIDER REMOTE_PATH [flags]\n\nAliases:\n  list-files, list-file, lsf\n\nFlags:\n  -c, --cluster string   set the cluster\n  -h, --help             help for list-files\n\nGlobal Flags:\n      --config string   set the location of the config file (YAML or JSON)\n
    "},{"location":"oscar-cli/#version","title":"version","text":"

    Print the version.

    Usage:\n  oscar-cli version [flags]\n\nAliases:\n  version, v\n\nFlags:\n  -h, --help   help for version\n
    "},{"location":"oscar-cli/#help","title":"help","text":"

    Help provides help for any command in the application. Simply type oscar-cli help [path to command] for full details.

    Usage:\n  oscar-cli help [command] [flags]\n\nFlags:\n  -h, --help   help for help\n
    "},{"location":"oscar-service/","title":"OSCAR Service","text":"

    OSCAR allows the creation of serverless file-processing services based on container images. These services require a user-defined script with the commands responsible for the processing. The platform automatically mounts a volume on the containers with the FaaS Supervisor component, which is in charge of:

    "},{"location":"oscar-service/#inputoutput","title":"Input/Output","text":"

    FaaS Supervisor, the component in charge of managing the input and output of services, accepts a JSON or base64-encoded body in service requests. The body of these requests is automatically decoded into the invocation's input file, available to the script through the $INPUT_FILE_PATH environment variable; a hypothetical invocation is sketched at the end of this section.

    The output of synchronous invocations will depend on the application itself:

    1. If the script generates a file inside the output directory available through the $TMP_OUTPUT_DIR environment variable, the result will be that file encoded in base64.
    2. If the script generates more than one file inside $TMP_OUTPUT_DIR, the result will be a zip archive containing all files encoded in base64.
    3. If there are no files in $TMP_OUTPUT_DIR, FaaS Supervisor will return its logs, including the stdout of the user script run. To avoid FaaS Supervisor's logs, you must set the service's log_level to CRITICAL.

    This way users can adapt OSCAR's services to their own needs.
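
    As a hypothetical example (the service name, cluster endpoint and credentials are placeholders; synchronous invocations are assumed to be served under the /run path), an image could be sent and the single-file result decoded as follows:

    base64 input.jpg | curl -X POST -u oscar:<OSCAR_PASSWORD> -d @- \\\n https://<CLUSTER_ENDPOINT>/run/my-service | base64 -d > output.jpg\n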

    You can follow one of the examples to test the OSCAR framework with specific applications. We recommend starting with the plant classification example.

    "},{"location":"training/","title":"Presentations and Webinars","text":""},{"location":"training/#deploy-your-ai-based-service-for-inference-using-oscar","title":"Deploy your AI-based service for inference using OSCAR","text":"

    Delivered for the AI4EOSC and iMagine projects in March 2024.

    "},{"location":"usage-ui/","title":"OSCAR UI","text":"

    \u2757\ufe0f

    For simple OSCAR services, you may use the UI, but its features may not be on par with the latest changes in the FDL. Therefore, it is recommended to use OSCAR CLI to deploy OSCAR services.

    This section details the usage of the OSCAR UI with the plant classification example, from the OSCAR examples.

    "},{"location":"usage-ui/#login","title":"Login","text":"

    The OSCAR UI is exposed via a Kubernetes Ingress and is accessible via the Kubernetes master node IP.

    After a correct login, you should see the main view:

    "},{"location":"usage-ui/#deploying-services","title":"Deploying services","text":"

    In order to create a new service, you must click on the \"DEPLOY NEW SERVICE\" button and follow the wizard. For an OSCAR Service, a script must be provided for the processing of files. This script must use the environment variables INPUT_FILE_PATH and TMP_OUTPUT_DIR to refer to the input file and the folder in which to save the results, respectively:

    #!/bin/bash\n\necho \"SCRIPT: Invoked classify_image.py. File available in $INPUT_FILE_PATH\"\nFILE_NAME=`basename \"$INPUT_FILE_PATH\"`\nOUTPUT_FILE=\"$TMP_OUTPUT_DIR/$FILE_NAME\"\npython2 /opt/plant-classification-theano/classify_image.py \\\n \"$INPUT_FILE_PATH\" -o \"$OUTPUT_FILE\"\n

    You must fill in the fields indicating the container image to use, the name of the service and the script file. In addition, you can add environment variables, specify the resources (RAM and CPUs) and choose the log level of the service.

    Note that specifying a tag in the container image can be convenient to avoid problems with quotas on certain container registries, such as Docker Hub. This is because Kubernetes defaults the imagePullPolicy of pods to Always when no tag or the latest tag is set, which checks the image version in the registry every time a job is launched.
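
    For instance (the pinned tag below is hypothetical), specifying a tag avoids the Always policy that Kubernetes applies when no tag or the latest tag is set:

    image: ghcr.io/grycap/imagemagick      # no tag -> treated as :latest, imagePullPolicy defaults to Always\nimage: ghcr.io/grycap/imagemagick:1.0  # pinned tag -> imagePullPolicy defaults to IfNotPresent\n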

    Next, the credentials of the storage providers to be used must be introduced. As the platform already includes a MinIO deployment, it is not necessary to enter credentials to use it.

    Multiple MinIO, Onedata and Amazon S3 storage providers can be used. Remember to click the \"ADD\" button after completing each one.

    Then, click the \"NEXT\" button to go to the last section of the wizard.

    In this section, you must first choose the paths of the storage provider to be used as the source of events, i.e. the input bucket and/or folder that will trigger the service.

    Only the minio.default provider can be used as input storage provider.

    After filling in each path, remember to click on the \"ADD INPUT\" button.

    Finally, the same must be done to indicate the output paths to be used in the desired storage providers. You can also indicate suffixes and/or prefixes to filter the files uploaded to each path by name.

    The resulting files can be stored in several storage providers, like in the following example, where they are stored in the MinIO server of the platform and in a Onedata space provided by the user.

    After clicking the \"SUBMIT\" button the new service will appear in the main view after a few seconds.

    "},{"location":"usage-ui/#triggering-the-service","title":"Triggering the service","text":""},{"location":"usage-ui/#http-endpoints","title":"HTTP endpoints","text":"

    OSCAR services can be invoked through auto-generated HTTP endpoints. Requests to these endpoints can be made in two ways:

    The content of the HTTP request body will be stored as a file that will be available via the INPUT_FILE_PATH environment variable for processing.

    A detailed specification of OSCAR's API and its different paths can be found here.

    "},{"location":"usage-ui/#minio-storage-tab","title":"MinIO Storage Tab","text":"

    The MinIO Storage tab is designed to manage buckets without using the MinIO UI, simplifying the process. From this tab, buckets, and folders inside them, can be created or removed. Furthermore, files can be uploaded to the buckets and downloaded from them. Each time a service is created or edited, any buckets that do not yet exist will be created.

    "},{"location":"usage-ui/#uploading-files","title":"Uploading files","text":"

    Once a service has been created, it can be invoked by uploading files to its input bucket/folder. This can be done through the MinIO web interface (accessible from the Kubernetes frontend IP, on port 30300) or from the \"Minio Storage\" section in the side menu of the OSCAR web interface. Expanding that menu will list the created buckets and, by clicking on a bucket's name, you will be able to see its content and upload and download files.

    To upload files, first click on the \"SELECT FILES\" button and choose the files you want to upload from your computer.

    Once you have chosen the files to upload, simply click on the \"UPLOAD\" button and the file will be uploaded, raising an event that will trigger the service.

    Note that the web interface includes a preview button for some file formats, such as images.

    "},{"location":"usage-ui/#service-status-and-logs","title":"Service status and logs","text":"

    When files are being processed by a service, it is important to know their status, as well as to observe the execution logs for testing. For this purpose, OSCAR includes a log view, accessible by clicking on the \"LOGS\" button in a service from the main view.

    In this view you can see all the jobs created for a service, as well as their status (\"Pending\", \"Running\", \"Succeeded\" or \"Failed\") and their creation, start and finish time.

    To view the logs generated by a job, simply click on the drop-down button located on the right.

    The view also features options to refresh the status of one or all jobs, as well as to delete them.

    "},{"location":"usage-ui/#downloading-files-from-minio","title":"Downloading files from MinIO","text":"

    Downloading files from the platform's MinIO storage provider can also be done using the OSCAR web interface. To do it, simply select one or more files and click on the button \"DOWNLOAD OBJECT\" (or \"DOWNLOAD ALL AS A ZIP\" if several files have been selected).

    In the following picture you can see the preview of the resulting file after the execution triggered in the previous step.

    "},{"location":"usage-ui/#deleting-services","title":"Deleting services","text":"

    Services can be deleted by clicking on the trash can icon from the main view.

    Once you have accepted the message shown in the image above, the service will be deleted after a few seconds.

    "}]} \ No newline at end of file diff --git a/sitemap.xml.gz b/sitemap.xml.gz index 279b1f52..ae9e6994 100644 Binary files a/sitemap.xml.gz and b/sitemap.xml.gz differ diff --git a/training/index.html b/training/index.html index 339141cc..df6ba5b3 100644 --- a/training/index.html +++ b/training/index.html @@ -1100,6 +1100,48 @@ + + + + + + +
  • + + + + + Mount external volumes + + + + +
  • + + + + + + + + + + +
  • + + + + + Additional configuration + + + + +
  • + + + + diff --git a/usage-ui/index.html b/usage-ui/index.html index 34d3f680..c96a8c50 100644 --- a/usage-ui/index.html +++ b/usage-ui/index.html @@ -1184,6 +1184,48 @@ + + + + + + +
  • + + + + + Mount external volumes + + + + +
  • + + + + + + + + + + +
  • + + + + + Additional configuration + + + + +
  • + + + +