From cd9efd7ddadd42d41929e6740c8e8c592e639fb5 Mon Sep 17 00:00:00 2001 From: PhilippPlank <32519998+PhilippPlank@users.noreply.github.com> Date: Mon, 5 Feb 2024 16:47:48 +0000 Subject: [PATCH] =?UTF-8?q?Deploying=20to=20main=20from=20@=20lava-nc/lava?= =?UTF-8?q?-docs@4b668c65581cd3fa499c0d9eef9d7c7c27ab8959=20=F0=9F=9A=80?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../slayer/notebooks/pilotnet/train.html | 2 +- ...-shot_learning_with_novelty_detection.html | 16 +++---- .../clp/tutorial02_clp_on_coil100.html | 4 +- .../in_depth/tutorial02_processes.html | 6 +-- .../in_depth/tutorial02_processes.ipynb | 48 +++++++++---------- .../in_depth/tutorial03_process_models.html | 4 +- .../in_depth/tutorial03_process_models.ipynb | 44 ++++++++--------- .../tutorial05_connect_processes.html | 8 ++-- .../tutorial06_hierarchical_processes.html | 2 +- .../tutorial07_remote_memory_access.html | 6 +-- .../in_depth/tutorial11_serialization.html | 2 +- 11 files changed, 71 insertions(+), 71 deletions(-) diff --git a/lava-lib-dl/slayer/notebooks/pilotnet/train.html b/lava-lib-dl/slayer/notebooks/pilotnet/train.html index d38d2d0..3629043 100644 --- a/lava-lib-dl/slayer/notebooks/pilotnet/train.html +++ b/lava-lib-dl/slayer/notebooks/pilotnet/train.html @@ -824,7 +824,7 @@

What are SDNNs?


Sigma-delta neural networks consist of two main units: a sigma decoder in the dendrite and a delta encoder in the axon. The delta encoder applies differential encoding to the output of a regular ANN activation, e.g. ReLU, and only sends a message to the next layer when the magnitude of the encoded change exceeds its threshold. The sigma unit accumulates the sparse event messages to restore the original activation value.
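To make this concrete, here is a minimal NumPy sketch of a delta encoder and sigma decoder pair. This is not the lava-dl implementation; the threshold value, array shapes, and function names are illustrative assumptions.

```python
import numpy as np

def delta_encode(activations, threshold=0.1):
    """Delta-encode a (T, N) activation sequence into sparse graded events."""
    prev = np.zeros_like(activations[0])      # previously seen activation
    residue = np.zeros_like(activations[0])   # sub-threshold change carried over
    events = np.zeros_like(activations)
    for t, a_t in enumerate(activations):
        delta = a_t - prev + residue          # change since last step (+ carry)
        send = np.abs(delta) >= threshold     # only large-enough changes are sent
        events[t][send] = delta[send]
        residue = np.where(send, 0.0, delta)  # keep unsent change for later
        prev = a_t
    return events

def sigma_decode(events):
    """Accumulate the sparse events to reconstruct the activation sequence."""
    return np.cumsum(events, axis=0)

# Example: a slowly varying input produces few events.
T, N = 50, 4
acts = np.maximum(0.0, np.linspace(0, 1, T)[:, None] + 0.01 * np.random.randn(T, N))
ev = delta_encode(acts)
print("fraction of entries carrying an event:", np.mean(ev != 0.0))
print("max reconstruction error:", np.max(np.abs(sigma_decode(ev) - acts)))
```

With a slowly varying input only a small fraction of entries carry events, while the reconstruction error stays below the delta threshold.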


A sigma-delta neuron is simply a regular activation unit wrapped by a sigma unit at its input and a delta unit at its output.

When the input to the network is a temporal sequence, the activations do not change much from one time-step to the next. Therefore, the messages between the layers are reduced, which in turn reduces the synaptic computation in the next layer. In addition, the graded event values can encode the change in magnitude within a single time-step, so there is no increase in latency through additional time-steps, unlike in rate-coded Spiking Neural Networks.


diff --git a/lava/notebooks/in_depth/clp/tutorial01_one-shot_learning_with_novelty_detection.html b/lava/notebooks/in_depth/clp/tutorial01_one-shot_learning_with_novelty_detection.html index fc2ed0f..7bed4e7 100644 --- a/lava/notebooks/in_depth/clp/tutorial01_one-shot_learning_with_novelty_detection.html +++ b/lava/notebooks/in_depth/clp/tutorial01_one-shot_learning_with_novelty_detection.html @@ -841,11 +841,11 @@

Online Continual Learning with Open-set Recognition

Finally, we want our system to be able to perform open-set recognition, i.e. rather than performing recognition purely on the known classes (closed-set recognition), we also want to recognize instances from unknown classes (Figure 2, Shu et al. 2020). For this purpose, we will use prototypes with a radius (or margin) of recognition defined around them (Figure 3). In terms of the spiking (LIF) neurons, this margin translates to the voltage threshold for spiking.


Figure 1. Continual Learning (image credit: Matthias De Lange)


Figure 2. Open-set recognition


Figure 3. The concept of prototypes in the brain. (Left) Learning all examples by heart is akin to k-NN. (Right) Learning an abstract concept from the seen examples is the idea behind prototypes and what CLP uses.

@@ -853,12 +853,12 @@

Similarity Measure: Why Cosine Similarity?


From the literature, we know that cosine similarity works well for high-dimensional spaces (Hersche et al. 2022). As our main problem domain is visual processing, which generally involves high-dimensional input, implicitly using cosine similarity fits our purposes. Implicitly, because we arrive at cosine similarity via the dot product of normalized vectors; from CLP's perspective, it still performs a dot-product similarity. This will play an essential role in deriving the learning rules and interpreting the similarity measure.

In Figure 4, we see prototypes with some recognition margin around them. In the left panel, a prototype with Euclidean similarity defines a point in Euclidean space and a circle in 2D (or a sphere in 3D) with a given radius (i.e., recognition threshold) around it. In the middle panel, we see the prototype vectors with cosine similarity. In this case, both input and prototype vectors can have different lengths: the similarity is measured as the angle between the input vector and the weight vectors. Hence the recognition margins are circular and spherical sectors in 2D and 3D, respectively. In the right panel, we see our case: all input and weight vectors are normalized, i.e., they fall on the surface of a hyper-sphere. The recognition margins are defined solely on this surface: an arc of a circle in 2D and a piece (cap) of the sphere's surface in 3D. Therefore, CLP always performs its computation on this kind of surface.
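As a quick illustration of panel (c), the sketch below shows that the dot product of normalized vectors is exactly the cosine similarity, and that the recognition margin becomes a simple threshold on that value. The vector dimensionality and threshold here are illustrative assumptions, not CLP defaults.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(v):
    return v / np.linalg.norm(v)

prototype = normalize(rng.standard_normal(16))              # learned prototype (unit length)
x_close = normalize(prototype + 0.05 * rng.standard_normal(16))
x_far = normalize(rng.standard_normal(16))

threshold = 0.9                                             # recognition margin on the sphere
for name, x in [("close", x_close), ("far", x_far)]:
    similarity = prototype @ x                              # dot product == cosine for unit vectors
    verdict = "recognized" if similarity >= threshold else "novel"
    print(f"{name}: similarity={similarity:.3f} -> {verdict}")
```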


Figure 4. Prototypes with different similarity measures in 2D and 3D. (a) Euclidean distance/similarity, (b) cosine similarity, (c) dot-product similarity on normalized vectors. Figures are adapted from [Liu et al. 2017].

References: Zhu et al. 2022, Liu et al. 2017

@@ -1011,7 +1011,7 @@

Data generation and visualization

Figure 5. CLP Diagram. The main components of the CLP algorithm and their connections are depicted.

This first version of CLP includes features like novelty detection, one-shot learning, and supervised and unsupervised labeling of the learned prototypes. The normalized input vectors are injected one by one into the prototype neurons through the prototype weights/synapses. If none of the prototype neurons spikes after the input injection, this is detected by the novelty detector via coincidence detection between the presence of the input and the absence of any output. The novelty detector then sends a strong third-factor signal to the prototype population, aimed explicitly at an unallocated neuron in this population. As a result, this prototype neuron learns the input in one shot, effectively memorizing it as a prototype. The outputs of the prototype neurons are sent to the Readout module, which interprets the prototype index as a prediction label. Note that one class may have more than one prototype. Therefore, the Readout module keeps a
@@ -1019,7 +1019,7 @@ Data generation and visualization
the reward-modulated learning rule.

The system described in Figure 6 runs on a CPU, though with the Loihi protocol. The Loihi 2 implementation will be coming soon.
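The control flow described above can be condensed into a few lines of plain Python. This is only a conceptual sketch of the novelty-detection and one-shot-allocation loop, not the Lava process implementation; the function name, similarity threshold, and pseudo-label convention are assumptions.

```python
import numpy as np

def clp_step(x, prototypes, labels, threshold=0.9, true_label=None):
    """One conceptual CLP step on a normalized input vector x.

    prototypes: list of normalized prototype vectors; labels: their labels
    (negative values act as pseudo-labels). Returns the reported label.
    """
    similarities = [p @ x for p in prototypes]
    if similarities and max(similarities) >= threshold:
        winner = int(np.argmax(similarities))      # a prototype neuron spiked
    else:
        # Novelty detected: input present but no output spike.
        # A third-factor signal lets an unallocated neuron learn x in one shot.
        prototypes.append(x.copy())
        labels.append(-len(prototypes))            # pseudo-label until supervised
        winner = len(prototypes) - 1
    if true_label is not None:                     # user-provided label relabels the winner
        labels[winner] = true_label
    return labels[winner]
```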


Figure 6. CLP Lava Diagram.

Let’s first set the parameters for all the processes.

@@ -1422,7 +1422,7 @@

Network Outputs: Novelty Detection, Prototypes, Predictions, Labeling
The 3rd input pattern is from cluster 1, and as expected, the novelty detector spikes again, this time allocating Prototype Neuron 1. However, this time the label is provided together with the input, so the labeling of the newly allocated neuron happens immediately (with the label 2). The next pattern is also from cluster 1, so the newly allocated neuron spikes and the correct label is predicted.

Finally, the fifth input pattern is from an unseen cluster (cluster 2). Hence, a novelty spike follows it after some time, allocating Prototype Neuron 2, which is assigned the pseudo-label -3. All subsequent input patterns from this cluster are recognized as -3. After one of those input patterns, the actual label (3) is provided, so the label of Prototype Neuron 2 is updated to 3. After that, the patterns from this cluster are predicted with the label 3. One can validate that the correct prototype neuron spikes for each input by comparing against the table of true cluster ids and labels above.
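The labeling behavior in this run can be summarized by the following bookkeeping sketch (plain Python, not the Readout process itself; the pseudo-label convention and function names are assumptions):

```python
proto_to_label = {}                       # prototype neuron index -> reported label

def allocate(proto_idx, user_label=None):
    # Label the new prototype immediately if the label arrived with the input,
    # otherwise hand out a negative pseudo-label until the true label shows up.
    proto_to_label[proto_idx] = user_label if user_label is not None else -(proto_idx + 1)

def relabel(proto_idx, user_label):
    # A later user-provided label overwrites the pseudo-label.
    proto_to_label[proto_idx] = user_label

allocate(2)                               # unseen cluster -> pseudo-label
print(proto_to_label[2])                  # -3 is reported until the true label arrives
relabel(2, 3)                             # the actual label (3) is provided later
print(proto_to_label[2])                  # subsequent predictions report 3
```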


Figure 7. Readout and labeling analysis.

diff --git a/lava/notebooks/in_depth/clp/tutorial02_clp_on_coil100.html b/lava/notebooks/in_depth/clp/tutorial02_clp_on_coil100.html index fd5a3d5..dcc36ff 100644 --- a/lava/notebooks/in_depth/clp/tutorial02_clp_on_coil100.html +++ b/lava/notebooks/in_depth/clp/tutorial02_clp_on_coil100.html @@ -867,7 +867,7 @@

Overview

Dataset: COIL-100


Figure 1. COIL-100 dataset. The dataset includes 72 frames for each of the 100 objects rotated on a turntable.

@@ -1046,7 +1046,7 @@

Visualization of the similarity between the training and testing samples

Lava diagram of CLP for COIL-100 experiment


Figure 1. Lava process diagram of the CLP that is capable of both unsupervised and supervised learning

Compared to the introductory CLP tutorial, we are using an upgraded version of the algorithm here. Specifically, this version is concurrently capable of both unsupervised and supervised learning. This is possible thanks to error feedback from the Readout process to the Allocator process. When there is a mismatch between the predicted label and the user-provided true label, the Readout process generates an error signal, which triggers the allocation of a new prototype neuron for the mistaken pattern. As we will see later, this improves performance. This error feedback can be turned on or off, and depending on this setting, CLP performs either supervised or unsupervised learning.
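A hedged sketch of this error-feedback rule in plain Python (not the Lava Readout/Allocator processes; the function and argument names are assumptions):

```python
def readout_step(predicted_label, true_label, allocate_new_prototype,
                 error_feedback=True):
    """Return the reported label; on a supervised mismatch, request a new prototype."""
    if error_feedback and true_label is not None and predicted_label != true_label:
        allocate_new_prototype(true_label)     # error signal -> Allocator
        return true_label
    return predicted_label

# With error feedback disabled, CLP falls back to purely unsupervised behavior.
requested = []
print(readout_step(5, 7, requested.append))    # mismatch -> 7, new prototype requested
print(requested)                               # [7]
```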

diff --git a/lava/notebooks/in_depth/tutorial02_processes.html b/lava/notebooks/in_depth/tutorial02_processes.html index 30dea2c..c20c9d9 100644 --- a/lava/notebooks/in_depth/tutorial02_processes.html +++ b/lava/notebooks/in_depth/tutorial02_processes.html @@ -835,7 +835,7 @@

What is a Process?
  • ports that share data with other Processes, and

  • an API that facilitates user interaction.

  • A Process can thus be as simple as a single neuron or a synapse, as complex as a full neural network, and as non-neuromorphic as a streaming interface for a peripheral device or an executed instance of regular program code.


Processes are independent of each other: they primarily operate on their own local memory and pass messages between each other via channels. Different Processes thus carry out their computations simultaneously and asynchronously, mirroring the high parallelism inherent in neuromorphic hardware. The parallel Processes are furthermore safe against side effects from shared-memory interaction.

Once a Process has been coded in Python, Lava allows you to run it seamlessly across different backends such as a CPU, a GPU, or neuromorphic cores. Developers can thus easily test and benchmark their applications on classical computing hardware and then deploy them to neuromorphic hardware. Furthermore, Lava takes advantage of distributed, heterogeneous hardware such as Loihi, as it can run some Processes on neuromorphic cores and others, in parallel, on embedded conventional CPUs and GPUs.
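For example, switching backends typically amounts to choosing a different run configuration. Below is a minimal sketch using the LIF Process from Lava's process library together with the CPU simulation config; the parameter values are illustrative and constructor arguments may differ slightly between Lava releases.

```python
from lava.proc.lif.process import LIF
from lava.magma.core.run_conditions import RunSteps
from lava.magma.core.run_configs import Loihi1SimCfg

lif = LIF(shape=(3,), du=0, dv=0, vth=10)      # three LIF neurons

# Loihi1SimCfg selects the Python (CPU) ProcessModels; other run configs
# would map the same Process onto other backends.
lif.run(condition=RunSteps(num_steps=1), run_cfg=Loihi1SimCfg())
print(lif.v.get())                             # inspect a Var after running
lif.stop()
```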

    While Lava provides a growing library of Processes, you can easily write your own processes that suit your needs.

    @@ -844,12 +844,12 @@

What is a Process?
How to build a Process?

    Overall architecture

All Processes in Lava share a universal architecture as they inherit from the same AbstractProcess class. Each Process consists of the following four key components.

    AbstractProcess: Defining Vars, Ports, and the API

    When you create your own new process, you need to inherit from the AbstractProcess class. As an example, we will implement the class LIF, a group of leaky integrate-and-fire (LIF) neurons.
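A condensed sketch of such a Process definition is shown below; it follows the structure used in this tutorial, but the parameter set is abbreviated and the default values are illustrative.

```python
from lava.magma.core.process.process import AbstractProcess
from lava.magma.core.process.variable import Var
from lava.magma.core.process.ports.ports import InPort, OutPort


class LIF(AbstractProcess):
    """Leaky integrate-and-fire neurons (condensed sketch)."""

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        shape = kwargs.get("shape", (1,))
        self.a_in = InPort(shape=shape)                         # synaptic input
        self.s_out = OutPort(shape=shape)                       # spike output
        self.u = Var(shape=shape, init=0)                       # synaptic current
        self.v = Var(shape=shape, init=0)                       # membrane voltage
        self.du = Var(shape=(1,), init=kwargs.get("du", 0))     # current decay
        self.dv = Var(shape=(1,), init=kwargs.get("dv", 0))     # voltage decay
        self.bias = Var(shape=shape, init=kwargs.get("bias", 0))
        self.vth = Var(shape=(1,), init=kwargs.get("vth", 10))  # spiking threshold
```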


    diff --git a/lava/notebooks/in_depth/tutorial02_processes.ipynb b/lava/notebooks/in_depth/tutorial02_processes.ipynb index cba743e..14081bd 100644 --- a/lava/notebooks/in_depth/tutorial02_processes.ipynb +++ b/lava/notebooks/in_depth/tutorial02_processes.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "7073530b", + "id": "9c9d7867", "metadata": {}, "source": [ "*Copyright (C) 2021 Intel Corporation*
    \n", @@ -18,7 +18,7 @@ }, { "cell_type": "markdown", - "id": "a7abe987", + "id": "246140a2", "metadata": {}, "source": [ "## Recommended tutorials before starting:\n", @@ -49,7 +49,7 @@ }, { "cell_type": "markdown", - "id": "b480ac77", + "id": "77ded863", "metadata": {}, "source": [ "## How to build a _Process_?\n", @@ -62,7 +62,7 @@ }, { "cell_type": "markdown", - "id": "26ce52fb", + "id": "cbef2d63", "metadata": {}, "source": [ "#### _AbstractProcess_: Defining _Vars_, _Ports_, and the API\n", @@ -91,7 +91,7 @@ { "cell_type": "code", "execution_count": 1, - "id": "018183e5", + "id": "7e0c28aa", "metadata": {}, "outputs": [], "source": [ @@ -134,7 +134,7 @@ }, { "cell_type": "markdown", - "id": "accbc26c", + "id": "93bd4ffe", "metadata": {}, "source": [ "You may have noticed that most of the _Vars_ were initialized by scalar integers. But the synaptic current _u_ illustrates that _Vars_ can in general be initialized with numeric objects that have a dimensionality equal or less than specified by its _shape_ argument. The initial value will be scaled up to match the _Var_ dimension at run time.\n", @@ -148,7 +148,7 @@ }, { "cell_type": "markdown", - "id": "403f63f5", + "id": "2dcbdfea", "metadata": {}, "source": [ "#### _ProcessModel_: Defining the behavior of a _Process_\n", @@ -163,7 +163,7 @@ { "cell_type": "code", "execution_count": 2, - "id": "996c3887", + "id": "b8648def", "metadata": {}, "outputs": [], "source": [ @@ -201,7 +201,7 @@ }, { "cell_type": "markdown", - "id": "3c9dc3ce", + "id": "f4cd6e22", "metadata": {}, "source": [ "#### Instantiating the _Process_\n", @@ -212,7 +212,7 @@ { "cell_type": "code", "execution_count": 3, - "id": "5404dcf5", + "id": "16c35c5d", "metadata": {}, "outputs": [], "source": [ @@ -223,7 +223,7 @@ }, { "cell_type": "markdown", - "id": "fdf77489", + "id": "d0e3b96e", "metadata": {}, "source": [ "## Interacting with _Processes_\n", @@ -238,7 +238,7 @@ { "cell_type": "code", "execution_count": 4, - "id": "3a264d01", + "id": "4b19fb28", "metadata": {}, "outputs": [ { @@ -255,7 +255,7 @@ }, { "cell_type": "markdown", - "id": "2c5b2dab", + "id": "6f7ab017", "metadata": {}, "source": [ "As described above, the _Var_ _v_ has in this example been initialized as a scalar value that describes the membrane voltage of all three neurons simultaneously." @@ -263,7 +263,7 @@ }, { "cell_type": "markdown", - "id": "cb913233", + "id": "af6bab8c", "metadata": {}, "source": [ "#### Using custom APIs\n", @@ -274,7 +274,7 @@ { "cell_type": "code", "execution_count": 5, - "id": "03d0e4be", + "id": "1c061b7c", "metadata": {}, "outputs": [ { @@ -297,7 +297,7 @@ }, { "cell_type": "markdown", - "id": "f43f5500", + "id": "8218b31d", "metadata": {}, "source": [ "#### Executing a _Process_\n", @@ -310,7 +310,7 @@ { "cell_type": "code", "execution_count": 6, - "id": "37e78570", + "id": "ba96ae67", "metadata": {}, "outputs": [], "source": [ @@ -322,7 +322,7 @@ }, { "cell_type": "markdown", - "id": "6f30e19b", + "id": "cf03945e", "metadata": {}, "source": [ "The voltage of each LIF neuron should now have increased by the bias value, 3, from their initial values of 0. Check if the neurons have evolved as expected." 
@@ -331,7 +331,7 @@ { "cell_type": "code", "execution_count": 7, - "id": "192fbacc", + "id": "f6777853", "metadata": {}, "outputs": [ { @@ -348,7 +348,7 @@ }, { "cell_type": "markdown", - "id": "cefae139", + "id": "685486d8", "metadata": {}, "source": [ "#### Update _Vars_\n", @@ -359,7 +359,7 @@ { "cell_type": "code", "execution_count": 8, - "id": "c569dd5b", + "id": "b3ef0f56", "metadata": {}, "outputs": [ { @@ -377,7 +377,7 @@ }, { "cell_type": "markdown", - "id": "9e075ac2", + "id": "6d374d27", "metadata": {}, "source": [ "Note that the _set()_ method becomes available once the _Process_ has been run. Prior to the first run, use the *\\_\\_init\\_\\_* function of the _Process_ to set _Vars_.\n", @@ -390,7 +390,7 @@ { "cell_type": "code", "execution_count": 9, - "id": "8cc46d6e", + "id": "f9d98f6e", "metadata": {}, "outputs": [], "source": [ @@ -399,7 +399,7 @@ }, { "cell_type": "markdown", - "id": "0cd1da99", + "id": "62d0de4f", "metadata": {}, "source": [ "## How to learn more?\n", diff --git a/lava/notebooks/in_depth/tutorial03_process_models.html b/lava/notebooks/in_depth/tutorial03_process_models.html index f29411d..cd39b28 100644 --- a/lava/notebooks/in_depth/tutorial03_process_models.html +++ b/lava/notebooks/in_depth/tutorial03_process_models.html @@ -823,7 +823,7 @@

    ProcessModels


    In this tutorial, we walk through the creation of multiple LeafProcessModels that could be used to implement the behavior of a Leaky Integrate-and-Fire (LIF) neuron Process.
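As a preview, here is a condensed sketch of a floating-point Python ProcessModel for the LIF Process; it is abbreviated for illustration and assumes the LIF Process class from the previous tutorial is in scope.

```python
import numpy as np

from lava.magma.core.model.py.model import PyLoihiProcessModel
from lava.magma.core.model.py.type import LavaPyType
from lava.magma.core.model.py.ports import PyInPort, PyOutPort
from lava.magma.core.decorator import implements, requires
from lava.magma.core.resources import CPU
from lava.magma.core.sync.protocols.loihi_protocol import LoihiProtocol


@implements(proc=LIF, protocol=LoihiProtocol)   # LIF as defined in the previous tutorial
@requires(CPU)
class PyLifModel(PyLoihiProcessModel):
    a_in: PyInPort = LavaPyType(PyInPort.VEC_DENSE, float)
    s_out: PyOutPort = LavaPyType(PyOutPort.VEC_DENSE, bool, precision=1)
    u: np.ndarray = LavaPyType(np.ndarray, float)
    v: np.ndarray = LavaPyType(np.ndarray, float)
    bias: np.ndarray = LavaPyType(np.ndarray, float)
    du: float = LavaPyType(float, float)
    dv: float = LavaPyType(float, float)
    vth: float = LavaPyType(float, float)

    def run_spk(self):
        a_in_data = self.a_in.recv()                 # gather synaptic input
        self.u[:] = self.u * (1 - self.du) + a_in_data
        self.v[:] = self.v * (1 - self.dv) + self.u + self.bias
        s_out = self.v >= self.vth                   # spike where threshold is reached
        self.v[s_out] = 0                            # reset membrane voltage
        self.s_out.send(s_out)
```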

    How to learn more?

    diff --git a/lava/notebooks/in_depth/tutorial03_process_models.ipynb b/lava/notebooks/in_depth/tutorial03_process_models.ipynb index dc85042..321d8a5 100644 --- a/lava/notebooks/in_depth/tutorial03_process_models.ipynb +++ b/lava/notebooks/in_depth/tutorial03_process_models.ipynb @@ -2,7 +2,7 @@ "cells": [ { "cell_type": "markdown", - "id": "4b46f7d6", + "id": "e74e8b24", "metadata": {}, "source": [ "*Copyright (C) 2021 Intel Corporation*
    \n", @@ -20,7 +20,7 @@ }, { "cell_type": "markdown", - "id": "9d9d60a1", + "id": "c3c1095c", "metadata": {}, "source": [ "" @@ -28,7 +28,7 @@ }, { "cell_type": "markdown", - "id": "e5a09365", + "id": "3e6fc6bb", "metadata": {}, "source": [ "In this tutorial, we walk through the creation of multiple _LeafProcessModels_ that could be used to implement the behavior of a Leaky Integrate-and-Fire (LIF) neuron _Process_." @@ -36,7 +36,7 @@ }, { "cell_type": "markdown", - "id": "8ddbf3d8", + "id": "fb2871b8", "metadata": {}, "source": [ "## Recommended tutorials before starting: \n", @@ -46,7 +46,7 @@ }, { "cell_type": "markdown", - "id": "e05ac0b8", + "id": "67114ad6", "metadata": {}, "source": [ "## Create a LIF _Process_" @@ -54,7 +54,7 @@ }, { "cell_type": "markdown", - "id": "b1a380aa", + "id": "36293038", "metadata": {}, "source": [ "First, we will define our LIF _Process_ exactly as it is defined in the `Magma` core library of Lava. (For more information on defining Lava Processes, see the [previous tutorial](./tutorial02_processes.ipynb).) Here the LIF neural _Process_ accepts activity from synaptic inputs via _InPort_ `a_in` and outputs spiking activity via _OutPort_ `s_out`." @@ -63,7 +63,7 @@ { "cell_type": "code", "execution_count": 1, - "id": "e06732ce", + "id": "b794b94d", "metadata": {}, "outputs": [], "source": [ @@ -113,7 +113,7 @@ }, { "cell_type": "markdown", - "id": "b3bd92c3", + "id": "99ef92c7", "metadata": {}, "source": [ "## Create a Python _LeafProcessModel_ that implements the LIF _Process_" @@ -121,7 +121,7 @@ }, { "cell_type": "markdown", - "id": "96e440ac", + "id": "4bb2d3e3", "metadata": {}, "source": [ "Now, we will create a Python _ProcessModel_, or _PyProcessModel_, that runs on a CPU compute resource and implements the LIF _Process_ behavior." 
@@ -129,7 +129,7 @@ }, { "cell_type": "markdown", - "id": "c0aef587", + "id": "12644ba3", "metadata": {}, "source": [ "#### Setup" @@ -137,7 +137,7 @@ }, { "cell_type": "markdown", - "id": "4b3130d3", + "id": "ba22172d", "metadata": {}, "source": [ "We begin by importing the required Lava classes.\n", @@ -147,7 +147,7 @@ { "cell_type": "code", "execution_count": 2, - "id": "3d8fa081", + "id": "a46f5f3e", "metadata": {}, "outputs": [], "source": [ @@ -159,7 +159,7 @@ }, { "cell_type": "markdown", - "id": "0dc6ba51", + "id": "0ad7d044", "metadata": { "tags": [] }, @@ -170,7 +170,7 @@ { "cell_type": "code", "execution_count": 3, - "id": "b36b9db8", + "id": "122cad39", "metadata": {}, "outputs": [], "source": [ @@ -181,7 +181,7 @@ }, { "cell_type": "markdown", - "id": "9e5ca66f", + "id": "e5f0ef77", "metadata": {}, "source": [ "#### Defining a _PyLifModel_ for LIF" @@ -189,7 +189,7 @@ }, { "cell_type": "markdown", - "id": "483ebd85", + "id": "e99c5a21", "metadata": { "tags": [] }, @@ -206,7 +206,7 @@ { "cell_type": "code", "execution_count": 4, - "id": "21009606", + "id": "db0455f5", "metadata": {}, "outputs": [], "source": [ @@ -245,7 +245,7 @@ }, { "cell_type": "markdown", - "id": "d8b6b52a", + "id": "95f3eb36", "metadata": {}, "source": [ "#### Compile and run _PyLifModel_" @@ -254,7 +254,7 @@ { "cell_type": "code", "execution_count": 5, - "id": "b03e263b", + "id": "ae4b5474", "metadata": {}, "outputs": [ { @@ -278,7 +278,7 @@ }, { "cell_type": "markdown", - "id": "2dffe4e3", + "id": "161ff13c", "metadata": {}, "source": [ "## Selecting 1 _ProcessModel_: More on _LeafProcessModel_ attributes and relations" @@ -286,7 +286,7 @@ }, { "cell_type": "markdown", - "id": "feb4f57a", + "id": "0c1ccff7", "metadata": {}, "source": [ "We have demonstrated multiple _ProcessModel_ implementations of a single LIF _Process_. How is one of several _ProcessModels_ then selected as the implementation of a _Process_ during runtime? To answer that question, we take a deeper dive into the attributes of a _LeafProcessModel_ and the relationship between a _LeafProcessModel_, a _Process_, and a _SyncProtocol_. \n", @@ -298,7 +298,7 @@ }, { "cell_type": "markdown", - "id": "7fc54a14", + "id": "2dee4f40", "metadata": { "tags": [] }, diff --git a/lava/notebooks/in_depth/tutorial05_connect_processes.html b/lava/notebooks/in_depth/tutorial05_connect_processes.html index 98829de..adb374c 100644 --- a/lava/notebooks/in_depth/tutorial05_connect_processes.html +++ b/lava/notebooks/in_depth/tutorial05_connect_processes.html @@ -837,7 +837,7 @@

    Building a network of Processes

    Create a connection

The objective is to connect Process P1 with Process P2. P1 has an output port (OutPort) called out and P2 has an input port (InPort) called inp. Data provided by P1 to the port out should be transferred to P2 and received via the port inp.
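A minimal sketch of the connection call, assuming P1 and P2 are defined as in this tutorial with an OutPort named out and an InPort named inp:

```python
# Instantiate the two Processes and wire OutPort 'out' to InPort 'inp'.
sender = P1()
recv = P2()
sender.out.connect(recv.inp)

# Equivalently, the connection can be made from the receiving side:
# recv.inp.connect_from(sender.out)
```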


    @@ -969,7 +969,7 @@

    Create a connection

    Possible connections

    This first example was very simple. In principle, Processes can have multiple input and output Ports which can be freely connected with each other. Also, Processes which execute on different compute resources can be connected in the same way.
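As a small illustration of fan-out, one OutPort can feed several InPorts (a sketch that again assumes the P1/P2-style Processes from above):

```python
sender = P1()
recv_a, recv_b = P2(), P2()

# One OutPort can drive several InPorts: each connect() call adds a channel.
sender.out.connect(recv_a.inp)
sender.out.connect(recv_b.inp)
```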


    There are some things to consider though:

    Component