
Move Compute provider package from CRISP to Enclave #186

Merged 1 commit into main on Nov 27, 2024

Conversation

hmzakhalid (Member) commented on Nov 26, 2024

Summary by CodeRabbit

Release Notes

  • New Features

    • Introduced the "FHE Compute Manager" framework for managing Secure Programs within the Enclave Protocol.
    • Added support for both sequential and parallel Fully Homomorphic Encryption (FHE) computations.
    • New ComputeManager struct for managing computations with a specified provider.
    • New MerkleTree struct for managing and computing hashes.
  • Documentation

    • Added Readme.md with installation instructions and usage guidelines for the "FHE Compute Manager."
  • Bug Fixes

    • Improved handling of input processing and error management in FHE computations.

coderabbitai bot (Contributor) commented on Nov 26, 2024

Walkthrough

This pull request introduces several changes to the compute_provider package, including the addition of a .gitignore file to exclude the target/ directory, a new Cargo.toml file defining the package and its dependencies, and a Readme.md file that outlines the framework's functionality. Additionally, multiple Rust source files are created, defining traits, structs, and methods for handling Fully Homomorphic Encryption (FHE) computations, including a ComputeManager for managing these processes and a MerkleTree for hash management.

Changes

File Path | Change Summary
packages/compute_provider/.gitignore | Added entry to ignore the target/ directory.
packages/compute_provider/Cargo.toml | Introduced Cargo.toml for compute-provider, defining the package name, version, edition, and dependencies (serde, lean-imt, sha3, num-bigint, num-traits, hex, light-poseidon, ark-ff, ark-bn254, rayon).
packages/compute_provider/Readme.md | Created Readme.md detailing the "FHE Compute Manager," its capabilities, installation instructions, usage guidelines, and an example of integrating with Risc0Provider.
packages/compute_provider/src/ciphertext_output.rs | Added the ComputeProvider trait and ComputeResult struct, defining associated types and fields for ciphertext management.
packages/compute_provider/src/compute_input.rs | Introduced the FHEInputs and ComputeInput structs for handling FHE inputs, including a process method for processing inputs and generating results.
packages/compute_provider/src/compute_manager.rs | Added the ComputeManager struct for managing computations, with methods for sequential and parallel execution of FHE processes.
packages/compute_provider/src/lib.rs | Introduced four new modules (compute_input, compute_manager, merkle_tree, ciphertext_output), exporting public entities from the first three.
packages/compute_provider/src/merkle_tree.rs | Created the MerkleTree struct for managing and computing hashes for a Merkle tree, with methods for computing leaf hashes and building the tree.

Possibly related PRs

  • Evm Integration and integration script #107: Involves modifications to Cargo.toml files and the introduction of new packages, related to the overall project structure and dependencies.
  • Modularization #133: Focuses on modularization and restructuring the codebase into distinct modules, aligning with the changes in this PR.
  • Consolidate providers #168: Reflects an intent to streamline and enhance the architecture of the project, similar to the organizational changes in the main PR.

Suggested reviewers

  • nginnever
  • auryn-macmillan

Poem

🐇 In the compute garden, we hop and play,
New files and functions brighten the day.
With trees of Merkle and inputs so bright,
FHE computations take flight in the night!
So let’s code and create, with joy and delight,
For in every change, our future is bright! 🌟



coderabbitai bot (Contributor) left a comment

Actionable comments posted: 9

🧹 Outside diff range and nitpick comments (21)
packages/compute_provider/src/ciphertext_output.rs (1)

1-18: Consider additional security measures for Enclave environment

Since this code is being moved to an Enclave environment, consider implementing:

  1. Constant-time operations for hash comparisons to prevent timing attacks
  2. Memory zeroization for sensitive data
  3. Audit logging for proof generation attempts

Would you like assistance in implementing these security measures?
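For the first two items above, a minimal sketch of what constant-time comparison and memory zeroization could look like, assuming the `subtle` and `zeroize` crates were added (neither is currently listed in Cargo.toml, so treat this as an illustration rather than a drop-in change):

```rust
// Illustration only: constant-time digest comparison and wiping of sensitive
// buffers, assuming `subtle` and `zeroize` are added as dependencies.
use subtle::ConstantTimeEq;
use zeroize::Zeroize;

/// Compare two digests without leaking timing information about where they differ.
fn digests_match(expected: &[u8], actual: &[u8]) -> bool {
    // `ct_eq` returns a `Choice`; converting to bool only at the end keeps the
    // byte-by-byte comparison constant-time.
    bool::from(expected.ct_eq(actual))
}

/// Wipe a sensitive buffer once proof generation is done with it.
fn consume_secret(mut secret: Vec<u8>) {
    // ... use `secret` ...
    secret.zeroize(); // overwrite the contents before the Vec is dropped
}
```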

packages/compute_provider/Readme.md (5)

3-3: Fix the markdown link and enhance the introduction

The link to the Enclave Protocol is not properly formatted. Additionally, consider adding a brief explanation of FHE for newcomers.

-This project provides a flexible and efficient framework for managing Secure Programs (SP) of the [Enclave Protocol](enclave.gg). It supports both sequential and parallel processing, with the ability to integrate various compute providers.
+This project provides a flexible and efficient framework for managing Secure Programs (SP) of the [Enclave Protocol](https://enclave.gg). It supports both sequential and parallel processing of Fully Homomorphic Encryption (FHE) computations, enabling secure data processing while maintaining privacy. The framework is designed to integrate various compute providers.

16-19: Enhance dependency specification

Consider adding version constraints and improving the repository specification for better dependency management.

 [dependencies]
-compute-provider = { git = "https://github.com/gnosisguild/enclave.git", path = "packages/compute_provider"}
+compute-provider = { 
+    git = "https://github.com/gnosisguild/enclave.git", 
+    branch = "main",  # or specify a version tag
+    path = "packages/compute_provider"
+}

44-74: Add security considerations and implementation details

The Risc0 example should include:

  1. Security considerations for ZK proof generation
  2. Implementation details for the prove method
  3. Error handling best practices

Consider adding a security considerations section and expanding the implementation details:

impl ComputeProvider for Risc0Provider {
    type Output = Risc0Output;
    
    fn prove(&self, input: &ComputeInput) -> Result<Self::Output> {
        // Set up the executor environment with security parameters
        let env = ExecutorEnv::builder()
            .add_input(&input.serialize()?)
            .build()?;

        // Generate the ZK proof
        let prover = default_prover();
        let receipt = prover.prove_elf(env, VOTING_ELF)?;
        
        // Verify the proof locally before returning
        receipt.verify(VerifierContext::default())?;
        
        Ok(Risc0Output {
            result: deserialize_result(&receipt.journal)?,
            seal: receipt.seal
        })
    }
}

79-87: Enhance configuration documentation

The configuration section should include:

  1. Parameter type information
  2. Recommended values and constraints
  3. Performance implications of different configurations

Consider expanding the configuration section:

 ## Configuration
 
 The `ComputeManager::new()` function takes several parameters:
 
-  - `provider`: An instance of your compute provider (e.g., `Risc0Provider`)
-  - `fhe_inputs`: The FHE inputs for the computation
-  - `fhe_processor`: A function to process the FHE inputs
-  - `use_parallel`: A boolean indicating whether to use parallel processing
-  - `batch_size`: An optional batch size for parallel processing, must be a power of 2
+### Parameters
+
+- `provider: T where T: ComputeProvider`
+  - Your compute provider implementation (e.g., `Risc0Provider`)
+  - Must implement the `ComputeProvider` trait
+
+- `fhe_inputs: FHEInputs`
+  - The encrypted inputs for computation
+  - Must be properly formatted and encrypted
+
+- `fhe_processor: Fn(FHEInputs) -> Result<ComputeResult>`
+  - Function to process the FHE inputs
+  - Should handle errors appropriately
+
+- `use_parallel: bool`
+  - Enable parallel processing for better performance
+  - Recommended for large computations
+
+- `batch_size: Option<usize>`
+  - Must be a power of 2 when specified
+  - Recommended values: 32, 64, or 128 depending on input size
+  - Affects memory usage and processing speed
🧰 Tools
🪛 LanguageTool

[uncategorized] ~83-~83: Loose punctuation mark.
Context: ... takes several parameters: - provider: An instance of your compute provider (e...

(UNLIKELY_OPENING_PUNCTUATION)


87-87: Add error handling and troubleshooting sections

Consider adding sections for:

  1. Common errors and their solutions
  2. Debugging tips
  3. Performance optimization guidelines

Would you like me to provide a template for these sections?

packages/compute_provider/src/compute_input.rs (4)

5-5: Consider using a trait instead of a function pointer for FHEProcessor

Defining FHEProcessor as a function pointer restricts flexibility. Using a trait bound allows for passing closures and other callable types, enhancing adaptability.

Apply this change:

-pub type FHEProcessor = fn(&FHEInputs) -> Vec<u8>;
+pub trait FHEProcessor {
+    fn process(&self, inputs: &FHEInputs) -> Vec<u8>;
+}

Update the process method signature:

-    pub fn process(&self, fhe_processor: FHEProcessor) -> ComputeResult {
+    pub fn process<P: FHEProcessor>(&self, fhe_processor: &P) -> ComputeResult {

And adjust the invocation accordingly:

-        let processed_ciphertext = (fhe_processor)(&self.fhe_inputs);
+        let processed_ciphertext = fhe_processor.process(&self.fhe_inputs);
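One concrete benefit of the trait form: a blanket implementation lets plain functions and capturing closures both satisfy it, so existing call sites keep working. A sketch of that blanket impl, with the crate's types redefined locally to keep it self-contained (the blanket impl is an assumption layered on top of the suggestion, not part of the proposed diff):

```rust
// Local stand-ins for the crate's types, just to keep the sketch self-contained.
pub struct FHEInputs {
    pub ciphertexts: Vec<(Vec<u8>, u64)>,
    pub params: Vec<u8>,
}

pub trait FHEProcessor {
    fn process(&self, inputs: &FHEInputs) -> Vec<u8>;
}

// Any closure or fn with the right signature automatically implements the trait,
// so callers are not limited to `fn` pointers.
impl<F> FHEProcessor for F
where
    F: Fn(&FHEInputs) -> Vec<u8>,
{
    fn process(&self, inputs: &FHEInputs) -> Vec<u8> {
        self(inputs)
    }
}

fn example(inputs: &FHEInputs) -> Vec<u8> {
    // A capturing closure now works where only a function pointer did before.
    let tag = 0xABu8;
    let processor = |i: &FHEInputs| {
        let mut out = vec![tag];
        out.extend(i.params.iter());
        out
    };
    processor.process(inputs)
}
```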

9-9: Define a dedicated struct for ciphertext pairs to improve clarity

Currently, ciphertexts is defined as Vec<(Vec<u8>, u64)>, which may reduce readability. Defining a struct for the ciphertext and its identifier enhances code clarity.

Define a new struct:

pub struct CiphertextPair {
    pub data: Vec<u8>,
    pub id: u64,
}

Update the FHEInputs struct:

-    pub ciphertexts: Vec<(Vec<u8>, u64)>,
+    pub ciphertexts: Vec<CiphertextPair>,

35-40: Update return type to Result and handle all possible errors

With the changes to error handling, ensure that the function returns a Result and that all code paths are covered.

Modify the return statement:

-        ComputeResult {
+        Ok(ComputeResult {
             ciphertext_hash: processed_hash,
             params_hash,
             merkle_root,
-        }
+        })

7-11: Derive PartialEq and Eq for structs where appropriate

If instances of FHEInputs or ComputeInput need to be compared, deriving PartialEq and Eq can be beneficial.

Add derives to the structs:

 #[derive(Clone, Debug, serde::Serialize, serde::Deserialize)]
+#[derive(PartialEq, Eq)]
 pub struct FHEInputs { /* ... */ }

 #[derive(Clone, Debug, serde::Serialize, serde::Deserialize)]
+#[derive(PartialEq, Eq)]
 pub struct ComputeInput { /* ... */ }

Also applies to: 13-18

packages/compute_provider/src/merkle_tree.rs (5)

10-12: Consider making leaf_hashes a private field.

Exposing leaf_hashes as a public field allows external modification, which could compromise the integrity of the MerkleTree. Making it private and providing necessary accessor methods will enhance encapsulation and maintainability.
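A small sketch of the encapsulated form, assuming the leaf hashes are stored as hex strings as the surrounding code suggests (the accessor names are illustrative):

```rust
pub struct MerkleTree {
    // Private: callers can read the leaf hashes but not mutate them directly.
    leaf_hashes: Vec<String>,
}

impl MerkleTree {
    /// Read-only view of the computed leaf hashes.
    pub fn leaf_hashes(&self) -> &[String] {
        &self.leaf_hashes
    }

    /// Controlled mutation path, keeping any invariants inside the type.
    pub fn push_leaf_hash(&mut self, hash: String) {
        self.leaf_hashes.push(hash);
    }
}
```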


23-24: Remove unnecessary trim_start_matches("0x") call.

The hex::encode function does not prepend a "0x" to the output. Therefore, calling trim_start_matches("0x") is redundant and can be safely removed.

Suggested change:

 let hex_output = hex::encode(Keccak256::digest(&item.0));
-let sanitized_hex = hex_output.trim_start_matches("0x");
+let sanitized_hex = hex_output;

25-28: Optimize conversion from BigUint to Fr.

To improve performance and reduce unnecessary conversions, you can convert BigUint directly to Fr without converting to a string.

Suggested change:

-let numeric_value = BigUint::from_str_radix(sanitized_hex, 16)
-    .unwrap()
-    .to_string();
-let fr_element = Fr::from_str(&numeric_value).unwrap();
+let numeric_value = BigUint::from_str_radix(sanitized_hex, 16).unwrap();
+let fr_element = Fr::from(numeric_value);

30-30: Instantiate Poseidon instance outside the loop to improve performance.

Creating a new Poseidon instance in each iteration can be inefficient. Instantiate it once before the loop and reuse it for all items.

Suggested change:

+        let mut poseidon_instance = Poseidon::<Fr>::new_circom(2).unwrap();
         for item in data {
             // Existing code inside the loop
-            let mut poseidon_instance = Poseidon::<Fr>::new_circom(2).unwrap();

45-45: Remove unnecessary trim_start_matches("0x") call.

Since the hex::encode function does not add a "0x" prefix, trimming it is unnecessary.

Suggested change:

-let sanitized_node = node.trim_start_matches("0x");
+let sanitized_node = node;
packages/compute_provider/src/compute_manager.rs (6)

111-111: Use explicit integer suffixes instead of type casting

In the mapping of results, you use 0 as u64, which involves a type cast.

For clarity and to avoid unnecessary casting, use an explicit integer suffix:

-.map(|result| (result.1.clone(), 0 as u64)) // The index is not used for the final computation in the parallel case
+.map(|result| (result.1.clone(), 0u64)) // The index is not used for the final computation in the parallel case

85-85: Avoid unnecessary cloning of chunks in parallel processing

In the start_parallel method, you clone each chunk when creating fhe_inputs, which can be inefficient for large datasets.

Consider borrowing the chunk data instead of cloning:

 let fhe_inputs = FHEInputs {
-    ciphertexts: chunk.clone(),
+    ciphertexts: chunk,
     params: params.to_vec(),
 };

You may need to adjust the definitions and lifetimes to accommodate borrowed data.


69-70: Evaluate the necessity of cloning and using Arc for shared data

You clone ciphertexts and params and wrap them in Arc, which might introduce unnecessary overhead.

If ciphertexts and params are not modified within the parallel iterator, consider removing the clones:

 let ciphertexts = Arc::new(self.input.fhe_inputs.ciphertexts.clone());
 let params = Arc::new(self.input.fhe_inputs.params.clone());
+// If possible, avoid cloning:
+let ciphertexts = Arc::new(self.input.fhe_inputs.ciphertexts);
+let params = Arc::new(self.input.fhe_inputs.params.clone());

Ensure that the data is not mutated to maintain thread safety.


28-29: Validate the default batch size for parallel execution

The default batch_size is hardcoded as 2 when not provided, which may not be optimal.

Consider making the default batch size configurable or calculating it based on input size:

 let batch_size = self.batch_size.unwrap_or(2);
+// Alternatively, set a more suitable default or calculate dynamically
+let batch_size = self.batch_size.unwrap_or_else(|| calculate_optimal_batch_size());
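One possible shape for such a helper, sized from the number of ciphertexts and the available cores and rounded down to a power of two to respect the existing constraint (the name, signature, and policy are illustrative; the suggestion above shows it taking no arguments):

```rust
/// Illustrative only: pick a batch size near `ciphertexts / cores`, clamped to
/// at least 2 and rounded down to a power of two.
fn calculate_optimal_batch_size(num_ciphertexts: usize) -> usize {
    let cores = std::thread::available_parallelism()
        .map(|n| n.get())
        .unwrap_or(1);
    let target = (num_ciphertexts / cores).max(2);
    // Largest power of two that is <= target.
    1usize << (usize::BITS - 1 - target.leading_zeros())
}
```

The manager could then do something like `self.batch_size.unwrap_or_else(|| calculate_optimal_batch_size(self.input.fhe_inputs.ciphertexts.len()))`.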

5-5: Confirm that the rayon prelude is necessary

You include use rayon::prelude::*;, which imports a lot into the namespace.

To keep the namespace clean, consider importing only what's needed:

-use rayon::prelude::*;
+use rayon::iter::ParallelIterator;
+use rayon::prelude::IntoParallelIterator;

61-61: Ensure hash function meets security requirements

You use Keccak256 for hashing, but depending on security requirements, SHA3-256 might be preferred.

Consider using sha3::Sha3_256 if compliance with the SHA3 standard is required:

-use sha3::{Digest, Keccak256};
+use sha3::{Digest, Sha3_256};
...
-    self.input.ciphertext_hash = Keccak256::digest(&ciphertext).to_vec();
+    self.input.ciphertext_hash = Sha3_256::digest(&ciphertext).to_vec();
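For context, Keccak-256 and SHA3-256 share the same permutation but use different padding, so their digests differ; a quick check using the sha3 and hex crates already listed in Cargo.toml:

```rust
use sha3::{Digest, Keccak256, Sha3_256};

fn main() {
    let data = b"enclave";
    let keccak = Keccak256::digest(data);
    let sha3 = Sha3_256::digest(data);
    // Keccak-256 keeps the pre-standardization padding (what Ethereum uses);
    // SHA3-256 is the FIPS 202 variant. Pick whichever the verifiers expect.
    assert_ne!(keccak, sha3);
    println!("keccak256: {}", hex::encode(keccak));
    println!("sha3-256:  {}", hex::encode(sha3));
}
```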
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 4f1756a and 5c4d60a.

⛔ Files ignored due to path filters (1)
  • packages/compute_provider/Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (8)
  • packages/compute_provider/.gitignore (1 hunks)
  • packages/compute_provider/Cargo.toml (1 hunks)
  • packages/compute_provider/Readme.md (1 hunks)
  • packages/compute_provider/src/ciphertext_output.rs (1 hunks)
  • packages/compute_provider/src/compute_input.rs (1 hunks)
  • packages/compute_provider/src/compute_manager.rs (1 hunks)
  • packages/compute_provider/src/lib.rs (1 hunks)
  • packages/compute_provider/src/merkle_tree.rs (1 hunks)
✅ Files skipped from review due to trivial changes (2)
  • packages/compute_provider/.gitignore
  • packages/compute_provider/Cargo.toml
🧰 Additional context used
🪛 LanguageTool
packages/compute_provider/Readme.md

[uncategorized] ~83-~83: Loose punctuation mark.
Context: ... takes several parameters: - provider: An instance of your compute provider (e...

(UNLIKELY_OPENING_PUNCTUATION)

🔇 Additional comments (6)
packages/compute_provider/src/lib.rs (2)

1-4: LGTM! Well-organized module structure.

The module organization logically separates concerns for FHE computation functionality, with clear separation between input handling, computation management, merkle tree operations, and output processing.


6-8: Consider using explicit exports instead of wildcards.

Using wildcard exports (*) can lead to namespace pollution and make it harder to track dependencies. Consider explicitly listing the types, traits, and functions you want to expose as public API.

Example refactor:

-pub use compute_manager::*;
-pub use compute_input::*;
-pub use ciphertext_output::*;
+pub use compute_manager::{ComputeManager, /* other specific items */};
+pub use compute_input::{ComputeInput, FHEInputs, /* other specific items */};
+pub use ciphertext_output::{ComputeProvider, ComputeResult, /* other specific items */};

Let's verify what public items are available in these modules:

✅ Verification successful

Let me try a different approach to find the public items in these modules.


Consider using explicit exports instead of wildcards

Based on the search results, here's the recommended explicit export list:

pub use compute_manager::ComputeManager;
pub use compute_input::{FHEProcessor, FHEInputs, ComputeInput};
pub use ciphertext_output::{ComputeProvider, ComputeResult};
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Find public items in the modules that are being exported
# This helps identify what specific items should be explicitly exported

echo "Searching for public items in compute_manager..."
ast-grep --pattern 'pub $type $name' packages/compute_provider/src/compute_manager.rs

echo "Searching for public items in compute_input..."
ast-grep --pattern 'pub $type $name' packages/compute_provider/src/compute_input.rs

echo "Searching for public items in ciphertext_output..."
ast-grep --pattern 'pub $type $name' packages/compute_provider/src/ciphertext_output.rs

Length of output: 584


Script:

#!/bin/bash
# Let's try using ripgrep to find public declarations
echo "Searching in compute_manager.rs..."
rg "^pub\s+(struct|enum|type|trait|fn|const|static)" packages/compute_provider/src/compute_manager.rs -A 1

echo -e "\nSearching in compute_input.rs..."
rg "^pub\s+(struct|enum|type|trait|fn|const|static)" packages/compute_provider/src/compute_input.rs -A 1

echo -e "\nSearching in ciphertext_output.rs..."
rg "^pub\s+(struct|enum|type|trait|fn|const|static)" packages/compute_provider/src/ciphertext_output.rs -A 1

Length of output: 912

packages/compute_provider/src/ciphertext_output.rs (1)

1-1: LGTM!

The import is correctly scoped and necessary for the trait definition.

packages/compute_provider/src/compute_input.rs (1)

3-3: Verify the appropriateness of using Keccak256

Ensure that using Keccak256 aligns with the project's cryptographic standards and interoperability requirements.

Run the following script to check for existing cryptographic standards in the codebase:

✅ Verification successful

Let me generate another script to get a clearer picture of how Keccak256 is being used in the codebase.

This will help us understand:

  1. Where Keccak256 is being used
  2. What other hash functions are in use
  3. If there's any documentation about cryptographic standards

Let me check the actual usage context of Keccak256 in the relevant files to understand its purpose.


Based on the gathered information, I can now provide a final response about the appropriateness of using Keccak256.

Using Keccak256 is appropriate and aligned with the project's cryptographic standards

The usage of Keccak256 in this codebase is appropriate for several reasons:

  • It's used consistently for hashing ciphertexts and parameters in the compute provider implementation
  • It's complemented by Poseidon hash (PoseidonT3) in the smart contracts for on-chain operations
  • The implementation follows a clear pattern where:
    • Keccak256 is used for off-chain cryptographic operations and data integrity verification
    • Poseidon is used for on-chain operations where gas efficiency is important

The codebase demonstrates a well-thought-out hybrid approach using both Keccak256 and Poseidon hash functions in their appropriate contexts.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Search for hash functions used in the codebase to ensure consistency.

# Search for imports of hash functions
rg 'use .*::{.*};' | rg 'sha|blake|keccak'

# List all instances where hashing is performed
rg -A 2 'digest\(|hash\('

Length of output: 143738


Script:

#!/bin/bash
# Search for Keccak256 imports and usages
rg -l "Keccak256"

# Search for other hash function imports
rg -l "use.*hash|use.*digest"

# Search for specific files that might contain cryptographic standards or documentation
fd -t f "README|CRYPTO|SECURITY|spec"

Length of output: 1286


Script:

#!/bin/bash
# Check the context of Keccak256 usage in compute-related files
rg -B2 -A2 "Keccak256" packages/compute_provider/src/

# Check README files for any cryptographic standards documentation
cat README.md packages/*/README.md 2>/dev/null

# Look for any cryptographic specifications in test files
rg -B2 -A2 "hash|digest|keccak" packages/evm/test/

Length of output: 17217

packages/compute_provider/src/compute_manager.rs (2)

119-123: Ensure consistency in the final_input construction

In constructing final_input, confirm that the leaf_hashes are correctly aggregated and that all necessary data is included.

Run the following script to check for consistency in ComputeInput usage:

✅ Verification successful

All ComputeInput constructions are consistent and include required fields

The verification shows that all instances of ComputeInput construction in the codebase consistently include the three required fields: fhe_inputs, ciphertext_hash, and leaf_hashes. The struct definition in compute_input.rs matches these fields, and all initializations follow the same pattern.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify that all instances of `ComputeInput` include necessary fields.

# Expected: All `ComputeInput` initializations include `fhe_inputs`, `ciphertext_hash`, and `leaf_hashes`.
rg -A 5 'ComputeInput \{' --glob '**/*.rs'

Length of output: 2503


89-89: Handle potential errors from the FHE processor

The call to the FHEProcessor might result in errors that are not currently handled.

Verify whether the FHEProcessor can fail and, if so, handle the potential errors appropriately. Consider modifying the function signature to return a Result:

-fn start_parallel(&self) -> (P::Output, Vec<u8>) {
+fn start_parallel(&self) -> Result<(P::Output, Vec<u8>), YourErrorType> {

Run the following script to check for error handling in FHEProcessor implementations:
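Independent of that check, a dedicated error type would make the proposed `Result` signature concrete. A minimal sketch, assuming the `thiserror` crate were added (it is not a current dependency) and with `YourErrorType` replaced by a named enum; the variants are illustrative:

```rust
use thiserror::Error;

/// Sketch of an error enum for the compute pipeline.
#[derive(Debug, Error)]
pub enum ComputeError {
    #[error("FHE processing failed: {0}")]
    Processor(String),
    #[error("proof generation failed: {0}")]
    Prover(String),
    #[error("invalid batch size {0}: must be a power of two")]
    InvalidBatchSize(usize),
}

// The parallel entry point could then surface failures instead of panicking:
// fn start_parallel(&self) -> Result<(P::Output, Vec<u8>), ComputeError> { ... }
```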

ryardley (Contributor) left a comment

The only thing I would suggest is that we implement some automated tests, at least for the happy path, as a quick follow-up.
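A possible starting point for that follow-up, written against the FHEInputs shape discussed in the review above (the type is redefined locally so the sketch stands alone; a real test would import it from compute_provider and exercise ComputeManager end to end):

```rust
#[cfg(test)]
mod tests {
    // Local stand-in mirroring the FHEInputs shape from the review above.
    pub struct FHEInputs {
        pub ciphertexts: Vec<(Vec<u8>, u64)>,
        pub params: Vec<u8>,
    }

    // Stand-in processor: concatenates ciphertext bytes. A real happy-path
    // test would plug in the actual FHE computation instead.
    fn concat_processor(inputs: &FHEInputs) -> Vec<u8> {
        inputs
            .ciphertexts
            .iter()
            .flat_map(|(bytes, _index)| bytes.clone())
            .collect()
    }

    #[test]
    fn happy_path_produces_expected_output() {
        let inputs = FHEInputs {
            ciphertexts: vec![(vec![1, 2, 3], 0), (vec![4, 5], 1)],
            params: vec![9, 9],
        };
        assert_eq!(concat_processor(&inputs), vec![1, 2, 3, 4, 5]);
    }
}
```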

hmzakhalid merged commit 0cf28d9 into main on Nov 27, 2024.
3 checks passed.
hmzakhalid deleted the hmza/add-compute-provider branch on December 11, 2024 at 17:29.