chore: Move to the Ingonyama MSM #374

Open: wants to merge 5 commits into base: dev
3 changes: 2 additions & 1 deletion Cargo.toml
@@ -51,6 +51,7 @@ rayon-scan = "0.1.0"
 # grumpkin-msm has been patched to support MSMs for the pasta curve cycle
 # see: https://github.com/lurk-lab/grumpkin-msm/pull/3
 grumpkin-msm = { git = "https://github.com/lurk-lab/grumpkin-msm", branch = "dev" }
+ingonyama-grumpkin-msm = { git = "https://github.com/lurk-lab/ingonyama-grumpkin-msm", optional = true }

 [target.'cfg(target_arch = "wasm32")'.dependencies]
 getrandom = { version = "0.2.0", default-features = false, features = ["js"] }
@@ -116,7 +117,7 @@ abomonate = []
 asm = ["halo2curves/asm"]
 # Compiles in portable mode, w/o ISA extensions => binary can be executed on all systems.
 portable = ["grumpkin-msm/portable"]
-cuda = ["grumpkin-msm/cuda"]
+cuda = ["grumpkin-msm/cuda", "ingonyama-grumpkin-msm"]
 flamegraph = ["pprof/flamegraph", "pprof/criterion"]

 [profile.dev-ci]
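The optional dependency and the `cuda` feature above gate the GPU backend at compile time. As a minimal, self-contained sketch of that kind of feature-gated dispatch (hypothetical function, not the PR's actual code — the PR uses `cfg_if::cfg_if!` to select between whole code paths):

```rust
// Hypothetical sketch of compile-time backend selection. The real code in
// this PR dispatches between ingonyama_grumpkin_msm (GPU) and grumpkin_msm
// (CPU) with `cfg_if!`; plain `cfg!` shows the same idea without external
// crates.
fn msm_backend() -> &'static str {
    if cfg!(feature = "cuda") {
        // With `cargo build --features cuda`, calls would go to the
        // Ingonyama/icicle GPU MSM.
        "gpu"
    } else {
        // Default build falls back to the CPU MSM from grumpkin-msm.
        "cpu"
    }
}

fn main() {
    println!("selected MSM backend: {}", msm_backend());
}
```

Because the selection happens at compile time, the unused backend's dependencies are never linked in; that is why `ingonyama-grumpkin-msm` can stay `optional = true` and only be pulled in by the `cuda` feature.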
42 changes: 34 additions & 8 deletions deny.toml
@@ -109,6 +109,7 @@ allow = [
 "Apache-2.0",
 "Apache-2.0 WITH LLVM-exception",
 "Unicode-DFS-2016",
+"ISC",
 ]
 # List of explicitly disallowed licenses
 # See https://spdx.org/licenses/ for list of possible licenses
@@ -147,22 +148,47 @@ exceptions = [
 # Some crates don't have (easily) machine readable licensing information,
 # adding a clarification entry for it allows you to manually specify the
 # licensing information
-#[[licenses.clarify]]
+[[licenses.clarify]]
 # The name of the crate the clarification applies to
-#name = "ring"
+name = "icicle-cuda-runtime"
 # The optional version constraint for the crate
-#version = "*"
+version = "*"
 # The SPDX expression for the license requirements of the crate
-#expression = "MIT AND ISC AND OpenSSL"
+expression = "MIT"
 # One or more files in the crate's source used as the "source of truth" for
 # the license expression. If the contents match, the clarification will be used
 # when running the license check, otherwise the clarification will be ignored
 # and the crate will be checked normally, which may produce warnings or errors
 # depending on the rest of your configuration
-#license-files = [
+license-files = [
 # Each entry is a crate relative path, and the (opaque) hash of its contents
-#{ path = "LICENSE", hash = 0xbd0eed23 }
-#]
+{ path = "../../../LICENSE", hash = 0xbd0eed23 }
+]

+[[licenses.clarify]]
+name = "icicle-bn254"
+version = "*"
+expression = "MIT"
+license-files = [
+{ path = "../../../../LICENSE", hash = 0xbd0eed23 }
+]
+
+[[licenses.clarify]]
+name = "icicle-grumpkin"
+version = "*"
+expression = "MIT"
+license-files = [
+{ path = "../../../../LICENSE", hash = 0xbd0eed23 }
+]
+
+[[licenses.clarify]]
+name = "icicle-core"
+version = "*"
+expression = "MIT"
+license-files = [
+{ path = "../../../LICENSE", hash = 0xbd0eed23 }
+]

 [licenses.private]
 # If true, ignores workspace crates that aren't published, or are only
@@ -267,7 +293,7 @@ allow-git = []

 [sources.allow-org]
 # 1 or more github.com organizations to allow git sources for
-github = ["lurk-lab"]
+github = ["lurk-lab", "ingonyama-zk"]
 # 1 or more gitlab.com organizations to allow git sources for
 # gitlab = [""]
 # 1 or more bitbucket.org organizations to allow git sources for
28 changes: 26 additions & 2 deletions src/provider/bn256_grumpkin.rs
@@ -7,8 +7,6 @@ use crate::{
 use digest::{ExtendableOutput, Update};
 use ff::{FromUniformBytes, PrimeField};
 use group::{cofactor::CofactorCurveAffine, Curve, Group as AnotherGroup};
-#[cfg(any(target_arch = "x86_64", target_arch = "aarch64"))]
-use grumpkin_msm::{bn256 as bn256_msm, grumpkin as grumpkin_msm};
 // Remove this when https://github.com/zcash/pasta_curves/issues/41 resolves
 use halo2curves::{bn256::G2Affine, CurveAffine, CurveExt};
 use num_bigint::BigInt;
@@ -35,6 +33,32 @@ pub mod grumpkin {
 };
 }

+#[cfg(any(target_arch = "x86_64", target_arch = "aarch64"))]
+fn bn256_msm(points: &[bn256::Affine], scalars: &[bn256::Scalar]) -> bn256::Point {
+    cfg_if::cfg_if! {
+        if #[cfg(feature = "cuda")] {
+            let stream = ingonyama_grumpkin_msm::Config::new();
+            let cfg = ingonyama_grumpkin_msm::default_config(&stream.stream);

Comment on lines +40 to +41 (Contributor):

    Could we make this a static to not create it on every call?

Reply (Contributor Author):

    I tried this, but ran into issues like

        error[E0277]: `*mut icicle_cuda_runtime::bindings::CUstream_st` cannot be sent between threads safely
           --> src/provider/bn256_grumpkin.rs:39:16
            |
        39  | static STREAM: Lazy<ingonyama_grumpkin_msm::Config> = Lazy::new(|| ingonyama_grumpkin_msm::Config::new());
            |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ `*mut icicle_cuda_runtime::bindings::CUstream_st` cannot be sent between threads safely
            |
            = help: within `ingonyama_grumpkin_msm::Config`, the trait `Send` is not implemented for `*mut icicle_cuda_runtime::bindings::CUstream_st`

    Dmytro mentioned that the first stream creation will have some delay, but each time after that, the Icicle/CUDA runtime will be able to reuse them with little overhead.

        "Yeah, unfortunately the first CUDA call in any application always incurs some latency; it's one-time and fairly constant, 70 ms in your case. If you try to do the same operations (create stream, alloc memory) again, the time will no longer be noticeable."
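As an aside to the thread above: one common way around the `!Send` error is a newtype with an explicitly justified `unsafe impl Send + Sync`, held in a `std::sync::OnceLock`. This is only a sketch with hypothetical stand-in types (not the real icicle bindings), and it assumes the CUDA runtime tolerates using a stream handle from any thread under external synchronization:

```rust
use std::sync::OnceLock;

// Stand-ins for the icicle types; the real Config holds a *mut CUstream_st,
// and raw pointers are neither Send nor Sync, so a plain `static Lazy<Config>`
// fails to compile.
struct CUstreamSt;
struct Config {
    stream: *mut CUstreamSt,
}
impl Config {
    fn new() -> Self {
        // Real code would create a CUDA stream here (the one-time ~70 ms cost).
        Config { stream: std::ptr::null_mut() }
    }
}

// SAFETY assumption (must be checked against the real runtime!): the stream
// handle may be used from any thread as long as access is synchronized.
struct SharedConfig(Config);
unsafe impl Send for SharedConfig {}
unsafe impl Sync for SharedConfig {}

static STREAM: OnceLock<SharedConfig> = OnceLock::new();

fn config() -> &'static Config {
    // First call initializes; every later call returns the same instance.
    &STREAM.get_or_init(|| SharedConfig(Config::new())).0
}

fn main() {
    let a = config() as *const Config;
    let b = config() as *const Config;
    assert_eq!(a, b); // one shared Config, created once
    assert!(config().stream.is_null()); // null in this stand-in sketch
}
```

Whether the `unsafe impl` is actually sound depends on icicle's and CUDA's threading guarantees, which is presumably why the PR creates the stream per call for now.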

+            ingonyama_grumpkin_msm::bn256_msm(&points, &scalars, &cfg)
+        } else {
+            grumpkin_msm::bn256(points, scalars)
+        }
+    }
+}
+
+#[cfg(any(target_arch = "x86_64", target_arch = "aarch64"))]
+fn grumpkin_msm(points: &[grumpkin::Affine], scalars: &[grumpkin::Scalar]) -> grumpkin::Point {
+    cfg_if::cfg_if! {
+        if #[cfg(feature = "cuda")] {
+            let stream = ingonyama_grumpkin_msm::Config::new();
+            let cfg = ingonyama_grumpkin_msm::default_config(&stream.stream);

Comment on lines +53 to +54 (Contributor):

    Could we make this a static in this configuration?
+            ingonyama_grumpkin_msm::grumpkin_msm(&points, &scalars, &cfg)
+        } else {
+            grumpkin_msm::grumpkin(points, scalars)
+        }
+    }
+}

 #[cfg(any(target_arch = "x86_64", target_arch = "aarch64"))]
 impl_traits!(
     bn256,