
ABY2.0 examples #2

Open
lu562 opened this issue Nov 17, 2021 · 5 comments

Comments

lu562 commented Nov 17, 2021

Hi,

Thanks for providing this library! I have some questions:

(1) As mentioned, this repo provides implementations of some ABY2.0 protocols. However, I'm not able to find example code that uses the ABY2.0 protocols; may I know how to use or benchmark them? In particular, I want to test the performance of ABY2.0's secure comparison protocol (bit extraction).

(2) Since ABY is supported by MOTION, I think additive secret sharing is supported, right? I assume that if I initialize an arithmetic sharing, it will be an additive sharing; is that the case?

(3) For arithmetic sharing, is there an API so that parties can provide their shares as input? (The corresponding API in ABY is called PutSharedINGate().) For example, assume additive secret sharing where the secret is 10, party 0 has share value 3, and party 1 has share value 7, so that 3 + 7 = 10. Is there an API such that party 0 can take 3 as its input and party 1 can take 7 as its input?

Thank you! Looking forward to your reply!

lenerd (Collaborator) commented Nov 22, 2021

Hi,

(1) Currently, not all protocols presented in the ABY2.0 paper are implemented here, and we unfortunately do not yet have an implementation of bit extraction. We have the primitive operations for hybrid circuits (Share, Reconstruct, XOR, AND, Addition, Multiplication, Square, Bit-Integer Multiplication, and Conversions), as well as several tensor operations for neural networks. In the source code, they are still referred to as BEAVY / beavy, since that was a working title of ABY2.0 before publication.

(2) Yes, additive secret sharing (arithmetic GMW) is implemented. The kind of sharing you get depends on whether you call the make_arithmetic_$bitlen_input_gate_{my,other} method of the GMWProvider or of the BEAVYProvider. I will try to create a brief tutorial soon-ish.

(3) There is currently no extra API for this (we should add one, though :)), but in principle it would be sufficient to create the corresponding ArithmeticGMWWire<T> objects. You can then combine them the usual way. Once you have the shares, you can store them in the wires and mark them as ready.
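For intuition, here is a plaintext sketch of the additive-sharing semantics behind questions (2) and (3): sharing over the ring Z_2^64, reconstruction by summation, and parties bringing pre-existing shares (as in the 3 + 7 = 10 example). This is only an illustration of the arithmetic, not MOTION API code.

```python
import random

MOD = 2**64  # ring Z_2^64, matching 64-bit arithmetic sharing

def share(secret: int, n_parties: int = 2) -> list[int]:
    """Split `secret` into additive shares that sum to it mod 2^64."""
    shares = [random.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % MOD)
    return shares

def reconstruct(shares: list[int]) -> int:
    """Reconstruct the secret by summing all shares mod 2^64."""
    return sum(shares) % MOD

# Parties can also bring pre-existing shares, as in the question:
# party 0 holds 3, party 1 holds 7, so the shared secret is 10.
assert reconstruct([3, 7]) == 10

# Addition of two shared values is local: each party adds its shares.
x_shares = share(10)
y_shares = share(32)
z_shares = [(x + y) % MOD for x, y in zip(x_shares, y_shares)]
assert reconstruct(z_shares) == 42
```

Storing such share values directly in the parties' wire objects and marking them ready is the workaround described above.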

lu562 (Author) commented Nov 23, 2021

Thank you for the reply!

lu562 (Author) commented Nov 24, 2021

I see that there is some example code for neural network training; may I know whether the training is done without secure comparison protocols? Could I also learn a bit more about how the activation function is implemented? Thanks in advance :)

lenerd (Collaborator) commented Dec 6, 2021

Currently, we have only neural network inference, but no protocols for training.

For the ReLU activation, different protocols are implemented that essentially multiply/AND the value with the inverse of the most significant bit, which denotes the sign. All require that the input is already available as a Boolean sharing (which is usually the case if, for example, a MaxPool operation was computed before).

  • Yao: using a garbled circuit to AND the lower l-1 bits with the inverse of the most significant bit
  • Boolean GMW: using a bit x bit-vector multiplication with a specialized Beaver triple
  • Boolean ABY2.0: using a bit x bit-vector multiplication
  • Boolean/Arithmetic GMW: using a bit x integer multiplication via precomputed OT (requires the input to be available both as Boolean and arithmetic shares)
  • Boolean/Arithmetic ABY2.0: using a bit x integer multiplication (requires the input to be available both as Boolean and arithmetic shares)
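The common idea behind all these ReLU variants can be seen in the clear: with l-bit two's-complement values, the most significant bit is 1 exactly for negative inputs, so ReLU reduces to multiplying the value by the inverted sign bit. A plaintext sketch (this mirrors the bit-times-integer multiplication the protocols perform on shares, not any MOTION API):

```python
BITS = 64

def msb(x: int) -> int:
    """Most significant bit of a 64-bit encoding: 1 iff x encodes a negative value."""
    return (x >> (BITS - 1)) & 1

def relu(x: int) -> int:
    """ReLU on a 64-bit two's-complement encoding: keep x iff its sign bit is 0.
    In MPC this is a bit x integer multiplication of (1 - msb(x)) with x."""
    return ((1 - msb(x)) * x) % 2**BITS

def enc(v: int) -> int:
    """Two's-complement encoding of a signed value into Z_2^64."""
    return v % 2**BITS

assert relu(enc(5)) == 5   # positive input passes through
assert relu(enc(-3)) == 0  # negative input is zeroed out
```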

Comparisons are also used in the MaxPool implementation:

  • Yao: uses a size-optimized circuit based on greater-than comparison circuits
  • Boolean GMW/Boolean ABY2.0: use a depth-optimized circuit based on greater-than comparison circuits
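For intuition on the depth-optimized variant: a MaxPool window can be reduced by a tournament of pairwise greater-than comparisons, which needs only a logarithmic number of comparison layers. A plaintext sketch of that reduction (the selection of the larger value via the comparison bit stands in for a multiplexer in the Boolean circuit; this is not the actual circuit from circuits/int/):

```python
def maxpool(values: list[int]) -> int:
    """Max of a window via a tree of pairwise greater-than comparisons.
    The tournament reduction has depth O(log n), which is what a
    depth-optimized comparison circuit targets in GMW-style protocols."""
    layer = list(values)
    while len(layer) > 1:
        nxt = []
        for i in range(0, len(layer) - 1, 2):
            a, b = layer[i], layer[i + 1]
            gt = int(a > b)                    # greater-than comparison bit
            nxt.append(gt * a + (1 - gt) * b)  # select the larger value via the bit
        if len(layer) % 2:                     # odd element carries over unchanged
            nxt.append(layer[-1])
        layer = nxt
    return layer[0]

assert maxpool([3, 9, 1, 7]) == 9
```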

The circuits are either given in circuits/int/ or implemented in the CircuitLoader class. The implementation of all tensor operations can be found in src/motioncore/protocols/<protocol>/tensor_op.{h,cpp}.

Srish-4 commented Jul 4, 2023

Hi,

Speaking of the MaxPool operation using Boolean ABY2.0: it is not giving accurate results and always returns the 0th index of the provided vector. The Yao implementation works fine. Please share your insights on why this is the case.

Thanks in advance !
