How MOTION handles gates that can be executed in parallel #39

yuquan1210 opened this issue May 12, 2023 · 1 comment

Comments

@yuquan1210

Hi, I wrote a simple parallelizable circuit (see the pseudocode below) and I hope that MOTION actually executes these gates in parallel. My question is: how can I confirm whether MOTION is executing them sequentially or in parallel? If the default behavior is to execute gates sequentially, is there any setting I should configure to make MOTION run them in parallel?

#include <array>
#include <cstdint>

int main() {
  std::array<std::uint32_t, 32> a{}, b{}, c{};
  for (std::size_t i = 0; i < 32; ++i) {
    c[i] = a[i] * b[i];  // independent multiplications, parallelizable
  }
}

I am still an amateur with MOTION, so any help and clarification is greatly appreciated!

@robinhundt
Contributor

MOTION will by default execute these gates in parallel, as they don't depend on each other. Concretely, MOTION executes every gate in its own boost::Fiber (boost docs), which is a lightweight user-space thread. Once a gate has finished its execution, including any local computation or communication, it notifies its children. Once all inputs of a gate are available, the fiber responsible for executing it is automatically scheduled and the gate is evaluated.
On a high level, you can think of the circuit as a graph where we spawn a thread for each gate. The execution of a gate waits until its inputs are ready and then continues. This way, independent operations at the same depth of the circuit are automatically evaluated in parallel.
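For illustration, here is a minimal standalone sketch of this scheduling idea using Boost.Fiber directly. This is not MOTION code; the gate logic, input names, and values are made up. Every "gate" runs in its own fiber and suspends on futures for its inputs, so independent gates at the same depth interleave freely:

#include <boost/fiber/all.hpp>
#include <cstdint>
#include <iostream>
#include <vector>

int main() {
  constexpr std::size_t kGates = 32;
  // One promise per input wire; the gate fibers block on the matching futures.
  std::vector<boost::fibers::promise<std::uint32_t>> in_a(kGates), in_b(kGates);
  std::vector<boost::fibers::future<std::uint32_t>> out;
  std::vector<boost::fibers::fiber> gates;

  // Spawn one fiber per multiplication gate. A fiber that calls get() on an
  // unready future suspends without blocking the other fibers.
  for (std::size_t i = 0; i < kGates; ++i) {
    boost::fibers::promise<std::uint32_t> result;
    out.push_back(result.get_future());
    gates.emplace_back(
        [i, &in_a, &in_b, result = std::move(result)]() mutable {
          std::uint32_t a = in_a[i].get_future().get();  // wait for input a[i]
          std::uint32_t b = in_b[i].get_future().get();  // wait for input b[i]
          result.set_value(a * b);                       // evaluate the gate
        });
  }

  // Provide the inputs; gate fibers run as soon as their inputs are ready.
  for (std::size_t i = 0; i < kGates; ++i) {
    in_a[i].set_value(static_cast<std::uint32_t>(i));
    in_b[i].set_value(static_cast<std::uint32_t>(i) + 1);
  }

  for (auto& gate : gates) gate.join();
  for (auto& result : out) std::cout << result.get() << ' ';
  std::cout << '\n';
}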

However, this has some drawbacks: there is overhead associated with this asynchronous mode of execution. In your example, you could use the Simdify operation to transform the array of secret-shared values into a SIMD vector of these values and then perform the multiplication on the SIMDified values. This generally results in reduced communication and better performance; a rough sketch follows below.
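For illustration only, here is a rough sketch of what the SIMD variant could look like. The exact MOTION calls used here (the In<MpcProtocol::kArithmeticGmw> overload that takes a std::vector to create a SIMD input, operator* on ShareWrapper, Out(), As<>(), and the include path) are assumptions based on the library's documentation and examples and should be checked against the headers:

#include <cstdint>
#include <vector>

#include "base/party.h"  // assumed include path, check MOTION's examples

namespace mo = encrypto::motion;

// Element-wise product of two 32-element vectors as a single SIMD
// multiplication gate instead of 32 independent gates/fibers.
std::vector<std::uint32_t> SimdMultiply(mo::PartyPointer& party,
                                        const std::vector<std::uint32_t>& a,
                                        const std::vector<std::uint32_t>& b) {
  // Assumption: passing a vector to the input gate yields one share with 32
  // SIMD positions (alternatively, Simdify an existing vector of shares).
  mo::ShareWrapper shared_a =
      party->In<mo::MpcProtocol::kArithmeticGmw>(a, /*input_owner=*/0);
  mo::ShareWrapper shared_b =
      party->In<mo::MpcProtocol::kArithmeticGmw>(b, /*input_owner=*/1);

  mo::ShareWrapper shared_c = shared_a * shared_b;  // one SIMD mul gate
  mo::ShareWrapper opened_c = shared_c.Out();       // reveal to all parties

  party->Run();
  party->Finish();
  return opened_c.As<std::vector<std::uint32_t>>();
}

With a single SIMD multiplication gate, one gate evaluation with batched messages replaces 32 separate ones, which is where the reduced communication comes from.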
