Multi-GPU support #24

Open · arassadin opened this issue Jun 8, 2017 · 5 comments
@arassadin

Hi,

I was wondering how FMs can be parallelized effectively across multiple GPUs. I'm somewhat familiar with TF but not really with FMs. If you can give me some ideas or pointers, I would make the necessary modifications and subsequently open a PR, since I'm currently interested in a multi-GPU FM implementation and your code seems like a good base for it.

Best Regards,
Alexandr

@geffy
Owner

geffy commented Jun 12, 2017

Hi,
the simplest way is data parallelism; I mean, just split each batch across several GPUs.
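For concreteness, here is a minimal sketch of how that could look in TF 1.x. This is illustrative only; `fm_loss` and the tower loop below are assumptions for the sketch, not part of tffm's code. Shared FM parameters live on the CPU, each GPU computes gradients on its shard of the batch, and the averaged gradients drive a single shared update:

```python
import tensorflow as tf

N_FEATURES, RANK, N_GPUS = 100, 10, 2

def fm_loss(X, y, w0, w, V):
    # Order-2 FM: prediction = w0 + X*w + 0.5 * sum((X*V)^2 - X^2 * V^2)
    linear = tf.matmul(X, w)
    xv = tf.matmul(X, V)
    x2v2 = tf.matmul(tf.square(X), tf.square(V))
    pairwise = 0.5 * tf.reduce_sum(tf.square(xv) - x2v2, axis=1, keep_dims=True)
    pred = w0 + linear + pairwise
    return tf.reduce_mean(tf.square(y - pred))

X = tf.placeholder(tf.float32, [None, N_FEATURES])
y = tf.placeholder(tf.float32, [None, 1])

# Shared parameters on the CPU so every tower reads and updates the same copy.
with tf.device('/cpu:0'):
    w0 = tf.Variable(0.0)
    w = tf.Variable(tf.zeros([N_FEATURES, 1]))
    V = tf.Variable(tf.random_normal([N_FEATURES, RANK], stddev=0.01))
    opt = tf.train.GradientDescentOptimizer(0.01)

# Data parallelism: split the batch, one shard per GPU.
# (The fed batch size must be divisible by N_GPUS.)
X_shards = tf.split(X, N_GPUS, axis=0)
y_shards = tf.split(y, N_GPUS, axis=0)

tower_grads = []
for i in range(N_GPUS):
    with tf.device('/gpu:%d' % i):
        loss = fm_loss(X_shards[i], y_shards[i], w0, w, V)
        tower_grads.append(opt.compute_gradients(loss, var_list=[w0, w, V]))

# Average per-variable gradients across towers, then apply one update.
avg_grads = []
for grads_and_vars in zip(*tower_grads):
    grads = tf.stack([g for g, _ in grads_and_vars])
    avg_grads.append((tf.reduce_mean(grads, axis=0), grads_and_vars[0][1]))
train_op = opt.apply_gradients(avg_grads)
```

Since the parameters are shared across towers and only the gradients are averaged, every step updates one copy of the weights.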

@arassadin
Author

Hi,

Thanks for the response. If I'm right, there are no explicit batches in FMs as there are in NNs. With sample-wise splitting, at least two questions come to mind:

  • should the split somehow be balanced by feature presence?
  • how would the independently learned weights be merged afterwards, since they are feature-wise and abstracted from samples?

Best Regards,
Alexandr

@geffy
Owner

geffy commented Jun 12, 2017

in FMs there are no explicit batches as in NNs

You need to solve an optimization task. While it's common to use sample-wise updates in such settings (for example, in libFFM), mini-batch learning also works.
This implementation uses batches exactly as in NNs.
You can see this in the batch dimension of the placeholders (https://github.com/geffy/tffm/blob/master/tffm/core.py#L129) and in the batch_size parameter of TFFMBaseModel (https://github.com/geffy/tffm/blob/master/tffm/base.py#L160).
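For example, a minimal usage sketch along the lines of the repo's README (the data and hyperparameter values here are just for illustration):

```python
import numpy as np
import tensorflow as tf
from tffm import TFFMRegressor

# Illustrative only: random data, arbitrary hyperparameters.
model = TFFMRegressor(
    order=2,
    rank=10,
    optimizer=tf.train.AdamOptimizer(learning_rate=0.01),
    n_epochs=10,
    batch_size=1024,  # mini-batch size, exactly as in NN training
    input_type='dense'
)

X = np.random.randn(10000, 100).astype(np.float32)
y = np.random.randn(10000).astype(np.float32)

model.fit(X, y, show_progress=True)
predictions = model.predict(X)
```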

@geffy
Owner

geffy commented Aug 31, 2017

Hi, any news on this?

@arassadin
Author

Hi,

Unfortunately, priorities changed rapidly, and I have hardly had a chance to work on this issue.
