Ensuring that edges aren't duplicated can require linear-time operations (in the number of edges). For graphs with some nodes that have very high degree, this quickly becomes a problem.
I propose that we relax the requirement that edges are unique, and instead offer an API call to remove the duplicate edges. This would make it easier to drive bulk loads and rebuilds of graphs where we know that the edge set is unique.
In some of the libbdsg implementations of the API, we don't have a simple mechanism to prevent duplication, and adding an efficient one would take a lot of memory.
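To sketch the idea: a batch deduplication pass can run in O(E log E) by sorting the edge list and collapsing adjacent duplicates, which avoids paying a per-insertion uniqueness check during bulk loads. The `Edge` encoding and the function name `remove_duplicate_edges` below are hypothetical illustrations, not the actual libbdsg API.

```cpp
#include <algorithm>
#include <cstdint>
#include <utility>
#include <vector>

// Hypothetical edge representation for this sketch: a pair of oriented
// node handles, encoded as plain integers. The real handle graph API
// uses opaque handle_t values instead.
using Edge = std::pair<uint64_t, uint64_t>;

// Collapse duplicate edges in one batch pass: O(E log E) time,
// O(1) extra space, versus O(E) per insertion for a uniqueness check
// against a high-degree node's adjacency list.
void remove_duplicate_edges(std::vector<Edge>& edges) {
    std::sort(edges.begin(), edges.end());
    edges.erase(std::unique(edges.begin(), edges.end()), edges.end());
}
```

A bulk loader could then append edges freely and call this once at the end, which is the workflow the proposed API call would enable.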
@jeizenga mentioned this before. I think we should consider supporting it.