set createTime per node and purge old nodes if maxNodes is reached #57
This is great! A couple of minor comments only.
```diff
@@ -512,6 +512,10 @@ func (d *DHT) needMoreNodes() bool {
 	return n < minNodes || n*2 < d.config.MaxNodes
 }
+
+func (d *DHT) GetNumNodes() int {
+	return d.routingTable.numNodes()
+}
```
Do we really need this? I don't think we do and I think it's better to only expose methods when we really need to. Besides, I think this isn't safe to be used concurrently while the DHT is running?
Agree. It was just useful for my debugging.
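If a node count ever does need to be exposed, a minimal sketch of one concurrency-safe option is an atomic counter maintained next to the routing table. The routingTable type and methods below are stand-ins for illustration, not this repo's actual code.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// routingTable is a stand-in type; only the counter logic is the point.
type routingTable struct {
	nodeCount int64 // updated atomically wherever nodes are added or killed
}

func (r *routingTable) addNode()  { atomic.AddInt64(&r.nodeCount, 1) }
func (r *routingTable) killNode() { atomic.AddInt64(&r.nodeCount, -1) }

// numNodes can then be read from any goroutine without racing the event loop.
func (r *routingTable) numNodes() int {
	return int(atomic.LoadInt64(&r.nodeCount))
}

func main() {
	r := &routingTable{}
	r.addNode()
	r.addNode()
	r.killNode()
	fmt.Println(r.numNodes()) // prints 1
}
```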
```diff
@@ -206,6 +206,12 @@ func (r *routingTable) cleanup(cleanupPeriod time.Duration, p *peerStore) (needP
 			r.kill(n, p)
 			continue
 		}
+		// kill old and currently unused nodes if nodeCount is > maxNodes
+		if len(r.addresses) > p.maxNodes && time.Since(n.createTime) > cleanupPeriod && len(n.pendingQueries) == 0 {
```
I'd say we should kill the node even if there are pending queries. If it's so old, better to refresh the routing table with newer nodes?
We may have just sent a query to that specific node a second ago, because it was nearest to the searched hash, so we don't want to lose that result?
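A possible middle ground, sketched below: once the table is over maxNodes, purge idle nodes older than cleanupPeriod, and purge nodes with pending queries only after a longer grace period, so a query sent a second ago is not thrown away but the table also cannot hold stale nodes forever. The function and parameter names here are illustrative, not the repo's actual API.

```go
package main

import (
	"fmt"
	"time"
)

// shouldPurge: over-limit nodes older than cleanupPeriod are purged when
// idle; nodes with pending queries get a longer grace period (2x the
// cleanup period here, an arbitrary choice) before being purged anyway.
func shouldPurge(numNodes, maxNodes, pendingQueries int, createTime time.Time, cleanupPeriod time.Duration) bool {
	if numNodes <= maxNodes {
		return false
	}
	age := time.Since(createTime)
	if pendingQueries == 0 {
		return age > cleanupPeriod
	}
	return age > 2*cleanupPeriod
}

func main() {
	created := time.Now().Add(-20 * time.Minute)
	// false: the node has a pending query and is still within the grace period.
	fmt.Println(shouldPurge(600, 500, 1, created, 15*time.Minute))
}
```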
Does it stop forever after searching that many times? Should that be a maximum rate instead? Like, X many queries per minute or so? My concern is that the failure mode here is a DHT that gets stuck forever and can't recover. Unless I'm missing something.
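A sketch of that "maximum rate" idea: budget a fixed number of get_peers rounds per info-hash per minute instead of stopping permanently after MaxSearchQueries. The searchLimiter type below is hypothetical, not part of this repo.

```go
package main

import (
	"fmt"
	"time"
)

// searchLimiter allows at most maxPerMinute queries in any one-minute
// window; once the window rolls over, the budget resets, so the search
// slows down but never gets stuck forever.
type searchLimiter struct {
	maxPerMinute int
	windowStart  time.Time
	sent         int
}

func (l *searchLimiter) allow(now time.Time) bool {
	if now.Sub(l.windowStart) >= time.Minute {
		l.windowStart, l.sent = now, 0 // start a fresh one-minute window
	}
	if l.sent >= l.maxPerMinute {
		return false // over budget; the search resumes in the next window
	}
	l.sent++
	return true
}

func main() {
	l := &searchLimiter{maxPerMinute: 2}
	now := time.Now()
	fmt.Println(l.allow(now), l.allow(now), l.allow(now)) // true true false
}
```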
I don't think I agree with this patch - or I don't understand it. There is already NumTargetPeers to control this. Why don't you set a lower value for it if you don't want a super aggressive node? That's the point of that attribute after all :-).
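For illustration, making the node less aggressive with the existing knob rather than a new one; NewConfig and NumTargetPeers appear in this PR and review, while the import path and the value 5 are assumptions.

```go
package main

import "github.com/nictuku/dht"

func main() {
	// Lower NumTargetPeers for a less aggressive node instead of adding
	// a MaxSearchQueries knob. The value 5 is only an example.
	cfg := dht.NewConfig()
	cfg.NumTargetPeers = 5
	_ = cfg // hand cfg to the DHT constructor as usual
}
```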
```diff
@@ -105,13 +111,16 @@ func NewConfig() *Config {
 		MaxNodes:         500,
 		CleanupPeriod:    15 * time.Minute,
 		SaveRoutingTable: true,
+		PassivMode:       false,
```
This is redundant with the RateLimit, right?
(In English we would spell it Passive, I think.)
```diff
+	// max get_peers requests per hash to prevent an infinite loop
+	MaxSearchQueries int
+	// number of concurrent listeners on the same port
+	ConnPoolSize int
```
I don't think this will work? The code is not safe for concurrent use, right? If you want to use multiple goroutines, you need different DHT instances.
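A sketch of that alternative: several independent DHT instances, each with its own port and goroutine, instead of listeners sharing one socket. The import path, dht.New, Config.Port, and (*DHT).Run are assumed to match the repo's public API; adjust if the names differ.

```go
package main

import (
	"log"

	"github.com/nictuku/dht"
)

func main() {
	basePort := 31000 // arbitrary example ports
	for i := 0; i < 3; i++ {
		cfg := dht.NewConfig()
		cfg.Port = basePort + i
		d, err := dht.New(cfg)
		if err != nil {
			log.Fatal(err)
		}
		go d.Run() // each instance owns its own socket and event loop
	}
	select {} // keep the process alive for this example
}
```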
As mentioned by email, I broke my fork by committing testing code and have no clue how to revert.
Discarded because broken. Will create a new one.
fixes #56