
Switch the agent over to UDP #91

Merged
16 commits merged into master on Feb 16, 2018
Conversation

jamiboym (Member)

Reraising #90

@jamiboym jamiboym self-assigned this Feb 15, 2018
@geoffxy geoffxy (Member) left a comment

Looks good! Thanks again for doing this 😄

My only concern is running the defrag on the reader thread. I think we should aim to have all state access/mutation happen on the agent's main thread so that it's easier to reason about how the state changes and so we can potentially use the fragment state later on for different things. (Let's talk about this in person in the morning!)

raw_data = io_data.get_data()
address = io_data.get_address()

if FragmentUtils.is_fragment(raw_data):
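For context, a hedged sketch of what such a fragment check might look like. FragmentUtils internals are not shown in this PR, so the magic prefix and function body below are hypothetical:

```python
HEADER = b"FRG"  # hypothetical magic prefix; the real marker is not shown here


def is_fragment(raw_data):
    # A payload is treated as a fragment only if it carries the magic prefix;
    # anything else is assumed to be a complete message.
    return raw_data.startswith(HEADER)


frag = is_fragment(b"FRG\x00\x01payload")   # fragment: has the prefix
whole = is_fragment(b'{"type": "Hello"}')   # complete message: no prefix
```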

I really think this should be running on the main thread, not the reader thread. It'll help avoid subtle concurrency bugs in the future if we're only accessing state on the main thread. Right now only the reader uses the fragment state, but in the future if we ever need to make a decision based on the fragments it'll be easier to have it accessible.

I initially used the handler_function pattern to avoid coupling the executor with these IO classes, but I think it should be fine now especially since it has become a common pattern. One way could be to add an executor argument to the base class and then schedule _received_data() on the executor.

So in InterfaceBase you can add something like:

def _schedule_received_data(self, io_data):
    self._executor.submit(self._received_data, io_data)

and then in _read_data() instead of calling _received_data(), call self._schedule_received_data(UDPData(...)) (and similarly for STDStreams too). Then the incoming_data_handler can be set to the protocol handle message functions directly.
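The suggestion above can be sketched as follows. Only `_schedule_received_data` and `_received_data` come from the comment; the rest of the class and the single-worker executor standing in for the agent's main thread are illustrative assumptions:

```python
from concurrent.futures import ThreadPoolExecutor


class InterfaceBase:
    """Minimal sketch of the proposed pattern (simplified for illustration)."""

    def __init__(self, executor):
        # Single-worker executor standing in for the agent's main thread.
        self._executor = executor
        self.handled = []

    def _received_data(self, io_data):
        # Runs on the executor's thread, so shared (fragment) state is only
        # ever touched from one place.
        self.handled.append(io_data)

    def _schedule_received_data(self, io_data):
        # Called from the reader thread; defers all state access to the
        # executor instead of mutating anything directly.
        self._executor.submit(self._received_data, io_data)


executor = ThreadPoolExecutor(max_workers=1)
iface = InterfaceBase(executor)
iface._schedule_received_data(b"payload")
executor.shutdown(wait=True)  # drain pending work before inspecting state
```

With this shape, `_read_data()` on the reader thread only parses bytes and schedules; every state mutation happens on one thread.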

def retrieve_io_data():
    return io_data

self._incoming_data_handler(retrieve_io_data)

Why not just pass io_data directly?
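To illustrate the two call styles being compared (handler names here are hypothetical), the closure adds a layer of indirection without changing what the handler receives:

```python
def handler_with_closure(retrieve_io_data):
    # Current style: the handler receives a zero-argument thunk
    # and must call it to obtain the data.
    return retrieve_io_data()


def handler_direct(io_data):
    # Suggested style: the handler receives the data itself.
    return io_data


io_data = {"raw": b"hello", "address": ("127.0.0.1", 9000)}
via_closure = handler_with_closure(lambda: io_data)
via_direct = handler_direct(io_data)
```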

@geoffxy geoffxy (Member) left a comment

Thanks for doing this 🎉

@geoffxy (Member) commented Feb 16, 2018

Attaching #61

@rageandqq rageandqq (Member) left a comment

This is awesome. Great work.

def __init__(self, **kwargs):
    super(Hello, self).__init__(
        InteragentProtocolMessageType.Hello,
        **kwargs

trailing comma

def __init__(self, **kwargs):
    super(Bye, self).__init__(
        InteragentProtocolMessageType.Bye,
        **kwargs

nit: trailing comma

def __init__(self, **kwargs):
    super(NewOperations, self).__init__(
        InteragentProtocolMessageType.NewOperations,
        **kwargs

nit: trailing comma

@jamiboym jamiboym merged commit 3fd3a1e into master Feb 16, 2018
@jamiboym jamiboym deleted the jm-udp branch February 20, 2018 01:45