
single process chat listening #127

Open
malgorithms opened this issue Feb 20, 2019 · 4 comments

@malgorithms (Contributor) commented Feb 20, 2019

ideally:

  • the bot logic doesn't need to do any analyzing, upgrading, or comparing of channel objects to know whether a watch request matches an incoming message, even though a user may have referred to the same chat in a different way

  • there's only one chat api listen call made, so we don't need like 100 of them to listen to 100 channels

  • the bot library doesn't need to receive and process messages it doesn't care about

internally, we should work with CORE to make chat api-listen take subscription additions and cancellations via stdin: something where we pass it a watchId we invent and a channel to watch, and it starts including any matches in its output, along with the watchId. This achieves all our goals while keeping code simple on this side and for people who want to make their own libs in other languages.
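
To make the stdin idea concrete, here is a rough sketch of what the subscription messages could look like. These shapes and field names are invented for illustration; nothing like this exists in api-listen today.

// hypothetical request the library would write to api-listen's stdin,
// one JSON object per line
type WatchRequest = {
  method: 'subscribe' | 'unsubscribe'
  watchId: string // an ID the bot library invents
  channel?: ChatChannel // the channel to watch, required for 'subscribe'
}

// api-listen would then tag every matching message it emits with the watchId
type WatchedMessage = {
  watchId: string
  message: unknown // whatever api-listen already emits for a message today
}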

cc @pzduniak @songgao. not necessarily something to work on right now, but it seems like the right answer.

@malgorithms (Contributor, Author)

@songgao mentioned tracking conversationId internally in the library to manage this. That seems good.

the one issue is I don't know how the library knows a possible chat's conversationId before anyone has sent a message into a convo. That annoyance comes up a lot with desktop programming (I've heard a lot of discussion about it over the months here in the office) and I really don't want that kind of management bleeding into the library code either.

@songgao (Contributor) commented Feb 20, 2019

Good point on non-existent conversations. I guess I took it for granted when working on the GUI code. As part of brainstorming, how about something like this as an exported helper function that generates a string which can be used later to compare conversations/channels:

type ConversationRef = string

// assuming default values are filled into param, which we can do with fillDefaults
export const makeConversationRef = (param: ChatChannel): ConversationRef => {
  const withDefaults = fillDefaults(param)
  const names = sortUsernames(withDefaults.name)
  return `${withDefaults.topicType}:${withDefaults.membersType}:${names}:${withDefaults.topicName}`
}

Then we'll take ConversationRef instead of ChatChannel in all exported functions. Internally we can have a map that tracks a ConversationRef to ChatChannel mapping for all generated refs, to avoid parsing the string. This makes sure we can deterministically reference a conversation, and since it's just a string, === works everywhere. It can even be used as an object key. Library users would just treat the ref as an opaque string, but I think that's fine since makeConversationRef is descriptive. What do you all think?
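
For concreteness, a minimal sketch of that internal map, assuming the makeConversationRef helper above (rememberChannel and channelForRef are made-up names, not existing library functions):

const refToChannel = new Map<ConversationRef, ChatChannel>()

// remember the full channel for every ref we hand out
export const rememberChannel = (channel: ChatChannel): ConversationRef => {
  const ref = makeConversationRef(channel)
  refToChannel.set(ref, channel)
  return ref
}

// exported functions can then resolve a ref back to a full channel
// without parsing the string
const channelForRef = (ref: ConversationRef): ChatChannel | undefined =>
  refToChannel.get(ref)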

Kind of off topic but related: if I remember correctly, public conversations are not a supported thing anymore, so it looks like public can be left off all the time.

Edit: add missing not in last paragraph

@pzduniak (Member) commented Feb 23, 2019

I wouldn't overcomplicate it; defining a standard way the usernames are sorted should be enough, and the actual comparisons will be 2-3 lines of code, so it's not that bad. If we really want to do this, then an alternative would be bundling the conversation ID inside of the channel object - that should solve the unknown-channel issue at the cost of more data going through the data stream.
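
Those 2-3 lines could look roughly like this (normalizeNames and sameChannel are made-up names, and exactly which ChatChannel fields to compare is an assumption):

// sort a comma-separated username list into a canonical order
const normalizeNames = (name: string): string =>
  name.split(',').map(s => s.trim()).sort().join(',')

// two channels refer to the same conversation if their normalized fields match
const sameChannel = (a: ChatChannel, b: ChatChannel): boolean =>
  normalizeNames(a.name) === normalizeNames(b.name) &&
  (a.topicName || '') === (b.topicName || '') &&
  (a.membersType || '') === (b.membersType || '')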

I'm all for the subscription stuff; if this gets to CORE I'd love to work on it. For now I think it'd be worth it to modify the existing code to use a single api-listen process; it won't be terribly difficult.

@malgorithms (Contributor, Author)

I think I'd even postpone the single process change for now. It's effectively a performance improvement, which I agree with, but there are more missing features we should probably get through first.
