
UDP event listeners live forever #7

Open · goharahmed opened this issue Jun 2, 2022 · 5 comments

@goharahmed commented Jun 2, 2022

Hi,
This may just be my poor understanding of the code, but it looks like the UDP event listener functions are never-ending loops. In my test setup, after sending a single MI RPC request via call-api-client, call-api created a new local UDP listener that never gets terminated.

Here is what I understand to be happening:

STEP1:

A NewProxy object is created every time a new WS/WSS client sends a request. (Can we not reuse existing ones? A sketch of that reuse idea follows this list.)
NewProxy() has two major tasks:
1 - Establish a connection to the OpenSIPS mi_datagram socket
2 - Create an event listener that can do subscriptions and filter events
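
For illustration only, here is a minimal sketch of what that reuse could look like, assuming the proxy.NewProxy(Cfg) call quoted later in this thread; the proxy.Proxy type name, the Cfg variable, and the getProxy helper are made-up names, and the only extra dependency is the standard "sync" package:

	// Illustrative sketch, not the repo's code: build the proxy once at
	// startup and hand the same instance to every WS/WSS handler, instead
	// of calling NewProxy() for each request.
	var (
		sharedProxy *proxy.Proxy // assumed type name
		proxyOnce   sync.Once
	)

	func getProxy() *proxy.Proxy {
		proxyOnce.Do(func() {
			sharedProxy = proxy.NewProxy(Cfg) // Cfg as quoted later in this thread
		})
		return sharedProxy
	}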

STEP2:

NewProxy() must connect to the OpenSIPS mi_datagram socket to send commands, but to receive a response/event it creates another UDP socket towards the same mi_datagram socket. Could we not have used the first connection's LocalAddr() instead? (A sketch of that alternative follows the snippet below.)

func (event *EventDatagram) Init(mi mi.MI) (error) {

	miAddr, ok := mi.Addr().(*net.UDPAddr)
	if ok != true {
		return errors.New("using non-UDP protocol to connect to MI")
	}
	c, err := net.DialUDP("udp", nil, miAddr)
	if err != nil {
		return err
	}
	udpAddr, ok := c.LocalAddr().(*net.UDPAddr)
	if ok != true {
		return errors.New("using non-UDP local socket to connect to MI")
	}
	local := net.UDPAddr{IP: udpAddr.IP}
	udpConn, err := net.ListenUDP(c.LocalAddr().Network(), &local)
	if err != nil {
		return err
	}
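
For comparison, here is a minimal sketch of the alternative raised above: send the command and read the reply on the one connected socket returned by DialUDP, with no second ListenUDP. It assumes OpenSIPS answers from the same address that was dialed; the sendMI name, the 5-second deadline, and the 65535-byte buffer are illustrative choices, not the repo's API:

	import (
		"net"
		"time"
	)

	// sendMI is a hypothetical helper (not from this repo): it sends one MI
	// datagram and reads the reply on the same connected UDP socket, so no
	// extra listener has to be created or kept alive.
	func sendMI(miAddr *net.UDPAddr, request []byte) ([]byte, error) {
		conn, err := net.DialUDP("udp", nil, miAddr)
		if err != nil {
			return nil, err
		}
		defer conn.Close() // the port is released as soon as the reply is in

		if _, err := conn.Write(request); err != nil {
			return nil, err
		}
		// On a connected UDP socket, only datagrams coming from miAddr are
		// delivered, so the reply can be read here directly.
		if err := conn.SetReadDeadline(time.Now().Add(5 * time.Second)); err != nil {
			return nil, err
		}
		buf := make([]byte, 65535)
		n, err := conn.Read(buf)
		if err != nil {
			return nil, err
		}
		return buf[:n], nil
	}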

So the event handler creates another UDPConn towards the OpenSIPS mi_datagram socket and sets up a waitForEvents() goroutine.

STEP3:

waitForEvents() creates a never-ending for loop on the local side of the second mi_datagram UDPConn. Until some error happens, this loop never breaks! (A sketch of a stoppable variant follows the screenshot below.)

(screenshot of the waitForEvents() loop)
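
One way to make such a loop stoppable is sketched below, under the assumption that a context is plumbed in from the WS/WSS handler; the waitForEventsWithCtx name and the one-second read deadline are illustrative, not the repo's code:

	import (
		"context"
		"net"
		"time"
	)

	// waitForEventsWithCtx is a hypothetical variant of waitForEvents(): the
	// read loop re-checks a context between reads, so the goroutine and its
	// UDP listener can be torn down when the WS/WSS client goes away.
	func waitForEventsWithCtx(ctx context.Context, conn *net.UDPConn, handle func([]byte)) {
		defer conn.Close()
		buf := make([]byte, 65535)
		for {
			select {
			case <-ctx.Done():
				return // client gone: stop listening and release the port
			default:
			}
			// Short deadline so the loop periodically re-checks ctx.Done().
			conn.SetReadDeadline(time.Now().Add(time.Second))
			n, _, err := conn.ReadFromUDP(buf)
			if err != nil {
				if ne, ok := err.(net.Error); ok && ne.Timeout() {
					continue
				}
				return
			}
			handle(buf[:n])
		}
	}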

Questions:
Is this understanding correct?
Can we reuse the first established socket to mi_datagram for both sending commands and receiving replies for all incoming WSS clients? Right now, every new WSS client that sends a command creates a few sockets which live much longer than needed.

I'm implementing the mi_http interface here; do I really need the event subscriptions or the waitForEvents() functions?

@goharahmed (Author) commented Jun 2, 2022

Possible resolution (I've yet to try this):

	wsc.proxy = proxy.NewProxy(Cfg)
	if wsc.proxy == nil {
		logrus.Fatal("could not initialize SIP proxy")
	}

Don't create a new Proxy for every incoming WS/WSS client request. Instead, create it once at start time and attach the same proxy instance to all requests. We would then probably have to attach our internally generated cmd_id, match the responses back based on that cmd_id, and once matched, deliver the event/response back to the correct client.

	go func(c *cmd.Cmd) {
		for event := range c.Wait() {
			// Something like: only forward events that match this command's ID
			if cmd_id == event.cmd.ID {
				agg <- &WSCmdEvent{c, event}
			}
		}
		logrus.Debugf("done reading events for cmd %s (%s)", c.Command, c.ID)
		agg <- &WSCmdEvent{c, nil}
	}(c)
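
To make the cmd_id matching concrete, here is a rough sketch of a reply dispatcher shared by all WSS clients; the dispatcher type and its register()/deliver() methods are made-up names for illustration, not existing code in this repo:

	import "sync"

	// Hypothetical sketch: replies read from the single shared MI socket are
	// routed back to whichever WSS client issued the command, keyed by the
	// internally generated command ID mentioned above.
	type dispatcher struct {
		mu      sync.Mutex
		pending map[string]chan []byte // cmd ID -> channel of the waiting client
	}

	func (d *dispatcher) register(cmdID string) chan []byte {
		ch := make(chan []byte, 1)
		d.mu.Lock()
		d.pending[cmdID] = ch
		d.mu.Unlock()
		return ch
	}

	func (d *dispatcher) deliver(cmdID string, reply []byte) {
		d.mu.Lock()
		ch, ok := d.pending[cmdID]
		delete(d.pending, cmdID)
		d.mu.Unlock()
		if ok {
			ch <- reply // wakes up the goroutine serving that WSS client
		}
	}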

@razvancrainea (Member) commented:

@goharahmed UDP does not actually open a connection; it simply allocates a port and only uses it when sending a command.
That being said, I believe the only issue here is that the connection remains open - if we close it every time, we would indeed open a port for each request, but that would not result in extra traffic.
So my suggestion is to simply close the socket after each request. Any thoughts?
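
A minimal sketch of that per-request close, assuming the reply is read on a listener that exists only for the lifetime of one request; readOneReply, the local address parameter, and the timeout are illustrative, not the repo's API:

	import (
		"net"
		"time"
	)

	// Illustrative sketch of "close the socket after each request": the
	// listening socket that receives the MI reply is opened per request and
	// closed as soon as the reply (or a timeout) arrives, so no listener
	// lives forever.
	func readOneReply(local *net.UDPAddr, timeout time.Duration) ([]byte, error) {
		conn, err := net.ListenUDP("udp", local)
		if err != nil {
			return nil, err
		}
		defer conn.Close() // port released once this request is served

		conn.SetReadDeadline(time.Now().Add(timeout))
		buf := make([]byte, 65535)
		n, _, err := conn.ReadFromUDP(buf)
		if err != nil {
			return nil, err
		}
		return buf[:n], nil
	}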

@goharahmed (Author) commented:

Yeah, that could work.
I will check back on this one. I've made modifications in my local repo which (AFAIR) keep the UDP socket open, exchange all new requests and their replies over a single UDP socket, and still distribute the replies back to the correct WSS clients.
I will recheck my code and come back with the option that works nicely.

@razvancrainea (Member) commented:

I am not saying no to your approach either; your approach is actually better. However, we still need to make sure that a proxy connection eventually gets closed at some point, correct? That's why the best approach would probably be a combination of the two.
If you could create a PR with your changes, I will try to take a look. Thanks!
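
For what it's worth, a rough sketch of such a combination: one shared connected socket reused across requests, plus an idle timer that eventually closes it. The sharedConn type and the 30-second idle window are illustrative assumptions, not the repo's API:

	import (
		"net"
		"sync"
		"time"
	)

	// Hypothetical combination of the two approaches: one shared connected UDP
	// socket is reused across requests, and an idle timer makes sure it is
	// eventually closed once no request has used it for a while.
	type sharedConn struct {
		mu   sync.Mutex
		conn *net.UDPConn
		idle *time.Timer
	}

	func (s *sharedConn) get(miAddr *net.UDPAddr) (*net.UDPConn, error) {
		s.mu.Lock()
		defer s.mu.Unlock()
		if s.conn == nil {
			c, err := net.DialUDP("udp", nil, miAddr)
			if err != nil {
				return nil, err
			}
			s.conn = c
		}
		// (Re)arm the idle timer: close the socket after 30s without use.
		if s.idle != nil {
			s.idle.Stop()
		}
		s.idle = time.AfterFunc(30*time.Second, func() {
			s.mu.Lock()
			defer s.mu.Unlock()
			if s.conn != nil {
				s.conn.Close()
				s.conn = nil
			}
		})
		return s.conn, nil
	}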

@goharahmed (Author) commented:

I totally agree and will follow your directions. I need to revive my IDE and code repo to get the best understanding.
I've since implemented an AMQP interface in parallel to the WSS one, as well as the OpenSIPS MI_HTTP interface. In time, I will create separate PRs for both.
