Replies: 2 comments
-
I am not sure what happened to the https://github.com/pcbermant/whoi-dolphins repo, but I can no longer access it, and when I go to your GitHub profile I do not see it.

Really good progress! Quite surprised the detector works well despite being developed for transient sounds (whale clicks). That is really good! I think your approach is valid, in that you are, assuming I understand this correctly, asking the following question: if we listen to dolphin communication, can we tell that a stranding is about to occur or has occurred? Given that the labels are weak (we don't have an exact timestamp for when a stranding occurred, only per-day labels), 80+% accuracy could mean many things, including that this method performs even better than we suspect right now!

I think what would be really cool is the following: grab some pre-stranding and non-stranding days and train the detector and classifier on them, but for inference, set aside, say, two of the pre-stranding and two of the non-stranding days with the smallest count of hand annotations and do not show any of these to the model during training. Then run the detector on those held-out days, report the classifier's accuracy on its detections, and perform error analysis (which examples did the model get most wrong? on which examples was the model most confident and correct?). I think this would be very useful and very interesting. It might be, if I were to extrapolate from your note, that your thinking is already going in that direction 🙂

Really curious what you will find, and you are right: the analysis you have performed so far shows the value of the tools that have been developed thus far. Very quick progress that couldn't have been achieved otherwise.
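If it helps, here is a minimal sketch of that day-level hold-out split and the error-analysis ranking. The day labels, annotation counts, and field names below are made-up placeholders, not the actual dataset:

```python
# Hypothetical annotation summary: day -> class label and hand-annotation count.
days = {
    "2018-03-01": {"label": "pre-stranding", "n_annotations": 412},
    "2018-03-02": {"label": "pre-stranding", "n_annotations": 35},
    "2018-03-03": {"label": "pre-stranding", "n_annotations": 128},
    "2018-04-10": {"label": "non-stranding", "n_annotations": 290},
    "2018-04-11": {"label": "non-stranding", "n_annotations": 18},
    "2018-04-12": {"label": "non-stranding", "n_annotations": 77},
}

def split_days(days, n_holdout_per_class=1):
    """Hold out the days with the fewest hand annotations in each class
    (set n_holdout_per_class=2 for the two-days-per-class version)."""
    holdout, train = set(), set()
    for label in {d["label"] for d in days.values()}:
        in_class = sorted(
            (day for day, d in days.items() if d["label"] == label),
            key=lambda day: days[day]["n_annotations"],
        )
        holdout.update(in_class[:n_holdout_per_class])
        train.update(in_class[n_holdout_per_class:])
    return train, holdout

def rank_errors(preds):
    """preds: list of (example_id, true_label, p_prestranding) tuples.
    Returns examples ordered from most wrong to most confidently right."""
    return sorted(preds, key=lambda x: -abs(x[1] - x[2]))

train_days, holdout_days = split_days(days)
```

The held-out days never touch training, so the reported accuracy is on genuinely unseen recordings, and the two ends of the `rank_errors` ordering give the worst mistakes and the most confident correct detections for inspection.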
-
Thanks! Great suggestions! I think that's a bit farther down the road from where I am right now. I ran some initial inference experiments that were not successful at all (!!), so I'm running some diagnostics to see what the issues might be. I'm guessing it has to do with (1) windowing and (2) choosing non-whistle sections of the recordings, but I'm not sure quite yet, especially since the activation maps appear to light up with whistle contours. I have since repackaged the annotations to make it a little easier to experiment with choosing inputs, so I can iterate on that next. It's still super early in this investigation, so there is definitely a long way to go! Regardless, it's exciting to make little steps forward!
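For the non-whistle (negative) sampling in particular, one simple guard is to reject any candidate window that overlaps an annotated whistle, padded by a small margin in case annotation boundaries are loose. A rough sketch, where the interval format, margin, and window length are assumptions rather than the actual annotation schema:

```python
import random

def sample_negative_windows(duration_s, whistle_intervals, win_s,
                            n, margin_s=0.5, seed=0, max_tries=10_000):
    """Draw n windows of win_s seconds that avoid all annotated whistles.

    whistle_intervals: list of (start_s, end_s) pairs; this format is a
    hypothetical placeholder for however the annotations are stored.
    """
    rng = random.Random(seed)
    # Pad each whistle so near-boundary audio is not used as a negative.
    padded = [(max(0.0, s - margin_s), e + margin_s)
              for s, e in whistle_intervals]
    windows = []
    for _ in range(max_tries):
        if len(windows) == n:
            break
        start = rng.uniform(0.0, duration_s - win_s)
        end = start + win_s
        # Keep the window only if it is clear of every padded whistle.
        if all(end <= s or start >= e for s, e in padded):
            windows.append((start, end))
    return windows
```

Rejection sampling like this keeps the negative windows in the same recording conditions as the positives, which can matter if background noise differs across days.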
-
I added a new notebook to the whoi-dolphins repo called Pipeline_V0.ipynb; it has a whistle detector model and a whistle stranding-alarm model. In addition to visualizing the training data, it also visualizes the activation maps of the trained models. Based on these maps and the test accuracy, it seems like the whistle detector is picking up whistle contours and extracting meaningful information for the binary classification task. Even though it has only been a few days, this project seems to have a lot in common with the sperm whale click detector and ID classifier. I'm testing out a similar pipeline, so it might make sense to run inference experiments next. With that said, I imagine the additional incoming data could be super useful!
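On the activation maps: a common way to render such a map over the input spectrogram is to average an intermediate conv layer's feature tensor over its channel axis, upsample it back to the spectrogram's resolution, and normalize. A numpy-only sketch of that step (the shapes are hypothetical and assume the spectrogram dimensions are integer multiples of the feature map's):

```python
import numpy as np

def activation_map(features, spec_shape):
    """Collapse a conv feature tensor (H, W, C) into a single heatmap and
    upsample it to the spectrogram's shape by nearest-neighbour repetition."""
    amap = features.mean(axis=-1)               # average over channels
    ry = spec_shape[0] // amap.shape[0]
    rx = spec_shape[1] // amap.shape[1]
    amap = np.kron(amap, np.ones((ry, rx)))     # nearest-neighbour upsample
    amap -= amap.min()
    if amap.max() > 0:
        amap /= amap.max()                      # normalize to [0, 1]
    return amap
```

The normalized map can then be overlaid on the spectrogram (e.g. with a semi-transparent colormap) to eyeball whether the bright regions actually track the whistle contours.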
Also, what's the best way to merge this repo? Open to any suggestions!