
Releases: YttriLab/B-SOID

Nature Communications release

28 May 17:46

This release reflects the digital object identifier (DOI) for the Nature Communications publication.

B-SOiD app update!

17 Nov 00:01

New app features

  • Takes in SLEAP HDF5 files, DeepLabCut HDF5 files, and OpenPose JSON folders.
  • Incorporates caching to improve performance.
  • Modularizes the processing steps for ease of use.
  • Incorporates shuffling and subsampling to accommodate large datasets.
  • Adapts embedding dimensionality to the variance in the user's data.
  • Uses a random forest classifier, which published work suggests works best with UMAP embeddings.
  • Adds a new analysis app that lets users generate grouped example videos (like those on GitHub), validate trajectories, analyze kinematics (beta), and generate plots.

And

  • Better aesthetics.
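The shuffle-and-subsample step above can be sketched as follows. This is a minimal illustration: the function name `subsample_features` and the default cap of 50,000 rows are assumptions, not the app's actual interface or defaults.

```python
import numpy as np

def subsample_features(features, n_max=50_000, seed=0):
    """Shuffle the rows of a (frames x features) array and keep at most
    n_max of them, so the embedding step fits in memory.
    Hypothetical helper; name and default cap are assumptions."""
    rng = np.random.default_rng(seed)
    keep = min(n_max, features.shape[0])
    idx = rng.permutation(features.shape[0])[:keep]
    return features[np.sort(idx)]  # keep surviving rows in temporal order
```

In a workflow like this, the embedding and clustering would be fit on the subsample, while the trained classifier can still label every frame of the full dataset afterward.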

App version 1

16 Nov 08:35

Archiving app version 1 before releasing app version 2.

B-SOiD v1.3 (hello, python!)

13 May 21:11

We are proud to release a major update for the B-SOiD Python suite (bsoid_umap). The MATLAB package is unchanged, and in both cases the user experience is the same as before. If, however, you are new to Python packages, you can watch the video in demo/bsoid_py_tutorial_v2.mp4.

By implementing UMAP with HDBSCAN, B-SOiD in Python can now handle high-dimensional clustering with an essentially unlimited number of features (we’ve tried up to 784 features from 28 body parts).
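As a rough accounting of how 28 body parts can yield 784 features, suppose the feature set consists of all pairwise distances, all pairwise angular changes, and one displacement per body part; this breakdown is our assumption for illustration, not a statement of the exact feature definitions.

```python
from math import comb

def n_features(n_parts: int) -> int:
    """Count features under an assumed breakdown: pairwise distances,
    pairwise angular changes, and one displacement per body part."""
    pairwise_distances = comb(n_parts, 2)
    pairwise_angles = comb(n_parts, 2)
    displacements = n_parts
    return pairwise_distances + pairwise_angles + displacements

print(n_features(28))  # 378 + 378 + 28 = 784
```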

Moreover, this update makes the code truly agnostic: not just to the groups extracted, but to the input data itself. Although B-SOiD has always been capable of taking in any set of spatiotemporal data, it is now optimized to handle any set of estimated positions, from any camera angle, and from any model system.

As always, once trained, B-SOiD can be applied and generalized across datasets, processing hundreds of thousands of frames a minute, and doing so at the same temporal resolution as the camera being used.

B-SOiD v1.2: Updates with adaptive high-pass for signal occlusion, improved handling of larger datasets, and frame-shifted prediction for millisecond neurobehavioral alignment.

19 Feb 23:23
2c8b2c2

B-SOiD v1.2 update improves user-friendliness by incorporating the following:

1. Data-driven determination of the likelihood cutoff for possible signal occlusion.

2. Adaptive t-SNE parameters, including learning rate, exaggeration, and perplexity.

3. Frame-shifted machine learning prediction to identify behavioral transitions up to your camera's frame rate.
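The frame-shifting idea in item 3 can be sketched as follows: the classifier is run on each of `stride` offset, downsampled copies of the frame stream, and the offset predictions are interleaved back to the full camera frame rate. The function name and the toy `predict` interface are assumptions for illustration.

```python
import numpy as np

def frameshift_labels(frames, stride, predict):
    """Predict on each of `stride` offset, downsampled copies of the
    (n_frames x n_features) array, then interleave the results so every
    camera frame receives a label. `predict` maps (m x d) -> m labels.
    Hypothetical sketch of the frame-shift scheme, not the exact code."""
    labels = np.empty(frames.shape[0], dtype=int)
    for offset in range(stride):
        # each offset copy is classified at the downsampled rate,
        # but its labels land back at the original frame positions
        labels[offset::stride] = predict(frames[offset::stride])
    return labels
```

Because every frame index appears in exactly one offset copy, behavioral transitions can be localized to a single camera frame rather than to a downsampled analysis window.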

We have updated our README.md to reflect the newest changes. Once you have set the parameters and have a folder containing the frames extracted from one of your video datasets, all you have to do is run bsoid_master_v1p2.m and follow the prompts to select your files/folders. We want to make this as user-friendly as possible. Please do not hesitate to open an issue!

The Python 3 version (Google Colab, free for all) will be released as a separate branch before the end of February 2020! We envision B-SOiD to run on the cloud one day!

Open Source Version, July 23rd, 2019

23 Jul 03:32
9d0af0d

This is the beta release of Behavioral Segmentation of Open Field in DeepLabCut (B-SOiD).