NOTE: Due to Git LFS storage limitations on GitHub, all model .onnx files and weights are stored separately. To run the project you may have to copy them manually into the folder "AIServer/AIServer/Assets/" (follow this link: https://drive.google.com/drive/folders/1lC2jHfe08Jcu86ueu86ThJXM0lPFQAZU?usp=sharing)
Key Features • How To Use • Download • Credits • Related • License
Table of Contents
With increasingly powerful Mixed Reality (MR) hardware, MR applications are gaining ground in industries such as construction, medicine, and education. One of the main features of these tools is providing guidance by extending the physical objects around the user with extra information. To reduce the user's cognitive load, such real-world extension should map to the user's tasks and intentions. However, current MR tools do not know the user's aim, and this lack of information degrades the interaction quality between the user and the mixed world. This thesis integrates users' eye movement data and hand gestures to identify their intentions while performing a task and to provide user-adaptive guidance. With this approach, we aim to increase the quality of interaction between the user and the MR environment through intelligent guidance.
| Feature | Implementation Status |
|---|---|
| Yolov4 object detection | ✔️ |
| Yolov5 object detection | ✔️ |
| Yolov7 object detection | ✔️ |
| Yolov5 instance segmentation | ✔️ |
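All of the detection backends above emit candidate boxes that must be filtered before they are useful as guidance overlays. The following is a minimal, framework-agnostic sketch of YOLO-style post-processing (confidence filtering plus greedy non-maximum suppression) in pure Python; it is illustrative only and not the project's actual post-processing code.

```python
# Illustrative YOLO-style post-processing: confidence filtering + greedy NMS.
# This is a generic sketch, not the exact logic used in AIServer.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(detections, conf_thres=0.25, iou_thres=0.45):
    """detections: list of (box, score, class_id); returns kept detections."""
    candidates = [d for d in detections if d[1] >= conf_thres]
    candidates.sort(key=lambda d: d[1], reverse=True)
    kept = []
    for det in candidates:
        # Suppress a box only against higher-scoring boxes of the same class.
        if all(iou(det[0], k[0]) <= iou_thres for k in kept if k[2] == det[2]):
            kept.append(det)
    return kept
```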
- The "AIServer" solution receives a video stream from the HoloLens and feeds it to the neural nets; the results are then sent back to the HoloLens. Follow the MixedReality-WebRTC tutorial for a deeper understanding and/or if you want to build it yourself.
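The per-frame flow described above (frame in, inference, results back out) can be sketched as a small callback, assuming hypothetical `model` and `send` hooks; the real pipeline uses MixedReality-WebRTC and ONNX models, not these stand-ins.

```python
# Conceptual sketch of the server-side frame loop. The names `model` and
# `send` are illustrative stand-ins, not classes from the AIServer solution.

def on_frame(frame, model, send):
    """Called for every frame received from the HoloLens video stream."""
    detections = model(frame)  # run the neural net(s) on the frame
    send(detections)           # relay the results back to the HoloLens
```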
- The "DaimlerTruckDataGenerator" Unity application enlarges our dataset with additional synthetic data. Follow the Unity Perception tutorial for a deeper understanding and/or if you want to build it yourself.
- If you plan to modify the "SharedResultsBetweenServerAndHoloLens" project in the AIServer solution, make sure to build a new .dll and also update the "HoloLensUserGuidance" Unity app (HoloLens side) by copying the new .dll into its "Assets/Scripts" folder.
- The node-dss project is used as the signaling solution between the server and the HoloLens.
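node-dss is essentially a tiny HTTP relay: each peer posts signaling messages (SDP offers/answers, ICE candidates) addressed to the other peer's id and polls for messages addressed to itself. Its queueing behaviour can be sketched in-memory as below; this is an illustration of the relay idea, not the node-dss source.

```python
# In-memory sketch of a node-dss-style signaling relay. Each peer id gets
# a FIFO queue; posting appends, polling pops the oldest pending message.
from collections import defaultdict, deque

class SignalRelay:
    def __init__(self):
        self.queues = defaultdict(deque)

    def post(self, to_peer, message):
        """Roughly: POST a signaling message addressed to `to_peer`."""
        self.queues[to_peer].append(message)

    def poll(self, peer):
        """Roughly: GET the next pending message for `peer`, or None."""
        return self.queues[peer].popleft() if self.queues[peer] else None
```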
- For simplicity, the AIServer is expected to be running before the HoloLens application is started; the opposite order will probably crash.
- Keep in mind that the AIServer solution contains two projects with the same code (SharedResultsBetweenServerAndHoloLens and SharedResultsBetweenServerAndHoloLensServerSide). Two projects are needed to compile against .NET Core UWP for the server and against .NET Standard 2.0 for Unity.
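Conceptually, the shared project is a serializable result contract that both sides must read and write identically, which is why the same code is compiled for two targets. A hypothetical sketch of such a contract as a round-trippable record (field names are illustrative, not the actual SharedResultsBetweenServerAndHoloLens types):

```python
# Hypothetical sketch of a shared detection-result contract. The actual
# project defines this in C#, compiled once for the server and once for Unity.
import json
from dataclasses import dataclass, asdict

@dataclass
class DetectionResult:
    label: str
    score: float
    box: tuple  # (x1, y1, x2, y2) in frame pixels

    def to_json(self):
        return json.dumps(asdict(self))

    @staticmethod
    def from_json(payload):
        d = json.loads(payload)
        return DetectionResult(d["label"], d["score"], tuple(d["box"]))
```

The point of the sketch is the round trip: whatever the server serializes, the HoloLens side must deserialize to an equal value, so the wire format lives in exactly one place.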
- The datasets were created using Roboflow.
- Customize the signaler component in your Unity app by adding the IP address of your server in the highlighted area (see image above).
Commands used to track the Python requirements for the UserGuidance system:

```
python -m pipreqs.pipreqs --encoding iso-8859-1 --ignore [data/,.venv,__pycache__,yolov5-7.0] --force .
pip freeze > requirements.txt
```
- Clone the repo

```
git clone --recurse-submodules [email protected]:future-labs/future-devices-lab/hololens-smartar.git
```
If you are not able to build the Il2CppOutputProject in the Visual Studio project, consider this solution: https://forum.unity.com/threads/il2cpp-error-on-building-for-windows.1351589/
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
Jonas Heinle - @Cataglyphis_ - [email protected]
Some very helpful literature, tutorials, etc.