
Designing User-adaptive Guidance for Mixed Reality Using Eye-Tracking

NOTE: Due to Git LFS storage limitations on GitHub, all model .onnx files and weights are stored separately. To run the project you may have to copy them manually into the folder "AIServer/AIServer/Assets/" (download them from this link: https://drive.google.com/drive/folders/1lC2jHfe08Jcu86ueu86ThJXM0lPFQAZU?usp=sharing)
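After copying, a small script like the following can sanity-check that the files landed in the right place. This is a minimal sketch; the file names listed are assumptions, so compare them against the contents of the Drive folder.

    # check_assets.py - verify the .onnx models were copied into the Assets folder.
    from pathlib import Path

    ASSET_DIR = Path("AIServer/AIServer/Assets")
    # Hypothetical names; adjust to match the Drive folder contents.
    EXPECTED = ["yolov4.onnx", "yolov5.onnx", "yolov7.onnx"]

    missing = [name for name in EXPECTED if not (ASSET_DIR / name).exists()]
    print("missing:", missing if missing else "none - all models found")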

Table of Contents
  1. About The Project
  2. Getting Started
  3. Tests
  4. Troubleshooting
  5. Roadmap
  6. Contributing
  7. License
  8. Contact
  9. Acknowledgements

About The Project

With increasingly powerful Mixed Reality (MR) hardware, MR applications are gaining ground in industries such as construction, medicine, and education. One of the main features of these tools is providing guidance by extending the physical objects around users with extra information. To reduce users' overload, such real-world extensions should map to users' tasks and intentions. However, current MR tools do not know the user's aim, and this lack of information degrades the interaction quality between the user and the mixed world. This thesis integrates users' eye-movement data and hand gestures to identify their intentions while performing a task and to provide user-adaptive guidance. With this approach we aim to increase the quality of interaction between the MR environment and the user through intelligent guidance.

Key Features

Feature                         Implementation Status
Yolov4 object detection         ✔️
Yolov5 object detection         ✔️
Yolov7 object detection         ✔️
Yolov5 instance segmentation    ✔️
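All of these networks ship as .onnx files, so any ONNX runtime can load them. Below is a minimal smoke-test sketch using Python's onnxruntime package; the file name, path, and the 640x640 input shape are assumptions here, so check the real values via sess.get_inputs().

    import numpy as np
    import onnxruntime as ort

    # Load one of the exported detectors (path and file name assumed).
    sess = ort.InferenceSession("AIServer/AIServer/Assets/yolov5.onnx")

    inp = sess.get_inputs()[0]
    print("model expects:", inp.name, inp.shape)

    # Dummy frame: one 3-channel 640x640 image, float32 in [0, 1].
    dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)

    outputs = sess.run(None, {inp.name: dummy})
    for meta, out in zip(sess.get_outputs(), outputs):
        print(meta.name, np.asarray(out).shape)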

Built With

Useful tools

Getting Started

  1. The "AIServer" solution receives a video stream from the HoloLens and feeds it to neural nets. The corresponding results will be then sent back to the HoloLens. Follow this MixedReality-WebRTC tutorial for deeper understanding and/or if you want to build it by yourself.

  2. The "DaimlerTruckDataGenerator" Unity application is for enlarging our dataset with more synthetic data. Follow the unity perception tutorial for deeper understanding and/or if you want to build it by yourself.

  3. If you plan to modify the "SharedResultsBetweenServerAndHoloLens" project in the AIServer solution, make sure to build a .dll and also update the "HoloLensUserGuidance" Unity app (HoloLens side). Updating means copying the new .dll into the "Assets/Scripts" folder.

  4. The node-dss project is used as the signaling solution between the server and the HoloLens (example commands follow after this list).

  5. For simplicity, I expect the AIServer to be running before the HoloLens application starts; the opposite order will probably crash.

  6. Keep in mind that the AIServer solution contains two projects with the same code (SharedResultsBetweenServerAndHoloLens and SharedResultsBetweenServerAndHoloLensServerSide). Two projects are needed because the code must compile against .NET Core (UWP) for the server and against .NET Standard 2.0 for Unity.

  7. The datasets were created using Roboflow.
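The AIServer itself is a C#/.NET solution, so the following Python sketch (referenced in step 1) only illustrates the data flow it implements: take a decoded video frame, run a detector on it, and serialize the detections for the trip back to the HoloLens. The Detection and run_detector names are hypothetical, and JSON merely stands in for whatever format the SharedResultsBetweenServerAndHoloLens project actually defines.

    import json
    from dataclasses import asdict, dataclass

    import numpy as np

    @dataclass
    class Detection:  # hypothetical result type
        label: str
        confidence: float
        box: tuple  # (x, y, w, h) in normalized image coordinates

    def run_detector(frame):
        # Placeholder for the ONNX inference step; see the smoke test above.
        return [Detection("truck_part", 0.87, (0.42, 0.31, 0.10, 0.08))]

    def to_wire_format(detections):
        # Serialize the results for sending back to the HoloLens.
        return json.dumps([asdict(d) for d in detections])

    frame = np.zeros((720, 1280, 3), dtype=np.uint8)  # stand-in WebRTC frame
    print(to_wire_format(run_detector(frame)))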
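For step 4: node-dss is a plain Node.js package, so assuming Node.js and npm are installed it can typically be fetched and started as shown here; see the node-dss README for the authoritative steps.

    git clone https://github.com/bengreenier/node-dss.git
    cd node-dss
    npm install
    npm start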

Customize the IP address in the Unity app
  1. Customize the signaler component in your Unity app: add the IP address of your server in the highlighted area (see the image above).

Commands I use for tracking the Python requirements for the UserGuidance system:

	python -m pipreqs.pipreqs --encoding iso-8859-1 --ignore data/,.venv,__pycache__,yolov5-7.0 --force .
	pip freeze > requirements.txt

Prerequisites

Installation

  1. Clone the repo
    git clone --recurse-submodules [email protected]:future-labs/future-devices-lab/hololens-smartar.git

Tests

Troubleshooting

If you are unable to build the Il2CppOutputProject in the Visual Studio project, consider this solution: https://forum.unity.com/threads/il2cpp-error-on-building-for-windows.1351589/

Roadmap

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

License

Contact

Jonas Heinle - @Cataglyphis_ - [email protected]

Acknowledgements

Literature

Some very helpful literature, tutorials, etc.

Networking

Windows.AI.MachineLearning API

Yolo

MRTK

3D Models

Synthetic datasets
