MRTk or ARfoundation + openXR? #2042

Open
fwild opened this issue Jul 18, 2024 · 7 comments
Comments

fwild commented Jul 18, 2024

Fridolin Wild:
The more I look into the Quest 3 setup, the more I believe we should probably retire the MRTK and use ARFoundation only. Do you happen to know if ARFoundation covers all our needs? (It does not cover image targets on the HoloLens, but we already have that via Vuforia.)

Benedikt Hensen:

Yes, I would say that we are not using anything so specific to the MRTK that it couldn't be replaced by other means. ARFoundation is made for smartphone AR, so the Android and iOS versions should always work. The question is whether all HoloLens features and sensors remain accessible without the MRTK.

The following aspects of the project would need to be replaced. All seem possible, although some require a bit of work, especially the last two points, where we need to double-check whether the HoloLens version would work with them.

  • The main feature that we currently use from the MRTK is its input system with head and hand tracking, but since these are OpenXR-compliant, they can be switched out directly for the ARFoundation XR rig.
  • We might lose some comfort functions like the pre-made script for dragging an object around in MR, as well as the solvers, but I don't know if we are using these anywhere. Most of these scripts can quickly be re-created.
  • The MRTK also handles a lot of cross-platform adjustments, like translating touch events on a smartphone into 3D scene interactions or automatically setting up a teleportable scene when VR glasses are connected. Without it, we could require a more extensive platform management strategy in which these adjustments are handled explicitly and implemented by us.
  • There is also a dedicated MRTK build window for the HoloLens, but I think this just automates and groups some build steps that can be performed manually with the native Unity tools.
  • We don't use the MRTK's UI toolkit, so the switch will likely not break anything in the UI.
  • For quick testing, it will also be vital to replace the in-editor simulation and navigation currently provided by the MRTK with something equivalent. Otherwise, we lose the ability to move the viewport in the in-editor play mode, as well as the quick hand simulation and the default spatial map. ARFoundation has an XR Simulation feature which seems to do the same and could cover these points.
  • There are also some MRTK-specific shaders in use. These need to be replaced with lightweight, performance-optimized Unity shaders.
  • The one thing that might be tricky to replace is access to spatial mapping on the HoloLens. Currently, this is just an MRTK service which runs in the background and generates the spatial mesh in the scene; without it, we need a way to access the map ourselves. There is a Unity documentation page for Unity 2018 and 2019 describing how to do this with the old XR plugin system. ARFoundation has a meshing feature, but the documentation does not say whether it works on the HoloLens (see the sketch after this list).
  • We would need to double-check how we access the camera on the HoloLens and whether the MRTK is involved in it, e.g., to take snapshots or record videos. I don't think that we need the MRTK for it; it should just call the first available webcam stream. Strangely, though, ARFoundation doesn't list camera access as a supported feature for the HoloLens: https://docs.unity3d.com/Packages/[email protected]/manual/features/camera/platform-support.html
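
A minimal sketch of what accessing the spatial map through ARFoundation's meshing feature could look like, assuming the HoloLens OpenXR plugin actually provides a meshing subsystem (which is exactly the open question above); all class and field names are placeholders:

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Sketch only: subscribes to ARFoundation's meshing feature so we can
// access the generated spatial mesh ourselves, replacing the MRTK's
// background spatial awareness service.
public class SpatialMeshObserver : MonoBehaviour
{
    // The ARMeshManager component must sit on a child of the XR rig/origin.
    [SerializeField] private ARMeshManager meshManager;

    private void OnEnable() => meshManager.meshesChanged += OnMeshesChanged;
    private void OnDisable() => meshManager.meshesChanged -= OnMeshesChanged;

    private void OnMeshesChanged(ARMeshesChangedEventArgs args)
    {
        foreach (MeshFilter chunk in args.added)
            Debug.Log($"New spatial mesh chunk: {chunk.name}");
        // args.updated and args.removed report changes to existing chunks.
    }
}
```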

In conclusion, it is likely possible to remove the MRTK, but doing so could be a medium-sized project. We potentially lose a bit of development comfort and might need to re-implement some foundational features that previously existed out of the box, like the object movement and the cross-platform adjustment of input handling. It is a pity that the MRTK acts like a core engine extension with its own architecture and required scene setup instead of being an optional auxiliary library. Otherwise, we could switch to ARFoundation and keep the MRTK's helpful scripts; but since it hooks into the input system and everything around it, that won't work.

An alternative might be to look into upgrading to MRTK 3. It is somewhat flying under the radar since it was moved into a different repository, but in contrast to the original MRTK, it is steadily getting new commits and doesn't seem dead at all. Releases might be a bit rarer than in the past, but they seem to have already passed the experimental stage and reached stable release numbers. https://github.com/MixedRealityToolkit/MixedRealityToolkit-Unity

It seems to be geared more towards the XR structure of ARFoundation already and lists the Quest 1 and 2 as supported platforms; I also saw some videos online of it working on the Quest 3, so that is probably possible. However, upgrading to MRTK 3 is quite substantial, as it relies on a completely new, more modular architecture with lots of small feature packages that have basically nothing in common with MRTK 2. So the switch to MRTK 3 also requires us to work through the points above and insert an MRTK 3 equivalent instead. https://learn.microsoft.com/de-de/windows/mixed-reality/mrtk-unity/mrtk3-overview/architecture/mrtk-v2-to-v3

fwild commented Jul 18, 2024

Re quick testing and in-editor simulation: yes, that's what the XR Simulation feature seems to do, though it seems less pleasant to use in comparison. It seems to promise quite similar functionality (and then some, minus the preloading of a spatial map).

fwild commented Jul 18, 2024

Re camera access for taking pictures: as far as I can see, it uses the native camera interface for Windows, not something from the MRTK.
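
For reference, a minimal sketch of a snapshot taken through Unity's built-in Windows camera API (UnityEngine.Windows.WebCam), which works without the MRTK; error handling is omitted and the class name is a placeholder:

```csharp
using System.Linq;
using UnityEngine;
using UnityEngine.Windows.WebCam;

// Sketch only: captures a photo via Unity's native Windows camera API
// (UWP/HoloLens builds), independent of the MRTK.
public class SnapshotTaker : MonoBehaviour
{
    public void TakeSnapshot()
    {
        // Pick the highest available photo resolution.
        Resolution resolution = PhotoCapture.SupportedResolutions
            .OrderByDescending(r => r.width * r.height)
            .First();

        PhotoCapture.CreateAsync(false, capture =>
        {
            var parameters = new CameraParameters
            {
                cameraResolutionWidth = resolution.width,
                cameraResolutionHeight = resolution.height,
                pixelFormat = CapturePixelFormat.BGRA32
            };

            capture.StartPhotoModeAsync(parameters, _ =>
                capture.TakePhotoAsync((result, frame) =>
                {
                    Debug.Log($"Photo captured: {result.success}");
                    capture.StopPhotoModeAsync(__ => capture.Dispose());
                }));
        });
    }
}
```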

fwild commented Jul 18, 2024

Re MRTK cross-platform adjustments: I think this is what the XR interaction management package does for OpenXR; it looks like it can be configured to handle multimodal input, and differently on different platforms.
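
Even a hand-rolled fallback seems manageable; here is a minimal sketch, assuming Unity's Input System package, of a single action with per-device bindings so the same handler serves controller, touchscreen, and in-editor mouse input (names are placeholders):

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

// Sketch only: one "select" action bound to several device types, so
// the same callback handles multimodal input across platforms.
public class MultimodalSelect : MonoBehaviour
{
    private InputAction selectAction;

    private void OnEnable()
    {
        selectAction = new InputAction("Select");
        selectAction.AddBinding("<XRController>{RightHand}/trigger"); // OpenXR controller
        selectAction.AddBinding("<Touchscreen>/primaryTouch/press");  // smartphone tap
        selectAction.AddBinding("<Mouse>/leftButton");                // in-editor testing
        selectAction.performed += OnSelect;
        selectAction.Enable();
    }

    private void OnDisable()
    {
        selectAction.performed -= OnSelect;
        selectAction.Disable();
    }

    private void OnSelect(InputAction.CallbackContext ctx) =>
        Debug.Log($"Select via {ctx.control.device.displayName}");
}
```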

fwild commented Jul 18, 2024

Another alternative is to move most of the core functionality into a separate package and then use single-platform build projects instead. This has charm, too, but development might get more complicated, as we would need to release the package first (or import and pull it from a git repo)?

fwild commented Jul 18, 2024

Benedikt Hensen:

Yes, a core package would be the cleanest approach. It would allow us to fine-tune the version for each platform on its own to get the optimal design for each device.

Automatic cross-platform migration always has the disadvantage that the experience feels slightly off on devices that were not the main development target, as it then uses interactions that were not primarily meant for that device and, e.g., do not utilize all its capabilities. And putting everything in one repository also makes the project quite complex and bloated.

The main task in creating a core package would be to identify which logic is platform-independent. It probably also requires adjustments to the current architecture so that the underlying logic becomes fully platform-independent. We have a chance to move in that direction when we add the new data model. If we properly separate it from the UI parts, a large part of this separation is already achieved. At the moment, the loading routine sets up all GameObjects once and distributes information onto them. If changes are made in the scene, we keep the data in sync by applying the changes to it at the same time. A nicer model would instead be to have a single source of truth in the data and connect the GameObjects to it via events, so the GameObjects can adjust by themselves when the data changes (see the sketch below).
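
A minimal sketch of that single-source-of-truth pattern (all names hypothetical): the data object raises a change event and the view component subscribes, so updates flow one way from data to scene:

```csharp
using System;
using UnityEngine;

// Sketch only: the data is the single source of truth and raises an
// event whenever it changes.
public class AnnotationData
{
    private Vector3 position;
    public event Action<AnnotationData> Changed;

    public Vector3 Position
    {
        get => position;
        set { position = value; Changed?.Invoke(this); }
    }
}

// The GameObject adjusts itself whenever the bound data changes.
public class AnnotationView : MonoBehaviour
{
    private AnnotationData data;

    public void Bind(AnnotationData model)
    {
        data = model;
        data.Changed += OnDataChanged;
        OnDataChanged(data); // sync once on bind
    }

    private void OnDataChanged(AnnotationData model) =>
        transform.position = model.Position;

    private void OnDestroy()
    {
        if (data != null) data.Changed -= OnDataChanged;
    }
}
```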

Once we have a suitable architecture, the technical part of setting up a package is actually quite straightforward with Unity's packaging system. The core package can even be edited from a project that uses it if the package is imported from the local disk (e.g., by cloning it first and then connecting it to all the platform-specific projects). But this approach requires the developers to be careful about what they change, as it could break all the other projects, so automated testing, etc. could be vital for this setup.
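
Referencing such a core package from each platform project would then just be an entry in its Packages/manifest.json; the package name and URL below are placeholders:

```json
{
  "dependencies": {
    "com.example.miragexr.core": "https://github.com/example-org/miragexr-core.git#v1.0.0"
  }
}
```

For local editing, the same entry can instead point at a working copy on disk, e.g. "file:../../miragexr-core".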

fwild commented Jul 18, 2024

The idea that we manipulate the GameObjects by changing data in the data model is very appealing, with the data change triggering events that in turn update the GameObject. I am not quite sure whether this works everywhere, though, for example with the ghost tracks.

fwild commented Jul 18, 2024

  1. The services attached to the Root script + GameObject (and the i5 services, finally unifying how we use them) should fit quite nicely into a mirageXR service package (containing things like the floor manager, AI manager, brand manager, calibration manager, and exception manager).
  2. The UI kit as well as the spatial UI kit.
  3. The content management (activity logic, augmentations, task stations, search, etc.; the activitySelection is even already encapsulated into its own scene).

I think that’s pretty much it.
