INFOCARVE: Focus and Context Augmented Reality Visualisation
Funding Agency: Science Foundation Ireland
Programme: SFI Technology Innovation Development Award (TIDA) 2014
Project ID: SFI 14/TIDA/2349
Duration: January 2015 - January 2016
Researchers: John Dingliana (PI) [URL], Niall Mullally, Lingqiang Ran
Research Center: Graphics Vision and Visualisation Group [GV2], Intelligent Systems Laboratory, School of Computer Science and Statistics, Trinity College Dublin [TCD]
This TIDA project addresses the problem of effective interactive visualization of highly complex, dynamic 3D geometric data on augmented reality (AR) displays. Extending a previous SFI-funded Research Frontiers project [SFI RFP/08/CMS1076], our solutions draw on prior research in perception, volume rendering and non-photorealistic rendering (NPR). The project acronym is derived from the key words Interactive Focus and Context Augmented Reality Visualisation; at the same time, the title evokes the analogy of an artist "carving" a medium to remove all but the most pertinent elements of the object being represented.
We outline below the main areas in which we conduct research to achieve effective 3D visualization in AR.
AR Volume Visualization Demo for the BT Young Scientist Exhibition 2016
Augmented Reality (AR) is a display paradigm in which computer-generated imagery interacts with real-world inputs (e.g. head or gesture tracking) or is composited with images of the real world. The idea of AR and Mixed Reality has received renewed interest in recent years with the increasing availability of high-quality, commodity-priced VR and AR technologies such as Google Glass and the Oculus Rift, as well as upcoming developments including Microsoft HoloLens and Magic Leap. However, it could be argued that the field is still in its infancy, particularly with regard to serious, non-leisure 3D applications that use these platforms to benefit a wide range of consumers.
Several Modalities of Information Blended Together in Real-time (from left to right): RGB real-world image, depth image, MRI dataset of a brain, final result
A key challenge in Mixed and Augmented Reality is ensuring an effective blend between the real and virtual elements of the scene. In particular, it is important that the relative depth and spatial position of real and virtual objects are correctly expressed to the viewer. Additionally, in AR it is beneficial to preserve as much of the real world as possible, rather than requiring the user to transplant themselves wholly into a virtual world; thus we must ensure that virtual graphical objects occlude the real world as little as possible.
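The depth-ordering requirement can be illustrated with a per-pixel compositing sketch over an RGB-D image. This is a minimal illustration with hypothetical array inputs, not the project's actual rendering pipeline:

```python
import numpy as np

def composite_ar(real_rgb, real_depth, virt_rgb, virt_depth, virt_alpha):
    """Depth-aware compositing of a virtual layer over a real RGB-D image.

    The virtual object is alpha-blended only at pixels where it lies
    nearer to the camera than the real surface, so that real geometry
    correctly occludes virtual content behind it.
    """
    a = virt_alpha[..., None]                      # (H, W, 1) for broadcasting
    virt_in_front = (virt_depth < real_depth)[..., None]
    blended = a * virt_rgb + (1.0 - a) * real_rgb  # standard over-compositing
    return np.where(virt_in_front, blended, real_rgb)
```

Keeping `virt_alpha` below 1 where the virtual object is less important is one simple way to let the real world remain visible through it.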
NPR visualization of a CT Head Dataset
The primary goal of graphical visualisation is to convey information efficiently in order to support various user tasks. Non-photorealistic rendering (NPR) techniques, often inspired by hand-drawn art styles, improve the effectiveness of computer-generated images through a combination of enhancing important details in an image (e.g. lines in a sketch) and abstracting or de-emphasizing extraneous detail (e.g. texture and colour variance). Previous work has shown that such image operations can aid common tasks such as shape and depth perception. We propose to exploit such techniques in AR, where efficiency is doubly important due to the additional visual clutter caused by merging real and virtual objects together.
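The enhancement-plus-abstraction combination described above can be sketched as a minimal grayscale image filter. The parameters and function name here are illustrative only, not the project's actual NPR pipeline:

```python
import numpy as np

def npr_stylize(gray, levels=4, edge_thresh=0.2):
    """Toy NPR filter on a grayscale image in [0, 1].

    Abstraction: quantize intensities into a few flat tonal bands,
    suppressing fine tonal variation. Enhancement: darken pixels with
    strong gradients to emphasise outlines, as in a sketch.
    """
    # Abstraction step: reduce the image to `levels` flat bands.
    quantized = np.floor(gray * levels) / levels
    # Enhancement step: finite-difference gradient magnitude marks edges.
    gy, gx = np.gradient(gray)
    edges = np.hypot(gx, gy) > edge_thresh
    return np.where(edges, 0.0, quantized)
```

In practice both steps would typically run as GPU fragment shaders, with edge detection driven by depth and normal buffers rather than image intensity alone.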
3D visualization on Epson Moverio BT-200 AR glasses.
State-of-the-art techniques for visualizing 3D data are typically computationally expensive. This poses challenges for commodity AR platforms, the majority of which consist of mobile devices or lightweight, highly portable systems suitable for head-mounted displays that are not tethered to a desktop system. In order to achieve widespread delivery of engaging and interactive visualizations, we need to employ highly efficient computer graphics techniques tailored to the limitations of heterogeneous AR output devices.
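One standard efficiency technique relevant here is early ray termination in front-to-back volume rendering: once a ray's accumulated opacity is near 1, the remaining samples cannot contribute visibly and are skipped. The sketch below (a generic illustration, not the project's implementation) shows the idea for a single ray:

```python
import numpy as np

def raymarch_ray(samples_rgb, samples_alpha, term_thresh=0.99):
    """Front-to-back compositing of one ray through a volume.

    samples_rgb: (N, 3) colours along the ray, front to back.
    samples_alpha: (N,) per-sample opacities.
    Returns the composited colour, accumulated opacity, and the number
    of samples actually processed before early termination.
    """
    color = np.zeros(3)
    alpha = 0.0
    steps = 0
    for c, a in zip(samples_rgb, samples_alpha):
        color += (1.0 - alpha) * a * c  # attenuate by remaining transparency
        alpha += (1.0 - alpha) * a
        steps += 1
        if alpha >= term_thresh:
            break  # early ray termination: rest of the ray is invisible
    return color, alpha, steps
```

On low-power mobile GPUs, cutting sample counts this way (along with reduced-resolution rendering) is often the difference between interactive and non-interactive frame rates.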