IMVIP 2015

The conference will start on Wednesday 26th of August 2015 at 4pm and will end on Friday 28th of August 2015 at 1pm. All venues are in Trinity College Dublin.

Programme below (subject to change). Instructions for presenters are here. The book of the proceedings is here in draft form.

Wednesday 26/08: Robotics and vision public event in Science Gallery

  • 16:00-18:00 Science Gallery visit & IMVIP registration desk

  • 18:00-20:00 Public reception, with teaser talks by the IMVIP keynote speakers, and demos.

Thursday 27/08: J.M. Synge Theatre in the Arts Building

  • 08:00-09:00 Registration

  • 09:00-10:00 Oral paper presentation - Chair: Paul Whelan

    • 3D Reconstruction of Reflective Spherical Surfaces from Multiple Images
      A. Bulbul, M. Grogan & R. Dahyot

    • Kernel Density Filtering for Noisy Point Clouds in One Step
      M.A. Brophy, S.S. Beauchemin, J.L. Barron

    • Symmetry and Repeating Structure Detection
      M. Jilani, P. Corcoran & M. Bertolotto


  • 10:00-11:00 Coffee / IMVIP poster presentations / Expo - Chair: Kenneth Dawson-Howe

    • Simplifying Genetic Algorithm: A Bit Order Determined Sampling Method for Adaptive Template Matching
      C. Zhang and T. Akashi

    • Automatic Segmented Area Structured Lighting
      K. Goyal, H. Baghsiahi & D. R. Selviah

    • Machine Learning in Prediction of Prostate Brachytherapy Rectal Dose Classes at Day 30
      P. Leydon, F. Sullivan, F. Jamaluddin, P. Woulfe, D. Greene, K. Curran

    • Hand Hygiene Poses Recognition with RGB-D Videos
      B. Xia, R. Dahyot, J. Ruttle, D. Caulfield and G. Lacey

    • Gradient Magnitude Based Normalised Convolution
      A. Al-Kabbany, S. Coleman, and D. Kerr

    • Resolution enhancement of thermal imaging
      C. Lynch, N. Devaney, A. Drimbarean

    • Range Image Feature Extraction using a Hexagonal Pixel-based Framework
      B. Gardiner and S. Coleman

    • Investigation of Face Tracking Accuracy by Obscuration Filters for Privacy Protection
      J. Sato & T. Akashi

    • Cone detection and blood vessel segmentation on AO retinal images
      L. Mariotti & N. Devaney

    • Interpolating eigenvectors from second-stage PCA to find the pose angle in handshape recognition
      M. Oliveira & A. Sutherland

  • 11:00-12:00 Keynote Anil Kokaram - Chair: Nicholas Devaney

    Title: Pushing Pixels at YouTube

    Abstract: The Video Infrastructure Division is concerned with the care, feeding and transport of pixels from ingested source material to the final display device. With more than 1 billion users, 300 hours of video uploaded per minute, and thousands of different output devices as targets, the technological challenges are significant. This talk exposes some of the technology behind the massively distributed transcoding and broadcast centre that is YouTube. We consider in particular the difficulties caused by scale and highlight the importance of high-level, automated "black box" control for many of the video processing tools which are considered standard today.


  • 12:00-13:00 Oral paper presentation - Chair: Francois Pitié

    • Dynamic Texture Classification using Combined Co-Occurrence Matrices of Optical Flow
      V. Andrearczyk & P. F. Whelan

    • PatchCity: Procedural City Generation using Texture Synthesis
      J. D. Bustard and L. P. de Valmency

    • Multiscale “Squiral” (Square-Spiral) Image Processing
      M. Jing, S. A. Coleman, B. W. Scotney, M. McGinnity

  • 13:00 - 14:00 Lunch


  • 14:00-15:00 Keynote Aljoscha Smolic - Chair: Alexandru Drimbarean

    Title: Thinking in Video Volumes

    Abstract: Video is typically represented as a temporal sequence of arrays of values. These values contain strong interconnections in spatial and temporal dimensions, which can be exploited for efficient processing. However, computational complexity and memory restrictions limited exploitation of temporal interconnections in the past. Today's computing power enables development of a new class of algorithms that operate on video volumes. FeatureFlow provides efficient solutions for classical problems in visual computing such as optical flow, disparity estimation, and data propagation. DuctTake is a novel approach for spatio-temporal video compositing. Temporally consistent tone mapping of HDR video is another application scenario of this principle, which will be covered in this talk.


  • 15:00-16:30 Industry Panel Discussion - Chair: David Moloney

    Presentations from companies including Evercam, Jaliko, SentiReal, DAQRI, TOMRA, FotoNation, Valeo, Emdalo, Movidius, Cathx Ocean, Kinesense, SureWash, Treemetrics, Xilinx and Intel.

  • 16:30-17:30 Coffee / IMVIP poster presentations / Expo - Chair: Kenneth Dawson-Howe


  • 17:30-18:30 Keynote Takeo Kanade - Chair: Gerard Lacey

    Title: Smart Headlight: A new active augmented reality that improves how the reality appears to a human

    Abstract: A combination of computer vision and projector-based illumination opens the possibility for a new type of computer vision technologies. One of them is augmented reality: selectively illuminating the scene to improve or manipulate how the reality itself, rather than its display, appears to a human. One such example is the Smart Headlight being developed at Carnegie Mellon University's Robotics Institute. The project team has been working on a set of new capabilities for the headlight, such as making rain drops and snowflakes disappear, allowing for the high beams to always be on without glare, and enhancing the appearance of objects of interest. Using the Smart Headlight as an example, this talk will further discuss various ideas, concepts and possible applications of coaxial and non-coaxial projector-camera systems.


  • 18:30-19:30 Reception in Atrium

  • 19:30- Conference dinner in the Dining Hall

Friday 28/08: J.M. Synge Theatre in the Arts Building


  • 09:30-10:30 Keynote John McDonald - Chair: Rozenn Dahyot

    Title: Visual SLAM: from sparse mapping to 3D perception

    Abstract: From fully autonomous vehicles to markerless AR, gaming to household robotics, recent progress in visual SLAM is providing such systems with the foundations for higher level scene interpretation, visualisation, and interaction. This talk will provide an overview of the visual SLAM problem in the context of two systems developed jointly between Maynooth University and MIT. The first system employs a feature based approach for multi-session visual mapping where multiple separate mapping sessions can be combined into a single globally consistent model of the environment. The second system, known as Kintinuous, provides a real-time dense SLAM system that allows globally consistent mesh based mapping over extended scales. Results will be presented for both systems using a number of different datasets. Finally the talk will present the application of Kintinuous to a number of robotic tasks demonstrating the benefits of the resulting dense representations for 3D perception.

  • 10:30-11:00 Coffee

  • 11:00-12:00 Oral paper presentation - Chair: Sonya Coleman

    • Investigation into DCT Feature Selection for Visual Lip-Based Biometric Authentication
      C. Wright, D. Stewart, P. Miller, F. Campbell-West

    • Bayer Interpolation with Skip Mode
      D.G. Bailey, M. Contreras & G. Sen Gupta

    • Architecture for Recognizing Stacked Box Objects for Automated Warehousing Robot System
      T. Fuji, N. Kimura, and K. Ito


  • 12:00-13:00 Keynote Oscar Deniz Suarez - Chair: David Moloney

    Title: Project Eyes of Things

    Abstract: Vision, our richest sensor, allows inferring big data from reality. Arguably, to be “smart everywhere” we will need to have “eyes everywhere”. Currently, computer vision is rapidly moving beyond academic research and factory automation. The possibilities are endless in terms of wearable applications, augmented reality, surveillance, ambient-assisted living, etc. Vision is, however, the most demanding sensor in terms of power consumption and required processing power, which can explain the shortage of development platforms with low-cost mobile processing and IoT features. Our objective in this EU-funded innovation project running from 2015 to 2017 is to build an optimized core vision platform that can work independently and also embedded into all types of artefacts.


  • 13:00-13:05 IMVIP closing

  • 13:05-13:35 IPRCS meeting