
Trinity College Dublin, The University of Dublin


M.Sc. in Interactive Entertainment Technology

Dissertation Abstracts 2009-10

The following is a list of abstracts for MSc IET dissertations submitted in 2009-10. There is a link to available dissertation reports below the abstract.

Broadphase Collision Detection on the Cell Processor

By: Gavin Campbell
Supervisor: Michael Manzke

Fundamental to computer games, physics simulations, molecular modelling and robot motion planning, collision detection is considered to be a generally well-established field. Many different solutions already exist, some more suitable to particular applications than others. However, in recent years, research into collision detection techniques has been re-invigorated by the parallel opportunities that came with the emergence of multi-core architectures and the possibilities for general-purpose processing on GPUs. The demand for more efficient techniques is evident as the importance of physical simulation in computer games has risen dramatically. The aim of this project is to investigate and evaluate the applicability of broadphase collision detection to the unique architecture of the Cell processor.

The broadphase algorithm to be implemented is a variation of the popular “Sweep and Prune”, also known as “Sort and Sweep”, first documented by David Baraff and also implemented in the I-Collide system. The technique is supported by most current physics middleware solutions, including the open source physics library “Bullet”. This work will describe the development of a unique parallel sweep and prune algorithm designed to harness the power of the Cell. The development of an x86-based implementation will also be described, to provide a basis for comparison with the Cell implementation.
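The single-axis core of sweep and prune can be sketched briefly: sort the interval endpoints of each object's bounding box along one axis, then sweep, keeping a list of currently open intervals. A minimal sequential Python sketch (the dissertation's parallel Cell version is far more involved; all names here are illustrative):

```python
def sweep_and_prune(boxes):
    """Single-axis sweep and prune over (id, min_x, max_x) AABBs.

    Returns the set of candidate pairs whose x-extents overlap;
    exact (narrow phase) tests happen in a later stage.
    """
    # Build a sorted endpoint list: (coordinate, is_start, box_id).
    # Starts sort before ends at equal coordinates, so touching boxes count.
    endpoints = []
    for box_id, lo, hi in boxes:
        endpoints.append((lo, True, box_id))
        endpoints.append((hi, False, box_id))
    endpoints.sort(key=lambda e: (e[0], not e[1]))

    active, pairs = set(), set()
    for _, is_start, box_id in endpoints:
        if is_start:
            for other in active:          # every open interval overlaps on x
                pairs.add(frozenset((box_id, other)))
            active.add(box_id)
        else:
            active.remove(box_id)
    return pairs

boxes = [("a", 0.0, 2.0), ("b", 1.5, 3.0), ("c", 5.0, 6.0)]
print(sweep_and_prune(boxes))   # only a and b overlap on the x axis
```

A full implementation sweeps all three axes and, between frames, exploits temporal coherence by keeping the endpoint lists nearly sorted rather than re-sorting from scratch.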

Dissertation Report (0.8Mb): [PDF]

Sketch-based Path Control

By: Brendan Carroll
Supervisor: Dr. John Dingliana

Sketch-based interfaces are largely confined to experimental research and are used in specialist areas such as those occupied by artists, architects and engineers. While mouse-driven input and menu-based interfaces have become the common method of computer interaction, sketch input is an intuitive, natural way of communicating intent and ideas. Pathfinding algorithms are a very important part of modern interactive entertainment applications. They can be used over a multitude of data structures, and research into their operation continues to this day. Interaction with these algorithms has been primarily through the use of mice, which limits the amount the user can contribute to their function. This project implements an approach for the use of sketch-based interfaces with pathfinding, so that it is possible to control and modify the paths of agents on 3D terrain using sketch strokes in real-time.


Dissertation Report (12Mb): [PDF]

Integration of Ray-Tracing Methods into the Rasterisation Process

By: Shane Christopher
Supervisor: Michael Manzke

Visually realistic shadows in the field of computer games have been an area of constant research and development for many years. Shadow rendering is also considered one of the most costly graphical processes in terms of performance. Most games today use shadow buffers, which require rendering the scene multiple times for each light source. Even then, developers are still faced with a wide range of shadow artefacts, ranging from false shadowing to jagged shadow edges caused by low-resolution shadow maps. In ray-tracing, perfect, accurate shadows can be achieved, but the performance cost involved has until now been considered too great to integrate into an interactive game. This dissertation will examine the best methods of using ray-tracing to calculate shadows and the consequences of doing so in terms of visual accuracy and resource usage. Improvements in hardware, as well as research into scene management in dynamic ray-traced applications, could make this a feasible alternative to current methods.
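The basic operation being accelerated here is the shadow ray: from a shaded surface point, trace a ray towards each light and test whether any geometry lies in between. A minimal sketch with sphere occluders (illustrative only, not the dissertation's implementation):

```python
import math

def sphere_hit(origin, direction, center, radius):
    """Smallest t > eps where the ray origin + t*direction hits the sphere, else None."""
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 1e-4 else None

def in_shadow(point, light, occluders):
    """Cast a shadow ray from the surface point towards the light;
    the point is shadowed if any occluder lies strictly between them."""
    direction = [l - p for l, p in zip(light, point)]
    for center, radius in occluders:
        t = sphere_hit(point, direction, center, radius)
        if t is not None and t < 1.0:   # t in (0, 1) => between point and light
            return True
    return False

light = (0.0, 10.0, 0.0)
occluders = [((0.0, 5.0, 0.0), 1.0)]
print(in_shadow((0.0, 0.0, 0.0), light, occluders))  # blocked by the sphere
print(in_shadow((8.0, 0.0, 0.0), light, occluders))  # clear line to the light
```

Unlike a shadow map, the answer is exact per sample; the cost is one ray-scene query per light per pixel, which is why acceleration structures matter so much.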


Dissertation Report (9Mb): [PDF]

A Framework for Visual Features Database Creation for Building Recognition on Mobile Devices

By: Marco Conti
Supervisor: Gerard Lacey

We propose the design and development of a framework for the creation of a small visual features database. This database is to be used on mobile devices to perform building recognition in a self-contained "tell me what I am looking at" application using two inputs: GPS data and camera images.

The main contribution of our approach is exploring the automated creation of a compact local visual features database to be installed on the mobile device. Using a local database is justified by scenarios where a data connection to a remote server is not available or too expensive (e.g. tourists using data roaming abroad).

Creating a compact database requires a balance between various constraints. The number of visual features in the database affects both the size of the database on the limited storage of a mobile platform and the computation time of the image matching. However, too few features in the database leads to poor recognition results. This project evaluates the use of a genetic algorithm that selects the best parameters to build the database using visual features clustering.
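The trade-off being optimised can be made concrete with a toy genetic algorithm: individuals are candidate parameter values (here, a single cluster count) and the fitness function rewards matching accuracy while penalising database size. The fitness below is an invented stand-in, not the dissertation's objective:

```python
import random

random.seed(42)  # deterministic for the example

def fitness(n_clusters):
    """Toy stand-in for the real objective: accuracy rises with the number
    of feature clusters, but storage and matching cost penalise large databases."""
    accuracy = 1.0 - 1.0 / (1 + n_clusters)
    cost = 0.002 * n_clusters
    return accuracy - cost

def genetic_search(generations=40, pop_size=20, lo=1, hi=500):
    """Elitist GA over a single integer parameter (the cluster count)."""
    pop = [random.randint(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # keep the fittest half
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = random.sample(survivors, 2)
            child = (a + b) // 2                  # crossover: blend two parents
            if random.random() < 0.3:             # mutation: small jitter
                child = min(hi, max(lo, child + random.randint(-20, 20)))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = genetic_search()
print(best, round(fitness(best), 3))
```

The real search would evaluate fitness by actually building a database with each parameter set and measuring its recognition rate and size, which is far more expensive per individual.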

Dissertation Report (3Mb): [PDF]

An Animation & Particle Design Tool for a Location-Aware Mobile Game

By: Andrew James Fearon  
Supervisor: Mads Haahr

Current generation smartphones are quickly becoming a central hub for all kinds of activities besides communication. With a combination of powerful hardware, superb displays and open marketing channels, smartphones have seen a recent explosion of video game activity.

This project was set in motion due to the restrictive nature of memory allowances on modern mobile devices and the increased demand for visual impact. The project required a visual effects tool for a location-aware mobile game allowing non-technical designers to create animations which would consume a minimal amount of memory on the device.

The goals for the project were achieved through the combination of a desktop application which is used by designers to design animations and particle effects as well as an accompanying framework which executes the designs on the device. This ensured that designers were abstracted away from the technical level and could create engaging animations without the worry of large memory overheads.

Dissertation Report (1.9Mb): [PDF]

Narrow Phase Collision Detection on the Cell Architecture

By: Colin Greene
Supervisor: Michael Manzke

The aim of this project is to take advantage of the immense computational ability of the Cell Broadband Engine by implementing a narrow phase algorithm on this unique architecture. By using parallel processing and Single Instruction Multiple Data (SIMD) operations, it is hoped that performance increases over an x86 architecture implementation can be gained.

This dissertation implements the GJK narrow phase algorithm. Detailed in this research are the design approaches taken on the Cell processor, how these approaches are implemented, and the optimisations attempted.

Although some of the approaches on the Cell outperform the x86 version, their performance is hindered by factors such as memory access latency. Parallelisation lends itself well to narrow phase collision detection, and the avenues that may be taken to improve on this research are provided as future work.
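For reference, the GJK boolean intersection test can be written compactly in 2D: it searches the Minkowski difference of the two convex shapes for the origin, using only a support function. A scalar Python sketch of the idea (the dissertation's SIMD Cell implementation is necessarily very different):

```python
def support(shape, d):
    """Furthest vertex of a convex polygon in direction d."""
    return max(shape, key=lambda p: p[0] * d[0] + p[1] * d[1])

def sub(a, b): return (a[0] - b[0], a[1] - b[1])
def neg(a): return (-a[0], -a[1])
def dot(a, b): return a[0] * b[0] + a[1] * b[1]

def triple(a, b, c):
    """(a x b) x c for 2D vectors embedded in 3D: b*(a.c) - a*(b.c)."""
    ac, bc = dot(a, c), dot(b, c)
    return (b[0] * ac - a[0] * bc, b[1] * ac - a[1] * bc)

def gjk_intersect(A, B):
    """True if convex polygons A and B (vertex lists) overlap.

    Works on the Minkowski difference A - B: the shapes intersect
    exactly when that difference contains the origin."""
    def support_md(d):
        return sub(support(A, d), support(B, neg(d)))

    a = support_md((1.0, 0.0))
    simplex = [a]
    d = neg(a)
    while True:
        if dot(d, d) == 0:
            return True               # origin lies on the simplex
        p = support_md(d)
        if dot(p, d) < 0:
            return False              # no point past the origin: separated
        simplex.append(p)
        if len(simplex) == 2:
            b, a = simplex
            ab, ao = sub(b, a), neg(a)
            d = triple(ab, ao, ab)    # perpendicular to ab, towards origin
        else:
            c, b, a = simplex
            ab, ac_, ao = sub(b, a), sub(c, a), neg(a)
            ab_perp = triple(ac_, ab, ab)
            ac_perp = triple(ab, ac_, ac_)
            if dot(ab_perp, ao) > 0:
                simplex, d = [b, a], ab_perp
            elif dot(ac_perp, ao) > 0:
                simplex, d = [c, a], ac_perp
            else:
                return True           # origin inside the triangle

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
overlapping = [(1, 1), (3, 1), (3, 3), (1, 3)]
separate = [(5, 5), (6, 5), (6, 6), (5, 6)]
print(gjk_intersect(square, overlapping), gjk_intersect(square, separate))
```

The support function is the only place the shapes appear, which is what makes GJK attractive for SIMD: the per-vertex dot products vectorise naturally.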

Procedural Generation of Planetary Bodies

By: Dermot Hayes-McCoy
Supervisor: Micheal Mac An Airchinnigh

Procedural generation is a technique used in multimedia for the creation of a variety of its associated content. Such content may take the form of characters, images, landscapes, etc. in both 2D and 3D applications and is usually created manually by highly skilled artists or designers. In contrast, procedural generation refers to the production of such content through automatic, algorithmic methods. Through automation of such expensive and time-consuming tasks, procedural generation can save money and produce a more content-rich end product.

This report details a system capable of both generating and rendering procedural terrain, in the form of planets, in real-time. Although terrain is usually generated using a 2D heightmap image to represent elevation, here a 3D heightmap is used. This results in a longer generation time but avoids many of the problems associated with mapping a 2D plane to a sphere and produces a much higher quality result. The heightmap itself is generated from Simplex Noise, using Fractional Brownian Motion to create a more realistic landscape topology. To create more heterogeneous terrain, a series of multifractal-based heightmaps are also used. The heightmap is not stored in memory as, depending on the size and number of planets to be generated, this could occupy considerable space. Instead, a value is generated for each input position on the fly, ensuring that computation time is determined by the degree of detail required on screen at any time rather than the size and extent of the terrain itself.

In order to display the terrain in real-time, a Level of Detail (LOD) system is used to remove excess detail when necessary. This system uses an implementation of Geometry Clipmaps, although heavily modified to accommodate spherical terrain. Improved performance is also achieved by utilising the GPU rather than the CPU in both the heightmap creation and LOD stages. As modern graphics hardware provides much greater computational power than the CPU for many tasks, offloading these stages to the GPU results in very good performance and leaves the CPU mostly free for potential future tasks.
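The noise pipeline described above (a 3D field sampled on the fly at the sphere surface, summed over octaves of doubling frequency and halving amplitude) can be sketched as follows. For brevity this uses hashed value noise rather than true Simplex Noise, and all constants are illustrative:

```python
import math

def hash_noise(ix, iy, iz):
    """Deterministic pseudo-random value in [0, 1) from integer lattice
    coordinates (a simple hash, standing in for a gradient-noise table)."""
    n = ix * 374761393 + iy * 668265263 + iz * 2147483647
    n = (n ^ (n >> 13)) * 1274126177
    return ((n ^ (n >> 16)) & 0xFFFFFF) / 0x1000000

def value_noise(x, y, z):
    """Trilinearly interpolated lattice noise."""
    ix, iy, iz = math.floor(x), math.floor(y), math.floor(z)
    def lerp(a, b, t): return a + (b - a) * t
    def smooth(t): return t * t * (3 - 2 * t)
    fx, fy, fz = smooth(x - ix), smooth(y - iy), smooth(z - iz)
    c = [[[hash_noise(ix + i, iy + j, iz + k) for k in (0, 1)]
          for j in (0, 1)] for i in (0, 1)]
    return lerp(
        lerp(lerp(c[0][0][0], c[0][0][1], fz), lerp(c[0][1][0], c[0][1][1], fz), fy),
        lerp(lerp(c[1][0][0], c[1][0][1], fz), lerp(c[1][1][0], c[1][1][1], fz), fy),
        fx)

def fbm(x, y, z, octaves=5, lacunarity=2.0, gain=0.5):
    """Fractional Brownian Motion: sum octaves of noise, each at double
    the frequency and half the amplitude of the last."""
    total, amplitude, frequency = 0.0, 1.0, 1.0
    for _ in range(octaves):
        total += amplitude * value_noise(x * frequency, y * frequency, z * frequency)
        amplitude *= gain
        frequency *= lacunarity
    return total

def planet_height(px, py, pz, radius=1.0, relief=0.05):
    """Elevation at a point on the sphere: sample the 3D field directly at
    the surface position, so there is no 2D-to-sphere mapping seam."""
    n = math.sqrt(px * px + py * py + pz * pz)
    ux, uy, uz = px / n, py / n, pz / n
    return radius + relief * fbm(ux * 4, uy * 4, uz * 4)

print(planet_height(1.0, 0.0, 0.0))
```

Because `planet_height` is a pure function of position, only the vertices currently on screen ever need evaluating, which is exactly the memory-for-computation trade described in the abstract.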

Results show that these techniques are suitable for creating a number of procedurally generated planets in real-time, without consuming large amounts of memory. Frame rates of greater than 60 frames per second can be maintained whilst generating and displaying an earth-sized planet to a resolution of less than 1 metre per vertex. Furthermore, the various fractal techniques used are capable of creating heterogeneous terrain surfaces that provide a good approximation of realistic topography.


Dissertation Report (4Mb): [PDF]

Ray Tracing: A Comparison of the Cell and Intel Xeon Processors

By: Geoff Keating
Supervisor: Dr Michael Manzke

Ray tracing is a method of image generation that provides numerous desirable features, such as reflection, shadows, refractions and transparencies, that are difficult to achieve using other methods such as rasterisation. Ray tracing, however, has always suffered from poor performance when compared to other methods, and a variety of acceleration data structures have been developed which cut down on the number of ray-object intersection tests that must be done in order to render a scene. The tracing of each ray does not depend on any other ray, so ray tracing would seem to be a good candidate for running on parallel hardware such as the Cell and multi-core x86 architectures such as the Xeon. Ray tracing on the Cell brings its own challenges, however: in particular, how to efficiently perform memory transfers between main memory and the SPEs without causing bottlenecks, and how best to take advantage of the SPEs' SIMD architecture. This dissertation will briefly review the various algorithms and acceleration data structures available and their performance, before implementing a simple ray tracer on both the PC and the Cell.
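The effect of an acceleration structure can be illustrated with a small bounding volume hierarchy over spheres: a ray tests a node's bounding box first and descends only on a hit, so most ray-object tests are skipped. This is a generic illustration of the idea, not any specific structure from the dissertation:

```python
import math

def ray_sphere(o, d, c, r):
    """Smallest t > eps where the ray o + t*d hits the sphere, else None."""
    oc = [o[i] - c[i] for i in range(3)]
    a = sum(x * x for x in d)
    b = 2 * sum(d[i] * oc[i] for i in range(3))
    cc = sum(x * x for x in oc) - r * r
    disc = b * b - 4 * a * cc
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 1e-6 else None

def sphere_box(s):
    (cx, cy, cz), r = s
    return [cx - r, cy - r, cz - r], [cx + r, cy + r, cz + r]

def merge(a, b):
    return ([min(x, y) for x, y in zip(a[0], b[0])],
            [max(x, y) for x, y in zip(a[1], b[1])])

def build(spheres):
    """Median split along the longest axis; leaves hold at most 2 spheres."""
    box = sphere_box(spheres[0])
    for s in spheres[1:]:
        box = merge(box, sphere_box(s))
    if len(spheres) <= 2:
        return {"leaf": spheres, "box": box}
    axis = max(range(3), key=lambda i: box[1][i] - box[0][i])
    spheres = sorted(spheres, key=lambda s: s[0][axis])
    mid = len(spheres) // 2
    return {"l": build(spheres[:mid]), "r": build(spheres[mid:]), "box": box}

def ray_box(o, d, box):
    """Slab test for a forward ray against an AABB."""
    tmin, tmax = 0.0, float("inf")
    for i in range(3):
        if abs(d[i]) < 1e-12:
            if not box[0][i] <= o[i] <= box[1][i]:
                return False
        else:
            t1, t2 = (box[0][i] - o[i]) / d[i], (box[1][i] - o[i]) / d[i]
            tmin, tmax = max(tmin, min(t1, t2)), min(tmax, max(t1, t2))
    return tmin <= tmax

def trace(node, o, d, stats):
    """Nearest hit distance through the BVH, counting sphere tests in stats[0]."""
    if not ray_box(o, d, node["box"]):
        return None
    if "leaf" in node:
        hits = []
        for c, r in node["leaf"]:
            stats[0] += 1
            t = ray_sphere(o, d, c, r)
            if t is not None:
                hits.append(t)
        return min(hits) if hits else None
    hits = [h for h in (trace(node["l"], o, d, stats),
                        trace(node["r"], o, d, stats)) if h is not None]
    return min(hits) if hits else None

# 100 spheres along the x axis; the ray points at only one of them.
spheres = [((float(i), 0.0, 0.0), 0.4) for i in range(100)]
bvh = build(spheres)
stats = [0]
t = trace(bvh, (50.0, 0.0, -10.0), (0.0, 0.0, 1.0), stats)
print(t, stats[0])  # nearest hit near t = 9.6 after only a handful of sphere tests
```

A brute-force trace would perform 100 sphere tests for the same ray; the hierarchy prunes nearly all of them, which is the entire point of acceleration structures.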

Optimized Face Tracking in Adobe Flash

By: Kevin Lockard
Supervisor: Yann Morvan and John Dingliana

This dissertation aims to solve the problem of smooth face tracking for input. Face detection is a heavily researched problem in the field of computer vision, but much less attention is paid to smoothing the results to use as a form of input for applications. Face movement cannot be used as an alternative form of input until solutions are created to smoothly and accurately track a user's face over time. Existing solutions for face detection produce results that are often quite noisy and jitter a significant amount. The chosen platform is Adobe Flash, in order to take advantage of that platform's ease of distribution through the web and desktop applications.

Dissertation Report (1.4Mb): [PDF]

Tool Support for Location-Aware Story-Driven Games

By: Darragh McCarthy
Supervisor: Dr. Mads Haahr

Location-aware games offer the possibility of removing the sedentariness of traditional gaming, as well as providing entanglement between real-world venues and the virtual world. This new and innovative area lacks the support available to other fields in game development, notably in the form of development tools. In modern game production, development tools are essential to the smooth, efficient creation of high-quality products. This dissertation is focused on developing one such tool for location-aware games that use GPS coordinates to schedule specific game events or tasks. These GPS locations must be chosen by a developer, but due to the inaccurate nature of GPS systems this can be a frustrating and difficult task, one that the tool developed here aims to simplify.

This tool comes in two parts. Firstly, a mobile application built for the Android platform allows easy collection of large numbers of GPS readings, along with details of the desired locations.

Next, a desktop interface tool allows the developer to analyse this data using a variety of visualisation techniques chosen to best display the recorded readings. With this tool the developer is able to quickly understand the data and make important, informed decisions with regard to the game locations.
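The kind of reduction such a tool performs can be sketched simply: collapse many noisy GPS fixes for one venue into a centroid plus a spread estimate, from which a safe trigger radius might be suggested. The formulas and the margin below are illustrative assumptions, not the tool's actual method:

```python
import math

def summarize_fixes(fixes):
    """Collapse repeated (lat, lon) GPS readings for one venue into a
    centroid and a spread estimate in metres. Assumes small extents, so an
    equirectangular approximation is accurate enough."""
    lat0 = sum(lat for lat, _ in fixes) / len(fixes)
    lon0 = sum(lon for _, lon in fixes) / len(fixes)
    m_per_deg = 111320.0                       # metres per degree of latitude
    dists = []
    for lat, lon in fixes:
        dy = (lat - lat0) * m_per_deg
        dx = (lon - lon0) * m_per_deg * math.cos(math.radians(lat0))
        dists.append(math.hypot(dx, dy))
    spread = max(dists)
    # An (assumed) safe trigger radius: worst observed error plus a margin.
    return (lat0, lon0), spread, spread * 1.5 + 5.0

fixes = [(53.3438, -6.2546), (53.3440, -6.2544), (53.3439, -6.2549)]
centroid, spread, radius = summarize_fixes(fixes)
print(centroid, round(spread, 1), round(radius, 1))
```

Visualising `spread` per venue is precisely what tells the developer whether a chosen game location is reliable enough to trigger events.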

Multi-Agent Pathfinding over Real-World Data Using CUDA

By: Owen McNally
Supervisor: John Dingliana

This project implements a framework to perform pathfinding over real-world data using CUDA. The system reads in XML maps from Open Street Map and converts them into a roadmap format that can be used to perform pathfinding. It uses the CUDA architecture to perform A* searches for many agents in parallel, while allowing the user to set start and goal states manually or have the system generate random start and goal states automatically. The implementation is tested on a variety of maps with differing numbers of agents in order to ascertain the viability of using the GPU to perform pathfinding in real-world situations.
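The per-agent search is standard A* with a straight-line heuristic over the roadmap; on the GPU, one such search runs in parallel for each agent. A single-agent Python sketch on a toy roadmap (node names and layout are invented):

```python
import heapq

def a_star(graph, coords, start, goal):
    """A* over a roadmap: graph maps node -> [(neighbour, edge_cost)],
    coords gives (x, y) per node for the straight-line heuristic."""
    def h(n):
        (x1, y1), (x2, y2) = coords[n], coords[goal]
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

    open_set = [(h(start), 0.0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0.0}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path, g
        for nbr, cost in graph.get(node, []):
            ng = g + cost
            if ng < best_g.get(nbr, float("inf")):
                best_g[nbr] = ng
                heapq.heappush(open_set, (ng + h(nbr), ng, nbr, path + [nbr]))
    return None, float("inf")

# Tiny street network; each node has (x, y) coordinates.
coords = {"a": (0, 0), "b": (1, 0), "c": (2, 0), "d": (1, 1), "e": (2, 1)}
graph = {
    "a": [("b", 1.0)],
    "b": [("a", 1.0), ("c", 1.0), ("d", 1.0)],
    "c": [("b", 1.0), ("e", 1.0)],
    "d": [("b", 1.0), ("e", 1.0)],
    "e": [("c", 1.0), ("d", 1.0)],
}
path, cost = a_star(graph, coords, "a", "e")
print(path, cost)
```

Because each agent's search touches only read-only roadmap data, the searches are independent, which is what makes the problem a natural fit for CUDA.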

Dissertation Report (1Mb): [PDF]

A LazyBrush plug-in for Photoshop

By: Stephen O'Brien
Supervisor: Daniel Sýkora

LazyBrush is a flexible painting tool which attempts to speed up the process of painting rough hand-drawn images by solving many problems that are usually encountered while trying to colour these images. These problems include gappy outlines, small regions, anti-aliasing and imprecise stroke placement to name but a few.

Photoshop has become synonymous with the practice of image editing and as such it seems like the perfect platform to help integrate LazyBrush's functionality with an extremely large audience. The easiest way to incorporate LazyBrush functionality into Photoshop is to create a plug-in.

Accordingly such a plug-in was designed and a prototype was implemented. This prototype successfully manages to bring some of LazyBrush's features to Photoshop. Having said that, there is still some work to be done to fully utilise what LazyBrush has to offer and to embrace all the image modes which Photoshop offers.

Although the plug-in is not fully complete it provides a good starting point to help bring LazyBrush to a wider audience.


Painterly Stylization of Real-time Volume Rendering

By: Carlos Ramalhão
Supervisor: John Dingliana

Interest in the field of non-photorealistic rendering (NPR) has grown significantly within graphics research and development. NPR techniques are regarded as increasingly important tools to provide artists and designers with novel ways of achieving artistic expressiveness from visual information. Furthermore, NPR abstracts the detail of a given set of information such that resulting images are simplified into more comprehensible representations. For this reason, research in this field has also been driven by science domains such as medicine and physics, especially for the visualization of 3D volume information.

We shall concentrate on the research of painterly rendering techniques: that is, the representation of a scene in a way that mimics the visual appearance of a hand-made painting and the effects achieved from the materials used, such as oil or acrylic paints.

We propose a real-time interactive painterly rendering pipeline for the visualization of volume data on the GPU. An iso-surface (a region of the volume data at a given iso-value) is retrieved through a raycasting algorithm; all the necessary information (namely iso-surface intersections, gradients and flow) is calculated on the fly and used to directly influence the final image. The result is a real-time rendering that takes inspiration from traditional paintings. The volume’s colour, details and surface topology are conveyed using brush stroke colour, size, density and orientation. The brush properties and texture can be defined by an artist to achieve the desired result.
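The raycasting step can be sketched on the CPU: march each ray through the scalar field until it crosses the iso-value, refine the hit, and take a central-difference gradient there (the surface normal that would orient a brush stroke). The analytic density below is an invented stand-in for a real volume texture:

```python
def density(x, y, z):
    """Analytic stand-in for a scalar volume: distance from the origin,
    so the iso-surface at value 1.0 is the unit sphere."""
    return (x * x + y * y + z * z) ** 0.5

def gradient(f, p, eps=1e-4):
    """Central-difference gradient, evaluated on the fly (no precomputed
    gradient volume, as in the abstract)."""
    g = []
    for i in range(3):
        q1 = list(p); q1[i] += eps
        q2 = list(p); q2[i] -= eps
        g.append((f(*q1) - f(*q2)) / (2 * eps))
    return g

def raycast_iso(f, origin, direction, iso, step=0.01, max_t=10.0):
    """March along the ray until the density crosses the iso-value,
    then refine the hit with one linear interpolation."""
    t, prev = 0.0, f(*origin)
    while t < max_t:
        t += step
        p = [origin[i] + t * direction[i] for i in range(3)]
        cur = f(*p)
        if (prev - iso) * (cur - iso) <= 0:    # sign change: surface crossed
            frac = (iso - prev) / (cur - prev)
            t_hit = t - step + frac * step
            hit = [origin[i] + t_hit * direction[i] for i in range(3)]
            return hit, gradient(f, hit)
        prev = cur
    return None, None

hit, grad = raycast_iso(density, (0.0, 0.0, -3.0), (0.0, 0.0, 1.0), 1.0)
print(hit, grad)  # hit near (0, 0, -1); gradient is the outward surface normal
```

In the actual pipeline this loop runs per pixel in a GPU shader over a 3D texture, and the resulting positions, normals and flow drive stroke placement rather than plain shading.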

Dissertation Report (19Mb): [PDF]

Procedurally Aided Level Design

By: Dariaus Stewart
Supervisor: Dr. Mads Haahr

Recent research in procedural content generation has demonstrated ways of creating vast amounts of varied geometry, from multiple types of terrain to entire cityscapes. Many theorised that the methods employed in such research would be adopted by the games industry to curb its growing development costs for game content, but, with a few exceptions, this has not been the case. The purpose of this research is to explore the task of applying procedurally generated content within games. It is probable that procedurally produced content must meet additional criteria to make it pliable enough for such use.

This thesis first analyses the multiple applications of procedural generation techniques and looks at their usefulness within game development. The fundamental theory of this report is that procedurally generated content would, in most cases, be unsuitable for use in games. The reasons for this are numerous, but the foremost is the lack of control and flexibility available when procedural methods are employed. This report advocates that content for games should be authored by designers on an abstract level and then procedurally enhanced to create detailed physical models, allowing the author to explicitly state the inclusion of certain desired aesthetic features.

To demonstrate this theory, a unique approach to developing suitable game content using various procedural content generation techniques is presented along with an example implementation. Specifically, the generation of 3D game levels is used to highlight the merit of this method. This novel approach demonstrates how procedural methods, if applied tactfully, can generate quality content usable in a variety of game genres. The output conforms to the structure provided by the author while introducing small random elements to aid replayability.

Dissertation Report (1Mb): [PDF]

Real Time Rendering of Animated Volumetric Data

By: Luis Valverde
Supervisor: John Dingliana

Animated volumetric data can be found in fields like medical imaging (produced by 4D imaging techniques such as ultrasound), scientific simulation (for example, fluid simulation) and cinematic special effects (for reproducing volumetric phenomena like fire or water). Real-time rendering of this data is challenging: due to its large size, in the order of gigabytes per second of animation, it requires on-the-fly streaming from external storage to GPU memory (called out-of-core rendering), causing the bandwidth between memory subsystems to become the bottleneck.

This dissertation work describes the design and implementation of an out-of-core rendering system for animated volumes. A two-stage compression system is used to reduce bandwidth requirements based on a fast lossless compression method in the CPU (LZO) and a hardware supported lossy method in the GPU (PVTC) following previous research [nagayasu06, nagayasu08]. This provides an average increase in FPS of 290% relative to rendering without compression. The system is critically evaluated and compared with a novel GPU compression scheme developed to improve image quality (E-PVTC). Additionally, an assessment of the applicability of these techniques to interactive entertainment and the medical and scientific fields is performed.
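The two-stage idea (a lossy, GPU-friendly encoding wrapped in a fast lossless codec) can be sketched with stand-ins: uniform quantisation in place of PVTC and zlib in place of LZO. This illustrates only the structure of the pipeline, not the actual codecs:

```python
import zlib

def encode_frame(voxels, levels=16):
    """Two-stage compression sketch: a lossy quantisation pass (a crude
    stand-in for a GPU texture codec such as PVTC) followed by a fast
    lossless pass (zlib here, standing in for LZO).
    `voxels` is a bytes object of 8-bit density samples."""
    step = 256 // levels
    quantised = bytes((v // step) * step for v in voxels)
    return zlib.compress(quantised, 1)   # low compression level = fast, like LZO

def decode_frame(blob):
    """The lossless stage is undone on the CPU; the lossy stage is not
    invertible and its error stays in the reconstructed frame."""
    return zlib.decompress(blob)

frame = bytes(range(256)) * 64           # 16 KiB synthetic density slice
blob = encode_frame(frame)
restored = decode_frame(blob)
print(len(frame), len(blob))             # quantisation makes the stream far more compressible
```

In the real system the lossless stage is decoded on the CPU during streaming and the lossy stage is decoded by the GPU at sampling time, so the expensive bus transfer carries only the doubly compressed data.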

Dissertation Report (5Mb): [PDF]

Gaze Based Paint Program with Voice Recognition

By: Jan van der Kamp
Supervisor: Dr. Veronica Sundstedt

Modern eye-trackers allow us to determine the point of regard of an individual on a computer monitor in real time by measuring the physical rotation of eyes. Recent studies have investigated the possibility of using gaze to control a cursor in a paint program. This has opened up the doors for certain disabled users to have access to such programs which may not have been possible via the traditional input methods of keyboard and mouse. This dissertation investigates using gaze to control a cursor in a paint program in conjunction with voice recognition to command drawing. It aims to improve upon previous work in this area by using voice recognition rather than `dwell' time to activate drawing. A system of menus with large buttons to allow easy selection with gaze is implemented with buttons only being shown on screen when needed. Gaze data is smoothed with a weighted average to reduce jitter and this adjusts automatically to account for saccades and fixations, each of which requires a different level of smoothing. Helper functions are also implemented to make drawing with gaze easier and to compensate for the lack of sub-pixel accuracy which one has with a mouse. The application is evaluated with subjective responses from voluntary participants rating both this application as well as traditional keyboard and mouse as input methods. The main result indicates that while using gaze and voice offers less control, speed and precision than mouse and keyboard, it is more enjoyable with many users suggesting that with more practice it would get significantly easier. 100% of participants felt it would benefit disabled users.
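The adaptive smoothing described can be sketched as a weighted (exponential) average that resets whenever a jump exceeds a saccade threshold, so fixations are steadied without the cursor lagging behind rapid eye movements. The threshold and weight below are illustrative, not the dissertation's tuned values:

```python
def smooth_gaze(samples, jitter=30.0, alpha=0.15):
    """Adaptive smoothing of 2D gaze samples (pixels): small movements are
    treated as fixation jitter and averaged heavily; a jump larger than
    `jitter` is treated as a saccade and followed immediately."""
    out = []
    sx, sy = samples[0]
    for x, y in samples:
        if ((x - sx) ** 2 + (y - sy) ** 2) ** 0.5 > jitter:
            sx, sy = x, y                  # saccade: snap to the new fixation
        else:
            sx += alpha * (x - sx)         # fixation: exponential weighted average
            sy += alpha * (y - sy)
        out.append((sx, sy))
    return out

# Jittery fixation around (100, 100), then a saccade to (400, 300).
samples = [(100, 100), (104, 97), (98, 103), (101, 99), (400, 300), (397, 302)]
smoothed = smooth_gaze(samples)
print(smoothed)
```

The two-regime design matters because a single smoothing weight either leaves fixations jittery or makes saccades feel sluggish; detecting the jump lets each regime use the level of smoothing it needs.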


Dissertation Report (3Mb): [PDF]

Gestural Interface with a Projected Display Game

By: Gianluca Vatinno [Best Dissertation]
Supervisor: Dr. Gerard Lacey

Technology today is moving fast towards innovation in interaction; new forms of Human-Computer Interaction (HCI) are being researched, especially in the field of Interactive Entertainment Technology. The history of console controllers has drawn the direction for future interaction with interactive applications, which points to a "controllerless" approach. This research follows in that direction.

The research project presents a system which uses a Pico-projector to project a game onto a table top and uses hand gestures, recognized by a camera, to interact with game elements. A technology inspection is carried out throughout the research to highlight subtle issues of the coupled camera-projector technology. Methods and techniques which provide the best accuracy in hand segmentation are analyzed by fast prototyping. A final demo shows the feasibility of the research project, presenting a remake of the classic Labyrinth Wood Maze as a digital videogame playable with hand gestures. Finally, a technical evaluation assesses the gestural interface's accuracy, robustness and repeatability, and a user impact evaluation reports the users' opinions of the overall system.


Dissertation Report (9Mb): [PDF]

Solving Diffusion Curves on GPU

By: Jeff Warren
Supervisor: Daniel Sykora

Many tasks in computer graphics and vision produce a large sparse system of linear equations which typically requires a large amount of CPU time to solve. Processing images which contain “diffusion curves” is one example of this category of systems. Recently, various GPU-based solvers have been proposed, allowing real-time processing and feedback for diffusion-based images; however, they have been closed systems which cannot be expanded and developed further. To mitigate this, we propose a linkable library which can be used by third-party applications to easily abstract and solve diffusion curves, using the GPU hardware available in a computer system. This allows applications to provide feedback to artists working with large images at unintrusive speeds. Both CPU- and GPU-based algorithms are provided, allowing support for legacy hardware. The library can also be compiled to run natively on 32- and 64-bit operating systems.
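The underlying linear system is the discrete Laplace equation with the curve pixels as boundary conditions; a Jacobi relaxation makes the structure clear, and the same per-pixel update is the natural shape for a GPU implementation (one thread per pixel). A small CPU sketch with invented constraints:

```python
def diffuse(width, height, constraints, iterations=500):
    """Jacobi iterations on the discrete Laplace equation: every free pixel
    becomes the average of its neighbours, while pixels on a diffusion
    curve keep their prescribed value. This is the slow CPU reference; GPU
    solvers perform the same relaxation in parallel, often with multigrid."""
    img = [[0.0] * width for _ in range(height)]
    fixed = dict(constraints)                 # (x, y) -> prescribed value
    for (x, y), v in fixed.items():
        img[y][x] = v
    for _ in range(iterations):
        nxt = [row[:] for row in img]
        for y in range(height):
            for x in range(width):
                if (x, y) in fixed:
                    continue
                acc, n = 0.0, 0
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nx, ny = x + dx, y + dy
                    if 0 <= nx < width and 0 <= ny < height:
                        acc += img[ny][nx]
                        n += 1
                nxt[y][x] = acc / n
        img = nxt
    return img

# A "curve" of value 1.0 down the left edge and 0.0 down the right edge:
# the solution diffuses into a smooth horizontal gradient.
w, h = 9, 5
constraints = ([((0, y), 1.0) for y in range(h)] +
               [((w - 1, y), 0.0) for y in range(h)])
img = diffuse(w, h, constraints)
print([round(v, 2) for v in img[2]])
```

Real diffusion curve images use colour constraints on both sides of each curve and megapixel grids, which is why plain Jacobi is far too slow there and hierarchical GPU solvers are needed.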

Using modest hardware, the users of such an application can edit and develop multi megapixel images at processing speeds in excess of 10 frames per second.

Dissertation Report (4Mb): [PDF]