Seminars Archive

2013 VMG Seminars

   

Deformable and Articulated Objects: Segmentation, Registration, and Interaction

Associate Professor Xianghua Xie - Swansea University

Wednesday, 4th December - 2.00pm in Powerwall Room 302 (Dean Street)

In this seminar, I will first talk about deformable modelling for object segmentation. Deformable models have been widely used for shape recovery due to their natural handling of shape variation and their ability to incorporate prior knowledge. Their design normally involves three fundamental issues: the representation and its numerical solution; the object boundary description and stopping function design; and initialisation and convergence. These issues are not always independent of each other. An appropriate representation ensures that the model handles deformations properly; however, representation schemes that support topological changes do not always achieve the desired evolutions. Better boundary description can improve convergence by preventing leakage through weak boundaries, and good convergence is paramount if the gains from a carefully chosen representation and boundary description are not to be compromised. On the other hand, good convergence properties can compensate for certain inadequacies in boundary description.

In this talk, I will show some of our recent developments in implicit deformable modelling in 2D, 3D, 3D+time and object tracking, and discuss their relationship to the above three design issues. The talk will cover: our novel implicit representation, which can handle more complex topological changes than splitting and merging; an edge-based, physics-inspired external force field, generalised to arbitrary dimensions, with superior performance in capturing complex geometries and dealing with weak edges and broken boundaries; a simple but effective numerical solution that avoids periodic regularisation, so that initialisation is no longer necessary - particularly useful for object detection; a 4D segmentation method for SPECT that incorporates learnt shape and appearance priors; and an application to object tracking.
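The abstract itself gives no equations. For orientation only, the generic implicit (level-set) formulation behind this family of models - a standard textbook form, not the speaker's specific variant - evolves an embedding function whose zero level set is the contour:

```latex
% Generic geometric active contour (standard form, shown for illustration;
% the talk's models extend and modify this). \phi is the embedding function,
% g(I) an image-based stopping term, \kappa the curvature (regularisation),
% and c a constant inflation speed.
\[
  \frac{\partial \phi}{\partial t} = g(I)\,(\kappa + c)\,\lvert \nabla \phi \rvert,
  \qquad
  \kappa = \operatorname{div}\!\left(\frac{\nabla \phi}{\lvert \nabla \phi \rvert}\right)
\]
```

The three design issues map directly onto this form: the representation is the embedding function and its discretisation, the boundary description lives in the stopping term, and convergence is governed by how the combined speed drives the evolution.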

I will also present our recent work on 3D human pose estimation that exploits the advantages of fixing the root node. This approach has two key benefits. The first is that each local solution can be found by modelling the articulated object as a kinematic chain, which has far fewer degrees of freedom than alternative models. The second is that this approach lets us represent, or support, a much larger area of the posterior than is currently possible. This allows far more robust algorithms to be implemented, since there is far less pressure to prune the search space to free up computational resources.
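The pose model itself is not given in the abstract; as a minimal sketch of why a kinematic chain fixed at a root node is low-dimensional, here is planar forward kinematics with a single angle per joint (the planar simplification and all names are illustrative):

```python
import math

def forward_kinematics(angles, lengths, root=(0.0, 0.0)):
    """Joint positions of a planar kinematic chain fixed at `root`.
    The whole limb is described by one angle per joint, which is the
    small search space that fixing the root node makes available."""
    x, y = root
    theta = 0.0
    joints = [(x, y)]
    for angle, length in zip(angles, lengths):
        theta += angle                 # joint angles accumulate along the chain
        x += length * math.cos(theta)
        y += length * math.sin(theta)
        joints.append((x, y))
    return joints

# e.g. a 3-joint limb hanging off a fixed root node
print(forward_kinematics([0.3, -0.6, 0.2], [1.0, 0.8, 0.5]))
```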

Finally, I will show some of our latest attempts to model human interaction using articulated models.

 


 

Dynamic Facial Processing and Motion Capture in Difficult Environments: from Basic Research to VFX

Dr Darren Cosker, University of Bath

Wednesday, 27th November - 2.00pm in Powerwall Room 302 (Dean Street)

The visual effects and entertainment industries are now a fundamental part of the computer graphics and vision landscape. One of the issues in this area is the creation of realistic environments and characters. Advances in computer graphics and rendering have underpinned much of the success of these industries, built on top of academic advances in these areas. However, in order to populate these worlds with lifelike characters and creatures, we also require effective methods of capturing performers in the real world. This has wider implications for general capture in the real world, with applications beyond entertainment. In this talk I will outline some of the challenges in creating facial performances and using facial models in visual effects. In particular, I will attempt to distinguish between academic challenges and industrial demands, and to highlight some of the shared open challenges. I will also describe some of the work that my group and I have been doing in the area of 4D facial processing, and how this has led us to step back and focus first on more ‘basic’ (or fundamental) computer vision research problems - particularly optical flow, non-rigid tracking and shadow removal. Solving these issues is fundamental to current academic progress, to the wider entertainment industry, and beyond to other domains.

Short Bio:

Darren Cosker is currently a Royal Society Industrial Research Fellow - at Double Negative Visual Effects, London - and a Lecturer/Assistant Prof. at the University of Bath. Prior to this, Darren held a Royal Academy of Engineering Research Fellowship, also at the University of Bath. His current research interests are in the areas of 4D mesh processing and registration, dense non-rigid tracking in difficult environments, and animation of mesoscopic surface detail. He is particularly interested in applying his research to solve problems in motion capture and performance-driven animation. He actively collaborates and consults with several leading entertainment-based companies in this area (Disney, EA, Imaginarium).

 


 

UltraSendo - Focussed Airborne Ultrasound Tactile Feedback for Medical Simulators

Gary Hung, Bangor University

Wednesday, 20th November - 2.00pm in Powerwall Room 302 (Dean Street)

This presentation gives a short overview of my PhD project and highlights its main contributions. A force field is created in mid-air when an array of ultrasonic transducers cooperatively emits ultrasound towards a focal point. We have investigated, in a medical context for training palpation, what tactile sensations airborne ultrasound can display, such as arterial pulses and thrills. As airborne ultrasound is still an emerging technology, no off-the-shelf product was available for purchase; we therefore built our own custom tactile feedback device, UltraSendo, at Bangor University to carry out this investigation. UltraSendo differs from existing airborne devices built by other institutes in terms of system design and hardware, and is the first airborne ultrasound device to include a physical palpable interface for user interaction.
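As an illustration of the focusing principle described above (not UltraSendo's actual hardware or firmware; the array geometry and focal point below are made up), each transducer is fired with a delay chosen so that all wavefronts arrive at the focal point simultaneously:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air

# Hypothetical linear array of 10 transducers spaced 1 cm apart,
# focusing 15 cm above the centre of the array.
positions = [(i * 0.01, 0.0) for i in range(10)]
focus = (0.045, 0.15)

# Transducers closer to the focus fire later, so that every wavefront
# reaches the focal point at the same instant and the pressures add up.
dists = [math.dist(p, focus) for p in positions]
delays = [(max(dists) - d) / SPEED_OF_SOUND for d in dists]
print(["%.1f us" % (t * 1e6) for t in delays])
```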

 


 

Excitement of VisWeek 2013

Dr Jonathan C Roberts, Dr Panagiotis Ritsos and Francis Williams, Bangor University

Wednesday, 13th November - 2.00pm in 313 (Dean Street)

IEEE VIS is the premier visualization conference; VIS 2013 took place in Atlanta. We will present what we believe to be the most interesting, most talked-about and most highly regarded research presented at VIS 2013.

 


 

The Oculus Rift and the future of Virtual Reality

Dr Llyr ap Cenydd, Bangor University

Wednesday, 30th October - 2.00pm in 313 (Dean Street)

The Oculus Rift is a head-mounted display which aims to transport the user into a virtual world similar to the fictional Metaverse, Holodeck or Matrix. The device is one of the most talked-about pieces of technology of the past few years, and has already gathered an impressive number of awards and support from gamers, enthusiasts and developers.

In this talk I will explore the past, present and future of VR, focusing on the Oculus Rift and how modern technology has, for the first time, enabled a consumer device to start fulfilling the long-elusive promise of revolutionising the way we learn, play and communicate.

While the consumer model is not due until late 2014, I have been working with a Development Kit for the past four months, so in this talk I will also share my experiences developing two demos for the device - Ocean Rift and Crashland - both of which are currently ranked in the top five on the Oculus app store.

 


 

Flipped lids, exam anxiety and mental wellbeing - A new organising idea and the challenge of evidence

Bill Andrews, Human Givens therapist, trainer and supervisor, Founder and director of the Pragmatic Research Network.

Wednesday, 23rd October - 2.00pm in 313 (Dean Street)

Understanding some fundamental principles of how our brains work, particularly when under stress, can assist us in planning and preparing for potentially stressful events such as exams. While there is a plethora of psychological approaches to the management of such stress, clear, simple information and self-help strategies can often be enough to help greatly.

In this short presentation Bill will first provide some background on the central organising idea embraced in the Human Givens approach, then look at the underlying mechanisms of exam anxiety and how these might be mitigated. He will conclude by looking at the challenges around evidence in mental health treatments, his own role in evidencing the Human Givens approach, and how the field of computer science might assist in offering solutions.

 


 

Travels Down Under

Professor Nigel John, Bangor University

Wednesday, 16th October - 2.00pm in 313 (Dean Street)

Highlights of his Winston Churchill Travelling Fellowship.

 


 

Haptic Data Visualization

Dr Panagiotis Ritsos, Bangor University

Wednesday, 2nd October - 2.00pm in 313 (Dean Street)

This talk presents haptic interactions for data visualization: a tool for rapid prototyping of haptic data visualizations. The tool uses the Omni force-feedback device to create haptic interactions; it loads X3D models that can then be touched through the haptic interface. The motivation for the work is to develop haptic interactions (haptic data visualizations) for blind people, although other roles could be envisaged. At this presentation we welcome discussion from the audience on where to apply this prototyping tool.

 


 

Open Session

Wednesday, 25th September - 2.00pm in 313 (Dean Street)


 


 

Alternative views: Automated photogrammetry on lost heritage

Dr Andrew Wilson, Bangor University

Wednesday, 18th September - 2.00pm in 313 (Dean Street)

Computer photogrammetry techniques have advanced enough in recent years to take multiple 2D photographs of the same object, building or environment, register them, and compute a 3D model from the 2D photos.

The Alternative Views project aims to recreate the three-dimensionality of lost heritage assets by taking a case-study-based approach to a sizeable image database, parts of which date back to the 1970s and even earlier.

Alongside Gwynedd Archaeological Trust’s (GAT) photographic archive (c. 500,000 photographs) we have access to Bangor University’s archaeological slide collection (c. 70,000 photographs). The work will focus on heritage photographs taken either during excavation and survey or during University field trips; as such, the images were not taken for the purpose of creating a 3D picture. Images will be examined visually to identify potential candidates for 3D photogrammetric reconstruction. Images of sites and monuments that have since been destroyed or significantly altered will be selected as a matter of priority, with a preference for images taken at the same time.

This process, however, is further complicated by the fact that these archived images were created over a long period and with different technologies. The project aims to investigate the possibility of combining these images in one rendering process to successfully create a 3D model of a heritage asset.

Thus we aim to re-create 3D images of sites that cannot be digitised into 3D today - re-creating lost heritage.

 


 

Visual ideation - low-fi design strategies for data-visualization

Dr Jonathan C Roberts, Bangor University

Wednesday, 11th September - 2.00pm in 313 (Dean Street)

Developing and creating new visualization designs that meet the needs of a client can be very difficult. Developers need to interact with clients, understand their requirements, design some solutions, and implement and evaluate the designs. Low-fidelity solutions can help engender discussion; indeed, sketching visualization designs is an important way to establish what the solution should look like. In this talk I will take a high-level approach to low-fidelity visualization design, sketching and ideation in data-visualization.

 


 

“OPTIS” from Optics to Virtual Reality

Joss Petit - Application Engineer at Optis UK

Wednesday, 28th August - 2.00pm in 313 (Dean Street)

Optis is an international company offering software and expertise in physically based lighting simulation and virtual reality. The Optis Suite and the virtual reality facilities at Daresbury could greatly benefit researchers, by providing excellent material for publications and a solid base for ambitious projects. At Daresbury we have already taken concepts in virtual reality from a development idea through to production; notable collaborations include work with companies such as Bentley Motors. Working with developers, academic partners and industrial collaborators is proving to be a very real opportunity. From the Optis Suite, three of our products will be presented: VRX (a driving simulator), THEIA (a physically correct real-time renderer) and HIM (immersive environments with human interaction). These products are built upon our VR development environment, SimplyCube. At the end of the presentation, we would like to explore, in an open discussion, how Optis and Bangor University could work together towards future projects and publications.

 


 

On Pictures and Stuff

James A. Ferwerda, Rochester Institute of Technology, New York

Wednesday, 21st August - 2.00pm in 313 (Dean Street)

Efforts to understand human vision have focused primarily on our abilities to perceive geometric objects (things) and have largely neglected the perception of materials (stuff). However, perceiving materials is at least as important as perceiving objects: material properties allow us to tell if objects are smooth or rough, hard or soft, clean or dirty, fresh or spoiled, dead or alive. Understanding the perception and representation of materials is therefore of critical importance to many fields. In this talk I will first describe experiments we have been conducting to develop psychophysical models of material perception that relate the physical properties of materials to their visual appearance. I will then show how we have been taking advantage of the limits of material perception to develop new techniques for efficient realistic image synthesis. Finally, I will discuss some recent efforts to develop advanced display systems that allow hands-on interaction with virtual materials and surfaces.

Short Bio:

James A. Ferwerda is an Associate Professor and Xerox Chair in the Chester F. Carlson Center for Imaging Science at the Rochester Institute of Technology. He received a B.A. in Psychology, M.S. in Computer Graphics, and a Ph.D. in Experimental Psychology, all from Cornell University. The focus of his research is on building computational models of human vision from psychophysical experiments, and developing advanced imaging systems based on these models. He is an Associate Editor of ACM Transactions on Applied Perception, and serves on the Program Committees of SPIE Human Vision and Electronic Imaging and Measuring, Modeling, and Reproducing Material Appearance. In 2003 he was selected by the National Academy of Engineering for the Frontiers of Engineering Program and in 2010 for the Keck Futures program.

 


 

A Fast Inverse Kinematics Solver using Intersection of Circles

Ashok Srinivasan, Bangor University

Wednesday, 14th August - 2.00pm in 313 (Dean Street)

Kinematics is the study of motion without regard to the forces that cause it. In computer animation and robotics, Inverse Kinematics (IK) calculates the joint angles of an articulated object so that its end effector can be positioned as desired. This talk presents an efficient IK method using a geometric solver based on the intersection of circles. For an articulated object with n joints, our method positions the end effector accurately and requires only a reverse iteration over (n-2) joints. An intuitive user interface is provided, which automatically keeps the end effector between the maximum and minimum extent of the articulated object, avoiding common problems that occur with other IK methods. The algorithm has been implemented in WebGL and JavaScript and tested by simulating a three-joint robot arm and a human cycling motion, achieving interactive rates of up to 60 FPS.
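The n-joint solver itself is not reproduced in this abstract; as a minimal sketch of the underlying geometric idea, here is the classic two-link planar case, where the elbow lies on the intersection of a circle of radius l1 about the base and one of radius l2 about the target (names and the clamping rule are illustrative):

```python
import math

def two_link_ik(x, y, l1, l2):
    """Joint angles of a two-link planar arm whose end effector reaches
    (x, y). The elbow is the intersection of a circle of radius l1 about
    the base and a circle of radius l2 about the target."""
    d = math.hypot(x, y)
    # Clamp the target onto the reachable annulus [|l1 - l2|, l1 + l2],
    # keeping the end effector between the arm's minimum and maximum extent.
    d = max(abs(l1 - l2), min(l1 + l2, d))
    # Law of cosines gives the relative elbow angle between the two links.
    cos_elbow = (d * d - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    # Shoulder angle: direction to the target minus the interior angle.
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

print(two_link_ik(1.2, 0.5, 1.0, 1.0))  # reaches (1.2, 0.5) exactly
```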

 


 

Investigation of Distance Perception in a Virtual Environment for Rugby Skills Training

Helen Miles, Bangor University

Wednesday, 7th August - 2.00pm in 313 (Dean Street)

We have designed and developed a virtual environment to train rugby ball passing skills. Seeking to validate the system’s ability to aid training correctly, an initial experiment examined the effect of stereoscopic technology and the physical screen setup on the user’s ability to perceive virtual distances correctly. In this talk, the experiment will be described and the results discussed, followed by details of a further experiment to be conducted shortly.

 


 

Massively parallel / GPU programming / GPGPU: Sounds a bit confusing? (Part 3 - final session)

Franck Vidal, Bangor University

Wednesday, 26th June - 2.00pm in 313 (Dean Street)

There is growing interest in general-purpose computation on GPUs (GPGPU), and this has been an active area of research for some time. Graphics Processing Units (GPUs) are stream processors based on a highly parallel architecture, and they can be used as a mathematical coprocessor. Pioneering work took advantage of hardware acceleration using non-programmable GPUs. With the introduction of programmable GPUs, graphics hardware can be considered a “low-cost” parallel computing solution. However, due to the special hardware architecture and programming environment, GPU programming methods differ notably from traditional CPU programming: programming on the GPU is much more complicated than on the CPU.

This series of lectures will introduce the fundamentals necessary to start writing code in NVIDIA CUDA, the main platform for GPU programming.

It will combine both theoretical and practical aspects.
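The lectures use CUDA C; purely to illustrate the kernel-launch model they introduce (threads grouped into blocks, blocks into a grid, one bounds guard per thread), here is a minimal SAXPY sketch in Python via Numba's CUDA bindings. It assumes a CUDA-capable GPU, and all sizes are illustrative:

```python
import numpy as np
from numba import cuda

@cuda.jit
def saxpy(a, x, y, out):
    i = cuda.grid(1)        # absolute index of this thread in the grid
    if i < out.size:        # guard: the grid may overshoot the array
        out[i] = a * x[i] + y[i]

n = 1 << 20
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
out = np.zeros_like(x)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block  # cover all n
saxpy[blocks, threads_per_block](2.0, x, y, out)           # kernel launch
print(np.allclose(out, 2.0 * x + y))                       # True
```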

 


 

Massively parallel / GPU programming / GPGPU: Sounds a bit confusing? (Part 2)

Franck Vidal, Bangor University

Wednesday, 19th June - 1.00pm in 313 (Dean Street)

There is growing interest in general-purpose computation on GPUs (GPGPU), and this has been an active area of research for some time. Graphics Processing Units (GPUs) are stream processors based on a highly parallel architecture, and they can be used as a mathematical coprocessor. Pioneering work took advantage of hardware acceleration using non-programmable GPUs. With the introduction of programmable GPUs, graphics hardware can be considered a “low-cost” parallel computing solution. However, due to the special hardware architecture and programming environment, GPU programming methods differ notably from traditional CPU programming: programming on the GPU is much more complicated than on the CPU.

This series of lectures will introduce the fundamentals necessary to start writing code in NVIDIA CUDA, the main platform for GPU programming.

It will combine both theoretical and practical aspects.

 


 

Respiration Simulation: Application in Treatment Planning and Training Simulators

Pierre-Frederic Villard, LORIA laboratory of the University of Lorraine

Wednesday, 5th June - 1.00pm in 313 (Dean Street)

This talk will focus on respiration simulation applied to two different application contexts: treatment planning in radiotherapy and training simulators for ultrasound-guided liver biopsy.

The anatomy and physiology of the respiratory system will be reviewed, focusing on the organs of interest, which depend on the procedure. For each context, the respiration modelling technique will be introduced so as to fit the application requirements. Final results of these different studies will be presented using videos, clinical validation studies, and user feedback.

 


 

Extending Haptic Technology for Dexterous Manual Interaction - moving past the point force model

Dr Ally Barrow, Imperial College London

Wednesday, 29th May - 2.00pm in 211 (Dean Street)

Anyone familiar with current haptic technology is likely also to be familiar with its most common form: the one-handed, single point-force model, i.e. a single 3-degree-of-freedom force-feedback interface used to probe the virtual environment. This paradigm has been applied to an incredible array of application areas over the last two decades yet, fundamentally, has changed little in that time. It has generated a great deal of exciting literature and creative approaches to “making do” with what is ultimately a very limited level of interaction. However, it is evident that there is a plateau in haptic research holding back more sophisticated varieties of interaction, which will need to be addressed in order to open up haptic technology to the wide variety of potential applications that are, as yet, untapped.

This talk comprises a selection of current research projects sharing the theme of “extending haptic technology to permit more dexterous manual interaction”. The first two projects presented are “blue sky” approaches, looking at solving more fundamental issues in haptic simulator design; the second two are specific applications of haptics, both in areas of clinical training, which in practice demand dexterous manual skills not covered by the ubiquitous point-force model. For each project, the current work, findings and future research directions are discussed, along with recommendations for simulator design.

 


 

Template Based Evolution: Evolving reactive agents

Chris Headleand, Bangor University

Wednesday, 15th May - 2.00pm in 313 (Dean Street)

This talk will discuss a novel approach, developed at Bangor, to multi-agent simulation in which agents evolve freely within their environment. We present Template Based Evolution (TBE), a genetic evolution algorithm and methodology that evolves behaviour for embodied, reactive agents whose fitness is tested implicitly through repeated trials in an environment. All agents that survive in the environment breed freely, creating new agents based on the average genome of two parents. This talk will describe the design of the algorithm and an initial investigation in which virtual migratory creatures were evolved to survive a simulated environment. Comparisons between the evolutionary responses of the artificial creatures and observations of natural systems justify the strength of the methodology for species simulation.
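As a minimal sketch of the breeding rule described above (offspring built from the average genome of two parents), here is a toy evolutionary loop; the explicit fitness function, genome length and mutation rate are stand-ins for TBE's implicit, in-world survival test:

```python
import random

GENES = 8  # hypothetical genome: weights of a reactive controller

def fitness(genome):
    # stand-in score; in TBE fitness is tested implicitly by survival
    return -sum((g - 0.5) ** 2 for g in genome)

population = [[random.random() for _ in range(GENES)] for _ in range(50)]
for generation in range(100):
    # 'survivors' stand in for the agents that lived long enough to breed
    survivors = sorted(population, key=fitness)[len(population) // 2:]
    children = []
    while len(children) < len(population):
        a, b = random.sample(survivors, 2)
        children.append([(x + y) / 2 + random.gauss(0.0, 0.02)  # average + mutation
                         for x, y in zip(a, b)])
    population = children
print(max(fitness(g) for g in population))  # close to 0, the optimum
```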

 


 

Bangor Symposium: “Computer Gaming Across Cultures” - 8th May

Keynote: Dr Esther MacCallum-Stewart

Wednesday, 8th May - 09.00am in JP HALL, Seminar Room

“Computer Gaming Across Cultures: Perspectives from Three Continents” is to be held in the JP Seminar Room (former TV Studio) on Wednesday, 8th May. The event is funded by the British Council, under its UKIERI (UK-US-India Education and Research Initiative) scheme, and will feature presentations by speakers from West Virginia University, Jawaharlal Nehru University New Delhi, and Bangor University. The keynote lecture will be given by Dr Esther MacCallum-Stewart, one of the UK’s leading scholars in Games Studies.

 


 

Perceptual Display: Exceeding Display Limitations by Exploiting the Human Visual System

Dr. Piotr Didyk, MIT Computer Science and Artificial Intelligence Laboratory, Computer Graphics Group

Wednesday, 1st May - 2.00pm in 313 (Dean Street)

Existing displays have a number of limitations which make it difficult to reproduce real-world appearance realistically: discrete pixels are used to represent images, which are refreshed only a limited number of times per second; the output luminance range is much smaller than in the real world; and only two dimensions are available to reproduce a three-dimensional scene. On the other hand, the human visual system has its own limitations, such as those imposed by the density of photoreceptors, imperfections in the eye's optics, and a limited ability to discern high-frequency information.

In this talk, I will show that taking these limitations into account and using perceptual effects, which very often are not measurable physically, allow us to design methods which can overcome the physical limitations of display devices in order to enhance apparent image qualities.

More precisely, I will discuss how high-quality frames can be interleaved with low-quality frames to improve the sharpness of a rendered sequence. I will then present an optimization technique that produces frames which, shown in rapid succession, lead to an apparent increase in spatial resolution. Finally, I will talk about the role of perception in the context of stereo vision.

 


 

Massively parallel / GPU programming / GPGPU: Sounds a bit confusing? (Part 1)

Franck Vidal, Bangor University

Wednesday, 19th June - 2.00pm in 313 (Dean Street)

There is growing interest in general-purpose computation on GPUs (GPGPU), and this has been an active area of research for some time. Graphics Processing Units (GPUs) are stream processors based on a highly parallel architecture, and they can be used as a mathematical coprocessor. Pioneering work took advantage of hardware acceleration using non-programmable GPUs. With the introduction of programmable GPUs, graphics hardware can be considered a “low-cost” parallel computing solution. However, due to the special hardware architecture and programming environment, GPU programming methods differ notably from traditional CPU programming: programming on the GPU is much more complicated than on the CPU.

This series of lectures will introduce the fundamentals necessary to start writing code in NVIDIA CUDA, the main platform for GPU programming.

It will combine both theoretical and practical aspects.

 


 

UltraPulse - Simulating a Human Arterial Pulse with Focussed Airborne Ultrasound

Gary Hung, Bangor University

Wednesday, 20th March - 2.00pm in 313 (Dean Street)

Medical simulators provide a risk-free environment for trainee doctors to practice and improve their skills. UltraPulse is a new tactile system designed to use focussed airborne ultrasound to mimic a pulsation effect such as that of a human arterial pulse. This session will describe the construction of UltraPulse.

 


 

Hybrid Approaches to Visual Analysis in Archaeological Landscapes

Dr Andrew Wilson, Bangor University

Wednesday, 13th March - 2.00pm in 313 (Dean Street)

Geographical Information Systems (GIS) have become a key part of archaeological research in the last twenty years. During this time visibility analysis has been one of the main foci of GIS-related research papers. The majority of major landscape projects use some kind of viewshed analysis, and visibility analysis has been used in research that aims to investigate past human cognition, perception and phenomenology. Although visibility analysis has been regarded as one of the most important types of GIS analysis in archaeology, its use is not free from criticism. The ease with which viewshed analysis can be performed has produced a very stagnant, overly crowded research field.

This stagnation is, in part, a consequence of the overall limitations of current GIS packages, most of which are based on an environmental, two-dimensional Cartesian view of the world. This two-dimensional nature causes inherent problems for any viewshed analysis, ranging from errors in elevation data (caused by earth curvature) and the absence of surface details (such as vegetation or urban environments) to the lack of atmospheric conditions (including fog, smog and haze). An apparent lack of critical thought by some archaeologists about the underlying workings and limitations of viewshed analysis has resulted in analyses based primarily on the topography of a research area, devoid of any human or social aspects of the landscape. Archaeologists have attempted to overcome these shortfalls by applying different types of social and cultural theoretical models to their viewshed analyses. The results of current viewshed methods, however, only illustrate which areas of the landscape are visible and from how many viewpoints; these results alone cannot answer questions of a social or humanistic nature. This has led archaeologists to investigate other computing technologies, such as three-dimensional rendering, to find solutions to the two-dimensional nature of current viewshed methods.

The overall goal of this thesis was to explore the development and potential of three-dimensional viewshed analysis and how this technique may be applied to a wide range of archaeological research questions, enabling a better understanding of space and constructed ideas of space and place.
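For readers unfamiliar with the technique under discussion, here is a minimal sketch of the 2.5D line-of-sight test at the core of standard viewshed analysis; the DEM values are made up, and the simplifications (no vegetation, earth curvature or atmosphere) are precisely the ones criticised above:

```python
import numpy as np

def visible(dem, viewer, target, eye_height=1.6):
    """True if no intermediate DEM cell rises above the straight sight
    line from the viewer's eye to the target cell."""
    (r0, c0), (r1, c1) = viewer, target
    h0 = dem[r0, c0] + eye_height
    steps = max(abs(r1 - r0), abs(c1 - c0))
    for s in range(1, steps):
        t = s / steps
        r, c = round(r0 + t * (r1 - r0)), round(c0 + t * (c1 - c0))
        sightline_h = h0 + t * (dem[r1, c1] - h0)  # sight-line height here
        if dem[r, c] > sightline_h:
            return False
    return True

dem = np.array([[10, 10, 11, 20, 11],
                [10, 11, 12, 21, 12],
                [10, 11, 11, 19, 10]], dtype=float)
print(visible(dem, (1, 0), (1, 4)))  # False: the ridge in column 3 blocks it
```

A full viewshed simply repeats this test from one viewpoint to every cell of the DEM, which is part of why it is so easy to run and so easy to over-interpret.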

 


 

WeARable Computing - From the Qing Dynasty to Project Glass: Prototypes, Myths, Confusion and Lots of Wires…

Dr Panagiotis Ritsos, Bangor University

Wednesday, 6th March - 2.00pm in 313 (Dean Street)

Wearable computing is the study or practice of inventing, designing, building, or using miniature body-borne computational and sensory devices. Wearable computers may be worn under, over, or in clothing, or may themselves be clothes (i.e. “Smart Clothing” (Mann, 1996)). Wearables have often been associated with the field of Augmented Reality (AR), as well as the less ‘hyped’ Diminished Reality and Mediated Reality. We revisit how wearable computers evolved and how they have been used in various applications, such as AR, fashion and the military. We also clear up some confusion about what AR truly is, and pinpoint how related terms and definitions are often misused.

 


 

Efficient subjective evaluation: Pair-wise comparisons

Dr Rafal Mantiuk, Bangor University

Wednesday, 27th February - 2.00pm in 313 (Dean Street)

Whenever the results of an algorithm are visual (images or video), the problem arises of how to judge their quality. The most proven and direct way of judging the quality of such results is to run an experiment with human participants. In this seminar I will introduce one of the most popular and robust experimental procedures: the method of pair-wise comparisons. I will discuss how to design and implement such studies, collect the data and analyse it. This will cover the concepts of psychophysical scaling and statistical testing.
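As a small worked example of psychophysical scaling from pairwise data (the count matrix is made up, and this simple Thurstone Case V z-score construction is one common choice; the seminar may well present a more robust maximum-likelihood variant):

```python
import numpy as np
from scipy.stats import norm

# C[i, j] = how often condition i was preferred over condition j
# in a two-alternative forced-choice experiment (hypothetical counts).
C = np.array([[ 0, 18, 25],
              [12,  0, 20],
              [ 5, 10,  0]], dtype=float)

n = C + C.T                                   # trials per pair
P = np.divide(C, n, out=np.full_like(C, 0.5), where=n > 0)
P = P.clip(0.01, 0.99)                        # unanimous pairs -> finite z
Z = norm.ppf(P)                               # Case V: distances in z units
scale = Z.mean(axis=1)                        # quality score per condition
print(scale - scale.min())                    # worst condition scored as 0
```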

 


 

AMBER Shark-Fin mouse: A psychophysiological input device for Affective Gaming

Tom Christy, Bangor University

Wednesday, 20th February - 2.00pm in 313 (Dean Street)

Analysing, measuring, recognising and exploiting emotion are attractive agendas when designing computer games, and affective gaming has become a serious area of exploration in both academic and commercial fields. Given the maturity of the video-game industry, an outline of advances in psychophysiological input will be discussed. Current devices for capturing physiological modalities are usually awkward to wear or handle. To answer this challenge, a Shark-fin Mouse has been developed which streams three signals in real time: pulse, electrodermal activity (EDA, also known as galvanic skin response or GSR) and skin temperature. All sensors are embedded into a fully functional computer mouse and positioned so as to ensure maximum robustness of the signals. No restriction is imposed on the mouse's function, apart from the user having to place the tip of their middle finger into the Shark-fin hub. Boundary tests and experiments with a simple bespoke computer game demonstrate that the Shark-fin Mouse is capable of streaming clean and useful signals.

 


 

Cops, robbers and video streams - A trip into the world of IT security & forensics

Dr Les Pritchard, Business Liaison Manager - SAW (Software Alliance Wales)

Wednesday, 13th February - 2.00pm in 313 (Dean Street)


 


 

Hardware in the Loop Simulation: Case Study Dinorwig Power Station

Dr Sa’ad Mansoor, Bangor University

Wednesday, 6th February - 2.00pm in 313 (Dean Street)

Embedded systems are designed to control complex plants such as land vehicles, satellites, Unmanned Aerial Vehicles (UAVs), aircraft, weapon systems, marine vehicles, jet engines, and many more. They generally require a high level of complexity within the embedded system to manage the complexity of the plant under control. Hardware-in-the-Loop (HIL) simulation is a technique used increasingly in the development and testing of complex real-time embedded systems. Its purpose is to provide an effective platform for developing and testing such systems: HIL simulation adds the complexity of the plant under control to the test platform by including a mathematical representation of all related dynamic systems.
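As a minimal sketch of the idea (a hypothetical first-order plant and PI controller, not Dinorwig's actual models), the control code under test closes its loop against a mathematical plant model instead of the real rig:

```python
def controller(setpoint, measured, integ, kp=2.0, ki=0.5, dt=0.01):
    """PI controller standing in for the embedded code under test."""
    error = setpoint - measured
    integ += error * dt                       # integral of the error
    return kp * error + ki * integ, integ

def plant(u, y, tau=1.5, dt=0.01):
    """First-order lag dy/dt = (u - y) / tau, Euler-integrated: the
    mathematical representation that replaces the physical plant."""
    return y + dt * (u - y) / tau

y, integ = 0.0, 0.0
for step in range(1000):                      # 10 s of loop time at 100 Hz
    u, integ = controller(1.0, y, integ)      # code under test
    y = plant(u, y)                           # simulated plant in the loop
print(round(y, 3))                            # approaches the setpoint 1.0
```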

 


 

Introduction to Applied Multivariate Statistical Analysis

Dr Serban Pop, Bangor University

Wednesday, 30th January - 2.00pm in 313 (Dean Street)

The term “multivariate” implies the existence of more than one variable, multiple values for a single observation, hence the multivariate statistics considers and analyzes multi-dimensional data with the specified aim of understanding its structure, background, purpose and how these dimensions relate to each other. The underlying theoretical structure of many quantitative applied sciences and also most of the observable phenomena in the empirical sciences are of a multivariate nature. The aim of this seminar is make a short introduction of multivariate applied statistics in a way that is understandable for non-mathematicians and practitioners who are confronted by statistical data analysis. Also the difference between quantitative and qualitative analysis is discussed, together with several basic statistical techniques of analyzing complex sets of data.