Seminars are usually held on Wednesdays at 2.00pm in Room 313 (Dean Street). Attendance at one of our seminars qualifies for “xp” experience points towards the Bangor Employability Award (BEA).
This year, we’ve been experimenting with running a data journalism group for students. After a successful summer’s work with master’s students, we recruited a set of first years and kept the group going. In November, we entered two teams in the visualizing.org Visualization Marathon - a 24-hour visualization competition in London. This seminar will feature some stories from that event, in Ignite format (20 slides, 15 secs per slide), presented by students who participated.
Virtual rehabilitation is the application of VR and related technologies to neuro-rehabilitation therapies. In the past decade, a diverse range of systems has been developed and tested. However, the field is still maturing, with major opportunities left to explore. The seminar will outline the rationale for using VR in a neuro-rehabilitation context, and present examples of the state of the art in virtual- and tele-rehabilitation. Outstanding examples of VR systems will be discussed, along with the advances, limitations and opportunities of this exciting field of research.
Archaeology has started to create vast amounts of digital data, from direct digital recording during excavations to post-excavation digitization and interpretative GIS databases. Yet analysing that data still mostly relies on the most traditional archaeological method: looking at data visualisations and comparing them ‘manually’ (often in the form of printouts). This paper will present some of that data and invite suggestions as to whether more sophisticated analyses could be applied to it.
The seminar presents a multi-disciplinary research project on eye fixation patterns and object recognition, jointly funded by the ESRC (Economic and Social Research Council) and the EPSRC (Engineering and Physical Sciences Research Council). The project is carried out in collaboration with the School of Psychology. One of its main focuses is to identify what makes particular features of objects key to their recognition in our visual system. If we can establish the ‘check-list’ our visual system runs through when identifying objects, it will have a wide range of implications, including engineering applications among others. This seminar will also present some of the recent results from this project.
We all know what a videogame is. We all know what a novel or a poem is. But what happens when literature and gameplay come together in the digital medium? Can we concentrate on playing and reading simultaneously? How do writers and game developers ‘play’ with the reader/player to make them think critically about their expectations of a videogame or literary text?
My talk will focus on literary gaming, a project that seeks to answer these questions by systematically studying digital texts that bridge, blend and juxtapose reading and gaming. I will discuss theoretical and methodological issues and offer a comparative close-reading/play of the ludic Flash fiction The Princess Murderer (geniwate and Deena Larsen, 2003) and the literary 3D game The Path (Tale of Tales, 2009).
We have been developing iCove: the interactive coastal oceanographic visual analytic environment. The challenge for ocean scientists is that their models are complex and the datasets they generate are huge; furthermore, oceanographers wish to interactively investigate and quantitatively compare different runs of these models. We propose a novel visual analytics tool that permits detailed exploration through interactive data querying to enable this analysis. This talk presents our experience of building iCove in Processing, especially in comparison with our previous oceanographic tool building in VTK and OpenDX.
Visualising time-varying volume data (TVVD) in a way that allows the user to readily interpret and extract salient information presents a number of challenges. This seminar will present the work currently being carried out at Bangor in this area. We construct contour trees at each time step to locate regions of topological interest in the data set, and allow this information to influence the automatic selection of camera paths. In addition, we use clustering to identify paths that are optimal for a wide selection of different features.
Stream compaction is an important parallel computing paradigm that produces a reduced (compacted) output stream from desired elements in an input stream. Computing on this compact stream rather than the mixed input stream leads to improvements in performance, load balancing, and memory footprint.
Compaction has numerous applications in computer graphics and visualization where data reduction is required. We present two novel stream compaction methods for CUDA: Fused-Kernel-Compact and Fused-Kernel-Collate. Both methods are called from within a running kernel, typically at the end.
While FK-Compact produces a compacted stream that preserves ordering, we relax this constraint for FK-Collate. We apply both algorithms to a multiple-kernel ray-casting pipeline for isosurface visualization. We show that our Fused-Kernel Collation method is on average 30% faster than the standard out-of-kernel approach.
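The fused-kernel details are specific to CUDA, but the underlying idea of order-preserving compaction can be sketched in plain Python: an exclusive prefix sum over keep-flags gives each surviving element its write position in the output (a minimal illustration of the general technique; the function names here are my own, not the authors’).

```python
def exclusive_scan(flags):
    """Exclusive prefix sum: out[i] = sum(flags[:i])."""
    total, out = 0, []
    for f in flags:
        out.append(total)
        total += f
    return out

def compact(stream, keep):
    """Order-preserving stream compaction via a scan.

    keep(x) -> bool marks the desired elements; the scan assigns each
    kept element its position in the compacted output stream.
    """
    flags = [1 if keep(x) else 0 for x in stream]
    positions = exclusive_scan(flags)
    out = [None] * sum(flags)
    for x, f, p in zip(stream, flags, positions):
        if f:
            out[p] = x  # scatter to the computed position
    return out

print(compact([3, -1, 4, -1, 5, -9, 2], lambda x: x > 0))  # [3, 4, 5, 2]
```

On a GPU both the scan and the scatter run in parallel; this serial sketch only shows why a prefix sum is the natural building block.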
This seminar will provide an introduction to agent-directed simulation in the programming language NetLogo. NetLogo is based on the Logo programming language, first developed by Daniel Bobrow, Wally Feurzeig and Seymour Papert in 1967 for educational use, primarily to teach school children the basics of graphics programming. However, the multi-agent variant of Logo - NetLogo - developed at Northwestern University, is increasingly being used for rapidly prototyping applications such as modelling, simulation and visualisation because of the speed with which applications can be developed. A complete simulation can be written in very few lines of code; NetLogo provides fast graphic visualization of the results and is free, working reliably under most operating systems and from web browsers.
The specific focus of this talk is to show how visualisation can be performed relatively easily in NetLogo. The talk will also show how the agent-oriented perspective is beneficial in suggesting alternative solutions to a visualisation problem. Two examples will be provided: first, the educational visualisation of search algorithms such as breadth-first search and depth-first search; and secondly, the visualisation of natural phenomena such as fire, lightning, blood vessels, flocking and trees using cellular automata, fractals and boids. A brief explanation will also be provided to demonstrate how music can be created while the latter visualisations are being executed.
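NetLogo code is not shown here, but the two search algorithms the talk visualises can be sketched in Python; the only essential difference between them is the frontier data structure (a queue for breadth-first, a stack for depth-first):

```python
from collections import deque

def bfs(graph, start):
    """Breadth-first traversal order from start (graph: node -> neighbour list)."""
    seen, order, frontier = {start}, [], deque([start])
    while frontier:
        node = frontier.popleft()  # FIFO: explore level by level
        order.append(node)
        for nb in graph[node]:
            if nb not in seen:
                seen.add(nb)
                frontier.append(nb)
    return order

def dfs(graph, start):
    """Depth-first traversal order, using a stack instead of a queue."""
    seen, order, frontier = set(), [], [start]
    while frontier:
        node = frontier.pop()  # LIFO: follow one branch as deep as possible
        if node in seen:
            continue
        seen.add(node)
        order.append(node)
        frontier.extend(reversed(graph[node]))
    return order

g = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(bfs(g, 'A'))  # ['A', 'B', 'C', 'D']
print(dfs(g, 'A'))  # ['A', 'B', 'D', 'C']
```

In an agent-oriented visualisation such as NetLogo's, the frontier nodes can be animated directly, which is what makes the contrast between the two orders easy to teach.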
The talk begins with a short introduction to the great interest in High Dynamic Range (HDR) imaging, with some beautiful pictures and videos illustrating the results that can be obtained by post-processing HDR images. The post-processing step usually has many parameters, and varying them can dramatically change the appearance of the images. In the second part, we present the results of the experiments done so far on these parameter variations, and point out some image descriptors related to perceived image quality.
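The abstract does not name a particular operator; as an illustration only, the global Reinhard operator L/(1+L) is one standard HDR post-processing step whose single exposure parameter (an assumption of this sketch, not taken from the talk) already changes the rendered image noticeably:

```python
def reinhard(luminance, exposure=1.0):
    """Global Reinhard tone mapping: maps [0, inf) HDR luminance into [0, 1).

    `exposure` stands in for one of the many post-processing parameters
    the talk refers to; small changes to it brighten or darken the result.
    """
    scaled = [exposure * L for L in luminance]
    return [L / (1.0 + L) for L in scaled]

hdr = [0.01, 1.0, 10.0, 1000.0]      # radiance values spanning a wide range
print(reinhard(hdr))                 # compressed towards [0, 1)
print(reinhard(hdr, exposure=4.0))   # same scene, visibly brighter rendering
```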
People are ever busier and increasingly want useful information in easily digested, bite-size pieces, delivered to them as efficiently as possible. The use of mobile devices combined with current and future information access and retrieval technologies can rejuvenate publishers’ existing offerings as well as suggest new ones. But the opportunities of the mobile environment extend beyond the use of location, and publishers cannot simply cut and paste content to fit the display constraints of a small device.
Individuals use mobiles in many aspects of their lives, including work and leisure, and publishers need to be aware of the different orientations of users depending on their context of use. Mobile search has a distinctive nature, different from traditional desktop-based searching. The success of digital publishing in the mobile environment will depend not only on the design and presentation of the underlying content, but also on building that distinctive nature of mobile search into the facilities implemented, to improve their effectiveness. Furthermore, in this more dynamic environment there are many more circumstances of use and shifts in context, driven by links to the physical world and triggers within it.
Information is a key part of our lives. However, the amount of available digital information continues to grow at a tremendous rate along with increasingly diverse forms of media and communication channels. To mitigate the effects of information overload, we need to create paths through the information space for users to navigate and manage their needs. The key enabler for this is to use context information. Context information provides an important basis for identifying and understanding people’s information needs. A key challenge is making more information accessible whilst also ensuring it is relevant and useful for users’ information needs.
Context includes aspects of the situation, such as location, but can also include the user’s task, their environment, the device that they are using for accessing information, their personal interests, and their social interactions. Additional reasons for the importance of context include: timely delivery, better matching of user expectations and experience, and better potential for linking with advertising. This was evident in early work on personalization of web search and is increasingly clear for the mobile environment.
User studies are essential for designing and evaluating new products and methodologies that meet the needs of real users. It is important to test developed applications in naturalistic contexts and not only to make theoretical assumptions about users’ needs and activities. I will argue that user studies should be conducted in a realistic way and provide example applications from travel and tourism.
The future of electronic media depends on refining our understanding of what constitutes ‘mobile’ and on developing innovative applications.
With the rapid development of computer processing power, online 3D virtual gaming worlds are becoming more realistic; this makes them increasingly attractive to users and provides an interesting testing ground for scientific research (Gamberini, 2000). Formal experiments in cognitive science or psychology could benefit greatly from these advancements, and researchers could weigh these new tools against traditional testing methods (Bainbridge, 2007). This research explores the field of human memory and how virtual worlds could be an attractive alternative and/or complementary environment in which to run cognitive tests. Research such as the investigation of the impact of endogenous control on memory recall using 2D vs. 3D environments could result in new ways of learning. Ultimately, results obtained from a 3D environment could be used to influence or refine the design of a cognitive architecture for an artificial agent living in a virtual world.
The School of Computer Science at Bangor has been involved in a collaboration with Liverpool, Imperial, Manchester, Leeds and Hull since 2003, developing virtual environments and haptics to fabricate and validate interventional radiology task simulator models (www.craive.org.uk). We have based the development of these simulations on human factors evidence to define the task, its cues, decisions and actions, the required task fidelity, and the critical performance steps that are used as metrics by the simulation’s assessment tool. Replicating a training objective to an appropriate, but not unnecessarily high or inappropriately low, level of task fidelity is important for reasons of economy and validity. Thus, human factors research is central to correct simulator model development, and this has also been supported by clinical studies that identify force data that has been used to more closely replicate ‘feel’. Such data have significant influence on the relevance of simulation development to the task as performed in the real world. The development process is supported and guided by iterative, subject expert evaluation of the simulator’s content to check development is ‘on track’. Validation then forms a cornerstone of adopting simulations into an actual training curriculum.
In this talk Llyr will present a method of visualizing the surface appearance of living human brain tissue, for use in anatomy education and, with some extension, virtual surgery. We have been granted access to both a dissection room at a medical school and an operating theatre during a neurosurgical procedure to obtain colour data from calibrated photography of exposed brain tissue. In the latter case, the specular reflectivity of the brain’s surface is approximated from animal flesh. Our experiments provide data for a bidirectional reflectance distribution function (BRDF) that is then used in the rendering process. Visualization of the brain’s surface is achieved in real time by utilizing the GPU, and includes support for ambient occlusion, advanced texturing, subsurface scattering and multiple levels of specularity.
Project IVY is a European-funded project with 6 partners (EU Lifelong Learning Programme - Project 511862-2010-LLP-UK-KA-KA3MP). The aim is to develop a virtual collaborative training environment for interpreters. The project is currently using Second Life to create a prototype environment. Users can teleport to a chosen scenario, where the room is modelled appropriately for that scenario; the user can then hear the scenario and interact with the world to learn how to act as an interpreter.
In this presentation we will explain the project, the requirements and the design, and we will provide an early demonstration of the work in SL.
This seminar will include two poster presentations from members of the School of Computer Science, Bangor. These will cover various research topics that will be presented as posters later in September. Come and hear these “practice pitches!”
In this session I will talk about our current research into the use of acoustic radiation pressure from ultrasound emitters to produce tactile feedback in medical simulators. An initial application would be to simulate pulse palpation, where the trainee doctor or nurse actively searches for a pulse with their fingers to locate an artery within the body. Our first steps towards achieving this aim, the electronics behind the device, and future work will be discussed.
Several news organisations discuss stories in the media and present them alongside a visualization; they also publish the data that was used to develop the story. We have been looking at some of those data, and have created several visualizations of the datasets. We will discuss the process by which we have been analysing the datasets and also demonstrate some of our visualizations.
We present epSpread, an analysis and storyboarding tool for geolocated microblogging data. Individual time points and ranges can be analysed through queries, heatmaps, word clouds and stream-graphs. The underlying narrative can be shown on a storyboard-style timeline for discussion, refinement and presentation. The tool was used to analyse data from the VAST Challenge 2011 Mini-Challenge 1, tracking the spread of an epidemic using microblogging data. In this presentation we describe how the tool was used to identify the origin and track the spread of the epidemic.
An introduction to the modelling and simulation of fluids will be given. I will describe a system as a collection of entities and a model as a representation of that system, discuss some of the principal equations, explain current simulation models, and introduce our use of Smoothed Particle Hydrodynamics (SPH).
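As a minimal illustration of SPH (my own sketch, not code from the talk): the density at a particle is a kernel-weighted sum over its neighbours. The poly6 smoothing kernel used below is a common choice from the SPH literature, with its 3D normalization constant:

```python
import math

def poly6(r, h):
    """Poly6 smoothing kernel: smooth, compactly supported on r < h."""
    if r >= h:
        return 0.0
    return (315.0 / (64.0 * math.pi * h**9)) * (h * h - r * r) ** 3

def density(i, positions, mass, h):
    """SPH density at particle i: rho_i = sum_j m_j * W(|x_i - x_j|, h)."""
    xi, yi, zi = positions[i]
    rho = 0.0
    for (xj, yj, zj) in positions:
        r = math.sqrt((xi - xj) ** 2 + (yi - yj) ** 2 + (zi - zj) ** 2)
        rho += mass * poly6(r, h)
    return rho

# three particles 5 cm apart, smoothing radius 10 cm, particle mass 20 g
pts = [(0.0, 0.0, 0.0), (0.05, 0.0, 0.0), (0.0, 0.05, 0.0)]
rho0 = density(0, pts, mass=0.02, h=0.1)
```

Pressure and viscosity forces in a full SPH solver are built from similar kernel-weighted sums using gradients of the smoothing kernel.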
In this talk I will give a preview of my forthcoming SIGGRAPH presentation.
There are many challenges for a developer when creating an information visualization tool for a client’s data. In particular, students, learners and indeed any designer trying to apply the skills of information visualization often find it difficult to understand what to do, and how and when to do it, at the various stages of ideation. They need to interact with clients, understand their requirements, design some solutions, and implement and evaluate them. Thus, they need a process to follow. Taking inspiration from product design, we present the Five design-Sheet (FdS) approach. The FdS methodology provides a clear set of stages and a simple approach to ideate information visualization design solutions and critically analyse their worth in discussion with the client.
Hoshi is one of the inventors of a novel tactile display that provides unrestricted tactile feedback in air without any mechanical contact. It controls ultrasound to produce a stress field in 3D space. The principle is based on a nonlinear phenomenon of ultrasound: acoustic radiation pressure. A prototype has been fabricated consisting of 324 airborne ultrasound transducers, with the phase and intensity of each transducer controlled individually to generate a focal point. The DC output force at the focal point is 16 mN and the diameter of the focal point is 20 mm. The prototype produces vibrations up to 1 kHz. An interaction system has also been introduced, which enables users to see and touch virtual objects. This talk will explain how this technology works and discuss potential applications.
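As a rough sketch of the focusing principle (my own illustration of phased-array geometry, not Hoshi’s implementation): to make all wavefronts arrive at a focal point in phase, each transducer fires with a delay proportional to how much closer it is to the focus than the farthest element:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air

def focus_delays(transducers, focus):
    """Per-transducer firing delays so wavefronts arrive at `focus` in phase.

    Elements farther from the focus fire earlier:
    delay_i = (d_max - d_i) / c, so the farthest element has zero delay.
    """
    dists = [math.dist(t, focus) for t in transducers]
    d_max = max(dists)
    return [(d_max - d) / SPEED_OF_SOUND for d in dists]

# a 3-element line of transducers, focusing 20 cm above the centre element
array = [(-0.01, 0.0, 0.0), (0.0, 0.0, 0.0), (0.01, 0.0, 0.0)]
delays = focus_delays(array, (0.0, 0.0, 0.2))
```

In the real device the delays correspond to per-element phase offsets of the continuous 40 kHz-range carrier rather than discrete firing times, but the geometry is the same.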
Increasingly, biomedical researchers need to build functional computer models from images (MRI, CT, EM, etc.). The “pipeline” for building such computer models includes image analysis (segmentation, registration, filtering), geometric modeling (surface and volume mesh generation), large-scale simulation (parallel computing, GPUs), large-scale visualization and evaluation (uncertainty, error). In my presentation, I will present research challenges and software tools for image-based biomedical modeling, simulation and visualization and discuss their application for solving important research and clinical problems in neuroscience, cardiology, and genetics.