Video Entries

In 2015, the Faculty for Computer Science and Biomedical Engineering at TU Graz held the video contest „Voices of Computer Science: Put the Spotlight on Researchers and Pioneers!“. The goal was to collect connotations and capture atmospheric images related to the topic „Computer Science – Shaping the Future“. We wanted to know what kind of research computer scientists, students and interdisciplinary teams are up to. What are they most passionate about? How do they want to shape the future? What type of research is being carried out in Graz? What visions of the future do people, particularly scientists, have today? We were truly excited about the level of participation, which far exceeded our expectations. All 21 contributions presented at the i-KNOW conference 2015 can be viewed here on this site.

Organizer:
Faculty for Computer Science and Biomedical Engineering,
under the direction of Roderick Bloem

Concept & Organization:
Know-Center GmbH, under the direction of Nina Simon

Expert Jury:

  • Johann Harer, Human.technology Styria GmbH
  • Claudia von der Linden, TU Graz
  • Elisabeth J. Nöstlinger, ORF

Awards:

  • First Prize: „Brain Composer“, sponsored by Human.technology Styria GmbH, presented by Johann Harer
  • Second Prize: „Multitouchtable“, sponsored by the Faculty for Computer Science and Biomedical Engineering, presented by Elisabeth J. Nöstlinger
  • Third Prize: „The Glove Project“, sponsored by the Faculty for Computer Science and Biomedical Engineering, presented by Claudia von der Linden

Winners

First Prize: „Brain Composer“

by Andreas Pinegger, Hannah Hiebel, Gernot Mueller-Putz (TU Graz)

The video shows the brain composing system, which allows a motor-impaired person to compose music with a brain-computer interface. The tune composed here is original work by Hannah Hiebel, who holds the copyright.

Second Prize: „Multitouchtable“

by Jenny Bieling (Fraunhofer Austria Research GmbH)

This video gives an impression of current projects developed for a Multitouchtable. The projects are developed in cooperation with Fraunhofer Singapore and Nanyang Technological University.

Third Prize: „The Glove Project“

by Jörg Simon, Viktoria Pammer-Schindler, Granit Luzhnica (Know-Center GmbH)

This is a project we are working on at the Know-Center: a sensor glove for gesture detection.
It is also the subject of Jörg Simon’s master’s thesis.

Entries

You can view all entries of the Video Contest „Voices of Computer Science“ in this YouTube Playlist:

„Ast3roids“ by Stephan Keller, Samuel Kogler, Joerg Mueller, Haresh Vekriya
Ast3roids is a 3D remake of the arcade classic Asteroids using the velocity sensors of an Android device as control input. The game was created as a project of the lecture „Game Design and Development“ (IICM), where projects were realized by students from TU Graz together with students from the University of Westminster in London.

„Computer Vision and Biomedical Image Analysis“ by Philipp Kainz
I am a PhD student at the Medical University of Graz, and my work focuses on biomedical image analysis, especially histopathology images of human bone marrow. I am working on cell detection and classification as well as segmentation of other tissue samples. This short video should convey the challenging and very interesting interdisciplinary research area of biomedical imaging using computer vision and artificial intelligence methods, and what advantages we can gain if these methods are used properly.

„DAVE“ by Jenny Bieling, Volker Settgast
This video gives an impression of current projects developed for the DAVE. The projects are developed in cooperation with the CGV and Graz University of Technology.

„Engaging Learning in Virtual Worlds“ by Johanna Pirker, Lisa Tomes
Virtual worlds are an innovative way to discover new ways of learning and collaborating. They can be used to provide exploratory, interactive and experimental learning environments in an online multi-user universe. They can be used to enhance traditional learning scenarios with three-dimensional immersive visualizations, hands-on experiments, and simulations in a playful environment.

„Imaging at the speed of light“ by Institut für Medizintechnik
With this short film we want to present our vision of the future of research in magnetic resonance imaging in just 90 seconds.

„In Touch with the Illusion“ by Roland Mariacher, Werner Huber, Attila Primus
By utilizing Spatial Augmented Reality (Projection Mapping) it is possible to project animated 3D images onto physical objects in order to bring them to life. The model itself is transformed into a display which can hold information on its intended purpose without the need for additional media. A solid storyboard and interaction model are crucial in the development of an engaging, sophisticated media installation. Therefore, our first step was to create a rich virtual world utilizing video game and interaction design concepts. We then created several environments to showcase different editions and interior versions of the multivan model within a scenery that reflects its intended purpose, the smart city of tomorrow. Interaction points were added in order to keep users engaged and interested as well as to enhance their feeling of immersion and control.
At the final exhibition, which took place at Designforum Steiermark, users were able to explore the model by using a tablet application which was created in the same style as the 3D animations to provide a consistent look and feel.

„Isogeometric Analysis for Modelling and Design“ by Andreas Riffnaller-Schiefer, Ursula Augsdörfer, Dietrich Fellner
Isogeometric Analysis is a variant of the Finite Element Method intended to bridge the gap between the worlds of design and analysis. While mostly used to perform analysis on existing designs, we looked at it from a different angle. In our work, we made use of analysis techniques to create new modelling tools to support the design process, enabling the designer to intuitively deform and shape digital objects based on physical principles.

„KinesicMouse – the hands-free computer mouse“ by Markus Pröll
I am a former student (SEW) and research assistant at TU Graz. In my video submission I present one of my software solutions that helps physically disabled people access their computer, called „KinesicMouse“. The KinesicMouse makes it possible to control mouse, keyboard and joystick inputs using only head rotations and facial expressions. Users are affected by Parkinson’s disease, spinal cord injury, muscular dystrophy, multiple sclerosis, stroke and many other conditions that make it impossible to use a keyboard, mouse or touch display. Several research projects (master’s thesis projects) have been carried out at TU Graz to help develop this solution.

„Live Dense Reconstruction on a Mobile Device“ by Christian Reinbacher, Christoph Bauernhofer
This video shows a joint project of the „Mobile Vision Teams“ led by Prof. Thomas Pock at the Institute for Computer Graphics and Vision. Since it was/is a student project, I would place it in the „Student“ category. The video shows 3D reconstruction on an off-the-shelf Android tablet without the help of special 3D sensors such as those built into „Google Tango“. This makes it possible to create 3D models on modern tablets in a short time, which can then serve as the basis for applications such as augmented reality, games or 3D printing.

„Model-Based Diagnosis for Self-* Systems“ by Franz Wotawa
In this movie we discuss the need for autonomous systems and in particular model-based autonomy and its underlying foundations. Model-based autonomy is based on model-based diagnosis, where a model of a system together with observations is used for fault localization. The underlying method allows for constructing systems that adapt themselves in case of faults or other unexpected environmental changes. Hence, model-based diagnosis provides the ideal basis for self-* systems that adapt their behaviour autonomously over time in order to fulfil a pre-defined task.

„Offline voice based interaction on mobile phones“ by Alfred Wertner, Oliver Prentner
We show voice-based order picking in a smartphone app. The app works offline, without access to cloud services. Voice input and output are transmitted via a headset, so the user has both hands free. For the demo, we routed the output to a loudspeaker so that you can hear what the system says.

„Pocket Code – Lego Mindstorm“ by Wolfgang Slany
Pocket Code allows kids to create their own mobile game apps in a visual, „Lego-style“ programming language. Pocket Code, among a plethora of other features, now also supports Lego Mindstorms NXT robots via Bluetooth. In the video we show how the robot’s motors can be controlled from the programs that kids create themselves, and how their programs can access the robot’s sensors, e.g., to measure the distance to an obstacle through the NXT’s ultrasonic sensor. By mounting your phone on the robot, you can give the robot an easily animated face and a voice (either recorded speech or through speech synthesis), besides enhancing the simple sensors of the robot with the much more powerful and diverse sensors of your phone. Additionally, Pocket Code now can detect human faces through the phone’s camera, so that the Lego robot can turn to you, follow someone, or start talking when someone is in front of it. You can program your games and the robot directly on your phone — no PC or laptop is needed! Pocket Code is part of the free open source non-profit project Catrobat. Details: www.catrobat.org

„Research in the Mobile Vision Group@TU Graz“ by Thomas Pock
Current research @ Team Pock, Graphics and Vision. This video gives a short overview of some computer vision research done in the research group led by Prof. Thomas Pock at the Institute for Computer Graphics and Vision.

„Retargeting Technical Documentation to Augmented Reality“ by Peter Mohr, Bernhard Kerbl, Michael Donoser, Dieter Schmalstieg, Denis Kalkofen
In this video we present a system which automatically transfers printed technical documentation, such as handbooks, to three-dimensional Augmented Reality.

„ScaR Recommender“ by Emanuel Lacic
This video presents ScaR, a novel scalable recommender framework. ScaR has been designed and implemented with a focus on scalability and customization. It follows the Software-as-a-Service (SaaS) model and can thus be deployed in a distributed cloud infrastructure.

„Some Challenging Aspects in Biomechanics“ by Gerhard Holzapfel and all members of the Institute of Biomechanics
Contribution of the Institute of Biomechanics on the topic „Voices of Computer Science“. The Institute of Biomechanics, recently made part of the „Faculty of Computer Science and Biomedical Engineering“, is the only university institute of this type in Austria. Biomechanics is an emerging and (very) interdisciplinary area within biomedical engineering, with the goal to develop, extend, and apply mechanics to answer questions of importance in biology and medicine. The submitted movie provides snapshots of current research areas at the Institute of Biomechanics, all of which have the power to shape the future.

„Virtual Coffee Break Room“ by Paul Czech, Angela Fessl, Carla Barreiros, Valentin Schwindsackl
Institute: Know-Center GmbH, Inffeldgasse 13, 8010 Graz. Today’s working environments are becoming more and more flexible and independent with regard to place and time. Ubiquitous computing provides many advantages not only for knowledge workers but also for organisations, because work can be done more efficiently and productively. However, people’s formerly strong interpersonal and social ties, which are essential for building a corporate or project identity, could get lost during their ubiquitous work lives. In this video we present the concept of the Virtual Coffee Break Room (VCBR), a tool that aims at maintaining already existing interpersonal ties and creating new ones by automatically suggesting to users the right time and place to take a break together with their colleagues. To achieve this, the VCBR considers results from different research areas such as ubiquitous computing, user models, mobile sensing, social network analysis and interruptibility. Besides conventional and explicit notifications inviting users to take a break, in the form of push or silent messages, we also propose an unconventional form of notification, a Coffee Smell Gadget. This gadget distributes the smell of freshly brewed coffee and thus directly addresses the sense of smell, unobtrusively motivating users to take a break.

„Visual-Interactive Analysis of Complex Data Sets“ by Tobias Schreck, Lin Shao
This video demonstrates different tools developed in our recent research work for interactive visual analysis of complex data sets. The examples include analysis of social media data, diagrammatic data, hierarchical data (example from biology), and descriptor data (example from chemistry). The video also gives a brief motivation of the problem of visual analysis of large data, as well as conclusions and acknowledgements.

© 2015, all rights reserved