Technische Universität München

Department of Computer Science
Boltzmannstrasse 3
85748 Garching
Germany

Fax: +49-89-289-17757

About

Prof. Dr. Bernd Radig is a principal researcher and member of the executive board of the Cluster of Excellence "Cognition for Technical Systems" (CoTeSys) (since 2006) and was Full Professor for Image Understanding and Knowledge-Based Systems at the Fakultät für Informatik, TU München (1986-2009). He was chairman and founder of the Association of Bavarian Research Cooperations (1993-2007), a unique network of scientists addressing challenging disciplines in close cooperation with Bavarian companies. In 1988, together with colleagues from the Universities of Erlangen and Passau and with support from Bavarian enterprises, he founded the Bavarian Research Centre for Knowledge Based Systems (FORWISS), an institute shared by the three universities. He holds the German Order of Merit (1992) and the award "Pro Meritis Scientiae et Litterarum" of the State of Bavaria (2002). His current activities are in real-time image sequence understanding for applications in robotics, sports, and driver assistance systems.

Research Topics

  • Model-Based Image Interpretation
  • Objective Functions, Learning Objective Functions
  • Face Detection, Face Model Fitting, Head Tracking
  • Facial Expression Recognition, Emotion Recognition
  • Person Localization, Person Tracking
  • Driver Assistance Systems
  • Multi-Robot Coordination

Current Research Projects

  • MuDiS : Since existing methods for human-machine interaction are often unintuitive, humans need a lot of time to adapt to the operation of a specific machine. The MuDiS project, in contrast, aims at giving machines the ability to adapt to typical human behavior. The goal of the project is the development of a multimodal dialog system that considers various human communication channels, such as facial expressions, spoken language, and gestures, for human-machine interaction. We perform human-human experiments to determine what robots require to interact with humans in an intuitive way. The insights gained from these experiments are applied to the human-machine interface to give robots the capability of participating in simple, everyday dialogs in various environments. To tackle this challenge, we unite researchers from diverse scientific areas, such as computer science, electrical engineering, and psychology, reflecting the interdisciplinary character of the project.
  • CoP : The project aims at the unification of vision-based sensing in CoP (Cognitive Perception) within the learning and planning system. On the one hand, CoP manages the interpretation of different kinds of sensors; on the other hand, it automatically acquires and maintains knowledge about the world and the objects in it. CoP selects sensors and sensor interpretation algorithms based on their expected utility. To this end, CoP learns and improves inter-sensor and inter-algorithm models for method selection from experience (a schematic sketch of such utility-based selection follows this list). The vision system in particular, the major sensor we use, provides several automatic model-improvement techniques. Improved models accelerate the perception process and provide more robust results.
  • Face Image Analysis : As robots emerge from their classical domain - factories - into everyday life, they need to gain new abilities beyond those required in manufacturing. They must not only support humans but also be able to socialize with their users, enhancing the interaction experience and allowing for social bonding. Recent progress in the field of Computer Vision allows intuitive interaction between humans and technical systems via gestures or facial expressions, and current research aims at enabling machines to utilize such communication channels that are natural to human beings. Humans interpret emotion from visual and auditory information and rely heavily on it in everyday communication. Therefore, knowledge about human behavior, intention, and emotion is necessary to construct convenient human-machine interaction mechanisms. The human face provides much of the information that is exchanged between humans in everyday communication. Although most of this information is passed on a subconscious level, we still rely on the interaction partner's facial expression to determine his or her emotional state or attention and to predict his or her reaction.
  • MuJoVision : The Multi-Joint Vision project aims at providing image data obtained simultaneously from a large number of cameras in real time for further processing in Computer Vision applications. It involves the integration of a flexibly expandable architecture and interface for distributed Computer Vision applications. The project is part of a Multi-Joint Action scenario developed for the CoTeSys cluster of excellence, and the data obtained is used to generate tracking information for Human-Computer Interaction and monitoring tasks.
  • Cogito : A key challenge for the next generation of autonomous robots is the reliable and efficient accomplishment of prolonged, complex, and dynamically changing tasks in the real world. One of the most promising approaches to realizing these capabilities is the plan-based approach to robot control. In the plan-based approach, robots produce control actions by generating, maintaining, and executing plans that are tailored for the robots' respective tasks. Plans are robot control programs that a robot can not only execute but also reason about and manipulate. Thus a plan-based controller is able to manage and adapt the robot's intended course of action — the plan — while executing it and can thereby better achieve complex and changing goals. The use of plans enables these robots to flexibly interleave complex and interacting tasks, exploit opportunities, quickly plan their courses of action, and, if necessary, revise their intended activities. One of the grand visions in the area of plan-based robot control is the realization of general autonomous robot control programs that can adapt themselves to the environments they are to operate in and to the distribution of complex tasks they are to perform. An instance of this grand vision is a pre-programmed household robot that knows how to clean a kitchen, how to operate a dishwasher, and so on. When installed in a new environment, it specializes its general plans to the specifics of the household and learns to manage the specific agenda of household chores that is given to it. The robot also has to learn about the pitfalls of its tasks and its environment and to avoid them through foresight. Our research field is still far away from realizing such competent robotic agents.
  • ProbCog : The ProbCog project designs and implements a statistical relational learning system that supports efficient learning and inference in relational domains, focusing mainly on probabilistic logical models. Based on a common data model, we integrate various representation formalisms and have successfully extended the expressiveness and practical applicability of existing approaches. ProbCog thus provides a coherent probabilistic framework that enables cognitive technical systems to deal with a high degree of uncertainty and complexity in a variety of application contexts, including adaptive plan generation for under-specified tasks and heuristic generation.
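
To make the idea of utility-based method selection in CoP more concrete, the following minimal Python sketch shows one possible way such a selector could look. All method names, costs, success rates, and the utility formula are hypothetical placeholders invented for the example; the sketch is not taken from the CoP implementation.

```python
# Illustrative sketch only: pick the perception method with the highest expected
# utility and refine the per-method model from observed outcomes.
import random

class MethodModel:
    """Running estimate of how useful a perception method is for a given task."""
    def __init__(self, cost):
        self.cost = cost              # assumed processing cost (e.g., runtime in seconds)
        self.success_rate = 0.5       # prior belief before any experience
        self.trials = 0

    def expected_utility(self):
        # Simple utility: estimated probability of a correct result minus a cost penalty.
        return self.success_rate - 0.1 * self.cost

    def update(self, succeeded):
        # Incrementally move the success estimate towards the observed outcomes.
        self.trials += 1
        self.success_rate += (float(succeeded) - self.success_rate) / self.trials

def select_method(models):
    """Choose the perception method with the highest expected utility."""
    return max(models, key=lambda name: models[name].expected_utility())

# Hypothetical perception methods competing for the same task.
models = {"color_blob_detector": MethodModel(cost=0.2),
          "contour_model_fitter": MethodModel(cost=1.5)}

# Hypothetical ground-truth success probabilities, unknown to the selector.
true_rates = {"color_blob_detector": 0.6, "contour_model_fitter": 0.9}

for _ in range(50):
    chosen = select_method(models)
    succeeded = random.random() < true_rates[chosen]
    models[chosen].update(succeeded)

print("selected method after training:", select_method(models))
```

In a real system the selector would additionally have to balance exploration against exploitation and condition its models on the task and sensor context; the sketch deliberately omits this.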

Former Research Projects

  • Agilo : Robotic soccer has become a standard 'real-world' testbed for autonomous multi-robot control. In robot soccer (mid-size league), two teams of four autonomous robots — one goal keeper and three field players — play soccer against each other on a field of four by nine meters. The key characteristic of mid-size robot soccer is that the robots are completely autonomous; consequently, all sensing and all action selection is done on board the individual robots. Skillful play requires our robots to recognize objects, such as other robots, field lines, and goals, and even entire game situations. In the AGILO project we investigate how probabilistic, learning-capable visuomotor robot controllers can meet these challenges. The AGILO robot controllers employ game state estimation and situated action selection based on automatically learned control mechanisms (a minimal sketch of probabilistic state estimation follows this list).
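
As a rough illustration of the game state estimation mentioned above, here is a toy one-dimensional particle filter in Python. The motion model, observation model, and all numbers are invented for the example and do not reflect the actual AGILO estimator.

```python
# Minimal sketch: track a single position along one axis of the field with a
# particle filter (predict with the motion model, correct with the observation).
import math
import random

def motion_update(particles, odometry, noise=0.1):
    """Predict: move every particle by the commanded motion plus Gaussian noise."""
    return [p + odometry + random.gauss(0.0, noise) for p in particles]

def measurement_update(particles, observation, noise=0.2):
    """Correct: weight particles by the observation likelihood, then resample."""
    weights = [math.exp(-((p - observation) ** 2) / (2 * noise ** 2)) for p in particles]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    return random.choices(particles, weights=weights, k=len(particles))

# Hypothetical ball position along the 9 m field length.
particles = [random.uniform(0.0, 9.0) for _ in range(500)]
for odometry, observed in [(0.5, 1.4), (0.5, 2.1), (0.5, 2.4)]:
    particles = motion_update(particles, odometry)
    particles = measurement_update(particles, observed)

estimate = sum(particles) / len(particles)
print("estimated position: %.2f m" % estimate)
```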

Publications

Journal Articles and Book Chapters

Learning from Humans -- Cognition-enabled Computational Models of Everyday Activity (Michael Beetz, Martin Buss, Bernd Radig), In Künstliche Intelligenz, Springer, 2010. [bib]
Adjusted Pixel Features for Facial Component Classification (Christoph Mayer, Matthias Wimmer, Bernd Radig), In Image and Vision Computing Journal, 2009. [bib]
Recognizing Facial Expressions Using Model-based Image Interpretation (Matthias Wimmer, Zahid Riaz, Christoph Mayer, Bernd Radig), In Advances in Human-Computer Interaction (Shane Pinder, ed.), I-Tech, volume 1, 2008. [bib]
Learning Local Objective Functions for Robust Face Model Fitting (Matthias Wimmer, Freek Stulp, Sylvia Pietzsch, Bernd Radig), In IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), IEEE Computer Society, volume 30, 2008. [bib] [doi]
Adaptive Skin Color Classificator (Matthias Wimmer, Bernd Radig), In ICGST International Journal on Graphics, Vision and Image Processing, volume Special Issue on Biometrics, 2006. [bib] [pdf]
The AGILO Robot Soccer Team -- Experience-based Learning and Probabilistic Reasoning in Autonomous Robot Control (Michael Beetz, Thorsten Schmitt, Robert Hanek, Sebastian Buck, Freek Stulp, Derik Schröter, Bernd Radig), In Autonomous Robots, volume 17, 2004. [bib] [pdf]
Cooperative Probabilistic State Estimation for Vision-based Autonomous Mobile Robots (Thorsten Schmitt, Robert Hanek, Michael Beetz, Sebastian Buck, Bernd Radig), In IEEE Transactions on Robotics and Automation, volume 18, 2002. [bib] [pdf]

Conference Papers

Learning Displacement Experts from Multi-band Images for Face Model Fitting (Christoph Mayer, Bernd Radig), In International Conference on Advances in Computer-Human Interaction, 2011. [bib]
Improving Aspects of Empathy and Subjective Performance for HRI through Mirroring Facial Expressions (Barbara Gonsior, Stefan Sosnowski, Christoph Mayer, Jürgen Blume, Bernd Radig, Dirk Wollherr, Kolja Kühnlenz), In Proceedings of the 19th IEEE International Symposium on Robot and Human Interactive Communication, 2011. [bib]
Real-Time Face and Gesture Analysis for Human-Robot Interaction (Frank Wallhoff, Tobias Rehrl, Christoph Mayer, Bernd Radig), In Real-Time Image and Video Processing 2010, 2010. (invited paper) [bib]
Mirror my emotions! Combining facial expression analysis and synthesis on a robot (S. Sosnowski, C. Mayer, K. Kühnlenz, B. Radig), In The Thirty-Sixth Annual Convention of the Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB 2010), 2010. [bib]
Towards robotic facial mimicry: system development and evaluation (C. Mayer, S. Sosnowski, K. Kühnlenz, B. Radig), In Proceedings of the 19th IEEE International Symposium on Robot and Human Interactive Communication, 2010. [bib]
A Distributed Many-Camera System for Multi-Person Tracking (C. Lenz, T. Röder, M. Eggers, S. Amin, T. Kisler, B. Radig, G. Panin, A. Knoll), In Proceedings of the First International Joint Conference on Ambient Intelligence (AmI 2010) (R. Wichert, B. de Ruyter, eds.), Springer Lecture Notes in Computer Science, 2010. [bib]
Image Normalization for Face Recognition using 3D Model (Zahid Riaz, Michael Beetz, Bernd Radig), In International Conference of Information and Communication Technologies, Karachi, Pakistan, IEEE, 2009. [bib] [pdf]
A Model Based Approach for Expression Invariant Face Recognition (Zahid Riaz, Christoph Mayer, Matthias Wimmer, Michael Beetz, Bernd Radig), In 3rd International Conference on Biometrics, Alghero, Italy, Springer, 2009. [bib] [pdf]
Multi-Feature Fusion in Advanced Robotics Applications (Zahid Riaz, Christoph Mayer, Saquib Sarfraz, Michael Beetz, Bernd Radig), In International Conference on Frontiers of Information Technology, ACM, 2009. [bib] [pdf]
Facial Expressions Recognition from Image Sequences (Zahid Riaz, Christoph Mayer, Michael Beetz, Bernd Radig), In 2nd International Conference on Cross-Modal Analysis of Speech, Gestures, Gaze and Facial Expressions, Prague, Czech Republic, Springer, 2009. [bib] [pdf]
Model Based Analysis of Face Images for Facial Feature Extraction (Zahid Riaz, Christoph Mayer, Michael Beetz, Bernd Radig), In Computer Analysis of Images and Patterns, Münster, Germany, Springer, 2009. [bib] [pdf]
3D Model for Face Recognition across Facial Expressions (Zahid Riaz, Christoph Mayer, Michael Beetz, Bernd Radig), In Biometric ID Management and Multimodal Communication, Madrid, Spain, Springer, 2009. [bib]
A Unified Features Approach to Human Face Image Analysis and Interpretation (Zahid Riaz, Suat Gedikli, Michael Beetz, Bernd Radig), In Affective Computing and Intelligent Interaction, Amsterdam, Netherlands, IEEE, 2009. (Doctoral Consortium Paper) [bib] [pdf]
Facial Expression Recognition with 3D Deformable Models (Christoph Mayer, Matthias Wimmer, Martin Eggers, Bernd Radig), In Proceedings of the 2nd International Conference on Advances in Computer-Human Interaction (ACHI), Springer, 2009. (Best Paper Award) [bib]
Did I Get it Right: Head Gesture Analysis for Human-Machine Interaction (Jürgen Gast, Alexander Bannat, Tobias Rehrl, Christoph Mayer, Frank Wallhoff, Gerhard Rigoll, Bernd Radig), In Human-Computer Interaction. Novel Interaction Methods and Techniques, Springer, 2009. [bib]
Tailoring Model-based Techniques for Facial Expression Interpretation (Matthias Wimmer, Christoph Mayer, Sylvia Pietzsch, Bernd Radig), In The First International Conference on Advances in Computer-Human Interaction (ACHI08), 2008. [bib] [pdf]
Robustly Classifying Facial Components Using a Set of Adjusted Pixel Features (Matthias Wimmer, Christoph Mayer, Bernd Radig), In Proc. of the International Conference on Face and Gesture Recognition (FGR08), 2008. [bib] [pdf]
Low-level Fusion of Audio and Video Feature for Multi-modal Emotion Recognition (Matthias Wimmer, Björn Schuller, Dejan Arsic, Bernd Radig, Gerhard Rigoll), In 3rd International Conference on Computer Vision Theory and Applications (VISAPP), volume 2, 2008. [bib] [pdf]
An ASM Fitting Method Based on Machine Learning that Provides a Robust Parameter Initialization for AAM Fitting (Matthias Wimmer, Shinya Fujie, Freek Stulp, Tetsunori Kobayashi, Bernd Radig), In Proc. of the International Conference on Automatic Face and Gesture Recognition (FGR08), 2008. [bib] [pdf]
Model Based Face Recognition Across Facial Expressions (Zahid Riaz, Christoph Mayer, Matthias Wimmer, Bernd Radig), In Journal of Information and Communication Technology, 2008. [bib] [pdf]
Shape Invariant Recognition of Segmented Human Faces using Eigenfaces (Zahid Riaz, Michael Beetz, Bernd Radig), In Proceedings of the 12th International Multitopic Conference, IEEE, 2008. [bib] [pdf]
Face Model Fitting with Generic, Group-specific, and Person-specific Objective Functions (Sylvia Pietzsch, Matthias Wimmer, Freek Stulp, Bernd Radig), In 3rd International Conference on Computer Vision Theory and Applications (VISAPP), volume 2, 2008. [bib] [pdf]
A Real Time System for Model-based Interpretation of the Dynamics of Facial Expressions (Christoph Mayer, Matthias Wimmer, Freek Stulp, Zahid Riaz, Anton Roth, Martin Eggers, Bernd Radig), In Proc. of the International Conference on Automatic Face and Gesture Recognition (FGR08), 2008. [bib] [pdf]
The Assistive Kitchen -- A Demonstration Scenario for Cognitive Technical Systems (Michael Beetz, Freek Stulp, Bernd Radig, Jan Bandouch, Nico Blodow, Mihai Dolha, Andreas Fedrizzi, Dominik Jain, Uli Klank, Ingo Kresse, Alexis Maldonado, Zoltan Marton, Lorenz Mösenlechner, Federico Ruiz, Radu Bogdan Rusu, Moritz Tenorth), In IEEE 17th International Symposium on Robot and Human Interactive Communication (RO-MAN), Muenchen, Germany, 2008.(Invited paper.) [bib] [pdf]
SIPBILD -- Mimik- und Gestikerkennung in der Mensch-Maschine-Schnittstelle (Matthias Wimmer, Bernd Radig, Christoph Mayer), In Beiträge der 37. Jahrestagung der Gesellschaft für Informatik (GI), volume 1, 2007. [bib] [pdf]
Learning Robust Objective Functions with Application to Face Model Fitting (Matthias Wimmer, Sylvia Pietzsch, Freek Stulp, Bernd Radig), In Proceedings of the 29th DAGM Symposium, volume 1, 2007. [bib] [pdf]
Initial Pose Estimation for 3D Models Using Learned Objective Functions (Matthias Wimmer, Bernd Radig), In Proceedings of the 8th Asian Conference on Computer Vision (ACCV07) (Yasushi Yagi, Sing Bing Kang, In So Kweon, Hongbin Zha, eds.), Springer, volume 4844, 2007. [bib] [pdf]
Automatically Learning the Objective Function for Model Fitting (Matthias Wimmer, Bernd Radig), In Proceedings of the Meeting in Image Recognition and Understanding (MIRU), 2007. [bib] [pdf]
Audiovisual Behavior Modeling by Combined Feature Spaces (Björn Schuller, Matthias Wimmer, Dejan Arsic, Gerhard Rigoll, Bernd Radig), In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), volume 2, 2007. [bib]
Learning Robust Objective Functions for Model Fitting in Image Understanding Applications (Matthias Wimmer, Freek Stulp, Stephan Tschechne, Bernd Radig), In Proceedings of the 17th British Machine Vision Conference (BMVC) (Michael J. Chantler, Emanuel Trucco, Robert B. Fisher, eds.), BMVA, volume 3, 2006. [bib] [pdf]
A Person and Context Specific Approach for Skin Color Classification (Matthias Wimmer, Bernd Radig, Michael Beetz), In Proceedings of the 18th International Conference on Pattern Recognition (ICPR 2006), IEEE Computer Society, volume 2, 2006. [bib] [pdf]
Adaptive Skin Color Classificator (Matthias Wimmer, Bernd Radig), In Proceedings of the first International Conference on Graphics, Vision and Image Processing (Ashraf Aboshosha et al., ed.), ICGST, volume I, 2005. [bib] [pdf]
Sensor-based Situated, Individualized, and Personalized Interaction in Smart Environments (Simone Hämmerle, Matthias Wimmer, Bernd Radig, Michael Beetz), In INFORMATIK 2005 - Informatik LIVE! Band 1, Beiträge der 35. Jahrestagung der Gesellschaft für Informatik (GI) (Armin B. Cremers, Rainer Manthey, Peter Martini, Volker Steinhage, eds.), GI, volume 67, 2005. [bib] [pdf]
Detection and Classification of Gateways for the Acquisition of Structured Robot Maps (D. Schröter, T. Weber, M. Beetz, B. Radig), In Proc. of 26th Pattern Recognition Symposium (DAGM), Tübingen/Germany, 2004. [bib] [pdf]
The AGILO Autonomous Robot Soccer Team: Computational Principles, Experiences, and Perspectives (Michael Beetz, Sebastian Buck, Robert Hanek, Thorsten Schmitt, Bernd Radig), In International Joint Conference on Autonomous Agents and Multi Agent Systems (AAMAS) 2002, 2002. [bib] [pdf]
Kontextunterstützte Analyse von Szenen mit bewegten Objekten. (R. Bertelsmeier, Bernd Radig), In Digitale Bildverarbeitung - Digital Image Processing, GI/NTG Fachtagung, München, 28.-30. März 1977 (Hans-Hellmut Nagel, ed.), Springer, 1977. [bib]

Workshop Papers

Robustly Estimating the Color of Facial Components Using a Set of Adjusted Pixel Features (Matthias Wimmer, Sylvia Pietzsch, Christoph Mayer, Bernd Radig), In 14. Workshop Farbbildverarbeitung, 2008. [bib]
Recognizing Facial Expressions Using Model-based Image Interpretation (Matthias Wimmer, Christoph Mayer, Bernd Radig), In Verbal and Nonverbal Communication Behaviours, COST Action 2102 International Workshop, 2008. [bib]
Face Model Fitting based on Machine Learning from Multi-band Images of Facial Components (Matthias Wimmer, Christoph Mayer, Freek Stulp, Bernd Radig), In Workshop on Non-Rigid Shape Analysis and Deformable Image Alignment, held in conjunction with CVPR, 2008. [bib] [pdf]
Are You Happy with Your First Name? (Matthias Wimmer, Christoph Mayer, Martin Eggers, Bernd Radig), In Proceedings of the 3rd Workshop on Emotion and Computing: Current Research and Future Impact, 2008. [bib]
Interpreting the Dynamics of Facial Expressions in Real Time Using Model-based Techniques (Christoph Mayer, Matthias Wimmer, Freek Stulp, Zahid Riaz, Anton Roth, Martin Eggers, Bernd Radig), In Proceedings of the 3rd Workshop on Emotion and Computing: Current Research and Future Impact, 2008. [bib]
Human Capabilities on Video-based Facial Expression Recognition (Matthias Wimmer, Ursula Zucker, Bernd Radig), In Proceedings of the 2nd Workshop on Emotion and Computing -- Current Research and Future Impact (Dirk Reichardt, Paul Levi, eds.), 2007. [bib] [pdf]
Estimating Natural Activity by Fitting 3D Models via Learned Objective Functions (Matthias Wimmer, Christoph Mayer, Freek Stulp, Bernd Radig), In Workshop on Vision, Modeling, and Visualization (VMV), volume 1, 2007. [bib] [pdf]
Enabling Users to Guide the Design of Robust Model Fitting Algorithms (Matthias Wimmer, Freek Stulp, Bernd Radig), In Workshop on Interactive Computer Vision, held in conjunction with ICCV 2007, Omnipress, 2007. [bib] [pdf]

Other Publications

Multi Joint Action in CoTeSys -- Setup and Challenges (D. Brščić, M. Eggers, F. Rohrmüller, O. Kourakos, S. Sosnowski, D. Althoff, M. Lawitzky, A. Mörtl, M. Rambow, V. Koropouli, J.R. Medina Hernández, X. Zang, W. Wang, D. Wollherr, K. Kühnlenz, C. Mayer, T. Kruse, A. Kirsch, J. Blume, A. Bannat, T. Rehrl, F. Wallhoff, T. Lorenz, P. Basili, C. Lenz, T. Röder, G. Panin, W. Maier, S. Hirche, M. Buss, M. Beetz, B. Radig, A. Schubö, S. Glasauer, A. Knoll, E. Steinbach), Technical report, CoTeSys Cluster of Excellence: Technische Universität München & Ludwig-Maximilians-Universität München, 2010. [bib]