2009
Korus-Tech / SKKU-ICE Summer School on
Cognitive Robotics, Vision and Human Robot Interaction for Consumer Robots
   

Contents

  • Introduction
  • Special Thanks
  • Lectures
  • Schedule
  • Course Description
  • Short Bios of Lecturers
  • Organizer
  • Co-Organizer
  • Sponsor
  • Contact


  • Summer School Brochure (English) - Updated (PDF: 524KB)

  • Registration Information


  •    - Registration fee : General (KRW 200,000), Students (KRW 100,000), Gyeonggi Province participating companies (KRW 100,000)


       - Pre-registration : by August 5, 2009


          To help us prepare efficiently, we would appreciate your pre-registration whenever possible.

          Please fill out the attached pre-registration form and send it by email or fax,

          then transfer the fee to the bank account below to complete your registration.

          For further details regarding registration, please contact us at the numbers below.


        - Bank account : Hana Bank 460-810014-35304 (Sungkyunkwan University)



  • Inquiries


  •     - Dong-won Jung, Research Professor, Intelligent Systems Research Center, Sungkyunkwan University


        - Phone : 031-299-6488, 010-3042-2531


        - Fax : 031-299-6479


        - email : gtg462c@gmail.com



  • Summer School Registration Form - Updated (DOC: 33.5KB)


    Introduction

     First of all, I would like to welcome each of you to the 2009 Korus Tech / SKKU-ICE Summer School on Cognitive Robotics, Vision and Human Robot Interaction for Consumer Robots. This summer school is intended to offer professionals and students an opportunity to learn the emerging critical technologies in cognitive robotics, vision, and human robot interaction that are essential for the development of the next generation of robotic services delivered through consumer robots.

     

     This summer school is part of the Korus Tech program of the Ministry of Knowledge Economy in Korea, which aims at strengthening the competitiveness of technologies and advancing the industrial structure by creating synergy through University-Industry-Institute cooperation between Korea and the U.S. From this perspective, we have invited diverse lecturers from universities, industry, and institutes in both Korea and the U.S. to share the state of the art in cognitive robot research. The program comprises two parts: 1) Cognitive Robotics, Vision and Human Robot Interaction Technologies and 2) Development of Home Service Robots as Consumer Products. The contents of this program will provide current and prospective robot researchers with technical details about how future robot technologies are emerging.




    Special Thanks

     It is my pleasure to express my special thanks to those who made valuable contributions to the success of this summer school, including the Korus Tech program under the Ministry of Knowledge Economy, the SKKU-BK21 program at the Center for Advanced IT-HRD, the School of Information & Communication Engineering and the BrainCity Research Institute at SKKU, and Gyeonggi Province.




    Sukhan Lee, Dr.

    Professor and Director

    The Intelligent Systems Research Center

    Sungkyunkwan University




    Lectures

    Prof. Henrik Christensen (Georgia Tech)

    Prof. Aaron Bobick (Georgia Tech)

    Prof. Seong-Whan Lee (Korea University)

    Prof. Sukhan Lee (SKKU)

    Dr. Richard H. Shinn (Bonavision)

    Mr. HyunChul Jung (Yujin Robot)

    Prof. Angel del Pobil (SKKU)

     




    Schedule

    Part 1 : Cognitive Robotics, Vision and HRI Technologies (2009/08/10)

    Time          Content                                             Presenter
    09:00~10:30   Visual Recognition of Human Activity and Behavior   Prof. Bobick
    10:30~12:00   Visual SLAM / Human Augmented Mapping               Prof. Christensen
    12:00~13:00   Lunch
    13:00~14:30   Cognitive Visual Manipulation                       Prof. Sukhan Lee
    14:30~15:00   Break
    15:00~17:00   Gesture Recognition                                 Prof. Seong-Whan Lee



    Part 2 : Development of Home Service Robot (2009/08/11)

    Time          Content                                                         Presenter
    09:00~10:30   Construction of Cognitive Robots – Experience from CoSy         Prof. Christensen
    10:30~12:00   Cognitive Agent System for CCR (Cognitive Consumer Robot)       Dr. Shinn, Bonavision
    12:00~13:00   Lunch
    13:00~14:30   Long-term Study of User Experience with Roombas                 Prof. Christensen
    14:30~15:00   Break
    15:00~16:30   Service Development Environment from the Cognitive Perspective  Mr. H. Jung, Yujin
    16:30~17:30   Physical Interaction Through Manipulation                       Prof. Angel del Pobil



    Course Description

    ● Visual Recognition of Human Activity and Behavior


    - Time: Day 1, 09:00 – 10:30

    - Lecturer: Prof. Aaron Bobick, Georgia Tech


     Over the last decade or so, the computer vision task of action recognition - semantically labeling a sequence of video data as containing a particular action - has grown to become as fundamental as classic static object recognition. We have developed a variety of techniques for the representation and recognition of action, focusing most specifically on human behavior. Such behavior ranges from simple movements - atomic primitives requiring no contextual or sophisticated sequence knowledge to be recognized - to high-level group activities - larger-scale events that typically include multi-agent interaction with the environment and causal relationships. Fundamental questions underlying these techniques include how time is represented, what the relationship is between structural and statistical representations of behavior, and whether the recognition of high-level actions - involving, for example, intentionality - can be achieved by compiled visual routines. Action understanding straddles the division between perception and cognition, between computer vision and artificial intelligence/cognitive science. I will present examples of our work in each of these areas, covering domains ranging from the low-level recognition of aerobics moves and gestures, to both structural and statistical models of visual surveillance, to the semantic labeling of football plays.




    ● Visual SLAM / Human Augmented Mapping


    - Time: Day 1, 10:30 – 12:00

    - Lecturer: Prof. Henrik I Christensen


    Simultaneous Localization and Mapping (SLAM) is by now a well-formed problem from a theoretical point of view. A number of solutions have been presented, in particular using laser range data. More recently, doing SLAM purely based on visual information has emerged as an alternative. In this presentation the basic SLAM problem is introduced, and examples of visual SLAM are presented, including opportunities and challenges.
    Mapping the external world is essential for a robot performing mobility tasks in natural environments. Maps allow the robot to carry out basic navigation tasks and ensure that it does not get lost. They are, however, typically of limited value to robot users, as they are parametric and carry limited associated semantic information. To be useful to users, robot maps must be augmented with semantics such as room type, the location of key objects, and personalized information. Through cooperative mapping it is possible to generate human-augmented maps that contain both parametric and human semantic information. Such maps are essential to allow users to issue spoken commands such as "go to the kitchen and pick up the cup on the table". In the second part of the presentation, methods for room categorization (e.g. kitchen) and for space augmentation are presented as key tools to enable the deployment of service robots in domestic settings.
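The idea of a human-augmented map can be illustrated with a minimal sketch (the class and method names below are invented for illustration, not the lecturer's actual system): metric nodes come from the SLAM front end, while user-taught semantic labels let a spoken place name resolve to a metric navigation goal.

```python
# Minimal sketch of a human-augmented map: parametric (metric) nodes from
# SLAM, plus user-supplied semantic labels such as room names.
class AugmentedMap:
    def __init__(self):
        self.nodes = {}    # node id -> (x, y) pose estimated by SLAM
        self.labels = {}   # semantic label -> node id, taught by the user

    def add_node(self, node_id, x, y):
        self.nodes[node_id] = (x, y)

    def label(self, node_id, name):
        """Attach a human-provided semantic label, e.g. 'kitchen'."""
        self.labels[name] = node_id

    def resolve(self, name):
        """Turn a spoken place name into metric coordinates for navigation."""
        return self.nodes[self.labels[name]]

m = AugmentedMap()
m.add_node(0, 0.0, 0.0)
m.add_node(1, 4.2, 1.5)
m.label(1, "kitchen")
print(m.resolve("kitchen"))  # metric goal for "go to the kitchen"
```

A real system would of course attach richer semantics (object locations, room categories from classification) and keep the metric layer updated as SLAM refines its estimates; the sketch only shows the parametric/semantic split described above.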




    ● Cognitive Visual Manipulation


    - Time: Day 1, 13:00 – 14:30

    - Lecturer: Prof. Sukhan Lee


    The capability of a robot to carry out visually guided, autonomous manipulation of objects in an unstructured environment is a major technical challenge to be overcome for the next generation of robotic services. Forward-looking robotic services, such as errand and housemaid services for personal/professional applications as well as automated assembly and material-handling services for industrial applications, are only a few examples of what can be done with such a capability, if available. However, unlike for humans, demonstrating such a capability with a robot in our natural environment still seems beyond our grasp, in spite of over 30 years of remarkable progress in computer vision and robotic manipulation. The key to the problem lies in how robotic visual manipulation can deliver the required dependability in performance under large variations of working environment and situation. Recently, so-called "cognitive visual manipulation" has emerged as an approach to achieving dependability in robotic visual manipulation in natural, cluttered environments. This lecture aims at introducing the fundamental technologies that form the foundation for engineering cognitive visual manipulation in 3D environments. They include 1) libraries and toolkits for visual recognition and pose estimation of 3D objects, 2) cognitive visual processes such as focus of attention with recognition saliency and evidence-collection behaviors, 3) probabilistic decision-making based on evidence fusion, 4) categorization of 3D objects with generic object models, 5) real-time modeling of the 3D workspace, 6) natural-landmark-based visual servoing based on the integration of recognition and tracking, and 7) technologies for 3D depth imaging. A number of application details will be shown through experimental results to aid understanding.
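As a rough illustration of item 3, probabilistic decision-making based on evidence fusion, the following sketch (with hypothetical cue values, not the lecture's actual formulation) combines independent visual cues via Bayes' rule into a posterior over object hypotheses; the robot can then decide whether to commit to an action or to collect more evidence.

```python
# Hedged sketch: fuse independent visual cues (e.g. color, shape) into a
# posterior over object hypotheses by multiplying likelihoods (Bayes' rule
# under a cue-independence assumption) and renormalizing.
def fuse(prior, cue_likelihoods):
    posterior = dict(prior)
    for likelihood in cue_likelihoods:
        for h in posterior:
            posterior[h] *= likelihood[h]
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

prior = {"cup": 0.5, "bowl": 0.5}
color_cue = {"cup": 0.8, "bowl": 0.3}   # hypothetical likelihoods
shape_cue = {"cup": 0.7, "bowl": 0.4}
post = fuse(prior, [color_cue, shape_cue])
print(post)
# The hypothesis with the higher posterior would drive the decision; if
# neither posterior is high enough, an evidence-collection behavior
# (e.g. a new viewpoint) can be triggered instead.
```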




    ● Gesture Recognition


    - Time: Day 1, 15:00 – 17:00

    - Lecturer: Prof. Seong-Whan Lee, Korea University


    Recently, virtual interfaces have begun to emerge in human living spaces, whereas traditional interface devices were used in the past for purposes such as pointing and issuing commands. Natural interaction with a virtual interface requires automatic gesture recognition. In particular, vision-based gesture recognition provides an intuitive and natural interface without the need to wear any special devices. In this talk, vision-based human gesture recognition methods are introduced in three categories: pointing gestures, command gestures, and whole-body gestures. Various methods and recent progress in human gesture recognition are presented, followed by applications and demos to aid understanding.




    ● Construction of Cognitive Robots - Experience from CoSy


    - Time: Day 2, 09:00 – 10:30

    - Lecturer: Prof. Henrik I Christensen


    The core problem in cognitive robotics is to construct physically instantiated ... systems that can perceive, understand ... and interact with their environment, and evolve in order to achieve human-like performance in activities requiring context- (situation- and task-) specific knowledge. As such, the area poses a number of interesting problems for the design of next-generation robot systems. To design such systems, one must carefully consider problems across systems design, spatial insertion, planning and error recovery, perception, learning, and human-robot interaction.
    Over the last 5 years, a project funded by the European Commission - Cognitive Systems for Cognitive Assistants (CoSy) - has studied the problem of cognitive robot design. The research has included both theoretical work across the areas mentioned above and empirical studies using two demonstrator scenarios: "the Explorer" (spatial mapping/exploration) and "the PlayMate" (object manipulation/grasping and spatial reasoning).
    The CoSy project made significant progress on the general problem of designing cognitive robot systems. In this presentation, the basic problems addressed, key theoretical results, and major demonstrations will be presented. Some of the major insights are summarized, and issues for future research are also discussed.




    ● Cognitive Agent System for CCR (Cognitive Consumer Robot)


    - Time: Day 2, 10:30 – 12:00

    - Lecturer: Dr. Richard H. Shinn, Bonavision


    An intelligent robot should be able to interact with humans in a way similar to human cognition. An intelligent agent is an autonomous entity that observes and acts upon an environment and directs its activity towards achieving goals. It may also learn or use knowledge to achieve its goals. A cognitive agent is an intelligent agent that reasons like a human being.
    We will first review a variety of cognitive agent architectures and systems currently available. From a cognitive perspective, we will then identify the requirements for our cognitive consumer robot. We will finally introduce a cognitive agent architecture for our robot and justify why it is a good architecture.




    ● Long-term Study of User Experience with Roombas


    - Time: Day 2, 13:00 – 14:30

    - Lecturer: Prof. Henrik I Christensen


    Domestic services have long been a major target for researchers in robotics. One area that has been successful is domestic vacuum cleaning: there are quite a few products on the market, and more than 4 million units have been sold. An obvious question regarding the use of these systems is: over longer time periods, how do people adopt these systems? What are the key factors in successful adoption, and what are some of the key factors that lead to failure of adoption?
    To better understand these issues, 30 Roomba systems were deployed over a period of 6 months in a number of US households. The users were distributed across marital status, age, income, children, pets, etc., to ensure broad coverage of possible users. During the study, 6 visits to each home took place to document the full process. The study covered people's awareness of technology, their view of robotics, changes in cleaning routines, and related topics. It also documented their bonding with the appliance, to understand whether they named the unit, decorated or personalized it, and so on.
    In this presentation, we will outline the study, document the variability across users, and discuss some of the main lessons that can be learned from the study in terms of physical design, social embedding, behaviour, training, and interfaces.




    ● Service Development Environment from the Cognitive Perspective


    - Time: Day 2, 15:00 – 16:30

    - Lecturer: Mr. HyunChul Jung, Senior Researcher, Yujin Robot


    Robot Contents Organizing Software (ROCOS) is a robot contents development environment for contents developers and users, in addition to robot developers. It features straightforward development of robot services that are applicable to heterogeneous robots. To this end, 1) we adopt a drag-and-drop modeling approach instead of programming languages or script-based procedures, 2) we introduce an intuitive event-based method, and 3) we provide specialized methods for multimedia robot contents. In addition, ROCOS utilizes XML technologies to improve universal applicability and compatibility between heterogeneous robots. It also offers reusability of robot contents and backward compatibility across heterogeneous robots by utilizing 1) XML-based robot interface configuration information, 2) XML-based contents, and 3) the XML-RPC communication framework. The target environment for contents developers and users is an important factor in commercializing the market, while universal applicability and compatibility across heterogeneous robots are crucial for the maintenance of robot contents. Consequently, in a practical sense, ROCOS is an integrated development environment (IDE) designed with the commercialization of robot contents in mind.
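The benefit of XML-described contents can be sketched as follows. The element names in this fragment are invented for illustration and are not the actual ROCOS schema; the point is that an event-driven contents description can be interpreted against any robot exposing the named interfaces, which is what gives reuse across heterogeneous robots.

```python
# Illustrative only: a toy event-based robot-contents description in XML,
# interpreted with the standard library. A robot runtime would dispatch
# each <action> to the robot's own implementation of the named interface.
import xml.etree.ElementTree as ET

CONTENT = """
<content name="greeting">
  <on event="user_detected">
    <action interface="speech" command="say" text="Hello!"/>
    <action interface="display" command="show" media="smile.png"/>
  </on>
</content>
"""

def actions_for(event_name, xml_text):
    """Return the (interface, command) pairs bound to a given event."""
    root = ET.fromstring(xml_text)
    return [(a.get("interface"), a.get("command"))
            for on in root.findall("on") if on.get("event") == event_name
            for a in on.findall("action")]

print(actions_for("user_detected", CONTENT))
```

Because the contents file names abstract interfaces rather than robot-specific calls, the same description can run on any robot whose XML interface configuration maps "speech" and "display" onto concrete devices.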




    ● Physical Interaction Through Manipulation


    - Time: Day 2, 16:30 – 17:30

    - Lecturer: Prof. Angel del Pobil


    Physical interaction through manipulation is one of the most important challenges in robotics. A robot with advanced manipulation capabilities would have a great impact on society, especially in consumer applications. Three important properties should be present in any such robot: versatility, autonomy, and dependability. Versatility is the capability to adapt to different situations rather than being limited to a particular task. Autonomy concerns the level of independence of robot operation: an autonomous robot should be able to perform its actions without human intervention. Finally, dependability involves concepts such as reliability and safety, and refers to the capability of successfully completing an action even under significant modelling errors or inaccurate sensor information. These concepts are introduced in this lecture along with examples of working systems.




    Short Bios of Lecturers

    ● Prof. Aaron Bobick


    Professor and Chair, School of Interactive Computing

    GVU Center

    Robotics and Intelligent Machines Center

    Office of the Dean

    Georgia Institute of Technology, Atlanta, GA


    Dr. Bobick is the chair of the School of Interactive Computing at Georgia Tech. The School is a leader in computing research in all areas that involve computers interacting with the outside world, ranging from human-computer interaction to robotics. Dr. Bobick's research is in computer vision, where he is one of the pioneers of activity and behavior recognition from video imagery. He has published numerous papers addressing many levels of the problem, from basic human motion and gesture recognition, to constructing multi-representational systems for an autonomous vehicle, to the representation and recognition of high-level human activities. Dr. Bobick has also explored the development of interactive environments in which advanced sensing modalities provide input based upon the users' actions and, hopefully, intentions. He holds Bachelor of Science degrees in Computer Science and Mathematics ('81) and a Ph.D. in Cognitive Science ('87), all from the Massachusetts Institute of Technology (MIT).




    ● Prof. Henrik Christensen


    Director, Robotics and Intelligent Machines Center

    KUKA Chair of Robotics, Distinguished Professor

    School of Interactive Computing

    Robotics and Intelligent Machines Center,

    Georgia Institute of Technology, Atlanta, GA


    Henrik I Christensen is the director of Georgia Tech's new Robotics Program and the KUKA Chair of Robotics. Christensen's main research interests include human-centered robotics, sensory/data fusion, and systems integration. A fundamental assumption remains that all work must be evaluated; therefore a solid theoretical model, a credible scenario, and thorough testing and verification are required. Christensen has contributed over 250 publications within robotics, vision, and artificial intelligence, including ten books. He has served on the boards of the most prestigious robotics journals and is currently serving on the editorial boards of six journals across robotics, vision, and AI. Christensen was the founding chairman of the EU network of excellence in robotics, EURON (1999-2006), and is a science advisor to government agencies on three continents, as well as to several international companies. Christensen has held positions at Oak Ridge National Laboratory (1988), Aalborg University (1991-1996), the University of Pennsylvania (1996), and the Swedish Royal Institute of Technology (1996-2006). He received his certificate of apprenticeship in mechanical engineering in 1981, and his M.Sc. and Ph.D. degrees in electrical engineering from Aalborg University in 1987 and 1990, respectively. In his spare time, Christensen enjoys gourmet food, fine wine, and photography.




    ● Mr. HyunChul Jung


    Staff Engineer, Yujin Robot


    Mr. Jung's main area of work is development environments for robot services: environments in which robot services applicable to heterogeneous robots can be built by contents developers and users, in addition to robot developers. Without contents, a robot is an empty shell. He hopes that robots will become a new media delivery system usable even by non-expert users.




    ● Prof. Sukhan Lee


    Director of the Intelligent Systems Research Center

    Professor, School of Information & Communication Engineering

    Sungkyunkwan University, Suwon, Korea


    Sukhan Lee received his B.S. and M.S. degrees in Electrical Engineering from Seoul National University, Seoul, Korea, in 1972 and 1974, respectively. He received his Ph.D. degree in Electrical Engineering from Purdue University, West Lafayette, in 1982.
    His research areas are intelligent systems and MEMS/NEMS. His research in intelligent systems includes real-time vision-based recognition, modeling, and understanding of 3D environments; visual servoing; robotic motion planning; reconfigurable and embedded systems-on-chip for robotic software and hardware engines; 3D cameras; robot manipulation and navigation; and intelligent vehicles. His research in MEMS/NEMS includes devices for jetting micro- and nano-droplets, electro-wetting, nano fountain pens, and micro- and nano-transducers.




    ● Prof. Seong-Whan Lee


    Hyundai-Kia Motor Chair Professor

    Department of Computer Science and Engineering,

    Center for Artificial Vision Research,

    Korea University, Seoul, Korea


    Seong-Whan Lee received his B.S. degree in Computer Science and Statistics from Seoul National University, Seoul, Korea, in 1984, and his M.S. and Ph.D. degrees in Computer Science from the Korea Advanced Institute of Science and Technology in 1986 and 1989, respectively. From February 1989 to February 1995, he was an Assistant Professor in the Department of Computer Science at Chungbuk National University, Cheongju, Korea. In March 1995, he joined the faculty of the Division of Computer and Communications Engineering at Korea University, Seoul, Korea, where he is now a full professor. Prof. Lee is also the director of the Center for Artificial Vision Research (CAVR). In 2001, he was a visiting professor at the Department of Brain and Cognitive Sciences, MIT. In 2009, he was appointed Hyundai-Kia Motor Chair Professor. He has more than 250 publications on computer vision, pattern recognition, and brain engineering in international journals and conference proceedings, and has authored 10 books.




    ● Dr. Richard H. Shinn


    Chairman/CEO, Bonavision, Inc.


    CEO/Founder, CROSSCERT: Korea Electronic Certification Authority, 1999-Present
    President, Korea Intelligent Information Systems Society, 2000-2001
    CEO, Dongbu Informative System, Inc., 1996-1997
    Managing Director, Tongyang SHL (currently Tongyang Systems), 1993-1996
    Senior Member of Technical Staff, GTE Labs, Boston, MA, USA, 1989-1991
    Engineer, Bell Telephone Mfg. Co. (currently Alcatel), 1978-1979




    ● Prof. Angel del Pobil


    WCU Professor, Department of Interaction Science, Sungkyunkwan University


    Angel del Pobil is WCU Professor at the Department of Interaction Science (Sungkyunkwan University), Professor of Computer Science and Artificial Intelligence at Jaume I University (Spain), and founding director of the Robotic Intelligence Laboratory. He has been involved in robotics research for the last twenty years. His research interests include motion planning, physical interaction, humanoid robots, service robotics, mobile manipulators, internet robots, collective robotics, sensorimotor transformations, visual servoing, self-organization in robot perception, learning for sensor-based manipulation, and the interplay between neurobiology and robotics. He serves as an expert for the European Commission and the NSF. He has supervised 13 Ph.D. theses and has been Principal Investigator of 25 research projects, with three ongoing EU-funded projects at his lab: GUARDIANS, EYESHOTS and GRASP. He is the author of over 160 publications.




    Organizer

    Intelligent Systems Research Center, SKKU




    Co-Organizer

    Robotics & Intelligent Machines @ Georgia Tech

    Yujin Robot, Inc.

    Bona-Vision, Inc.




    Sponsor

    Korus Tech Program, Ministry of Knowledge Economy

    BK21-SKKU Center for Advanced IT HRD

    School of Information & Communication Engineering, SKKU

    Gyeonggi Province, Office of Investment Committee

    SKKU BrainCity Research Institute




    Contact

    Dong-won Jung, Ph.D., Research Professor
    Intelligent Systems Research Center
    Sungkyunkwan University
    300 Cheoncheon-dong, Jangan-gu, Suwon, Gyeonggi-do 440-746, Korea
    Phone : +82-31-299-6488
    Fax : +82-31-299-6466
    E-mail : gtg462c@gmail.com