Updated on 2023/09/26


 
NAGAO, Katashi
 
Organization
Graduate School of Informatics   Department of Intelligent Systems 3   Professor
Graduate School
Graduate School of Information Science
Graduate School of Informatics
Undergraduate School
School of Engineering
School of Informatics Department of Computer Science
Title
Professor
Contact information
Email address

Degree 1

  1. Doctor of Engineering ( 1994.4   Tokyo Institute of Technology ) 

Research Interests 5

  1. Intelligence Informatics

  2. Intelligent Robotics

  3. Augmented Reality

  4. Virtual Reality

  5. Data Analytics

Research Areas 4

  1. Others / Others  / Perception Information Processing/Intelligent Robotics

  2. Others / Others  / Intelligent Informatics

  3. Others / Others  / Media Informatics/Database

  4. Informatics / Intelligent informatics  / Creative Activity Support

Current Research Project and SDGs 3

  1. Personal Intelligent Vehicles

  2. Intelligent Content Technology

  3. Natural Language Processing

Research History 6

  1. Nagoya University   Graduate School of Informatics Department of Intelligent Systems 3   Professor

    2017.4

  2. Senior Researcher and Group Leader on Exploratory Software Technology Group at IBM Research, Tokyo Research Laboratory

    2000.7 - 2001.10

      More details

    Country:Japan

  3. Research Specialist at IBM Research, Tokyo Research Laboratory

    1999.7 - 2000.6

      More details

    Country:Japan

  4. Visiting Scientist at University of Illinois at Urbana-Champaign

    1996.7 - 1997.8

      More details

    Country:United States

  5. Researcher at Sony Computer Science Laboratories

    1991.4 - 1999.6

      More details

    Country:Japan

  6. Researcher at IBM Research, Tokyo Research Laboratory

    1987.4 - 1991.3

      More details

    Country:Japan


Education 2

  1. Tokyo Institute of Technology   Graduate School, Division of Integrated Science and Engineering   Systems Science

    - 1987

      More details

    Country: Japan

  2. Tokyo Institute of Technology   Faculty of Engineering   Industrial Engineering

    - 1985

      More details

    Country: Japan

Professional Memberships 4

  1. Japanese Society of Artificial Intelligence   Committee Member

    2010.4 - 2012.3

  2. Information Processing Society of Japan   Chief of Research Group of Intelligent Computing Systems

    2009.4 - 2012.3

  3. Information Processing Society of Japan

  4. Japanese Society of Artificial Intelligence

Awards 8

  1. Best Video Presentation Award

    2020.8   IEEE   Cyber Trainground: Building-Scale Virtual Reality for Immersive Presentation Training

    Katashi Nagao, Yuto Yokoyama

     More details

    Country:United States

    “Trainground” is a word we coined inspired by the word “playground.” It means a place for humans to perform various kinds of training and to also use machine learning to scientifically search for points they need to improve. The cyber trainground is an extension of the trainground to cyberspace. It enables training in virtual spaces that captures and reconstructs various types of information appearing in the real world, such as atmosphere and presence, to achieve an improvement effect that is the same or better than in the real world. In this paper, we build an immersive training space using building-scale VR, a technology that makes a virtual space based on an entire building existing in the real world. The space is used for presentations, allowing students to self-train. The results of a presentation are automatically evaluated by using machine learning or the like and fed back to the user. In this space, users can meet their past selves (more accurately, their avatars), so they can objectively observe their presentations and recognize weak points. We developed a mechanism for recording and reproducing activities in virtual space in detail and a mechanism for applying machine learning to activity records. With these mechanisms, a system for recording, reproducing, and automatically evaluating presentations was developed.

  2. IBM Best Paper Award at HICSS2018

    2018.1   IBM Corporation   Meeting Analytics: Creative Activity Support Based on Knowledge Discovery from Discussions

    Katashi Nagao

     More details

    Award type:Award from international society, conference, symposium, etc.  Country:United States

  3. 2017 IBM Faculty Award

    2017.7   IBM Corporation   Meeting Analytics: Creative Activity Support Based on Knowledge Discovery from Discussions

    Katashi Nagao

     More details

    Country:United States

  4. ACM UIST 2013 Lasting Impact Award

    2013.9   Association for Computing Machinery (ACM)  

     More details

    Country:United Kingdom

  5. JSAI Annual Conference Best Paper Award

    2007   Japanese Society of Artificial Intelligence

     More details

    Country:Japan

  6. JSAI Annual Conference Best Paper Award

    2006   Japanese Society of Artificial Intelligence

     More details

    Country:Japan

  7. IPSJ Best Author Award

    1997   Information Processing Society of Japan

     More details

    Country:Japan

  8. JSAI Annual Conference Best Paper Award

    1991   Japanese Society of Artificial Intelligence

     More details

    Country:Japan


 

Papers 35

  1. VR Dance Training System Capable of Human Motion Tracking and Automatic Dance Evaluation Reviewed

    Kazuhiro Esaki, Katashi Nagao

    PRESENCE: Virtual and Augmented Reality   Vol. 29   page: 1 - 23   2023

     More details

    Authorship:Last author, Corresponding author   Language:English   Publishing type:Research paper (scientific journal)   Publisher:The MIT Press  

    In this paper, a method for 3D human body tracking using multiple cameras and an automatic evaluation method using machine learning are developed to construct a virtual reality (VR) dance self-training system for fast-moving hip-hop dance. Dancers’ movement data are input as time-series data of temporal changes in joint point positions and rotations and are categorized into instructional items that are frequently pointed out by coaches as areas for improvement in actual dance lessons. For automatic dance evaluation, contrastive learning is used to obtain better expression vectors with less data. As a result, the accuracy when using contrastive learning was 0.79, a significant improvement from 0.65 without contrastive learning. In addition, since each dance is modeled by a coach, the accuracy was slightly improved to 0.84 by using, as input, the difference between the expression vectors of the model’s and the user’s movement data. Eight subjects used the VR dance training system, and results of a questionnaire survey confirmed that the system is effective.

    DOI: 10.1162/PRES_a_00383

  2. VR Presentation Training System Using Machine Learning Techniques for Automatic Evaluation Invited Reviewed International journal

    Yuto Yokoyama and Katashi Nagao

    International Journal of Virtual and Augmented Reality   Vol. 5 ( 1 )   2022.1

     More details

    Authorship:Last author, Corresponding author   Language:English   Publishing type:Research paper (scientific journal)   Publisher:IGI Global  

    In this paper, we build an immersive training space using building-scale VR, a technology that makes a virtual space based on an entire building existing in the real world. The space is used for presentations, allowing students to self-train. The results of a presentation are automatically evaluated by using machine learning or the like and fed back to the user. In this space, users can meet their past selves (more accurately, their avatars), so they can objectively observe their presentations and recognize weak points. We developed a mechanism for recording and reproducing activities in virtual space in detail and a mechanism for applying machine learning to activity records. With these mechanisms, a system for recording, reproducing, and automatically evaluating presentations was developed.

    DOI: 10.4018/IJVAR.290044

  3. Impact Sound Generation for Audiovisual Interaction with Real-World Movable Objects in Building-Scale Virtual Reality Reviewed International journal

    Nagao Katashi, Kumon Kaho, Hattori Kodai

    APPLIED SCIENCES-BASEL   Vol. 11 ( 16 )   2021.8

     More details

    Authorship:Lead author   Language:English   Publishing type:Research paper (scientific journal)   Publisher:Applied Sciences (Switzerland)  

    In building-scale VR, where the entire interior of a large-scale building is a virtual space that users can walk around in, it is very important to handle movable objects that actually exist in the real world and not in the virtual space. We propose a mechanism to dynamically detect such objects (that are not embedded in the virtual space) in advance, and then generate a sound when one is hit with a virtual stick. Moreover, in a large indoor virtual environment, there may be multiple users at the same time, and their presence may be perceived by hearing, as well as by sight, e.g., by hearing sounds such as footsteps. We, therefore, use a GAN deep learning generation system to generate the impact sound from any object. First, in order to visually display a real-world object in virtual space, its 3D data is generated using an RGB-D camera and saved, along with its position information. At the same time, we take the image of the object and break it down into parts, estimate its material, generate the sound, and associate the sound with that part. When a VR user hits the object virtually (e.g., hits it with a virtual stick), a sound is generated. We demonstrate that users can judge the material from the sound, thus confirming the effectiveness of the proposed method.

    DOI: 10.3390/app11167546

    Web of Science

    Scopus

  4. VR Training System to Help Improve Photography Skills

    Hiroki Kobayashi, Katashi Nagao

    APPLIED SCIENCES-BASEL   Vol. 13 ( 7817 )   2023.7

     More details

    Authorship:Last author, Corresponding author   Language:English   Publishing type:Research paper (scientific journal)   Publisher:Applied Sciences (Switzerland)  

    People aspiring to enhance their photography skills face multiple challenges, such as finding subjects and understanding camera-specific parameters. To address this, we present the VR Photo Training System. This system allows users to practice photography in a virtual space and provides feedback on user-taken photos using machine-learning models. These models, trained on datasets from the virtual environment, evaluate the aesthetics, composition, and color of photos. The system also includes a feature offering composition advice, which further aids in skill development. The evaluation and recommendation functions of our system have shown sufficient accuracy, proving its effectiveness for photography training.

    DOI: 10.3390/app13137817

    Web of Science

    Scopus

  5. Autonomous Stair Ascending and Descending by Quadruped Wheelchairs

    Atsuki Akamisaka, Katashi Nagao

    2023 20TH INTERNATIONAL CONFERENCE ON UBIQUITOUS ROBOTS, UR     page: 295 - 301   2023.6

     More details

    Authorship:Last author, Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)  

    Wheelchairs are an important means of transportation for people with disabilities. However, because wheelchairs move on wheels, it is impossible for wheelchair users to move along paths such as stairs that cannot be covered by wheels, thus imposing significant limitations on their mobility. Although some wheelchairs are now available with crawler mechanisms, they cannot handle spiral staircases, in which the length of the tread differs between the inside and outside of the staircase. This paper proposes a quadruped wheelchair that can be used on all types of stairs. The proposed quadruped wheelchair is a fusion of a quadruped robot and a wheelchair, and has the potential to achieve high mobility on both plain and uneven terrain. On the other hand, it is very complicated to carry a person on the wheelchair. In this paper, we show that a quadruped wheelchair with a person on board can ascend and descend stairs using a simulator and reinforcement learning, which is a milestone toward the realization of a quadruped wheelchair.

    DOI: 10.1109/UR57808.2023.10202377

    Web of Science

    Scopus

  6. Virtual Reality Campuses as New Educational Metaverses

    NAGAO Katashi

    IEICE Transactions on Information and Systems   Vol. E106.D ( 2 ) page: 93 - 100   2023.2

     More details

    Language:English   Publisher:The Institute of Electronics, Information and Communication Engineers  

    This paper focuses on the potential value and future prospects of using virtual reality (VR) technology in online education. In detailing online education and the latest VR technology, we focus on metaverse construction and artificial intelligence (AI) for educational VR use. In particular, we describe a virtual university campus in which on-demand VR lectures are conducted in virtual lecture halls, automated evaluations of student learning and training using machine learning, and the linking of multiple digital campuses.

    DOI: 10.1587/transinf.2022eti0001

    Web of Science

    Scopus

    CiNii Research

  7. Effects of a new speech support application on intensive speech therapy and changes in functional brain connectivity in patients with post-stroke aphasia

    Katsuno Yuta, Ueki Yoshino, Ito Keiichi, Murakami Satona, Aoyama Kiminori, Oishi Naoya, Kan Hirohito, Matsukawa Noriyuki, Nagao Katashi, Tatsumi Hiroshi

    FRONTIERS IN HUMAN NEUROSCIENCE   Vol. 16   page: 870733   2022.9

     More details

    Language:English   Publisher:Frontiers in Human Neuroscience  

    Aphasia is a language disorder that occurs after a stroke and impairs listening, speaking, reading, writing, and calculation skills. The number of patients with post-stroke aphasia in Japan is increasing due to population aging and advances in medical treatment. Opportunities for adequate speech therapy in chronic stroke are limited due to time constraints. Recent studies have reported that intensive speech therapy for a short period of time, or continuous speech therapy using high-tech equipment including speech applications (apps), can improve aphasia even in the chronic stage. However, the underlying mechanism for improving language function and the effect on other cognitive functions remain unclear. In the present study, we investigated whether intensive speech therapy using a newly developed speech support app could improve aphasia and other cognitive functions in patients with chronic stroke. Furthermore, we examined whether it can alter the brain network related to language and other cortical areas. Thus, we conducted a prospective, single-comparison study to examine the effects of a new speech support app on language and cognitive functions, and used resting-state functional MRI (rs-fMRI) ROI-to-ROI analysis to determine changes in the related brain networks. Two patients with chronic stroke participated in this study. They used the independent speech therapy system to perform eight sets of 20 randomly presented words per session (taking approximately 20 min) for 8 consecutive weeks. Their language and higher cognitive functions, including attention, as well as rs-fMRI, were evaluated before and after the rehabilitation intervention using the speech support app. Both patients showed improved pronunciation, daily conversational situations, and attention. The rs-fMRI analysis showed increased functional connectivity of brain regions associated with language- and attention-related areas. Our results show that intensive speech therapy using this speech support app can improve language and attention functions even in the chronic stage of stroke, and that it may be a useful tool for patients with aphasia. In the future, we will conduct longitudinal studies with larger numbers of patients, which we hope will continue the trends seen in the current study and provide even stronger evidence for the usefulness of this new speech support app.

    DOI: 10.3389/fnhum.2022.870733

    Web of Science

    Scopus

    PubMed

  8. Recognition of Students' Mental States in Discussion Based on Multimodal Data and its Application to Educational Support Reviewed International journal

    Shimeng Peng, Katashi Nagao

    IEEE Access   Vol. 9   page: 1 - 16   2021.2

     More details

    Authorship:Last author, Corresponding author   Language:English   Publishing type:Research paper (scientific journal)   Publisher:IEEE  

    Students experience a complex mixture of mental states during discussion, including concentration, confusion, frustration, and boredom, which have been widely acknowledged as crucial components for revealing a student's learning state. In this study, we propose using multimodal data to design an intelligent monitoring agent that can assist teachers in effectively monitoring the multiple mental states of students during discussion. We first developed an advanced multi-sensor-based system and applied it in a real university research lab to collect a multimodal “in-the-wild” teacher-student conversation dataset. We then derived a set of proxy features from facial, heart rate, and acoustic modalities and used them to train several supervised learning classifiers with different multimodal fusion approaches (single-channel-level, feature-level, and decision-level fusion) to recognize students' multiple mental states in conversations. We explored how to design multimodal analytics to augment the ability to recognize different mental states and found that fusing heart rate and acoustic modalities yields better recognition of the states of concentration (AUC = 0.842) and confusion (AUC = 0.695), while fusing all three modalities yields the best performance in recognizing the states of frustration (AUC = 0.737) and boredom (AUC = 0.810). Our results also point to the possibility of leveraging the replacement capabilities between different modalities, providing human teachers with solutions for addressing the challenges of monitoring students in different real-world educational environments.

    DOI: 10.1109/ACCESS.2021.3054176
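    The fusion strategies compared in this paper can be illustrated with a toy sketch (illustrative only, not the study's implementation): feature-level fusion concatenates per-modality feature vectors before a single classifier sees them, while decision-level fusion combines the probability outputs of separate per-modality classifiers.

```python
# Toy illustration of two multimodal fusion schemes (hypothetical data):
# feature-level fusion concatenates per-modality feature vectors before
# classification; decision-level fusion averages per-modality probabilities.

def feature_level_fusion(modalities):
    """Concatenate feature vectors from all modalities into one vector."""
    fused = []
    for features in modalities:
        fused.extend(features)
    return fused

def decision_level_fusion(probabilities):
    """Average the class-probability outputs of per-modality classifiers."""
    n = len(probabilities)
    n_classes = len(probabilities[0])
    return [sum(p[c] for p in probabilities) / n for c in range(n_classes)]

# Hypothetical inputs: heart-rate and acoustic feature vectors, and the
# probability outputs of two single-modality classifiers.
hr_features, ac_features = [0.62, 0.10], [0.30, 0.81, 0.05]
hr_probs, ac_probs = [0.7, 0.3], [0.5, 0.5]

print(feature_level_fusion([hr_features, ac_features]))  # one concatenated vector
print(decision_level_fusion([hr_probs, ac_probs]))       # averaged class probabilities
```

    With real data, the concatenated vector or the combined probabilities would feed supervised classifiers like those evaluated above.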

  9. Using Presentation Slides and Adjacent Utterances for Post-editing of Speech Recognition Results for Meeting Recordings Reviewed International journal

    Kamiya K., Kawase T., Higashinaka R., Nagao K.

    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)   Vol. 12848 LNAI   page: 331 - 340   2021

     More details

    Authorship:Last author   Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)  

    In recent years, the use of automatic speech recognition (ASR) systems in meetings has been increasing, such as for minutes generation and speaker diarization. The problem is that ASR systems often misrecognize words because there is domain-specific content in meetings. In this paper, we propose a novel method for automatically post-editing ASR results by using presentation slides that meeting participants use and utterances adjacent to a target utterance. We focus on automatic post-editing rather than domain adaptation because of the ease of incorporating external information, and the method can be used for arbitrary speech recognition engines. In experiments, we found that our method can significantly improve the recognition accuracy of domain-specific words (proper nouns). We also found an improvement in the word error rate (WER).

    DOI: 10.1007/978-3-030-83527-9_28

    Scopus

  10. Recognition of Students' Multiple Mental States in Conversation Based on Multimodal Cues

    Peng Shimeng, Ohira Shigeki, Nagao Katashi

    COMPUTER SUPPORTED EDUCATION (CSEDU 2020)   Vol. 1473   page: 468 - 479   2021

     More details

    Language:English   Publisher:Communications in Computer and Information Science  

    Learning activities, especially face-to-face conversational coaching, may lead students to experience a set of learning-centered mental states including concentration, confusion, frustration, and boredom. These mental states have been widely used as vital proxies for inferring learning processes and are closely linked with learning outcomes. Recognizing students' learning-centered mental states, and in particular detecting negative states such as confusion and frustration in teacher-student conversation, could help teachers monitor students' learning situations and direct personalized, adaptive coaching resources toward maximizing learning outcomes. Most research has focused on analyzing students' mental states from a single modality while students complete pre-designed tasks in a computer-related environment. How to effectively measure students' multiple mental states from various aspects while they interact with a human teacher in coach-led conversations remains an open question. To achieve this goal, we developed an advanced multi-sensor-based system to record multimodal conversational data from student-teacher conversations in a real university lab. We then derived a series of interpretable features from multiple perspectives, including facial and physiological (heart rate) signals, to characterize students' multiple mental states. A set of supervised classifiers was built on those features with different modality-fusion methods to recognize the multiple mental states of students. Our results provide experimental evidence of the strong predictive ability of the proposed features and of the possibility of using multimodal data to recognize students' multiple mental states in “in-the-wild” student-teacher conversation.

    DOI: 10.1007/978-3-030-86439-2_24

    Web of Science

    Scopus

  11. Using Presentation Slides and Adjacent Utterances for Post-editing of Speech Recognition Results for Meeting Recordings

    Katashi Nagao

    The 24th International Conference on Text, Speech and Dialogue (TSD2021)   Vol. 12848   page: 331 - 340   2021


  12. Automatic Generation of Multidestination Routes for Autonomous Wheelchairs Reviewed International journal

    Yusuke Mori, Katashi Nagao

    Journal of Robotics and Mechatronics   Vol. 32 ( 6 ) page: 1121 - 1136   2020.12

     More details

    Authorship:Last author, Corresponding author   Language:English   Publishing type:Research paper (scientific journal)   Publisher:Fuji Technology Press Ltd.  

    To solve the problem of autonomously navigating multiple destinations, which is one of the tasks in the Tsukuba Challenge 2019, this paper proposes a method for automatically generating the optimal travel route based on costs associated with routes. In the proposed method, the route information is generated by playing back the acquired driving data to perform self-localization, and the self-localization log is stored. In addition, the image group of road surfaces is acquired from the driving data. The costs of routes are generated based on texture analysis of the road surface image group and analysis of the self-localization log. The cost-added route information is generated by combining the costs calculated by the two methods and assigning the combined costs to the route. The minimum-cost multidestination route is generated by conducting a route search using the cost-added route information. We then evaluated the proposed method by comparing it with a method that generates the route using only the distance cost. The results confirmed that the proposed method generates travel routes that account for safety when the autonomous wheelchair is being driven.

    DOI: 10.20965/jrm.2020.p1121
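    As a rough sketch of the kind of cost-based route search described above (the graph, cost weights, and function names here are hypothetical, not the paper's implementation), each edge cost can combine distance with penalties derived from, say, road-surface texture and self-localization stability; a shortest-path search over the cost-added graph then prefers safer routes. This shows only the pairwise-cost core that a multidestination search would build on.

```python
import heapq

# Hypothetical combined edge cost: distance scaled by safety penalties
# (e.g., rough road surface, unstable self-localization along that edge).
def combined_cost(distance, surface_penalty, localization_penalty):
    return distance * (1.0 + surface_penalty + localization_penalty)

def min_cost_route(graph, start, goal):
    """Dijkstra search over a {node: [(neighbor, cost), ...]} adjacency dict."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge_cost in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + edge_cost, neighbor, path + [neighbor]))
    return float("inf"), []

# Hypothetical graph: the direct edge A-C is shorter but crosses rough ground,
# so the cost-aware search prefers the safer detour via B.
graph = {
    "A": [("B", combined_cost(5, 0.0, 0.0)), ("C", combined_cost(4, 1.5, 0.5))],
    "B": [("C", combined_cost(2, 0.0, 0.0))],
}
print(min_cost_route(graph, "A", "C"))  # (7.0, ['A', 'B', 'C'])
```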

  13. Automatic Reconstruction of Building-Scale Indoor 3D Environment with a Deep-Reinforcement-Learning-Based Mobile Robot Reviewed

    Menglong Yang, Katashi Nagao

    International Journal of Robotics and Automation Technology   Vol. 6 ( 1 ) page: 11-23   2019.6

     More details

    Language:English   Publishing type:Research paper (scientific journal)  

    The aim of this paper is to digitize the environments in which humans live, at low cost, and to reconstruct highly accurate three-dimensional environments based on those in the real world. This three-dimensional content can be used, for example, in virtual reality environments and as three-dimensional maps for automatic driving systems.
    In general, however, a three-dimensional environment must be carefully reconstructed by manually moving the sensors used to first scan the real environment on which the three-dimensional one is based. This is done so that every corner of an entire area can be measured, but time and costs increase as the area expands. Therefore, a system that creates three-dimensional content based on real-world large-scale buildings at low cost is proposed. It automatically scans the indoors with a mobile robot that uses low-cost sensors and generates 3D point clouds.
    When the robot reaches an appropriate measurement position, it collects the three-dimensional data of shapes observable from that position by using a 3D sensor and a 360-degree panoramic camera. The problem of determining an appropriate measurement position is called the “next best view problem,” and it is difficult to solve in a complicated indoor environment. To deal with this problem, a deep reinforcement learning method is employed. It combines reinforcement learning, with which an autonomous agent learns strategies for selecting behavior, and deep learning done using a neural network. As a result, 3D point cloud data can be generated with better quality than with the conventional rule-based approach.

    DOI: 10.15377/2409-9694.2019.06.2

  14. Prediction of Students' Answer Relevance in Discussion Based on their Heart Rate Data Reviewed

    Shimeng Peng, Shigeki Ohira, Katashi Nagao

    International Journal of Innovation and Research in Educational Sciences   Vol. 6 ( 3 ) page: 414-424   2019.5

     More details

    Language:English   Publishing type:Research paper (scientific journal)  

    Whether a discussion is executed effectively depends on the completion level of the question-and-answer segments (Q&A segments) generated during the discussion. The relevance of answers could be used as a clue for evaluating the completion degree of Q&A segments. In this study, we argue that discussion participants' heart rate (HR) and its variability (HRV), which have recently received increased attention as crucial indicators in cognitive task performance evaluation, can be used to predict participants' answer relevance in Q&A segments of discussions. To validate our argument, we propose an intelligent system that acquires and visualizes participants' HR data in real time with the help of a non-invasive device, e.g., an Apple Watch. We also developed a web-based human-scoring method for evaluating the answer relevance of Q&A segments and question difficulty levels. A total of 17 real lab-seminar-style discussion experiments were conducted, during which the Q&A segments and the HR of participants were recorded using our proposed system. We then experimented with three machine-learning classifiers, i.e., logistic regression, support vector machine, and random forest, to predict the answer relevance of Q&A segments using the extracted HR and HRV features. Area Under the ROC Curve (AUC) was used to evaluate classifier accuracy with leave-one-student-out cross-validation. We achieved an AUC = 0.76 for the logistic regression classifier, an AUC = 0.77 for the SVM classifier, and an AUC = 0.79 for the random forest classifier. These results demonstrate the possibility of using participants' HR data to predict the relevance of their answer statements in Q&A segments of discussions, providing evidence of the potential utility of the presented tools in scaling up this type of analysis to a large number of subjects and in implementing these tools to evaluate and improve discussion outcomes in higher-education environments.
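    The leave-one-student-out protocol used above can be sketched in a few lines (illustrative sample data, not the study's code): each fold withholds every sample from one student, so a classifier is never evaluated on a student it saw during training.

```python
# Minimal sketch of leave-one-student-out cross-validation: one fold per
# student, with all of that student's samples held out as the test set.

def leave_one_student_out(samples):
    """Yield (held_out, train, test) splits from (student_id, features, label) samples."""
    students = sorted({student for student, _, _ in samples})
    for held_out in students:
        train = [s for s in samples if s[0] != held_out]
        test = [s for s in samples if s[0] == held_out]
        yield held_out, train, test

# Hypothetical samples: (student_id, [HR feature, HRV feature], relevance label).
samples = [
    ("s1", [72, 0.9], 1), ("s1", [88, 0.4], 0),
    ("s2", [65, 0.8], 1),
    ("s3", [90, 0.3], 0), ("s3", [70, 0.7], 1),
]
for student, train, test in leave_one_student_out(samples):
    print(student, len(train), len(test))  # e.g. "s1 3 2"
```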

  15. Building-Scale Virtual Reality: Reconstruction and Modification of Building Interior Extends Real World Reviewed

    Katashi Nagao, Menglong Yang, Yusuke Miyakawa

    International Journal of Multimedia Data Engineering and Management (IJMDEM)   Vol. 10 ( 1 ) page: 1-21   2019.4

     More details

    Authorship:Lead author   Language:English   Publishing type:Research paper (scientific journal)  

    A method is presented that extends the real world into entire buildings. This building-scale virtual reality (VR) method differs from augmented reality (AR) in that it uses automatically generated 3D point cloud maps of building interiors. It treats an entire indoor area as a pose-tracking area by using data collected with an RGB-D camera mounted on a VR headset and using deep learning to build a model from the data. It modifies the VR space in accordance with its intended usage by using segmentation and replacement of the 3D point clouds. This is difficult to do with AR but is essential if VR is to be used for actual real-world applications, such as disaster simulation, including simulation of fires and flooding in buildings. 3D pose tracking in building-scale VR is more accurate than conventional RGB-D simultaneous localization and mapping.

    DOI: 10.4018/IJMDEM.2019010101

  16. Picognizer: Development and Evaluation of a JavaScript Library for Recognizing Electronic Sounds Reviewed

    Kazutaka Kurihara, Aiko Uemura, Akari Itaya, Tetsuro Kitahara, Katashi Nagao

    IPSJ Journal (Transactions of Information Processing Society of Japan)   Vol. 60 ( 2 ) page: 397-410   2019.2

     More details

    Language:Japanese   Publishing type:Research paper (scientific journal)  

    We now live surrounded by a variety of electronic sounds, and detecting and recognizing them for further information processing is valuable for the development of Maker culture, the augmentation of digital games, and the construction of gamification. However, it is still not easy for end-user programmers to build recognizers via supervised learning for each individual detection and recognition target. Taking advantage of the fact that electronic sounds vary little acoustically from playback to playback, we implement a JavaScript library that detects and recognizes electronic sounds using traditional template-based pattern-matching algorithms such as dynamic time warping. We conduct a basic performance evaluation and demonstrate the library's usefulness through a variety of use cases.
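    Picognizer itself is a JavaScript library; as a language-neutral illustration of the template-based matching it relies on, the following Python sketch implements the classic dynamic time warping (DTW) distance, which tolerates tempo differences between a stored template and an observed signal:

```python
# Classic DTW with absolute-difference local cost, O(len(a) * len(b)).
# A distance of 0 means the two sequences align perfectly after warping.

def dtw_distance(a, b):
    inf = float("inf")
    # dp[i][j] = minimal accumulated cost aligning a[:i] with b[:j]
    dp = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    dp[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # skip a sample of a
                                  dp[i][j - 1],      # skip a sample of b
                                  dp[i - 1][j - 1])  # match both
    return dp[len(a)][len(b)]

# A time-stretched copy of a template stays close; a different signal does not.
template = [0, 2, 4, 2, 0]
stretched = [0, 0, 2, 2, 4, 4, 2, 2, 0, 0]   # same shape, half speed
other = [4, 0, 4, 0, 4]
print(dtw_distance(template, stretched))  # 0.0
print(dtw_distance(template, other))      # strictly positive
```

    In a real detector, the sequences would be frames of acoustic features rather than raw samples, and a detection would fire when the DTW distance to a template falls below a threshold.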

  17. Evidence-Based Education: Case Study of Educational Data Acquisition and Reuse

    Katashi Nagao, Naoya Morita, Shigeki Ohira

    Journal of Systemics, Cybernetics and Informatics   Vol. 15 ( 7 ) page: 77-84   2018.2

     More details

    Language:English   Publishing type:Research paper (scientific journal)  

    Education needs as many concrete indicators as possible to serve as signposts; without them, people lack confidence in their learning and become confused by tenuous instruction. It is necessary to clarify what they can do and what kinds of abilities they can improve. This paper describes a case of evidence-based education that acquires educational data from students' study activities and uses the data not only to enable instructors to check the students' levels of understanding but also to improve their levels of performance. Our previous research, called discussion mining, was specifically used to collect various data on meetings (statements and their relationships, presentation materials such as slides, audio and video, and participants' evaluations of statements). This paper focuses on student presentations and discussions in laboratory seminars that are closely related to their research activities in writing their theses. We propose a system that supports tasks to be achieved in research activities and a machine-learning method that makes the system sustainable for long-term operation by automatically extracting essential tasks. We conducted participant-based experiments that involved students and computer-simulation-based experiments to evaluate how efficiently our proposed machine-learning method updated the task extraction model. We confirmed from the participant-based experiments that informing responsible students of tasks automatically extracted by the system we developed improved their awareness of the tasks. We also explain improvements in extraction accuracy and reductions in labeling costs with our method and how we confirmed its effectiveness through computer simulations.

  18. Tools and evaluation methods for discussion and presentation skills training Reviewed

    Katashi Nagao, Mehrdad Tehrani, Jovilyn B. Fajardo

    Smart Learning Environments: a SpringerOpen Journal   Vol. 2 ( 5 )   2015.2


    Authorship:Lead author   Language:English   Publishing type:Research paper (scientific journal)  

    Our university is currently developing an advanced physical-digital learning environment that can train students to enhance their discussion and presentation skills. The environment guarantees an efficient discussion among users with state-of-the-art technologies such as touch panel discussion tables, digital posters, and an interactive wall-sized whiteboard. It includes a data mining system that efficiently records, summarizes, and annotates discussions held inside our facility. We also developed a digital poster authoring tool, a novel tool for creating interactive digital posters displayed using our digital poster presentation system. Evaluation results show the efficiency of using our facilities: the data mining system and the digital poster authoring tool. In addition, our physical-digital learning environment will be further enhanced with a vision system that will detect interactions with the digital poster presentation system and the different discussion tools enabling a more automated skill evaluation and discussion mining.

    DOI: 10.1186/s40561-015-0011-1

  19. Support System based on Annotations of Documents and Video Scenes for Reading Technical Documents Reviewed

    ISHITOYA, Kentaro, YAMAMOTO, Keisuke, OHIRA, Shigeki, NAGAO, Katashi

    Journal of the Institute of Image Information and Television Engineers   Vol. 66 ( 11 ) page: J461-J470   2012.11


    Language:Japanese   Publishing type:Research paper (scientific journal)  

    Reading as many technical documents as possible is important for improving our research. When we read these documents, we can understand their contents more easily by referring to related resources such as images, audio clips, and videos. Videos contain a variety of helpful information and facilitate our understanding of technical documents. We propose a method for defining video scenes and document elements and annotating them with additional information such as relationships. Based on these annotations and relationships, we developed a support system that uses videos to help readers understand technical documents. We performed experiments to confirm whether the system was usable.

    DOI: 10.3169/itej.66.J461

  20. Safe Automatic Driving of Personal Intelligent Vehicles in Cooperation with Small Unmanned Vehicles Reviewed

    渡邉 賢, 長尾 確

    Transactions of Information Processing Society of Japan   Vol. 53 ( 11 ) page: 2599-2611   2012.11


    Language:Japanese   Publishing type:Research paper (scientific journal)  

  21. A Continuous Meeting Support System Reviewed

    ISHITOYA, Kentaro, OHIRA, Shigeki, NAGAO, Katashi

    Transactions of Information Processing Society of Japan   Vol. 53 ( 8 ) page: 2044-2048   2012.8


    Language:Japanese   Publishing type:Research paper (scientific journal)  

    To facilitate more efficient meetings that are held continuously, such as project meetings, it is important to review past discussions and recall them in the current discussion. In this paper, we describe a novel meeting support system consisting of digital whiteboards and tablet devices designed to work cooperatively. Using the system, meeting results are saved as board contents, which can be used to realize advanced applications such as topic search or multi-meeting summarization on a content cloud server. Users can download contents from the content cloud server and view them whenever they want on a tablet device. Moreover, users can quote part of a past board's contents onto the board being used in the current meeting.

  22. Current Status and Issues of Online Video Annotation Invited

    長尾 確, 大平 茂輝, 山本 大介

    Journal of the Institute of Image Information and Television Engineers   Vol. 64 ( 2 ) page: 173-177   2010


    Authorship:Lead author   Language:Japanese  

  23. Research Activity Support Based on Reuse of Seminar Content Reviewed

    土田 貴裕, 大平 茂輝, 長尾 確

    Transactions of Information Processing Society of Japan   Vol. 51 ( 6 ) page: 1357-1370   2010


    Language:Japanese   Publishing type:Research paper (scientific journal)  

  24. Creation of Face-to-Face Meeting Content and Visualization of Metadata during Discussions Reviewed

    土田 貴裕, 大平 茂輝, 長尾 確

    Transactions of Information Processing Society of Japan   Vol. 51 ( 2 ) page: 404-416   2010


    Language:Japanese   Publishing type:Research paper (scientific journal)  

  25. Collaborative Video Annotation Based on Tag Cloud Sharing Reviewed

    山本 大介, 増田 智樹, 大平 茂輝, 長尾 確

    Transactions of the Japanese Society for Artificial Intelligence   Vol. 25 ( 2 ) page: 243-251   2010


    Language:Japanese   Publishing type:Research paper (scientific journal)  

  26. A Study on Quantitative Evaluation of Extension Methods for Slide-Based Presentation Methodology Reviewed

    栗原一貴, 望月俊男, 大浦弘樹, 椿本弥生, 西森年寿, 中原淳, 山内祐平, 長尾確

    Transactions of Information Processing Society of Japan   Vol. 51 ( 2 ) page: 391-403   2010


    Language:Japanese   Publishing type:Research paper (scientific journal)  

  27. *Video Scene Retrieval Using Online Video Annotation Reviewed

    Tomoki Masuda, Daisuke Yamamoto, Shigeki Ohira, Katashi Nagao

    New Frontiers in Artificial Intelligence: JSAI2007 Conference and Workshops, Lecture Notes in Artificial Intelligence 4914, Springer-Verlag   Vol. 1 ( 1 ) page: 255-268   2008.2


    Language:English  

  28. Video Scene Annotation Based on Web Social Activities Reviewed

    YAMAMOTO, Daisuke, MASUDA, Tomoki, OHIRA, Shigeki, NAGAO, Katashi

    IEEE MultiMedia   Vol. 15 ( 3 ) page: 22-32   2008


    Language:English   Publishing type:Research paper (scientific journal)  

    We describe a mechanism for acquiring the semantics of video contents from the activities of Web communities that use a bulletin-board system and Weblog tools to discuss video scenes. Since these community activities include information valuable for video applications, we extract it as annotations. We developed a Web-based video sharing and annotation system called Synvie. We also performed an open experiment to acquire annotation data such as user comments on video scenes and scene-quoted Weblog entries and to evaluate our system. We have also developed a tag-based system for retrieving video scenes that is based on annotations accumulated in this open experiment.

  29. Discussion Ontology: Knowledge Discovery from Human Activities in Meetings Reviewed

    Hironori Tomobe, Katashi Nagao

    New Frontiers in Artificial Intelligence: JSAI2006 Conference and Workshops, Lecture Notes in Artificial Intelligence 4384, Springer-Verlag   Vol. 1 ( 1 ) page: 33-41   2007


    Language:English  

    Discussion mining is a preliminary study on gathering knowledge based on the content of face-to-face discussion meetings. To extract knowledge from discussion content, we have to analyze not only the surface arguments, but also semantic information such as a statement's intention and the discussion flow during meetings. We require a discussion ontology for this information. This discussion ontology forms the basis of our discussion methodology and requires semantic relations between elements in meetings. We must clarify these semantic relations to build the discussion ontology. We therefore generated discussion content and analyzed meeting metadata to build the ontology.

  30. A Video Annotation System Based on Community Activities Reviewed

    YAMAMOTO, Daisuke, MASUDA, Tomoki, OHIRA, Shigeki, NAGAO, Katashi

    Transactions of Information Processing Society of Japan   Vol. 48 ( 12 )   2007


    Language:Japanese   Publishing type:Research paper (scientific journal)  

  31. Web-based Video Annotation and its Applications Reviewed

    YAMAMOTO, Daisuke, NAGAO, Katashi

    Transactions of the Japanese Society for Artificial Intelligence   Vol. 20 ( 1 ) page: 67-75   2007


    Language:Japanese   Publishing type:Research paper (scientific journal)  

  32. A Music Annotation System for Multiple Interpretations of Tunes Reviewed

    KAJI, Katsuhiko, NAGAO, Katashi

    Transactions of Information Processing Society of Japan   Vol. 48 ( 1 ) page: 258-273   2007


    Language:Japanese   Publishing type:Research paper (scientific journal)  

  33. Semantic Annotation and Transcoding: Making Web Content More Accessible Reviewed

    Katashi Nagao, Yoshinari Shirai, and Kevin Squire

    IEEE MultiMedia   Vol. 8 ( 2 ) page: 69-81   2001


    Authorship:Lead author   Language:English   Publishing type:Research paper (scientific journal)  

  34. A Preferential Constraint Satisfaction Technique for Natural Language Analysis Reviewed

    Katashi Nagao

    IEICE Transactions on Information and Systems   Vol. E77-D ( 2 ) page: 161-170   1994


    Authorship:Lead author   Language:English   Publishing type:Research paper (scientific journal)  

  35. A Logical Model for Plan Recognition and Belief Revision Reviewed

    Katashi Nagao

    IEICE Transactions on Information and Systems   Vol. E77-D ( 2 ) page: 209-217   1994


    Authorship:Lead author   Language:English   Publishing type:Research paper (scientific journal)  


Books 7

  1. Encyclopedia of Artificial Intelligence

    Katashi Nagao( Role: Contributor ,  Smart Speaker)

    2019.12  ( ISBN:978-4-7649-0604-4 )


    Total pages:384   Language:Japanese Book type:Dictionary, encyclopedia

  2. Artificial Intelligence Accelerates Human Learning

    Katashi Nagao( Role: Sole author)

    Springer Singapore  2019.2  ( ISBN:978-981-13-6175-3 )


    Total pages:170   Language:English Book type:Scholarly book

    The book is divided into six chapters, the first of which provides an overview of AI research and practice in education. In turn, Chapter 2 describes a mechanism for applying data analytics to student discussions and utilizing the results for knowledge creation activities such as research. Based on discussion data analytics, Chapter 3 describes a creative activity support system that effectively utilizes the analytical results of the discussion for subsequent activities. Chapter 4 discusses the incorporation of a gamification method to evaluate and improve discussion skills while maintaining the motivation to participate in the discussion.
    Chapters 5 and 6 describe an advanced learning environment for honing students' discussion and presentation skills. Two important systems proposed here are a presentation training system using virtual reality technologies and an interactive presentation/discussion training system using a humanoid robot. In the former, the virtual space is constructed by measuring the three-dimensional shape of the actual auditorium, presentations are performed in the same way as in the real world, and an AI audience automatically evaluates each presentation and provides feedback. In the latter, a humanoid robot makes remarks on and asks questions about students' presentations, and the students practice responding to it.

  3. Data Science on Discussion: Symbiosis of Human and Artificial Intelligence

    Katashi Nagao( Role: Sole author)

    Keio University Press Inc.  2018.1 


    Total pages:199   Language:Japanese Book type:Scholarly book

  4. Encyclopedia of Artificial Intelligence

    Katashi Nagao( Role: Joint editor)

    Kyoritsu Shuppan Co., Ltd.  2017.7 

  5. Digital Content Annotation and Transcoding

    Katashi Nagao( Role: Sole author)

    Artech House Publishers  2003 


    Language:English

  6. Advanced Agent Technologies

    Katashi Nagao et al.( Role: Joint author)

    Kyoritsu Pub.  2000 


    Language:Japanese

  7. Building Interactive Environments

    Katashi Nagao( Role: Sole author)

    Kyoritsu Pub.  1996 


    Language:Japanese


Presentations 92

  1. Cyber Trainground: Building-Scale Virtual Reality for Immersive Presentation Training International conference

    Katashi Nagao, Yuto Yokoyama

    IEEE Cyber Science and Technology Congress 2020  2020.8.17  IEEE


    Event date: 2020.8

    Language:English   Presentation type:Oral presentation (general)  

    “Trainground” is a word we coined, inspired by the word “playground.” It means a place where humans perform various kinds of training and also use machine learning to scientifically search for points they need to improve. The cyber trainground is an extension of the trainground to cyberspace. It enables training in virtual spaces that capture and reconstruct various types of information appearing in the real world, such as atmosphere and presence, to achieve an improvement effect that is the same as or better than in the real world. In this paper, we build an immersive training space using building-scale VR, a technology that makes a virtual space based on an entire building existing in the real world. The space is used for presentations, allowing students to self-train. The results of a presentation are automatically evaluated, for example by machine learning, and fed back to the user. In this space, users can meet their past selves (more accurately, their avatars), so they can objectively observe their presentations and recognize weak points. We developed a mechanism for recording and reproducing activities in virtual space in detail and a mechanism for applying machine learning to activity records. With these mechanisms, we developed a system for recording, reproducing, and automatically evaluating presentations.

  2. Reading Students’ Multiple Mental States in Conversation from Facial and Heart Rate Cues International conference

    Shimeng Peng, Shigeki Ohira, Katashi Nagao

    12th International Conference on Computer Supported Education (CSEDU 2020)  2020.5.2 

     More details

    Event date: 2020.5

    Language:English   Presentation type:Oral presentation (general)  

    Students’ mental states have been widely acknowledged as crucial components for inferring their learning processes and are closely linked with learning outcomes. Understanding students’ complex mental states including concentration, confusion, frustration, and boredom in teacher-student conversation could benefit a human teacher’s perceptual and real-time decision-making capability in providing personalized and adaptive support in coaching activities. Many lines of research have explored the automatic measurement of students’ mental states in pre-designed human-computer tasks. It still remains a challenge to detect the complex mental states of students in real teacher-student conversation. In this study, we made such an attempt by describing a system for predicting the complex mental states of students from multiple perspectives: facial and physiological (heart rate) cues in real student-teacher conversation scenarios. We developed an advanced multi-sensor-based system and applied it in small-scale meetings to collect students’ multimodal conversation data. We demonstrate a multimodal analysis framework. Machine learning models were built by using extracted interpretable proxy features at a fine-grained level to validate their predictive ability regarding students’ multiple mental states. Our results provide evidence of the potential value of fusing multimodal data to understand students’ multiple mental states in real-world student-teacher conversation.

  3. VR2ML: A Universal Recording and Machine Learning System for Improving Virtual Reality Experiences International conference

    Yuto Yokoyama, Katashi Nagao

    27th IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR 2020) 


    Event date: 2020.3

    Language:English   Presentation type:Oral presentation (general)  

    Venue:Atlanta   Country:United States  

    We developed software for recording and reusing user actions and associated effects in various VR applications. In particular, we implemented two plugin systems that allow 3D motion data recorded in VR to be transferred to a machine learning module without programming. One is a system that runs on Unity and records actions and events in VR, and it is called VRec. The other is a system for using data recorded with VRec for machine learning, called VR2ML.
    Currently, recording all of the movements of people and objects in VR is not possible. Since the events that occur in VR include object movements and the generation of sounds and visual effects, recording only avatar movements is not enough. The archive function provided by VR communication services is a comprehensive recording mechanism that records live performances and the like that have been done in the past in VR and allows the users to view them later. This satisfies the necessary elements for this study but is limited to recording and browsing within the same service, and the recorded data cannot be analyzed or evaluated. Therefore, a unified format is required to reuse behavioral data after VR experiences.

  4. PSNet: A Style Transfer Network for Point Cloud Stylization on Geometry and Color International conference

    Xu Cao, Weimin Wang, Katashi Nagao

    2020 Winter Conference on Applications of Computer Vision (WACV '20) 


    Event date: 2020.3

    Language:English   Presentation type:Oral presentation (general)  

    Venue:Colorado   Country:United States  

    We propose a neural style transfer method for colored point clouds that allows stylizing the geometry and/or color property of a point cloud from another. The stylization is achieved by manipulating the content representations and Gram-based style representations extracted from a pretrained PointNet-based classification network for colored point clouds. As the Gram-based style representation is invariant to the number or order of points, the style can also be an image when stylizing the color property of a point cloud, by merely treating the image as a set of pixels. Experimental results and analysis demonstrate the capability of the proposed method for stylizing a point cloud either from another point cloud or from an image.

  5. AI-Powered Education: Smart Learning Environment with Large Interactive Displays Invited International conference

    Katashi Nagao

    International Display Workshops 2019 

     More details

    Event date: 2019.11

    Language:English   Presentation type:Oral presentation (general)  

    Country:Japan  

    Our university is currently developing an advanced physical-digital learning environment that can train students to enhance their presentation and discussion skills. The environment guarantees an efficient presentation and discussion among users with state-of-the-art technologies such as touch-panel digital poster panels. It includes an automatic evaluation system that efficiently records, analyzes, and evaluates the presenter's presentation and discussion skills. We have also developed a digital poster authoring tool, a novel tool for creating interactive digital posters displayed on the digital poster panels. The environment also allows students to effectively self-train in presentation by authoring digital posters and receiving feedback from the automatic evaluation of their presentations. We call such education, promoted by AI technologies, AI-powered education.

  6. Discussion-skill Analytics with Acoustic, Linguistic and Psychophysiological Data International conference

    Katashi Nagao, Kosuke Okamoto, Shimeng Peng, Shigeki Ohira

    11th International Conference on Knowledge Discovery and Information Retrieval (KDIR 2019) 


    Event date: 2019.9

    Language:English   Presentation type:Oral presentation (general)  

    Venue:Vienna   Country:Austria  

    In this paper, we propose a system for improving the discussion skills of participants in a meeting by automatically evaluating statements in the meeting and effectively feeding back the results of the evaluation to them. To evaluate the skills automatically, the system uses both acoustic and linguistic features of statements. It evaluates the way a person speaks, such as the volume of their voice, on the basis of the acoustic features, and it evaluates the contents of a statement, such as the consistency of context, on the basis of the linguistic features. These features can be obtained from meeting minutes. Since it is difficult to evaluate the semantic contents of statements, such as the consistency of context, we build a machine-learning model that uses features of the minutes such as speaker attributes and the relationships between statements. In addition, we argue that participants' heart rate (HR) data can be used to effectively evaluate their cognitive performance, specifically their performance in a discussion that consists of several Q&A segments (question-and-answer pairs). We collect HR data during a discussion in real time and generate machine-learning models for evaluation. We confirmed that the proposed method is effective for evaluating the discussion skills of meeting participants.

  7. Building-Scale Virtual Reality: Another Way to Extend Real World International conference

    Katashi Nagao, Menglong Yang, Xu Cao, Yusuke Miyakawa

    IEEE 2nd International Conference on Multimedia Information Processing and Retrieval 


    Event date: 2019.3

    Language:English   Presentation type:Oral presentation (general)  

    Venue:San Jose, CA, USA   Country:United States  

    We propose building-scale virtual reality (VR), a real-world extension method different from augmented reality (AR), that uses automatically generated indoor 3D point cloud maps. To make the entire indoor area a pose tracking area, we attach an RGB-D camera to a VR headset, collect data with it, and use deep learning to build a model learned from the data. This method is more accurate than the conventional RGB-D SLAM method. Furthermore, to modify the VR space according to the purpose, segmentation and replacement of the 3D point cloud are performed. This is hard to do in AR, but it is essential technology for VR to be used in actual real-world work. We describe a disaster simulation including virtual evacuation drills and virtual work environments as application examples.

  8. Point Cloud Colorization Based on Densely Annotated 3D Shape Dataset International conference

    Xu Cao, Katashi Nagao

    25th International Conference on MultiMedia Modeling (MMM2019) 


    Event date: 2019.1

    Language:English   Presentation type:Oral presentation (general)  

    Venue:Thessaloniki, Greece   Country:Greece  

    This paper introduces DensePoint, a densely sampled and annotated point cloud dataset containing over 10,000 single objects across 16 categories, created by merging different kinds of information from two existing datasets. Each point cloud in DensePoint contains 40,000 points, and each point is associated with two sorts of information: an RGB value and a part annotation. In addition, we propose a method for point cloud colorization that utilizes Generative Adversarial Networks (GANs). The network makes it possible to generate colours for point clouds of single objects given only the point cloud itself. Experiments on DensePoint show that there are clear boundaries in point clouds between different parts of an object, suggesting that the proposed network is able to generate reasonably good colours. Our dataset is publicly available on the project page.

  9. Automatic Evaluation of Presenters' Discussion Performance Based on their Heart Rate International conference

    Shimeng Peng, Katashi Nagao

    The 10th International Conference on Computer Supported Education (CSEDU 2018) 


    Event date: 2018.3

    Language:English   Presentation type:Oral presentation (general)  

    Venue:Portugal   Country:Portugal  

    Heart rate variability (HRV) has recently seen a surge in interest regarding the evaluation of cognitive performance, as it is commonly used to measure autonomic nervous system function. In this study, we argue that a presenter's HRV data can be used to effectively evaluate their cognitive performance, specifically their performance in discussions consisting of several Q&A segments (question-and-answer pairs), compared with using traditional natural language processing (NLP) methods such as semantic analysis. To confirm this, we used a non-invasive device, an Apple Watch, to collect real-time HR data of presenters during discussions in our lab-seminar environment. Their HRV data were analyzed on the basis of Q&A segments, and three machine-learning models were generated for evaluation: logistic regression, support vector machine, and random forest. We also discuss the meaningful HRV features (metrics). Comparative experiments were conducted involving semantic data of Q&A statements alone and a combination of HRV and semantic data. The HRV data of presenters enabled more effective evaluation of discussion performance than semantic data alone, and the combination of the two types of data improved the evaluation ability to some extent.

  10. Meeting Analytics: Creative Activity Support Based on Knowledge Discovery from Discussions International conference

    Katashi Nagao

    51st Hawaii International Conference on System Sciences 


    Event date: 2018.1

    Language:English   Presentation type:Oral presentation (general)  

    Venue:Hawaii, USA   Country:United States  

    In this paper, we present a method for identifying, with high probability, statements that mention future tasks during meetings, which should lead to innovations and facilitate creative discussions. We also propose a creative activity support system that helps users discover and execute essential tasks.

  11. Building Scale VR: Automatically Creating Indoor 3D Maps and its Application to Simulation of Disaster Situations International conference

    Katashi Nagao, Yusuke Miyakawa

    Future Technologies Conference (FTC) 2017 


    Event date: 2017.11

    Language:English   Presentation type:Oral presentation (general)  

    Venue:Vancouver, Canada   Country:Canada  

    In this paper, we explain how to implement Building-Scale VR and its application to disaster simulations. It is useful to express disaster situations by reconstructing actual buildings in virtual space, enabling users of a building to experience such situations beforehand and learn how to act properly during a disaster.

  12. Smart Learning Environment for Discussion and Presentation Skills Training Invited International conference

    Katashi Nagao

    2016 International Conference for Top and Emerging Computer Scientists (IC-TECS 2016) 


    Event date: 2016.12

    Language:English   Presentation type:Oral presentation (keynote)  

    Venue:Taipei, Taiwan   Country:Taiwan, Province of China  

    Our university is currently developing an advanced physical-digital learning environment that can train students to enhance their discussion and presentation skills. The environment guarantees an efficient discussion among users with state-of-the-art technologies such as touch panel discussion tables, digital posters, and an interactive wall-sized whiteboard. It includes a data mining system that efficiently records, summarizes, and annotates discussions held in our facility. We also have developed a digital poster authoring tool, a novel tool for creating interactive digital posters displayed using our digital poster presentation system. Evaluation results show the efficiency of using our facilities: the data mining system and the digital poster authoring tool. In addition, our physical-digital learning environment will be further enhanced with some automated recognition systems that can detect interactions with the digital poster presentation system and the different discussion tools enabling a more automated skill evaluation and knowledge discovery from discussion and presentation records.

  13. A Smart Education System for Discussion and Presentation Skills Training Invited International conference

    Katashi Nagao

    BIT's 5th Annual World Congress of Emerging InfoTech 2016 


    Event date: 2016.11

    Language:English   Presentation type:Oral presentation (invited, special)  

    Venue:Qingdao, China   Country:China  

    Our university is currently developing an advanced physical-digital learning environment that can train students to enhance their discussion and presentation skills. The environment guarantees an efficient discussion among users with state-of-the-art technologies such as touch panel discussion tables, digital posters, and an interactive wall-sized whiteboard. It includes a data mining system that efficiently records, summarizes, and annotates discussions held inside our facility. We also developed a digital poster authoring tool, a novel tool for creating interactive digital posters displayed using our digital poster presentation system. Evaluation results show the efficiency of using our facilities: the data mining system and the digital poster authoring tool. In addition, our physical-digital learning environment will be further enhanced with some automated recognition systems that can detect interactions with the digital poster presentation system and the different discussion tools enabling a more automated skill evaluation and knowledge discovery from discussion and presentation records.

  14. Automatic Extraction of Task Statements from Structured Meeting Content International conference

    Katashi Nagao, Kei Inoue, Naoya Morita and Shigeki Matsubara

    7th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management 


    Event date: 2015.11

    Language:English   Presentation type:Oral presentation (general)  

    Country:Portugal  

    We previously developed a discussion mining system that records face-to-face meetings in detail, analyzes their content, and conducts knowledge discovery. Looking back on past discussion content by browsing documents, such as minutes, is an effective means for conducting future activities. In meetings at which some research topics are regularly discussed, such as seminars in laboratories, the presenters are required to discuss future issues by checking urgent matters from the discussion records. We call statements including advice or requests proposed at previous meetings "task statements" and propose a method for automatically extracting them. With this method, based on certain semantic attributes and linguistic characteristics of statements, a probabilistic model is created using the maximum entropy method. A statement is judged whether it is a task statement according to its probability. A seminar-based experiment validated the effectiveness of the proposed extraction method.

  15. Tool and Evaluation Method for Idea Creation Support International conference

    Ryo Takeshima, Katashi Nagao

    7th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management 

     More details

    Event date: 2015.11

    Language:English   Presentation type:Oral presentation (general)  

    Country:Portugal  

    We have developed a new idea creation support tool in which (1) each idea is represented by a tree structure, (2) each idea is automatically evaluated on the basis of its tree structure so that the relative advantages among several alternative ideas are found, (3) ideas are presented in a poster format, and (4) ideas are shared by multiple users so that they can be quoted and expanded upon by individual users. In this work, we explain the mechanisms of this tool, including the evaluation and poster conversion of ideas and collaborative idea creation, and briefly discuss our plans for the future.

  16. A New Physical-Digital Environment for Discussion and Presentation Skills Training International conference

    Katashi Nagao, Mehrdad Tehrani, Jovilyn B. Fajardo

    The International Conference on Smart Learning Environments (ICSLE 2014) 

     More details

    Event date: 2014.7

    Language:English   Presentation type:Oral presentation (general)  

    Country:China  

    Our university is currently developing an advanced physical-digital learning environment that trains students in discussion and presentation skills. The environment supports efficient discussion among users with state-of-the-art technologies such as touch-panel discussion tables and posters. It includes a data mining system that efficiently records, summarizes, and annotates discussions. It will be further enhanced with a vision system that facilitates the interactions, enabling more automated discussion mining.

  17. Paper Writing Support Based on Partial Association of Multiple Contents

    棚瀬達央, 大平茂輝, 長尾確

    12th Forum on Information Technology 

     More details

    Event date: 2013.9

    Language:Japanese   Presentation type:Oral presentation (general)  

    Venue:Tottori University   Country:Japan  

  18. A Slide Revision Support System Using Interactive Summarization Based on Content Impressions

    竹島亮, 大平茂輝, 長尾確

    12th Forum on Information Technology 

     More details

    Event date: 2013.9

    Language:Japanese   Presentation type:Oral presentation (general)  

    Venue:Tottori University   Country:Japan  

  19. Paper Writing Support Based on Partial Reference and Quotation of Digital Contents

    棚瀬 達央, 大平 茂輝, 長尾 確

    IPSJ 75th Annual Convention 

     More details

    Event date: 2013.3

    Language:Japanese   Presentation type:Oral presentation (general)  

    Venue:Tohoku University   Country:Japan  

  20. A Slide Revision Support System Using Interactive Summarization and Visualization of Internal Structure

    竹島 亮, 大平 茂輝, 長尾 確

    IPSJ 75th Annual Convention 

     More details

    Event date: 2013.3

    Language:Japanese   Presentation type:Oral presentation (general)  

    Venue:Tohoku University   Country:Japan  

  21. Safe Automatic Driving of Personal Intelligent Vehicles in Cooperation with Small Unmanned Vehicles and Its Evaluation

    渡辺 賢, 尾崎 宏樹, 矢田 幸大, 長尾 確

    IPSJ 75th Annual Convention 

     More details

    Event date: 2013.3

    Language:Japanese   Presentation type:Oral presentation (general)  

    Venue:Tohoku University   Country:Japan  

  22. Automatic Extraction of Research Resource Information from Academic Papers

    井上 慧, 松原 茂樹, 長尾 確

    IPSJ 75th Annual Convention 

     More details

    Event date: 2013.3

    Language:Japanese   Presentation type:Oral presentation (general)  

    Venue:Tohoku University   Country:Japan  

  23. Intuitive Operation of Personal Intelligent Vehicles by Recognizing Pointing Targets

    矢田 幸大, 渡辺 賢, 長尾 確

    IPSJ 75th Annual Convention 

     More details

    Event date: 2013.3

    Language:Japanese   Presentation type:Oral presentation (general)  

    Venue:Tohoku University   Country:Japan  

  24. A Gamification Framework for Improving Discussion Skills

    川西 康介, 大平 茂輝, 長尾 確

    IPSJ 75th Annual Convention 

     More details

    Event date: 2013.3

    Language:Japanese   Presentation type:Oral presentation (general)  

    Venue:Tohoku University   Country:Japan  

  25. A Method for Making, Recording, and Retrieving Comments on Slides during Seminars Using Tablet Devices

    小林 尚弥, 川西 康介, 大平 茂輝, 長尾 確

    IPSJ 75th Annual Convention 

     More details

    Event date: 2013.3

    Language:Japanese   Presentation type:Oral presentation (general)  

    Venue:Tohoku University   Country:Japan  

  26. Inter-Vehicle Cooperation and Coordination for Safe Driving of Multiple Vehicles

    久保田 芙衣, 尾崎 宏樹, 渡辺 賢, 長尾 確

    IPSJ 75th Annual Convention 

     More details

    Event date: 2013.3

    Language:Japanese   Presentation type:Oral presentation (general)  

    Venue:Tohoku University   Country:Japan  

  27. Automatic Generation of Indoor 3D Maps and Advanced Use of Real-World Information through Web Services

    尾崎 宏樹, 渡辺 賢, 長尾 確

    IPSJ 75th Annual Convention 

     More details

    Event date: 2013.3

    Language:Japanese   Presentation type:Oral presentation (general)  

    Venue:Tohoku University   Country:Japan  

  28. Creating and Managing Video Scenes with a Video Scrapbook and Its Application to Paper Writing Support

    西脇 雅幸, 棚瀬 達央, 大平 茂輝, 長尾 確

    IPSJ 75th Annual Convention 

     More details

    Event date: 2013.3

    Language:Japanese   Presentation type:Oral presentation (general)  

    Venue:Tohoku University   Country:Japan  

  29. Introducing Gamification into Discussion Mining

    川西 康介, 小林 尚弥, 大平 茂輝, 長尾 確

    3rd Meeting of the Special Interest Group on Digital Content Creation 

     More details

    Event date: 2013.1

    Language:Japanese   Presentation type:Oral presentation (general)  

    Venue:Tama Art University   Country:Japan  

  30. Social Annotation to Indoor 3D Maps Generated Automatically

    Katashi Nagao, Hiroki Osaki, Ken Watanabe, Daisuke Yamamoto

    Special Interest Group on Digital Content Creation of the Information Processing Society of Japan 

     More details

    Event date: 2012.5

    Language:Japanese   Presentation type:Oral presentation (general)  

    Country:Japan  

    While 3D maps are useful for visualizing the complicated shapes of objects inside buildings, they are very costly to create and maintain.
    We developed a mechanism that automatically generates indoor 3D maps by using a Small Unmanned Vehicle (SUV, for short).
    The SUV is capable of autonomous driving and of collecting 3D data captured by an RGB-D (RGB and Depth) image sensor.
    The SUV can also generate a 2D environmental map for localizing its current position.
    Furthermore, we developed a Web-based system for social annotation of the automatically generated indoor 3D maps.
    Using annotated 3D maps and the SUV, we can realize highly informative indoor navigation based on intuitive, interactive views and location-aware information.
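The abstract says the SUV builds a 2D environmental map for localization from its RGB-D data, without specifying how. A toy sketch, assuming the simplest possible scheme — projecting 3D obstacle points onto a 2D occupancy grid — with the height band, cell size, and grid extent all invented:

```python
def build_occupancy_grid(points, cell=0.5, size=10):
    # points: (x, y, z) sensor returns in metres, frame origin on the floor.
    # Cells containing at least one point in the obstacle height band are
    # marked occupied; floor and ceiling returns are filtered out by z.
    grid = [[0] * size for _ in range(size)]
    for x, y, z in points:
        if 0.1 < z < 2.0:                    # keep only obstacle-height points
            i, j = int(x // cell), int(y // cell)
            if 0 <= i < size and 0 <= j < size:
                grid[i][j] = 1
    return grid
```

A real mapping pipeline would also handle sensor noise, free-space ray casting, and pose estimation; this only illustrates the projection step.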

  31. Video Scene Retrieval Based on Quotations of Video Scenes and Technical Documents

    Tatsuo Tanase, Keisuke Yamamoto, Shigeki Ohira, Katashi Nagao

    IPSJ 73rd Annual Convention 

     More details

    Event date: 2011.3

    Language:Japanese   Presentation type:Oral presentation (general)  

    Country:Japan  

  32. A Study on Structuralization of Discussion Based on Pointing Information and Its Applications in Meetings

    Keisuke Kiuchi, Takahiro Tsuchida, Shigeki Ohira, Katashi Nagao

    IPSJ 73rd Annual Convention 

     More details

    Event date: 2011.3

    Language:Japanese   Presentation type:Oral presentation (general)  

    Country:Japan  

  33. Improving Efficiency of Automatic Transportation in Dynamic Environments of Personal Intelligent Vehicles

    Taisuke Inoue, Ken Watanabe, Katashi Nagao

    IPSJ 73rd Annual Convention 

     More details

    Event date: 2011.3

    Language:Japanese   Presentation type:Oral presentation (general)  

    Country:Japan  

  34. Extension of Sensing Areas of Personal Intelligent Vehicles by Small Unmanned Vehicles

    Ken Watanabe, Taisuke Inoue, Katashi Nagao

    IPSJ 73rd Annual Convention 

     More details

    Event date: 2011.3

    Language:Japanese   Presentation type:Oral presentation (general)  

    Country:Japan  

  35. A Support System for Understanding Technical Documents Based on Annotations of Video Scenes and Technical Documents

    Keisuke Yamamoto, Tatsuo Tanase, Kentaro Ishitoya, Shigeki Ohira, Katashi Nagao

    IPSJ 73rd Annual Convention 

     More details

    Event date: 2011.3

    Language:Japanese   Presentation type:Oral presentation (general)  

    Country:Japan  

  36. Relational Search of Text Contents for Personal Knowledge Activity

    Isao Takahashi, Kentaro Ishitoya, Shigeki Ohira, Katashi Nagao

    IPSJ 73rd Annual Convention 

     More details

    Event date: 2011.3

    Language:Japanese   Presentation type:Oral presentation (general)  

    Country:Japan  

  37. Acquisition of Context Information among Meeting Contents and Its Visualization for Meeting Reminder Support

    Kentaro Ishitoya, Isao Takahashi, Shigeki Ohira, Katashi Nagao

    IPSJ 73rd Annual Convention 

     More details

    Event date: 2011.3

    Language:Japanese   Presentation type:Oral presentation (general)  

    Country:Japan  

  38. Future Prospects of Intelligent Computing Systems

    Katashi Nagao

    Special Interest Group on Intelligent Computing Systems of the Information Processing Society of Japan 

     More details

    Event date: 2010.11

    Language:Japanese   Presentation type:Oral presentation (general)  

    Country:Japan  

    While the results of artificial intelligence research are expected to be applied, for example, to advanced Web-based information systems and intelligent robotics, previous AI research has not sufficiently clarified how many principles of intelligence have already been explained and how many remain unsolved.
    We should therefore regard intelligent computing systems as a means of verifying hypotheses on the principles of intelligence.
    By always keeping this point in mind, we can accumulate a series of verified (or being-verified) hypotheses.
    Based on these hypotheses, we can explain the design principles and appropriateness of the systems we develop with some sort of intelligent mechanism.
    In this article, I describe three guiding perspectives for developing and deploying intelligent computing systems: the real, the social, and the material.

  39. Development of a Control Platform for Personal Intelligent Vehicles

    井上 泰佑, 安田 知加, 岸 佳奈恵, 長尾 確

    IPSJ 72nd Annual Convention 

     More details

    Event date: 2010.3

    Language:Japanese   Presentation type:Oral presentation (general)  

    Country:Japan  

  40. Structuring Discussions in Casual Meetings and Its Applications

    石戸谷 顕太朗, 大平 茂輝, 長尾 確

    75th Meeting of the Special Interest Group on Groupware and Network Services 

     More details

    Event date: 2010.3

    Language:Japanese   Presentation type:Oral presentation (general)  

    Country:Japan  

  41. Acquiring Relationships among Meeting Contents with a Knowledge Activity Support System and Its Applications

    75th Meeting of the Special Interest Group on Groupware and Network Services 

     More details

    Event date: 2010.3

    Language:Japanese   Presentation type:Oral presentation (general)  

    Country:Japan  

  42. Personalizing Museum Experiences with Intelligent Vehicles

    159th Meeting of the Special Interest Group on Intelligence and Complex Systems 

     More details

    Event date: 2010.3

    Language:Japanese   Presentation type:Oral presentation (general)  

    Country:Japan  

  43. Structuring Discussion Contents in Casual Meetings and Its Applications

    石戸谷 顕太朗, 小幡 耕大, 大平 茂輝, 長尾 確

    IPSJ 72nd Annual Convention 

     More details

    Event date: 2010.3

    Language:Japanese   Presentation type:Oral presentation (general)  

    Country:Japan  

  44. A Collaborative e-Learning System for Sharing Communication Histories among Users

    山本 圭介, 笠嶋 公一朗, 大平 茂輝, 長尾 確

    IPSJ 72nd Annual Convention 

     More details

    Event date: 2010.3

    Language:Japanese   Presentation type:Oral presentation (general)  

    Country:Japan  

  45. Acquiring Relationships among Meeting Contents with a Knowledge Activity Support System and Its Applications

    土田 貴裕, 大平 茂輝, 長尾 確

    IPSJ 72nd Annual Convention 

     More details

    Event date: 2010.3

    Language:Japanese   Presentation type:Oral presentation (general)  

    Country:Japan  

  46. Acquiring Referents of Statements in Discussion Mining and Its Applications

    清水 元規, 木内 啓輔, 土田 貴裕, 大平 茂輝, 長尾 確

    IPSJ 72nd Annual Convention 

     More details

    Event date: 2010.3

    Language:Japanese   Presentation type:Oral presentation (general)  

    Country:Japan  

  47. Personalizing Museum Experiences with Automatically Driving Vehicles

    安田 知加, 井上 泰佑, 岸 佳奈恵, 長尾 確

    IPSJ 72nd Annual Convention 

     More details

    Event date: 2010.3

    Language:Japanese   Presentation type:Oral presentation (general)  

    Country:Japan  

  48. Safe Driving Support in Narrow Spaces with Personal Intelligent Vehicles

    岸 佳奈恵, 安田 知加, 井上 泰佑, 長尾 確

    IPSJ 72nd Annual Convention 

     More details

    Event date: 2010.3

    Language:Japanese   Presentation type:Oral presentation (general)  

    Country:Japan  

  49. Visualization of Discussions in Face-to-Face Meetings International conference

    Fifth International Conference on Collaboration Technologies 

     More details

    Event date: 2009.8

    Language:English   Presentation type:Oral presentation (general)  

  50. TimeMachineBoard: A Casual Meeting System Capable of Reusing Previous Discussions International conference

    Fifth International Conference on Collaboration Technologies 

     More details

    Event date: 2009.8

    Language:English   Presentation type:Oral presentation (general)  

  51. *Acquisition and Use of Real World Information for Personal Intelligent Vehicles

    IPSJ 71st Annual Convention 

     More details

    Event date: 2009.3

    Language:Japanese   Presentation type:Oral presentation (general)  

    Country:Japan  

  52. *TimeMachineBoard: A Casual Meeting System That Enables Quotation of Discussion Content

    IPSJ 71st Annual Convention 

     More details

    Event date: 2009.3

    Language:Japanese   Presentation type:Oral presentation (general)  

    Country:Japan  

    Ordinary casual meetings held by small groups are among the most important knowledge activities in an organization, and the organization's performance deeply depends on such meetings. Many systems help facilitate casual meetings, but there is no common way to assist in reusing content from previous meetings during a current meeting. We developed a new casual meeting system called TimeMachineBoard that uses multiple displays to enable users to retrieve and quote discussions from previous meetings.

  53. *Moving Obstacle Avoidance by Personal Intelligent Vehicles

    IPSJ 71st Annual Convention 

     More details

    Event date: 2009.3

    Language:Japanese   Presentation type:Oral presentation (general)  

    Country:Japan  

    We have been developing an IT-powered electric personal vehicle called the AT. AT stands for Attentive Townvehicle, a human-centered and context-aware vehicle that makes people's urban lives more comfortable and enjoyable.
    The AT is capable of omnidirectional movement, obstacle and collision avoidance, and autonomous travel to user-specified destinations indoors.
    In this paper, we present the AT's fundamental mechanisms and techniques for moving obstacle avoidance.
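The paper's actual avoidance techniques are not reproduced here; below is a minimal sketch of one standard ingredient of moving obstacle avoidance — predicting whether an obstacle will come within a safety radius of the vehicle under a constant-velocity assumption. The radius, horizon, and time step are invented for illustration:

```python
def will_collide(p_vehicle, v_vehicle, p_obstacle, v_obstacle,
                 radius=0.8, horizon=3.0, dt=0.1):
    # Linearly extrapolate both trajectories and check, at each time step,
    # whether the separation falls below the safety radius.
    steps = int(horizon / dt) + 1
    for k in range(steps):
        t = k * dt
        dx = (p_vehicle[0] + v_vehicle[0] * t) - (p_obstacle[0] + v_obstacle[0] * t)
        dy = (p_vehicle[1] + v_vehicle[1] * t) - (p_obstacle[1] + v_obstacle[1] * t)
        if (dx * dx + dy * dy) ** 0.5 < radius:
            return True
    return False
```

A controller could call such a check each cycle and slow down or replan when it returns `True`; a real vehicle would use richer motion models and sensor uncertainty.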

  54. *Augmented Human: Extension of Human Body and Intelligence Based on Mountable Computing (Keynote Speech)

    9th SICE System Integration Division Annual Conference 

     More details

    Event date: 2008.12

    Language:Japanese   Presentation type:Oral presentation (invited, special)  

    Country:Japan  

    A personal intelligent vehicle, the main topic of this research project, is an example product of a new research field called Mountable Computing, which supports not only anytime/anywhere computing functionality but also computer-augmented physical movement and transportation activities. Mountable Computing contributes to the augmentation of the human body (physical movement abilities) and intelligence (sensing, remembering, and other cognitive abilities). We have also been studying physical and i

  55. *Knowledge Activity Support System Based on Discussion Content International conference

    The Fourth International Conference on Collaboration Technologies 

     More details

    Event date: 2008.8

    Language:English   Presentation type:Oral presentation (general)  

    We have developed a system to support knowledge activity. Knowledge activity is activity in which people continuously create ideas about themes and develop them into knowledge. First, the system generates minutes (linked to video and audio of discussions) and metadata (on significant discussions). Then, users tag the minutes in order to identify certain parts, contemplate the discussion content, and write notes derived from the minutes with the system for use as discussion content. The system converts i

  56. divie: Tag-Based Video Scene Retrieval System

     More details

    Event date: 2008.3

    Language:Japanese  

  57. A Traffic Accident Prevention System Using Autonomous Driving Control at Intersection Areas

     More details

    Event date: 2008.3

    Language:Japanese  

  58. Semantic Integration of Multimedia Contents Based on Structured Quotation

     More details

    Event date: 2008.3

    Language:Japanese  

  59. Sharing and Reusing Robotic Behaviors for Robot-Assisted Therapy

     More details

    Event date: 2008.3

    Language:Japanese  

  60. Lecture Content Sharing System to Acquire and Manage Video Annotations

     More details

    Event date: 2008.3

    Language:Japanese  

  61. Video Scene Annotation Based on Social Bookmarking of Video Scenes

     More details

    Event date: 2008.3

    Language:Japanese  

  62. A Casual Meeting System Facilitating Reuse of Creative Discussions

     More details

    Event date: 2008.3

    Language:Japanese  

  63. Quotation of Web Documents Using Reading Annotation and its Applications

     More details

    Event date: 2008.3

    Language:Japanese  

  64. Indoor Automatic Transportation by Personal Intelligent Vehicles

     More details

    Event date: 2008.3

    Language:Japanese  

  65. Personal Intelligent Vehicles Capable of Omni-Directional Movement and their Applications

     More details

    Event date: 2008.3

    Language:Japanese  

  66. Discussion Mining: Knowledge Discovery from Face-to-Face Meetings

     More details

    Event date: 2008.2

    Language:Japanese  

  67. A Research Activity Support System Based on Creation and Reuse of Technical Papers

     More details

    Event date: 2007.6

    Language:Japanese  

  68. Creating and Sharing Robotic Behavior Contents for a Communication Robot

     More details

    Event date: 2007.6

    Language:Japanese  

  69. Music Player System Adapted to Walking/Jogging Rhythm

     More details

    Event date: 2007.6

    Language:Japanese  

  70. Synvie: Applications Based on Annotation Acquired From Videoblog Community

     More details

    Event date: 2007.6

    Language:Japanese  

  71. Video Scene Retrieval Using Online Video Annotation

     More details

    Event date: 2007.6

    Language:Japanese  

  72. Discussion Media: Structuring and Browsing System for Discussion Contents

     More details

    Event date: 2007.6

    Language:Japanese  

  73. A Knowledge Activity Support System Based on Discussion Contents

     More details

    Event date: 2007.6

    Language:Japanese  

  74. Casual Meeting Support to Activate Creative Discussions

     More details

    Event date: 2007.6

    Language:Japanese  

  75. Advanced Use of Web Content: Annotation and Transcoding

    IPSJ Tokai Section 

     More details

    Event date: 2007

    Language:Japanese   Presentation type:Oral presentation (invited, special)  

    Country:Japan  

  76. Personal Intelligent Vehicles for Recording and Reuse of Experiences

    Interaction 2007 

     More details

    Event date: 2007

    Language:Japanese   Presentation type:Oral presentation (general)  

    Country:Japan  

  77. Personal Intelligent Vehicles as New Experience Media

    IPSJ2007 

     More details

    Event date: 2007

    Language:Japanese   Presentation type:Oral presentation (general)  

    Country:Japan  

  78. Automatic Information Compilation Based on Semantically Structured Quotations

    JSAI2007 

     More details

    Event date: 2007

    Language:Japanese   Presentation type:Oral presentation (general)  

    Country:Japan  

  79. Wordlogue: Weblog Classification and Retrieval Using Data on Word Frequencies and Linguistic Annotations

    IPSJ2006 

     More details

    Event date: 2006

    Language:Japanese  

    Country:Japan  

  80. Weblog-style Video Annotation and Syndication International conference

    AXMEDIS 2005 Conference 

     More details

    Event date: 2005

    Language:English   Presentation type:Oral presentation (general)  

  81. SemCode2: Ontology-Based Annotation and Transcoding

    IPSJ 67th Annual Convention 

     More details

    Event date: 2005

    Language:Japanese  

    Country:Japan  

  82. A Music Recommendation System Based on Annotations about Listeners' Preferences and Situations International conference

    AXMEDIS 2005 Conference 

     More details

    Event date: 2005

    Language:English   Presentation type:Oral presentation (general)  

  83. iVAS: Web-based Video Annotation System and its Applications International conference

    The 3rd International Semantic Web Conference 

     More details

    Event date: 2004

    Language:English   Presentation type:Poster presentation  

    Country:Japan  

  84. Discussion Mining: Annotation-Based Knowledge Discovery from Real World Activities International conference

    The Fifth Pacific-Rim Conference on Multimedia (PCM 2004) 

     More details

    Event date: 2004

    Language:English   Presentation type:Poster presentation  

    Country:Japan  

  85. MiXA: A Musical Annotation System International conference

    The 3rd International Semantic Web Conference 

     More details

    Event date: 2004

    Language:English   Presentation type:Poster presentation  

    Country:Japan  

  86. Semantic Transcoding: What we should do now for building the Semantic Web

    IPSJ 65th Annual Convention 

     More details

    Event date: 2003

    Language:Japanese  

    Country:Japan  

  87. Situated Conversation with a Communicative Interface Robot International conference

    JSAI 

     More details

    Event date: 2002

    Language:English   Presentation type:Oral presentation (invited, special)  

    Country:Japan  

  88. Annotation-Based Multimedia Summarization and Translation International conference

    COLING-2002 

     More details

    Event date: 2002

    Language:English   Presentation type:Oral presentation (general)  

  89. Semantic Annotation and Transcoding: Making Text and Multimedia Contents More Usable on the Web International conference

    International Workshop on Multimedia Annotation 

     More details

    Event date: 2001

    Language:English   Presentation type:Oral presentation (invited, special)  

    Country:Japan  

  90. Automatic Text Summarization Based on the Global Document Annotation International conference

    COLING-98 

     More details

    Event date: 1998

    Language:English  

  91. Ubiquitous Talker: Spoken Language Interaction with Real World Objects International conference

    The Fourteenth International Joint Conference on Artificial Intelligence (IJCAI-95) 

     More details

    Event date: 1995

    Language:English  

  92. A Preferential Constraint Satisfaction Technique for Natural Language Analysis International conference

    The Tenth European Conference on Artificial Intelligence (ECAI-92) 

     More details

    Event date: 1992

    Language:English  


Research Project for Joint Research, Competitive Funding, etc. 8

  1. Research on Practical Robot Dialogue Systems

    2007.8 - 2008.3

    Commissioned research from a company 

  2. Research on Remote Collaboration Using t-room

    2007.7 - 2008.2

    Commissioned research from a company 

  3. An Intelligent Content Infrastructure for Supporting Everyday Life

    2007.4 - 2008.3

  4. Research on Remote Collaboration Using t-room

    2006.8 - 2007.3

    Commissioned research from a company 

  5. Research on Practical Robot Dialogue Systems

    2006.8 - 2007.3

    Commissioned research from a company 

  6. An Intelligent Content Infrastructure for Supporting Everyday Life

    2006.4 - 2007.3

  7. Research on Practical Robot Dialogue Systems

    2005.9 - 2006.3

    Commissioned research from a company 

  8. An Intelligent Content Infrastructure for Supporting Everyday Life

    2005.7 - 2006.3


KAKENHI (Grants-in-Aid for Scientific Research) 5

  1. Development of an Innovative Communication Support System Based on a Bidirectional Communication Support Application

    Grant number:19H01598  2019.4 - 2022.3

    Grants-in-Aid for Scientific Research  Grant-in-Aid for Scientific Research (B)

    辰巳 寛, 山本 正彦, 長尾 確, 木村 航

      More details

    Authorship:Coinvestigator(s) 

    In home medical care for patients with severe speech difficulties, comprehensive care (including rehabilitation) with close, functional inter-professional collaboration is important. The key to realizing this is the use of rapidly advancing information and communication technology (ICT).
    In this study, to launch clinical services of a communication support application incorporating machine learning (intelligent speech assistive tool: iSAT), we improve the accuracy of iSAT's automatic diagnosis capability and conduct clinical research with an iSAT that implements a homework function.
    We conducted a basic study to verify the clinical utility of the speech support application (iSAT) for chronic-stage stroke patients.
    The subjects were two persons with chronic motor aphasia. iSAT runs on an iPad Pro; the screen simultaneously and repeatedly presents a model mouth-movement video of the utterance (with adjustable speech rate), picture cards (semantic stimuli), and character stimuli, and the application also records the patient's speech on video.
    Method: The patients used the self-training system in the application, performing speech training on 20 randomly presented words per session (about 20 minutes), 8 sets per day, for 8 consecutive weeks. Pre- and post-intervention assessments used the SLTA, SLTA-ST, RCPM, TMT-J, and ROCFT. A self-administered questionnaire on the burden of and satisfaction with the application was also conducted. For brain-imaging verification, regions of interest were set in brain areas related to language-related and attention-related neural networks, and changes in functional connectivity before and after the rehabilitation intervention were examined.
    Results: Both subjects showed improvement in naming (low-frequency words) on the SLTA and SLTA-ST and in attention function. The questionnaire revealed no mental burden or side effects from the application. The subjects reported high satisfaction with the training and noted increased speech output in free conversation and reduced word-finding difficulty. Imaging confirmed strengthened functional connectivity in brain regions related to the language-processing network, such as the left insula and left superior temporal gyrus, and, among attention-related networks, in regions related to the dorsal attention network.
    Discussion: Short-term intensive training with the speech support application for the two persons with chronic aphasia confirmed improvement in naming, qualitative changes in everyday conversation, and improvement in attention deficits. The results also suggest that iSAT guarantees patients freedom in training and yields high satisfaction. Short-term intensive training with iSAT may positively influence the activation of language-related and attention-related brain regions.
    The basic program of the automatic diagnosis system for persons with speech disorders has been completed, and the collection of speech data from healthy speakers, which serves as teacher data for machine learning, is nearly finished. As for clinical data collection, mandatory mask wearing during rehabilitation makes it difficult to record video of the patient's whole face, but with patients' consent and thorough infection-control measures, we continue to collect video data safely and accurately.
    We extended the functions of the already developed speech-training application, adding a teacher-data input function for collecting machine-learning data for automatic diagnosis. Based on speech video data of persons with aphasia (20 patients, about 2,500 words in total) and their clinical evaluations (auditory impression ratings by five speech-language-hearing therapists), we are attempting to extract features from facial-feature time-series data and to build a machine-learning model (a recurrent neural network model) that predicts the evaluation scores. We further plan to collect speech video data of persons with Parkinson's disease or motor dysarthria and to verify the accuracy of the AI-based automatic speech diagnosis system. In the future, we plan to build a system that allows evaluation scores to be annotated, phoneme by phoneme, on videos of a user's practice on a tablet.

  2. Construction of a Large-Scale Multimodal Database of Intellectual Creative Activities and Discovery of Methods for Improving Productivity

    2016.4 - 2019.3

    Grants-in-Aid for Scientific Research  Grant-in-Aid for Scientific Research (B)

    長尾 確

      More details

    Authorship:Principal investigator 

    Humans engage in a wide variety of intellectual creative activities, such as idea generation and creative meetings, and many support technologies for them have been proposed. The problem is that no large-scale database exists for analyzing intellectual creative activities in detail and extracting their features. We built a database that records seminars in our university laboratory in detail and analyzes their discussion structures; it contains information on 650 seminars (more than 1,000 hours in total) held over about 10 years. We will extend this database and build mechanisms for recording and analyzing intellectual activities other than seminars. The result will be a database useful for analyzing human intellectual creative activities from diverse viewpoints. Moreover, since machine learning can discover features that humans have not noticed and thereby support these activities, we can also realize mechanisms that improve the productivity of intellectual creative activities.

  3. Construction and Evaluation of a Real-World Gamification Environment for Solving Disaster-Countermeasure Problems through Collective Intelligence

    2014.4 - 2016.3

    Grants-in-Aid for Scientific Research  Grant-in-Aid for Challenging Exploratory Research

    長尾 確

      More details

    Authorship:Principal investigator 

  4. A Mechanism for Automatically Editing and Personally Adapting Dictionary Contents

    2005 - 2007

    Grants-in-Aid for Scientific Research  Grant-in-Aid for Exploratory Research, Grant number: 17650040

    長尾 確

      More details

    Authorship:Principal investigator 

  5. A Personal Intelligent Small-Vehicle System That Adapts to Environments and People and Behaves Cooperatively

    2004 - 2007

    Grants-in-Aid for Scientific Research  Grant-in-Aid for Scientific Research (B), Grant number: 16300044

    長尾 確

      More details

    Authorship:Principal investigator 

Industrial property rights 1

  1. Video Display Device

     More details

    Application no:特願2007-160738 (Japanese patent application)  Date applied:2007.6

    Country of applicant:Domestic  

 

Teaching Experience (On-campus) 3

  1. Knowledge Processing

    2018

  2. Introduction to Information Science

    2011

  3. First Year Seminar A

    2011

Teaching Experience (Off-campus) 2

  1. Knowledge Processing

    Nagoya University

  2. Knowledge Processing

    Nagoya University