Updated 2023/12/06


OHTANI Kento (大谷 健登)
Affiliation
Graduate School of Informatics, Department of Intelligent Systems, Foundations of Intelligent Informatics, Designated Assistant Professor
Title
Designated Assistant Professor

Degrees 1

  1. Ph.D. in Information Science ( March 2018   Nagoya University ) 

 

Papers 18

  1. L-DIG: A GAN-Based Method for LiDAR Point Cloud Processing under Snow Driving Conditions.

    Zhang Y, Ding M, Yang H, Niu Y, Feng Y, Ohtani K, Takeda K

    Sensors (Basel, Switzerland)   Vol. 23 ( 21 )   October 2023


    Language: English   Publisher: Sensors (Basel, Switzerland)

    LiDAR point clouds are significantly impacted by snow in driving scenarios, introducing scattered noise points and phantom objects, thereby compromising the perception capabilities of autonomous driving systems. Current effective methods for removing snow from point clouds largely rely on outlier filters, which mechanically eliminate isolated points. This research proposes a novel translation model for LiDAR point clouds, the 'L-DIG' (LiDAR depth images GAN), built upon refined generative adversarial networks (GANs). This model not only has the capacity to reduce snow noise from point clouds, but it also can artificially synthesize snow points onto clear data. The model is trained using depth image representations of point clouds derived from unpaired datasets, complemented by customized loss functions for depth images to ensure scale and structure consistencies. To amplify the efficacy of snow capture, particularly in the region surrounding the ego vehicle, we have developed a pixel-attention discriminator that operates without downsampling convolutional layers. Concurrently, the other discriminator equipped with two-step downsampling convolutional layers has been engineered to effectively handle snow clusters. This dual-discriminator approach ensures robust and comprehensive performance in tackling diverse snow conditions. The proposed model displays a superior ability to capture snow and object features within LiDAR point clouds. A 3D clustering algorithm is employed to adaptively evaluate different levels of snow conditions, including scattered snowfall and snow swirls. Experimental findings demonstrate an evident de-snowing effect, and the ability to synthesize snow effects.
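
    The depth-image representation used for training can be illustrated with a minimal spherical-projection sketch; the resolutions and field-of-view value below are illustrative assumptions, not the paper's settings:

    ```python
    import math

    def point_to_pixel(x, y, z, h_res_deg=0.35, v_res_deg=0.4, v_fov_up_deg=15.0):
        """Project one LiDAR point onto a (row, col) depth-image cell.

        Azimuth maps to columns, elevation to rows, and the pixel value is
        the range to the point. All resolutions here are placeholders.
        """
        depth = math.sqrt(x * x + y * y + z * z)
        azimuth = math.degrees(math.atan2(y, x))        # [-180, 180)
        elevation = math.degrees(math.asin(z / depth))  # [-90, 90]
        col = int((azimuth + 180.0) / h_res_deg)
        row = int((v_fov_up_deg - elevation) / v_res_deg)
        return row, col, depth
    ```

    A full depth image is then built by rasterizing every point this way and keeping the nearest range per cell.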

    DOI: 10.3390/s23218660

    Scopus

    PubMed

  2. Learning to Predict Navigational Patterns From Partial Observations

    Karlsson, R; Carballo, A; Lepe-Salazar, F; Fujii, K; Ohtani, K; Takeda, K

    IEEE ROBOTICS AND AUTOMATION LETTERS   Vol. 8 ( 9 )   pp. 5592 - 5599   September 2023


    Publisher: IEEE Robotics and Automation Letters

    Human beings cooperatively navigate rule-constrained environments by adhering to mutually known navigational patterns, which may be represented as directional pathways or road lanes. Inferring these navigational patterns from incompletely observed environments is required for intelligent mobile robots operating in unmapped locations. However, algorithmically defining these navigational patterns is nontrivial. This letter presents the first self-supervised learning (SSL) method for learning to infer navigational patterns in real-world environments from partial observations only. We explain how geometric data augmentation, predictive world modeling, and an information-theoretic regularizer enable our model to predict an unbiased local directional soft lane probability (DSLP) field in the limit of infinite data. We demonstrate how to infer global navigational patterns by fitting a maximum likelihood graph to the DSLP field. Experiments show that our SSL model outperforms two SOTA supervised lane graph prediction models on the nuScenes dataset. We propose our SSL method as a scalable and interpretable continual learning paradigm for navigation by perception.

    DOI: 10.1109/LRA.2023.3291924

    Web of Science

    Scopus

  3. Synthesizing Realistic Snow Effects in Driving Images Using GANs and Real Data with Semantic Guidance

    Yang, HT; Ding, M; Carballo, A; Zhang, YX; Ohtani, K; Niu, YJ; Ge, MN; Feng, Y; Takeda, K

    2023 IEEE INTELLIGENT VEHICLES SYMPOSIUM, IV   Vol. 2023-June   2023


    Publisher: IEEE Intelligent Vehicles Symposium, Proceedings

    Intelligent vehicle perception algorithms often have difficulty accurately analyzing and interpreting images in adverse weather conditions. Snow is a corner case that not only reduces visibility and contrast but also affects the stability of the road environment. While it is possible to train deep learning models on real-world driving datasets in snow weather, obtaining such data can be challenging. Synthesizing snow effects on existing driving datasets is a viable alternative. In this work, we propose a method based on Cycle Consistent Generative Adversarial Networks (CycleGANs) that utilizes additional semantic information to generate snow effects. We apply deep supervision by using intermediate outputs from the last two convolutional layers in the generator as multi-scale supervision signals for training. We collect a small set of driving image data captured under heavy snow as the translation source. We compare the generated images with those produced by various network architectures and evaluate the results qualitatively and quantitatively on the Cityscapes and EuroCity Persons datasets. Experiment results indicate that our model can synthesize realistic snow effects in driving images.

    DOI: 10.1109/IV55152.2023.10186565

    Web of Science

    Scopus

  4. Predictive World Models from Real-World Partial Observations

    Karlsson R., Carballo A., Fujii K., Ohtani K., Takeda K.

    Proceedings - 2023 IEEE International Conference on Mobility, Operations, Services and Technologies, MOST 2023   pp. 152 - 166   2023


    Publisher: Proceedings - 2023 IEEE International Conference on Mobility, Operations, Services and Technologies, MOST 2023

    Cognitive scientists believe adaptable intelligent agents like humans perform reasoning through learned causal mental simulations of agents and environments. The problem of learning such simulations is called predictive world modeling. Recently, reinforcement learning (RL) agents leveraging world models have achieved SOTA performance in game environments. However, understanding how to apply the world modeling approach in complex real-world environments relevant to mobile robots remains an open question. In this paper, we present a framework for learning a probabilistic predictive world model for real-world road environments. We implement the model using a hierarchical VAE (HVAE) capable of predicting a diverse set of fully observed plausible worlds from accumulated sensor observations. While prior HVAE methods require complete states as ground truth for learning, we present a novel sequential training method to allow HVAEs to learn to predict complete states from partially observed states only. We experimentally demonstrate accurate spatial structure prediction of deterministic regions achieving 96.21 IoU, and close the gap to perfect prediction by 62 % for stochastic regions using the best prediction. By extending HVAEs to cases where complete ground truth states do not exist, we facilitate continual learning of spatial prediction as a step towards realizing explainable and comprehensive predictive world models for real-world mobile robotics applications. Code is available at https://github.com/robin-karlsson0/predictive-world-models.

    DOI: 10.1109/MOST57249.2023.00024

    Scopus

  5. Efficient Training Method for Point Cloud-Based Object Detection Models by Combining Environmental Transitions and Active Learning

    Yamamoto T., Ohtani K., Hayashi T., Carballo A., Takeda K.

    Lecture Notes in Networks and Systems   Vol. 642 LNNS   pp. 292 - 303   2023


    Publisher: Lecture Notes in Networks and Systems

    The perceptive systems used in automated driving need to function accurately and reliably in a variety of traffic environments. These systems generally perform object detection to identify the positions and attributes of potential obstacles. Among the methods which have been proposed, object detection using three-dimensional (3D) point cloud data obtained using LiDAR has attracted much attention. However, when attempting to create a detection model, annotation must be performed on a huge amount of data. Furthermore, the accuracy of 3D object detection models is dependent on the data domains used for training, such as geographic or traffic environments, so it is necessary to train models for each domain, which requires large amounts of training data for each domain. Therefore, the objective of this study is to develop a 3D object detector for new domains, even when trained with relatively small amounts of annotated data from new domains. We propose using a model that has been trained with a large amount of labeled data as a pre-trained model, combined with transfer learning on a limited amount of highly effective training data selected from the target domain by active learning. Experimental evaluations show that 3D object detection models created using the proposed method perform well at a new location. We also confirm that active learning is particularly effective when only limited training data is available.
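
    The active-learning selection step can be sketched as an uncertainty-based acquisition loop; the lowest-confidence criterion and the callback names below are common stand-ins, not necessarily the paper's exact procedure:

    ```python
    def select_informative_samples(confidences, budget):
        """Return ids of the `budget` unlabeled frames with the lowest
        detection confidence, i.e. those a pre-trained detector is least
        sure about and which are likely most useful to annotate.

        `confidences` maps frame id -> mean detection score in [0, 1].
        """
        ranked = sorted(confidences, key=confidences.get)  # least confident first
        return ranked[:budget]

    def fine_tune_round(pretrained, unlabeled_scores, budget, annotate, train):
        """One round of the pretrain -> select -> annotate -> fine-tune loop.

        `annotate` and `train` are caller-supplied hypothetical callbacks.
        """
        picked = select_informative_samples(unlabeled_scores, budget)
        labels = {fid: annotate(fid) for fid in picked}
        return train(pretrained, labels)
    ```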

    DOI: 10.1007/978-3-031-26889-2_26

    Scopus

  6. Auditory and visual warning information generation of the risk object in driving scenes based on weakly supervised learning

    Niu, YJ; Ding, M; Zhang, YX; Ohtani, KT; Takeda, K

    2022 IEEE INTELLIGENT VEHICLES SYMPOSIUM (IV)   Vol. 2022-June   pp. 1572 - 1577   2022


    Language: English   Publisher: IEEE Intelligent Vehicles Symposium, Proceedings

    In this research, a two-stage risk object warning method is proposed to generate auditory and visual warning information simultaneously from the driving scene. The auditory warning module (AWM) is designed as a classification task by combining the rough location and type information into warning sentences and treating each sentence as one class. The visual warning module (VWM) is designed as a weakly supervised method to avoid the labor-intensive bounding box marking of risk objects. To confirm the effectiveness of the proposed method, we also create a linguistic risk notification (LRN) dataset by describing each driving scenario as several different sentences. The average accuracy of auditory warning is 96.4% for generating the warning sentences. The average accuracy of the weakly supervised visual warning algorithm is 81.3% for localizing the risk vehicle without any supervisory information.

    DOI: 10.1109/IV51971.2022.9827382

    Web of Science

    Scopus

  7. Methods of Gently Notifying Pedestrians of Approaching Objects when Listening to Music

    Sakashita, Y; Ishiguro, Y; Ohtani, K; Nishino, T; Takeda, K

    ADJUNCT PROCEEDINGS OF THE 35TH ACM SYMPOSIUM ON USER INTERFACE SOFTWARE & TECHNOLOGY, UIST 2022   2022


    Publisher: UIST 2022 Adjunct - Adjunct Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology

    Many people now listen to music with earphones while walking, and are less likely to notice approaching people, cars, etc. Many methods of detecting approaching objects and notifying pedestrians have been proposed, but few have focused on low urgency situations or music listeners, and many notification methods are unpleasant. Therefore, in this work, we propose methods of gently notifying pedestrians listening to music of approaching objects using environmental sound. We conducted experiments in a virtual environment to assess directional perception accuracy and comfort. Our results show the proposed method allows participants to detect the direction of approaching objects as accurately as explicit notification methods, with less discomfort.

    DOI: 10.1145/3526114.3558728

    Web of Science

    Scopus

  8. ViCE: Improving Dense Representation Learning by Superpixelization and Contrasting Cluster Assignment

    Karlsson R., Hayashi T., Fujii K., Carballo A., Ohtani K., Takeda K.

    BMVC 2022 - 33rd British Machine Vision Conference Proceedings   2022


    Publisher: BMVC 2022 - 33rd British Machine Vision Conference Proceedings

    Recent self-supervised models have demonstrated equal or better performance than supervised methods, opening for AI systems to learn visual representations from practically unlimited data. However, these methods are typically classification-based and thus ineffective for learning high-resolution feature maps that preserve precise spatial information. This work introduces superpixels to improve self-supervised learning of dense semantically rich visual concept embeddings. Decomposing images into a small set of visually coherent regions reduces the computational complexity by O(1000) while preserving detail. We experimentally show that contrasting over regions improves the effectiveness of contrastive learning methods, extends their applicability to high-resolution images, improves overclustering performance, superpixels are better than grids, and regional masking improves performance. The expressiveness of our dense embeddings is demonstrated by improving the SOTA unsupervised semantic segmentation benchmark on Cityscapes, and for convolutional models on COCO. Code is available at https://github.com/robin-karlsson0/vice.

    Scopus

  9. FollowSelect: A Path-Following Menu Selection Method Enabling Intuitive Navigation (Refereed)

    榮井 優介, 石黒 祥生, 大谷 健登, 西野 隆典, 武田 一哉

    Transactions of Information Processing Society of Japan (IPSJ Journal)   Vol. 62 ( 10 )   pp. 1669 - 1680   October 2021


    Language: Japanese   Type: Research paper (academic journal)

    DOI: 10.20729/00213195

  10. Manipulation of Speed Perception While in Motion Using Auditory Stimuli (Refereed)

    Yuta Kanayama, Yoshio Ishiguro, Takanori Nishino, Kento Ohtani, Kazuya Takeda

    2021 18th International Conference on Ubiquitous Robots (UR)   July 2021


    Language: English   Type: Research paper (international conference proceedings)

  11. Adding Singing Expression Through Natural Body Movements in an Electrolarynx-Based Singing System (Refereed)

    大川舜平, 石黒 祥生, 大谷 健登, 西野 隆典, 小林 和弘, 戸田 智基, 武田 一哉

    Proceedings of IPSJ Interaction 2021   pp. 261 - 266   March 2021


    Language: Japanese   Type: Research paper (workshop or symposium materials)

  12. End-to-End Autonomous Driving Capable of Branch Selection by Emphasizing Target Direction Information (Refereed)

    清谷 竣也, 大谷 健登, カルバヨ アレックサンダー, 竹内 栄二朗, 武田 一哉

    Transactions of the Society of Automotive Engineers of Japan   Vol. 52 ( 6 )   pp. 1368 - 1374   2021


    Language: Japanese   Publisher: Society of Automotive Engineers of Japan

    In this study, we investigate a method for following routes that include branches using end-to-end autonomous driving. We propose a training method in which L2 regularization constrains the magnitude of the weights applied to the input-image features, thereby emphasizing the other input, a vector indicating the direction to take at branches, so that the robot can follow routes that include branches.

    DOI: 10.11351/jsaeronbun.52.1368

    CiNii Research

  13. Driving Behavior Aware Caption Generation for Egocentric Driving Videos Using In-Vehicle Sensors

    Zhang, HK; Takeda, K; Sasano, R; Adachi, Y; Ohtani, K

    2021 IEEE INTELLIGENT VEHICLES SYMPOSIUM WORKSHOPS (IV WORKSHOPS)   pp. 287 - 292   2021


    Language: English   Publisher: IEEE Intelligent Vehicles Symposium, Proceedings

    Video captioning aims to generate textual descriptions according to the video contents. The risk assessment of autonomous driving vehicles has become essential for an insurance company for providing adequate insurance coverage, in particular, for emerging MaaS business. The insurers need to assess the risk of autonomous driving business plans with a fixed route by analyzing a large number of driving data, including videos recorded by dash cameras and sensor signals. To make the process more efficient, generating captions for driving videos can provide insurers concise information to understand the video contents quickly. A natural problem with driving video captioning is, since the absence of egovehicles in these egocentric videos, descriptions of latent driving behaviors are difficult to be grounded in specific visual cues. To address this issue, we focus on generating driving video captions with accurate behavior descriptions, and propose to incorporate in-vehicle sensors which encapsulate the driving behavior information to assist the caption generation. We evaluate our method on the Japanese driving video captioning dataset called City Traffic, where the results demonstrate the effectiveness of in-vehicle sensors on improving the overall performance of generated captions, especially on generating more accurate descriptions for the driving behaviors.

    DOI: 10.1109/IVWorkshops54471.2021.9669259

    Web of Science

    Scopus

  14. Improving target selection accuracy for vehicle touch screens

    Ito K., Nishino T., Ohtani K., Takeda K., Ishiguro Y.

    Adjunct Proceedings - 11th International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2019   pp. 176 - 180   September 2019


    Language: English   Publisher: Adjunct Proceedings - 11th International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2019

    When operating a touch screen in a car, the touch point can shift due to vibration, resulting in selection errors. Using larger targets is a possible solution, but this significantly limits the amount of content that can be displayed on the touch screen. Therefore, we propose a method for in-vehicle touch screen target selection that can be used with a variety of sensors to increase selection accuracy. In this method, vibration features are learned by a Variational Auto-Encoder-based model and used to estimate the touch point distribution. Our experimental results demonstrate that the proposed method allows users to achieve higher target selection accuracy than conventional methods.
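
    The distribution-based selection step can be sketched as follows; the paper derives the touch-point distribution from vibration features with a VAE-based model, whereas the fixed isotropic Gaussian spread here is only a stand-in assumption:

    ```python
    def most_likely_target(touch_xy, target_centers, sigma=8.0):
        """Pick the on-screen target that maximizes a Gaussian likelihood
        of the observed touch point, instead of a plain hit-box test.

        `target_centers` maps target name -> (x, y) center in pixels;
        `sigma` is an assumed touch-scatter scale in pixels.
        """
        def log_likelihood(center):
            dx = touch_xy[0] - center[0]
            dy = touch_xy[1] - center[1]
            return -(dx * dx + dy * dy) / (2.0 * sigma * sigma)

        return max(target_centers, key=lambda name: log_likelihood(target_centers[name]))
    ```

    With a learned, vibration-dependent distribution in place of the fixed Gaussian, the same argmax rule compensates for systematic touch-point shift.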

    DOI: 10.1145/3349263.3351327

    Scopus

  15. Musical Source Enhancement Using a Convolutional Denoising Autoencoder and Log-Frequency-Domain Amplitude Spectral Features

    大谷 健登, 丹羽 健太, 西野 隆典, 武田 一哉

    IEICE Transactions on Information and Systems (Japanese Edition)   Vol. J101-D ( 3 )   pp. 615 - 627   March 2018


    Language: Japanese

    This paper proposes a technique for enhancing individual instrument signals in music signals using a convolutional denoising autoencoder (CDAE) and log-frequency-domain amplitude spectral features. Most previous attempts to estimate the amplitude spectra of source signals with deep neural networks (DNNs) do not incorporate the physical properties of instrument signals into the DNN structure. Noting that the log-frequency-domain amplitude spectra of many instrument sounds can be modeled as the product of an envelope component unique to each instrument and a weighted sum of harmonic components corresponding to various fundamental frequencies, we expected that a DNN structure reflecting these characteristics would improve the accuracy of amplitude spectrum estimation. We propose a method that feeds log-frequency-domain amplitude spectral features into a CDAE to estimate the amplitude spectrum of the target sound, and confirm through experiments that it improves the signal-to-interference ratio (SIR) over conventional methods. Furthermore, exploiting the complementarity between the target sound and the noise, we simultaneously estimate the amplitude spectra of both and combine them, which further increases the SIR improvement.
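
    The final combination step, where jointly estimated target and noise spectra are exploited through their complementarity, can be sketched as a Wiener-style soft mask per frequency bin; the paper's exact combination rule may differ:

    ```python
    def soft_mask(target_mag, noise_mag, eps=1e-8):
        """Turn jointly estimated target and noise magnitude spectra into
        a per-bin soft mask t^2 / (t^2 + n^2); `eps` avoids division by
        zero in silent bins.
        """
        return [(t * t) / (t * t + n * n + eps)
                for t, n in zip(target_mag, noise_mag)]

    def enhance(mixture_mag, target_mag, noise_mag):
        """Apply the soft mask to the mixture magnitude spectrum."""
        return [m * w for m, w in
                zip(mixture_mag, soft_mask(target_mag, noise_mag))]
    ```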

    DOI: 10.14923/transinfj.2017pdp0021

    CiNii Research

  16. Musical Source Enhancement Using a Convolutional Denoising Autoencoder and Log-Frequency-Domain Spectral Features (Refereed)

    大谷 健登

    IEICE Transactions on Information and Systems (Japanese Edition)   Vol. J101-D   pp. 615 - 627   2018


  17. A Single-Dimensional Interface for Arranging Multiple Audio Sources in Three-Dimensional Space

    Ohtani, K; Niwa, K; Takeda, K

    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS   Vol. E100D ( 10 )   pp. 2635 - 2643   October 2017


    Language: English   Publisher: IEICE Transactions on Information and Systems

    A single-dimensional interface which enables users to obtain diverse localizations of audio sources is proposed. In many conventional interfaces for arranging audio sources, there are multiple arrangement parameters, some of which allow users to control positions of audio sources. However, it is difficult for users who are unfamiliar with these systems to optimize the arrangement parameters since the number of possible settings is huge. We propose a simple, single-dimensional interface for adjusting arrangement parameters, allowing users to sample several diverse audio source arrangements and easily find their preferred auditory localizations. To select subsets of arrangement parameters from all of the possible choices, auditory-localization space vectors (ASVs) are defined to represent the auditory localization of each arrangement parameter. By selecting subsets of ASVs which are approximately orthogonal, we can choose arrangement parameters which will produce diverse auditory localizations. Experimental evaluations were conducted using music composed of three audio sources. Subjective evaluations confirmed that novice users can obtain diverse localizations using the proposed interface.
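
    The subset-selection idea, choosing arrangement parameters whose ASVs are approximately orthogonal, can be illustrated with a simple greedy heuristic; the paper's actual selection procedure may differ:

    ```python
    def pick_diverse(asvs, k):
        """Greedily select k vectors that are approximately mutually
        orthogonal: repeatedly add the vector whose largest absolute
        cosine similarity to the already-chosen set is smallest.
        """
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            norm_a = sum(x * x for x in a) ** 0.5
            norm_b = sum(x * x for x in b) ** 0.5
            return dot / (norm_a * norm_b)

        chosen = [asvs[0]]              # seed with an arbitrary first vector
        rest = list(asvs[1:])
        while len(chosen) < k and rest:
            best = min(rest, key=lambda v: max(abs(cosine(v, c)) for c in chosen))
            chosen.append(best)
            rest.remove(best)
        return chosen
    ```

    Low pairwise cosine similarity between the selected ASVs corresponds to perceptually distinct auditory localizations, which is what lets a single slider expose diverse arrangements.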

    DOI: 10.1587/transinf.2017EDP7028

    Web of Science

    Scopus

    CiNii Research

  18. Music staging AI

    Niwa K., Ohtani K., Takeda K.

    ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings   Vol. 2017-March   pp. 6588 - 6589   2017


    Language: English   Publisher: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings

    Through smartphones, users can download and listen to music anytime and anywhere. As a concept for a future audio player, we propose a framework for music staging artificial intelligence (AI). In this framework, audio object signals, e.g. vocal, guitar, bass, drums and keyboards, are assumed to be extracted from stereo music signals. To visualize the music as if a live performance were being conducted virtually, playing motion sequences are estimated using the separated signals. After adjusting the spatial arrangement of the audio objects to each user's preference, audio/visual rendering is conducted. We constructed two types of demonstration systems for music staging AI. In the smartphone-based implementation, each user can change the spatial arrangement by dragging slider bars. Since information on each user's preferred spatial arrangement can be sent from the smartphone to a server, it would be possible to predict and recommend preferred spatial arrangements. In the other implementation, a head-mounted display (HMD) was used to dive into a virtual live music performance. Each user can walk or teleport anywhere, and the audio changes according to the user's view.

    DOI: 10.1109/ICASSP.2017.8005294

    Scopus


KAKENHI Grants 3

  1. Development of Explainable Tactical Evaluation Technology for Group Movement That Can Be Simulated from Video

    Project/Area Number: 23H03282   April 2023 - March 2026

    Grants-in-Aid for Scientific Research   Grant-in-Aid for Scientific Research (B)

    藤井 慶輔, カルバヨ アレックサンダー, 大谷 健登


    Role: Co-Investigator

    In the tactical evaluation of team sports, this project applies advances in measurement and machine learning technologies, aiming to develop player and ball tracking, simulation technology for group movement adapted to real conditions, and foundational technologies that are easy to use in the field. Sharing insights with the autonomous driving field, we develop simulation technology based on reinforcement-learning models trained on real data, and technologies that help improve the understanding and execution of team tactics.

  2. Cross-Disciplinary Research on Prediction and Control of Real-World Interaction Based on Evidence and Causality

    Project/Area Number: 21H04892   April 2021 - March 2024

    Grants-in-Aid for Scientific Research   Grant-in-Aid for Scientific Research (A)

    武田 一哉, 藤井 慶輔, 戸田 智基, 石黒 祥生, カルバヨ アレックサンダー, 宮島 千代美, 大谷 健登, 筒井 和詩, 竹内 栄二朗, 丁 明


    Role: Co-Investigator

    Methodologies for predicting and controlling real-world interaction, in which humans and the environment interact in the real world as in autonomous driving, are in demand. Although deep learning raises high expectations, reproducing real-world human behavior with machine learning remains difficult, owing to the difficulty of evidence-based modeling and of collecting data that covers every case. Research proceeds separately in each application area and has accumulated diverse findings, but the structure of the problem is shared. The purpose of this research is to confirm, cross-sectionally and empirically, that real-world interaction in driving, sports, and speech can be predicted and controlled by explicitly incorporating principles and causality into models, and to clarify the common structure of real-world interaction across multiple domains.
    In the current fiscal year, research proceeded along two main approaches. (1) We studied prediction technologies for interaction in each domain that are machine-learning-based, rule-based, or a combination of both. (2) We studied methods for human intervention in the control of interaction and for estimating intervention effects. For international collaboration, we hosted intern students from several universities in Spain and China, held workshops on autonomous driving at a conference (IV2021) and with a German university, and gave seminars at several Chinese universities.
    Regarding (1), in driving, studies on visibility estimation in complex real-world driving environments (Narksri et al., ITSC2021) and on personalized driving behavior prediction with driver-adaptive deep generative models (Bao et al., IV2021) were accepted at major international ITS (Intelligent Transportation Systems) conferences. In sports, research on evaluating players based on trajectory prediction in soccer was conducted and is currently under review at an international conference. In speech information processing, research on singing-voice fundamental-frequency pattern modeling based on a deep variational autoencoder that accounts for the speech production process was accepted at an international conference (Seki et al., IEEE MLSP, 2021). Regarding (2), in driving, work including a perception-intervention interface for autonomous driving (Kuribayashi et al., ITSC 2021) was accepted at major international ITS conferences. In addition, research on methods for estimating the outcomes of sequential interventions in complex cross-domain multi-agent scenarios, such as autonomous driving simulators, interventions in animal flocking models, and team sports plays, was conducted and is currently under review at an international conference.
    As results, as described in the research achievements, we developed technologies for predicting real-world interaction in various domains; in particular, we newly developed prediction technologies based on machine learning, rules, or a combination of both, as well as methods for human intervention in interaction control and for estimating intervention effects.
    Continuing from the previous year, the research will address (1) machine-learning-based, rule-based, or combined prediction technologies for real-world interaction in various domains and (2) methods for human intervention in interaction control and technologies for estimating intervention effects, and will additionally begin (3) implementing and evaluating methods that automatically determine control-intervention policies via reinforcement learning. We will also prepare to write a specialized book in English on ITS-related research topics. For international collaboration, we will continue to host interns from overseas universities, hold seminars and workshops, and support sending graduate students from the PI's and co-investigators' laboratories abroad.

  3. An Acoustic Space Reproduction System That Diversely Varies the Auditory Impression of Music by Controlling the Spatial Arrangement of Instruments

    Project/Area Number: 16J11472   April 2016 - March 2018

    Grants-in-Aid for Scientific Research   Grant-in-Aid for JSPS Fellows

    大谷 健登


    Role: Principal Investigator

    Grant amount: 1,300,000 yen ( Direct expenses: 1,300,000 yen )