Updated on 2024/03/28


 
TSUTAKE Chihiro
 
Organization
Graduate School of Engineering, Information and Communication Engineering 1, Assistant Professor
Graduate School
Graduate School of Engineering
Undergraduate School
School of Engineering, Electrical Engineering, Electronics, and Information Engineering
Title
Assistant Professor

Degree 1

  1. Doctor of Engineering ( 2020.3   University of Fukui )

Research Interests 4

  1. Compressed Sensing

  2. Convex Optimization

  3. Video Coding

  4. Image Processing

Research Areas 1

  1. Manufacturing Technology (Mechanical Engineering, Electrical and Electronic Engineering, Chemical Engineering) / Communication and network engineering

 

Papers 22

  1. Compressive Acquisition of Light Field Video Using Aperture-Exposure-Coded Camera

    Mizuno, R; Takahashi, K; Yoshida, M; Tsutake, C; Fujii, T; Nagahara, H

    ITE TRANSACTIONS ON MEDIA TECHNOLOGY AND APPLICATIONS   Vol. 12 ( 1 ) page: 22 - 35   2024


  2. Time-Multiplexed Coded Aperture and Coded Focal Stack: A Comparative Study of Snapshot Compressive Imaging Methods for Light Fields

    TATEISHI Kohei, TSUTAKE Chihiro, TAKAHASHI Keita, FUJII Toshiaki

    情報・システムソサイエティ誌 (IEICE Information and Systems Society Journal)   Vol. 28 ( 3 ) page: 7 - 7   2023.11


    Language:Japanese   Publisher:The Institute of Electronics, Information and Communication Engineers  

    DOI: 10.1587/ieiceissjournal.28.3_7

    CiNii Research

  3. Reconstructing Continuous Light Field From Single Coded Image

    Ishikawa, Y; Takahashi, K; Tsutake, C; Fujii, T

    IEEE ACCESS   Vol. 11   page: 99387 - 99396   2023


    Publisher:IEEE Access  

    We propose a method for reconstructing a continuous light field of a target scene from a single observed image. Our method takes the best of two worlds: joint aperture-exposure coding for compressive light-field acquisition, and a neural radiance field (NeRF) for view synthesis. Joint aperture-exposure coding implemented in a camera enables effective embedding of 3-D scene information into an observed image, but in previous works, it was used only for reconstructing discretized light-field views. NeRF-based neural rendering enables high quality view synthesis of a 3-D scene from continuous viewpoints, but when only a single image is given as the input, it struggles to achieve satisfactory quality. Our method integrates these two techniques into an efficient and end-to-end trainable pipeline. Trained on a wide variety of scenes, our method can reconstruct continuous light fields accurately and efficiently without any test time optimization. To our knowledge, this is the first work to bridge two worlds: camera design for efficiently acquiring 3-D information and neural rendering.

    DOI: 10.1109/ACCESS.2023.3314340

    Web of Science

    Scopus

  4. Direct Super Resolution for Multiplane Images

    Sato, C; Tsutake, C; Takahashi, K; Fujii, T

    ITE TRANSACTIONS ON MEDIA TECHNOLOGY AND APPLICATIONS   Vol. 11 ( 2 ) page: 34 - 42   2023


  5. Compressing Sign Information in DCT-based Image Coding via Deep Sign Retrieval

    Suzuki, K; Tsutake, C; Takahashi, K; Fujii, T

    ITE TRANSACTIONS ON MEDIA TECHNOLOGY AND APPLICATIONS   Vol. 12 ( 1 ) page: 110 - 122   2023


  6. Compressing Light Field as Multiplane Image

    Kawakami, M; Tsutake, C; Takahashi, K; Fujii, T

    ITE TRANSACTIONS ON MEDIA TECHNOLOGY AND APPLICATIONS   Vol. 11 ( 2 ) page: 27 - 33   2023


  7. Time-Multiplexed Coded Aperture and Coded Focal Stack: Comparative Study on Snapshot Compressive Light Field Imaging

    TATEISHI Kohei, TSUTAKE Chihiro, TAKAHASHI Keita, FUJII Toshiaki

    IEICE Transactions on Information and Systems   Vol. E105D ( 10 ) page: 1679 - 1690   2022.10


    Language:English   Publisher:The Institute of Electronics, Information and Communication Engineers  

    A light field (LF), which is represented as a set of dense, multi-view images, has been used in various 3D applications. To make LF acquisition more efficient, researchers have investigated compressive sensing methods by incorporating certain coding functionalities into a camera. In this paper, we focus on a challenging case called snapshot compressive LF imaging, in which an entire LF is reconstructed from only a single acquired image. To embed a large amount of LF information in a single image, we consider two promising methods based on rapid optical control during a single exposure: time-multiplexed coded aperture (TMCA) and coded focal stack (CFS), which were proposed individually in previous works. Both TMCA and CFS can be interpreted in a unified manner as extensions of the coded aperture (CA) and focal stack (FS) methods, respectively. By developing a unified algorithm pipeline for TMCA and CFS, based on deep neural networks, we evaluated their performance with respect to other possible imaging methods. We found that both TMCA and CFS can achieve better reconstruction quality than the other snapshot methods, and they also perform reasonably well compared to methods using multiple acquired images. To our knowledge, we are the first to present an overall discussion of TMCA and CFS and to compare and validate their effectiveness in the context of compressive LF imaging.

    DOI: 10.1587/transinf.2022PCP0003

    Web of Science

    Scopus

    CiNii Research

  8. Unrolled Network for Light Field Display

    MATSUURA Kotaro, TSUTAKE Chihiro, TAKAHASHI Keita, FUJII Toshiaki

    IEICE Transactions on Information and Systems   Vol. E105D ( 10 ) page: 1721 - 1725   2022.10


    Language:English   Publisher:The Institute of Electronics, Information and Communication Engineers  

    Inspired by the framework of algorithm unrolling, we propose a scalable network architecture that computes layer patterns for light field displays, enabling control of the trade-off between the display quality and the computational cost on a single pre-trained network.

    DOI: 10.1587/transinf.2022PCL0002

    Web of Science

    Scopus

    CiNii Research
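    The idea of algorithm unrolling mentioned in the entry above can be sketched in a few lines. The code below is only an illustration of the general pattern, not the architecture of this paper: a fixed number of learnable refinement stages update the display-layer patterns, and inference may stop after any stage to trade quality for computation. All names and layer sizes are hypothetical.

```python
import torch
import torch.nn as nn

class UnrolledLayerSolver(nn.Module):
    """Generic unrolled solver (illustrative only): each stage refines the
    layer patterns of a layered display; fewer stages means less computation."""
    def __init__(self, n_stages=5, n_layers=3):
        super().__init__()
        self.stages = nn.ModuleList(
            [nn.Conv2d(n_layers, n_layers, 3, padding=1) for _ in range(n_stages)])

    def forward(self, layers_init, n_used=None):
        x = layers_init                              # (B, n_layers, H, W) initial patterns
        for stage in self.stages[:n_used]:           # scalable: run only the first n_used stages
            x = torch.clamp(x + stage(x), 0.0, 1.0)  # refine, keep transmittances in [0, 1]
        return x

solver = UnrolledLayerSolver()
init = torch.full((1, 3, 64, 64), 0.5)
coarse = solver(init, n_used=2)   # cheap preview
fine = solver(init)               # all stages for best quality
```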

  9. Pixel-density enhanced integral three-dimensional display with two-dimensional image synthesis

    Watanabe Hayato, Arai Jun, Tsutake Chihiro, Takahashi Keita, Fujii Toshiaki

    OPTICS EXPRESS   Vol. 30 ( 20 ) page: 36038 - 36054   2022.9


    Publisher:Optics Express  

    Integral three-dimensional (3D) displays can display naturally viewable 3D images. However, displaying 3D images with high pixel density is difficult because the maximum pixel number is restricted by the number of lenses of a lens array. Therefore, we propose a method for increasing the maximum pixel density of 3D images by optically synthesizing the displayed images of an integral 3D display and high-definition two-dimensional display using a half mirror. We evaluated the improvements in 3D image resolution characteristics through simulation analysis of the modulation transfer function. We developed a prototype display system that can display 3D images with a maximum resolution of 4K and demonstrated the effectiveness of the proposed method.

    DOI: 10.1364/OE.469045

    Web of Science

    Scopus

  10. Denoising multi-view images by soft thresholding: A short-time DFT approach

    Tomita Keigo, Tsutake Chihiro, Takahashi Keita, Fujii Toshiaki

    SIGNAL PROCESSING-IMAGE COMMUNICATION   Vol. 105   2022.7


    Publisher:Signal Processing: Image Communication  

    Short-time discrete Fourier transform (ST-DFT) is known as a promising technique for image and video denoising. The seminal work by Saito and Komatsu hypothesized that natural video sequences can be represented by sparse ST-DFT coefficients and noisy video sequences can be denoised on the basis of statistical modeling and shrinkage of the ST-DFT coefficients. Motivated by their theory, we develop an application of ST-DFT for denoising multi-view images. We first show that multi-view images have sparse ST-DFT coefficients as well and then propose a new statistical model, which we call the multi-block Laplacian model, based on the block-wise sparsity of ST-DFT coefficients. We finally utilize this model to carry out denoising by solving a convex optimization problem, referred to as the least absolute shrinkage and selection operator. A closed-form solution can be computed by soft thresholding, and the optimal threshold value is derived by minimizing the error function in the ST-DFT domain. We demonstrate through experiments the effectiveness of our denoising method compared with several previous denoising techniques. Our method implemented in Python language is available from https://github.com/ctsutake/mviden.

    DOI: 10.1016/j.image.2022.116710

    Web of Science

    Scopus
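    The closed-form shrinkage step summarized in the entry above can be illustrated with a short, self-contained sketch. This is not the released implementation (that lives at https://github.com/ctsutake/mviden); it simply applies block-wise 2-D DFTs, soft thresholding with a user-chosen threshold `lam`, and inverse transforms, and it assumes the image dimensions are multiples of the block size.

```python
import numpy as np

def soft_threshold(coef, lam):
    # Closed-form LASSO solution for complex coefficients:
    # shrink each magnitude toward zero while keeping its phase.
    mag = np.abs(coef)
    return np.maximum(mag - lam, 0.0) / np.maximum(mag, 1e-12) * coef

def denoise_block_dft(img, block=8, lam=20.0):
    """Denoise a grayscale image (2-D float array) by soft thresholding of
    non-overlapping block-wise DFT coefficients; the DC term is preserved.
    A simplified stand-in for the short-time DFT pipeline described above."""
    out = np.zeros_like(img, dtype=np.float64)
    for i in range(0, img.shape[0], block):
        for j in range(0, img.shape[1], block):
            coef = np.fft.fft2(img[i:i + block, j:j + block])
            dc = coef[0, 0]
            coef = soft_threshold(coef, lam)
            coef[0, 0] = dc
            out[i:i + block, j:j + block] = np.fft.ifft2(coef).real
    return out

# Usage (sigma is an assumed noise standard deviation):
# denoised = denoise_block_dft(noisy_view.astype(np.float64), lam=3 * sigma)
```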

  11. Acquiring a Dynamic Light Field through a Single-Shot Coded Image

    Mizuno Ryoya, Takahashi Keita, Yoshida Michitaka, Tsutake Chihiro, Fujii Toshiaki, Nagahara Hajime

    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022)   Vol. 2022-June   page: 19798 - 19808   2022


    Publisher:Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition  

    We propose a method for compressively acquiring a dynamic light field (a 5-D volume) through a single-shot coded image (a 2-D measurement). We designed an imaging model that synchronously applies aperture coding and pixel-wise exposure coding within a single exposure time. This coding scheme enables us to effectively embed the original information into a single observed image. The observed image is then fed to a convolutional neural network (CNN) for light-field reconstruction, which is jointly trained with the camera-side coding patterns. We also developed a hardware prototype to capture a real 3-D scene moving over time. We succeeded in acquiring a dynamic light field with 5x5 viewpoints over 4 temporal sub-frames (100 views in total) from a single observed image. Repeating capture and reconstruction processes over time, we can acquire a dynamic light field at 4x the frame rate of the camera. To our knowledge, our method is the first to achieve a finer temporal resolution than the camera itself in compressive light-field acquisition. Our software is available from our project webpage: https://www.fujii.nuee.nagoya-u.ac.jp/Research/CompCam2

    DOI: 10.1109/CVPR52688.2022.01921

    Web of Science

    Scopus
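    The joint coding scheme described in the entry above reduces to a simple forward model, sketched below with NumPy. This is an assumption-laden toy (random binary codes, arbitrary sizes), not the authors' trained pipeline from the project page: a dynamic light field L[t, u, v, y, x] is modulated by a per-sub-frame aperture code and a pixel-wise exposure code and integrated into one 2-D measurement.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dynamic light field: 4 sub-frames, 5x5 viewpoints, 64x64 pixels.
T, U, V, H, W = 4, 5, 5, 64, 64
L = rng.random((T, U, V, H, W))

# Hypothetical binary codes (in the paper the patterns are learned jointly with the CNN).
aperture_code = rng.integers(0, 2, size=(T, U, V)).astype(np.float64)  # per-sub-frame aperture
exposure_code = rng.integers(0, 2, size=(T, H, W)).astype(np.float64)  # pixel-wise exposure

# Single-shot measurement: modulate and integrate over viewpoints and sub-frames.
measurement = np.einsum('tuvyx,tuv,tyx->yx', L, aperture_code, exposure_code)
print(measurement.shape)  # (64, 64): one coded image encodes the 5-D volume
```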

  12. Restoration of JPEG Compressed Image with Narrow Quantization Constraint Set without Parameter Optimization

    Tsutake Chihiro, Yoshida Toshiyuki

    ITE TRANSACTIONS ON MEDIA TECHNOLOGY AND APPLICATIONS   Vol. 10 ( 3 ) page: 130 - 139   2022


  13. Displaying Multiple 3D Scenes with a Single Layered Display

    Sato C., Tsutake C., Takahashi K., Fujii T.

    Proceedings of the International Display Workshops   Vol. 29   page: 596 - 599   2022


    Publisher:Proceedings of the International Display Workshops  

    We propose a method of displaying two different 3D scenes on a single layered light-field display, where the layer patterns are optimized for the two scenes simultaneously. We demonstrate that both scenes can be displayed in high quality when the viewing zones for them are separated sufficiently.

    Scopus

  14. Image Generation Method Using Weight Maps for Subjective Quality Improvement in Two-Dimensional Image Synthetic Integral Three-Dimensional Display

    Watanabe H., Arai J., Tsutake C., Takahashi K., Fujii T.

    Proceedings of the International Display Workshops   Vol. 29   page: 521 - 524   2022


    Publisher:Proceedings of the International Display Workshops  

    We propose an image generation method to display three-dimensional (3D) images with high maximum pixel density and improved subjective quality on a two-dimensional image synthetic integral 3D display. In addition to the target light field image, weight maps obtained from the depth information were used to generate the images.

    Scopus

  15. AN EFFICIENT COMPRESSION METHOD FOR SIGN INFORMATION OF DCT COEFFICIENTS VIA SIGN RETRIEVAL

    Tsutake Chihiro, Takahashi Keita, Fujii Toshiaki

    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP)   Vol. 2021-September   page: 2024 - 2028   2021


    Publisher:Proceedings - International Conference on Image Processing, ICIP  

    Compression of the sign information of discrete cosine transform coefficients is an intractable problem in image compression schemes due to the equiprobable occurrence of the sign bits. To overcome this difficulty, we propose an efficient compression method for such sign information based on phase retrieval, which is a classical signal restoration problem attempting to find the phase information of discrete Fourier transform coefficients from their magnitudes. In our compression strategy, the sign bits of all the AC components in the cosine domain are excluded from a bitstream at the encoder and are complemented at the decoder by solving a sign recovery problem, which we call sign retrieval. The experimental results demonstrate that the proposed method outperforms previous techniques for sign compression in terms of a rate-distortion criterion. Our method implemented in Python language is available from https://github.com/ctsutake/sr.

    DOI: 10.1109/ICIP42928.2021.9506155

    Web of Science

    Scopus
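    As a rough illustration of the decoder-side idea in the entry above (recovering discarded sign bits from magnitudes plus an image-domain prior), the sketch below runs a generic alternating-projection loop. It is not the algorithm of the paper, whose implementation is available at https://github.com/ctsutake/sr; the full-frame DCT, pixel-range constraint, and iteration count are all assumptions made for brevity.

```python
import numpy as np
from scipy.fft import dctn, idctn

def sign_retrieval(magnitude, dc, n_iter=200):
    """Toy sign recovery for a full-frame 2-D DCT: the encoder transmits the
    coefficient magnitudes and the signed DC term only; the AC signs are
    re-estimated by alternating between the known-magnitude constraint in the
    DCT domain and a valid pixel range [0, 255] in the image domain."""
    rng = np.random.default_rng(0)
    signs = np.where(rng.standard_normal(magnitude.shape) >= 0, 1.0, -1.0)
    for _ in range(n_iter):
        coef = signs * magnitude
        coef[0, 0] = dc                              # DC sign is kept by the encoder
        img = np.clip(idctn(coef, norm='ortho'), 0.0, 255.0)
        signs = np.where(dctn(img, norm='ortho') >= 0, 1.0, -1.0)
    coef = signs * magnitude
    coef[0, 0] = dc
    return idctn(coef, norm='ortho')                 # reconstructed image
```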

  16. FACTORIZED MODULATION FOR SINGLE-SHOT LIGHT-FIELD ACQUISITION

    Tateishi Kohei, Sakai Kohei, Tsutake Chihiro, Takahashi Keita, Fujii Toshiaki

    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP)   Vol. 2021-September   page: 3253 - 3257   2021


    Publisher:Proceedings - International Conference on Image Processing, ICIP  

    A light field (LF), which is represented as a set of dense multiview images, has been utilized in various 3D applications. To make LF acquisition more efficient, researchers have investigated compressive sensing methods by incorporating modulation or coding functions into the camera. In this work, we investigate a challenging case of compressive LF acquisition in which an entire LF should be reconstructed from only a single coded image. To achieve this goal, we propose a new modulation scheme called factorized modulation that can approximate arbitrary 4D modulation patterns in a factorized manner. Our method can be hardware-implemented by combining the architectures for coded aperture and pixel-wise coded exposure imaging. The modulation pattern is jointly optimized with a CNN-based reconstruction algorithm. Our method is validated through extensive evaluations against other modulation schemes.

    DOI: 10.1109/ICIP42928.2021.9506797

    Web of Science

    Scopus

  17. AN EFFICIENT IMAGE COMPRESSION METHOD BASED ON NEURAL NETWORK: AN OVERFITTING APPROACH

    Mikami Yu, Tsutake Chihiro, Takahashi Keita, Fujii Toshiaki

    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP)   Vol. 2021-September   page: 2084 - 2088   2021


    Publisher:Proceedings - International Conference on Image Processing, ICIP  

    Over the past decade, nonlinear image compression techniques based on neural networks have been rapidly developed to achieve more efficient storage and transmission of images compared with conventional linear techniques. A typical nonlinear technique is implemented as a neural network trained on a vast set of images, and the latent representation of a target image is transmitted. In contrast to the previous nonlinear techniques, we propose a new image compression method in which a neural network model is trained exclusively on a single target image, rather than a set of images. Such an overfitting strategy enables us to embed fine image features in not only the latent representation but also the network parameters, which helps reduce the reconstruction error against the target image. The effectiveness of our method is validated through a comparison with conventional image compression techniques in terms of a rate-distortion criterion.

    DOI: 10.1109/ICIP42928.2021.9506367

    Web of Science

    Scopus
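    The overfitting strategy described in the entry above can be caricatured in a few lines of PyTorch. The decoder architecture, sizes, and training schedule below are placeholders, not those used in the paper: a small decoder and a latent tensor are optimized against a single target image, and the (quantized) latent plus network weights would then form the bitstream.

```python
import torch
import torch.nn as nn

# Hypothetical miniature decoder overfitted to one image (illustrative sizes only).
target = torch.rand(1, 3, 128, 128)              # stand-in for the target image
latent = torch.randn(1, 8, 16, 16, requires_grad=True)

decoder = nn.Sequential(
    nn.ConvTranspose2d(8, 32, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(32, 32, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
)

opt = torch.optim.Adam([latent, *decoder.parameters()], lr=1e-3)
for step in range(2000):                         # overfit: train on this single image only
    opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(latent), target)
    loss.backward()
    opt.step()
# The bitstream would hold the quantized latent tensor and the decoder weights.
```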

  18. Video Denoising by BM3D Technique with an Improved Cube Construction and SURE Shrinkage Techniques

    Yamada Ryoya, Tsutake Chihiro, Yoshida Toshiyuki

    INTERNATIONAL WORKSHOP ON ADVANCED IMAGING TECHNOLOGY (IWAIT) 2021   Vol. 11766   2021


    Publisher:Proceedings of SPIE - The International Society for Optical Engineering  

    This paper attempts to improve the denoising efficiency of the BM3D technique for videos, i.e., VBM3D. VBM3D constructs 3D cubes from target video frames with a block-matching algorithm that minimizes the residual matching error. However, such cube formation sacrifices the pixel correlation in the temporal direction. This paper therefore modifies this step to preserve sub-pixel alignment, which makes the Fourier coefficients of each cube lie in the vicinity of a certain plane in the 3-D Fourier domain. The SURE-shrinkage technique is then applied separately to the inside and outside of this vicinity to denoise each cube. The experimental results given in this paper demonstrate the validity of our approach.

    DOI: 10.1117/12.2591104

    Web of Science

    Scopus
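    For readers unfamiliar with SURE shrinkage, the sketch below shows the generic rule for choosing a soft-threshold level by minimizing Stein's unbiased risk estimate under i.i.d. Gaussian noise on real-valued transform coefficients. It is background illustration only, not the cube construction or the inside/outside-of-plane split proposed in the paper.

```python
import numpy as np

def sure_soft_threshold(coefs, sigma):
    """Soft-threshold real-valued coefficients with the level that minimizes
    Stein's unbiased risk estimate (SURE) for i.i.d. Gaussian noise of
    standard deviation sigma. Generic textbook rule, shown for illustration."""
    y = np.abs(coefs).ravel()
    n = y.size
    best_t, best_risk = 0.0, np.inf
    for t in np.sort(y):                      # candidate thresholds
        risk = (n * sigma**2
                + np.sum(np.minimum(y, t) ** 2)
                - 2.0 * sigma**2 * np.count_nonzero(y <= t))
        if risk < best_risk:
            best_t, best_risk = t, risk
    return np.sign(coefs) * np.maximum(np.abs(coefs) - best_t, 0.0)

# Usage: shrunk = sure_soft_threshold(noisy_transform_coefs, sigma=10.0)
```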

  19. Acceleration of Sparse Representation Based Coded Exposure Photography Reviewed

    Chihiro Tsutake, Toshiyuki Yoshida

    The Journal of The Institute of Image Information and Television Engineers   Vol. 74 ( 1 ) page: 198-207   2020


    Authorship:Lead author   Language:Japanese   Publishing type:Research paper (scientific journal)  

    DOI: 10.3169/itej.74.198

  20. Vaguelette-Wavelet Deconvolution via Compressive Sampling Reviewed

    Chihiro Tsutake, Toshiyuki Yoshida

    IEEE Access   Vol. 7   page: 54533-54541   2019


    Language:English   Publishing type:Research paper (scientific journal)  

    DOI: 10.1109/access.2019.2913024

  21. Block-Matching-Based Implementation of Affine Motion Estimation for HEVC Reviewed

    Chihiro Tsutake, Toshiyuki Yoshida

    IEICE Transactions on Information and Systems   Vol. E101.D ( 4 ) page: 1151-1158   2018


    Authorship:Lead author   Language:English   Publishing type:Research paper (scientific journal)  

    DOI: 10.1587/transinf.2017EDP7201

  22. Fast Mode Decision Technique for HEVC Intra Prediction Based on Reliability Metric for Motion Vectors Reviewed

    Chihiro Tsutake, Yutaka Nakano, Toshiyuki Yoshida

    IEICE Transactions on Information and Systems   Vol. E99.D ( 4 ) page: 1193-1201   2016


    Authorship:Lead author   Language:English   Publishing type:Research paper (scientific journal)  

    DOI: 10.1587/transinf.2015EDP7244


KAKENHI (Grants-in-Aid for Scientific Research) 2

  1. Fundamental Study on Capturing and Processing Dynamic Light Fields Using a Shack-Hartmann Wavefront Sensor

    Grant number:22K17909  2022.4 - 2025.3

    Grants-in-Aid for Scientific Research  Grant-in-Aid for Early-Career Scientists

    TSUTAKE Chihiro


    Authorship:Principal investigator 

    Grant amount: ¥4,550,000 ( Direct Cost: ¥3,500,000, Indirect Cost: ¥1,050,000 )

    A Shack-Hartmann wavefront sensor is an optical device consisting of a microlens array and an image sensor placed at its back focal length. Previous studies have not examined acquisition and processing methods that actively exploit this camera-like structure (a microlens array plus an image sensor), and the sensor has not been applied as a camera that records the brightness of incident light rays. In this study, we therefore propose a camera system that captures, in real time, the brightness of rays arriving from various positions and directions (a light field). Specifically, we implement on the wavefront sensor an optical system capable of capturing a light field with motion (a dynamic light field) and conduct fundamental research on processing the captured dynamic light fields.
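    Because the sensor has a microlens-array-plus-image-sensor structure, its raw output can be reinterpreted as a light field by a simple rearrangement. The sketch below is an assumption-based illustration (a regular grid of s x s pixel patches per microlens, no calibration or vignetting handling), not part of the project itself.

```python
import numpy as np

def raw_to_light_field(raw, s):
    """Rearrange a raw image captured behind a microlens array into a 4-D
    light field L[u, v, y, x], assuming each microlens covers an s x s pixel
    patch and the raw image size is an exact multiple of s (illustrative only)."""
    H, W = raw.shape
    ny, nx = H // s, W // s
    patches = raw[:ny * s, :nx * s].reshape(ny, s, nx, s)
    return patches.transpose(1, 3, 0, 2)   # (u, v, y, x): angular dimensions first

# Example: a 1024x1024 raw frame with 16x16-pixel microlens patches
# yields a 16x16 grid of 64x64-pixel sub-aperture views.
lf = raw_to_light_field(np.zeros((1024, 1024)), s=16)
print(lf.shape)  # (16, 16, 64, 64)
```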

  2. A Study on Image Capturing and Processing System using Optical Control Device and Event Camera

    Grant number:21H03464  2021.4 - 2024.3

    Grants-in-Aid for Scientific Research  Grant-in-Aid for Scientific Research (B)


    Authorship:Coinvestigator(s) 

 

Teaching Experience (On-campus) 1

  1. Discrete Mathematics and Exercises

    2020