Updated on 2025/04/01


 
TSUTAKE Chihiro
 
Organization
Graduate School of Engineering, Information and Communication Engineering, Assistant Professor
Graduate School
Graduate School of Engineering
Undergraduate School
School of Engineering, Electrical Engineering, Electronics, and Information Engineering
Title
Assistant Professor
Contact information
Email address
External link

Degree 1

  1. Ph.D. in Engineering ( 2020.3   University of Fukui ) 

Research Interests 3

  1. Holography

  2. Phase Retrieval

  3. Multidimensional Signal Processing

Research Areas 3

  1. Informatics / Intelligent informatics

  2. Manufacturing Technology (Mechanical Engineering, Electrical and Electronic Engineering, Chemical Engineering) / Communication and network engineering

  3. Nanotechnology/Materials / Optical engineering and photon science

Professional Memberships 2

  1. The Institute of Image Information and Television Engineers

  2. The Institute of Electronics, Information and Communication Engineers

 

Papers 40

  1. Holographic Phase Retrieval via Wirtinger Flow: Cartesian Form with Auxiliary Amplitude Reviewed

    Optics Express   Vol. 32 ( 12 ) page: 20600 - 20617   2024.6


    Authorship:Corresponding author   Publishing type:Research paper (scientific journal)  

    DOI: 10.1364/OE.523855


    Other Link: https://arxiv.org/abs/2403.10560
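As a rough illustration of the Wirtinger-flow idea in the title, the sketch below solves a toy real-valued phase-retrieval problem: a random Gaussian sensing matrix stands in for the holographic measurement operator, with spectral initialization and plain gradient descent. This is not the paper's Cartesian formulation with an auxiliary amplitude; all sizes, constants, and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 16, 160                        # signal length, number of magnitude measurements
x = rng.standard_normal(n)            # ground-truth real signal
A = rng.standard_normal((m, n))       # toy stand-in for the hologram operator
b = np.abs(A @ x)                     # phaseless (magnitude-only) measurements

# Spectral initialization: top eigenvector of (1/m) * sum_i b_i^2 a_i a_i^T
Y = (A * (b ** 2)[:, None]).T @ A / m
_, V = np.linalg.eigh(Y)
z = V[:, -1] * np.sqrt(np.mean(b ** 2))   # scale estimate from E[b^2] ~ ||x||^2

# Wirtinger-flow iterations on f(z) = (1/2m) * sum_i ((a_i^T z)^2 - b_i^2)^2
lr = 0.2 / np.linalg.norm(z) ** 2
for _ in range(2000):
    Az = A @ z
    z = z - lr * (A.T @ ((Az ** 2 - b ** 2) * Az)) / m

# recovery is only defined up to a global sign flip
err = min(np.linalg.norm(z - x), np.linalg.norm(z + x)) / np.linalg.norm(x)
```

With ten-fold oversampling the iterates typically converge to the true signal up to the sign ambiguity.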

  2. An Efficient Compression Method for Sign Information of DCT Coefficients via Sign Retrieval Reviewed Open Access

    2021 IEEE International Conference on Image Processing     page: 2024 - 2028   2021


    Publishing type:Research paper (international conference proceedings)  

    DOI: 10.1109/ICIP42928.2021.9506155


    Other Link: https://arxiv.org/abs/2405.07487v1

  3. Denoising Multi-View Images by Soft Thresholding: A Short-Time DFT Approach Reviewed Open Access

      Vol. 105   2022.7

  4. Shack-Hartmann Holographic Stereogram: Acquisition of Light Field using Wavefront Sensor and Real-Time Holography Reviewed

    MATSUOKA Koki, TSUTAKE Chihiro, TAKAHASHI Keita, FUJII Toshiaki

      Vol. J107-D ( 10 ) page: 480 - 490   2024.10


    Language:Japanese   Publisher:The Institute of Electronics, Information and Communication Engineers  

To achieve comfortable 3D perception via holographic stereograms, we use a light field (LF) that is densely sampled along the viewing direction. We acquire the LF with a Shack-Hartmann wavefront sensor. Our system realizes the acquisition of dense LFs and their holographic display in real time.

    DOI: 10.14923/transinfj.2024iet0001


  5. Compressive Acquisition of Light Field Video Using Aperture-Exposure-Coded Camera

    Mizuno, R; Takahashi, K; Yoshida, M; Tsutake, C; Fujii, T; Nagahara, H

    ITE TRANSACTIONS ON MEDIA TECHNOLOGY AND APPLICATIONS   Vol. 12 ( 1 ) page: 22 - 35   2024


  6. [Paper] Compressing Sign Information in DCT-based Image Coding via Deep Sign Retrieval Reviewed Open Access

    Suzuki Kei, Tsutake Chihiro, Takahashi Keita, Fujii Toshiaki

    ITE Transactions on Media Technology and Applications   Vol. 12 ( 1 ) page: 110 - 122   2024


    Authorship:Corresponding author   Language:English   Publisher:The Institute of Image Information and Television Engineers  

Compressing the sign information of discrete cosine transform (DCT) coefficients is an intractable problem in image coding schemes due to the equiprobable characteristics of the signs. To overcome this difficulty, we propose an efficient compression method for the sign information called “sign retrieval.” This method is inspired by phase retrieval, which is a classical signal restoration problem of finding the phase information of discrete Fourier transform coefficients from their magnitudes. The sign information of all DCT coefficients is excluded from a bitstream at the encoder and is complemented at the decoder through our sign retrieval method. We show through experiments that our method outperforms previous ones in terms of the bit amount for the signs and computation cost. Our method, implemented in Python language, is available from https://github.com/ctsutake/dsr.

    DOI: 10.3169/mta.12.110


    Other Link: https://arxiv.org/abs/2209.10712
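The premise of this line of work, that DCT coefficient signs are equiprobable and therefore cannot be squeezed by entropy coding, can be checked numerically. The snippet below is only a toy measurement on a random stand-in block, not the paper's method; the actual contribution (restoring the omitted signs at the decoder by a phase-retrieval-style optimization) is beyond this sketch.

```python
import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(0)
img = rng.random((64, 64))            # stand-in image block (any content works here)
c = dctn(img, norm='ortho')
ac = c.ravel()[1:]                    # AC coefficients; the DC sign is known (pixels >= 0)

p = np.mean(ac > 0)                   # empirical probability of a positive sign
H = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))   # entropy per sign, in bits

bits_saved = ac.size                  # one bit per AC sign left out of the bitstream
```

Because `H` comes out at essentially 1 bit per sign, a direct entropy coder gains nothing, which is why the paper drops the signs entirely and recovers them at the decoder instead.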

  7. Warm-start NeRF: Accelerating Per-scene Training of NeRF-based Light-Field Representation

    Nishio T., Tsutake C., Takahashi K., Fujii T.

    2024 IEEE International Conference on Visual Communications and Image Processing, VCIP 2024     2024


    Publisher:2024 IEEE International Conference on Visual Communications and Image Processing, VCIP 2024  

    A light field is represented as a set of multi-view images captured from a dense 2-D array of viewpoints. To treat a light field as being continuous, we represent it as a neural radiance field (NeRF), which is a learned representation of a 3-D scene. NeRFs are renowned for their ability to reconstruct a target 3-D scene with compelling visual quality, but they are slow to train. A solution for this problem is to use a tiny neural network and trainable volumetric features as the scene representation, which is considered the baseline of our research. For further acceleration, we propose a method for warm-starting the per-scene training by setting good initial values for the trainable parameters. To this end, we introduce another encoder network to obtain the initial volumetric features from the target light field. Starting with the appropriate initial values, our method can achieve better rendering quality with fewer training iterations than the baseline.

    DOI: 10.1109/VCIP63160.2024.10849784


  8. Unsupervised Framerate Upsampling from Events

    Okuno, H; Tsutake, C; Takahashi, K; Fujii, T

    INTERNATIONAL WORKSHOP ON ADVANCED IMAGING TECHNOLOGY, IWAIT 2024   Vol. 13164   2024


    Publisher:Proceedings of SPIE - The International Society for Optical Engineering  

An event camera adopts a bio-inspired sensing mechanism that can record luminance changes over time. The recorded information, called events, is detected asynchronously at each pixel on the order of microseconds. Events are quite useful for framerate upsampling of a video because the information between the low-framerate video frames (key frames) can be supplemented from the events. We propose a method for framerate upsampling from events based on an unsupervised approach; our method does not require ground-truth high-framerate videos for pre-training but can be trained solely on the key frames and events taken from the target scene. We also report promising experimental results with a fast-moving scene captured by a DAVIS346 event camera.

    DOI: 10.1117/12.3018663

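The event model underlying this approach can be sketched directly: a pixel fires an event whenever its log intensity changes by a contrast threshold, so intermediate frames between two key frames can be synthesized by integrating event polarities onto the log of the key frame. The threshold value and the one-frame-per-event granularity below are illustrative assumptions, not the paper's trained method.

```python
import numpy as np

C = 0.2                                   # contrast threshold (assumed)
key = np.full((4, 4), 100.0)              # low-framerate key frame (linear intensity)

# synthetic events between two key frames: (y, x, polarity), in time order
events = [(1, 1, +1), (1, 1, +1), (2, 3, -1), (1, 1, +1)]

logL = np.log(key)
frames = []
for (y, x, p) in events:                  # integrate each event into log intensity
    logL[y, x] += p * C
    frames.append(np.exp(logL).copy())    # one upsampled frame per event (toy granularity)

print(frames[-1][1, 1])                   # pixel brightened by three positive events
```

In the last frame, pixel (1, 1) has brightness 100·e^(3·0.2) and pixel (2, 3) has 100·e^(−0.2), while unaffected pixels stay at 100.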

  9. Toward Neural Light-Field Compression

    Ishikawa, Y; Tsutake, C; Takahashi, K; Fujii, T

    INTERNATIONAL WORKSHOP ON ADVANCED IMAGING TECHNOLOGY, IWAIT 2024   Vol. 13164   2024


    Publisher:Proceedings of SPIE - The International Society for Optical Engineering  

    We propose a data compression method for a light field using a compact and computationally-efficient neural representation. We first train a neural network with learnable parameters to reproduce the target light field. We then compress the set of learned parameters as an alternative representation of the light field. Our method is significantly different in concept from the traditional approaches where a light field is encoded as a set of images or a video (as a pseudo-temporal sequence) using off-the-shelf image/video codecs. We experimentally show that our method achieves a promising rate-distortion performance.

    DOI: 10.1117/12.3018716


  10. Time-Efficient Light-Field Acquisition Using Coded Aperture and Events Open Access

    Habuchi S., Takahashi K., Tsutake C., Fujii T., Nagahara H.

    Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition     page: 24923 - 24933   2024


    Publisher:Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition  

We propose a computational imaging method for time-efficient light-field acquisition that combines a coded aperture with an event-based camera. Different from the conventional coded-aperture imaging method, our method applies a sequence of coding patterns during a single exposure for an image frame. The parallax information, which is related to the differences in coding patterns, is recorded as events. The image frame and events, all of which are measured in a single exposure, are jointly used to computationally reconstruct a light field. We also designed an algorithm pipeline for our method that is end-to-end trainable on the basis of deep optics and compatible with real camera hardware. We experimentally showed that our method can achieve more accurate reconstruction than several other imaging methods with a single exposure. We also developed a hardware prototype with the potential to complete the measurement on the camera within 22 msec and demonstrated that light fields from real 3-D scenes can be obtained with convincing visual quality. Our software and supplementary video are available from our project website.

    DOI: 10.1109/CVPR52733.2024.02354


  11. Mono+Sub: Compressing Light Field as Monocular Image and Subsidiary Data

    Imazu R., Tsutake C., Takahashi K., Fujii T.

    2024 IEEE International Conference on Visual Communications and Image Processing, VCIP 2024     2024


    Publisher:2024 IEEE International Conference on Visual Communications and Image Processing, VCIP 2024  

    A light field is usually represented as a set of multi-view images captured from a two-dimensional (2-D) array of viewpoints and requires a large amount of data compared with a standard 2-D image. We propose a 2-D compatible light-field compression method for encoding a light field as a 2-D monocular image and subsidiary data. In terms of the image quality, we prioritize the central image (regarded as the 2-D monocular image) over the other images in the light field, because the light field is considered an extension of the 2-D monocular image. To this end, we encode and decode the monocular image using a standard image codec and introduce a learned encoder and decoder pair for the subsidiary data. Experimental results indicate that our method achieved promising rate-distortion performance, especially for extremely low bit-rate ranges. Even though our method requires only a small amount of subsidiary data compared with those for the monocular image, the entire light field can be reconstructed with reasonable visual quality.

    DOI: 10.1109/VCIP63160.2024.10849860


  12. [Paper] Compressive Acquisition of Light Field Video Using Aperture-Exposure-Coded Camera Open Access

    Mizuno Ryoya, Takahashi Keita, Yoshida Michitaka, Tsutake Chihiro, Fujii Toshiaki, Nagahara Hajime

    ITE Transactions on Media Technology and Applications   Vol. 12 ( 1 ) page: 22 - 35   2024


    Language:English   Publisher:The Institute of Image Information and Television Engineers  

We propose a method for compressively acquiring a light field video using a single camera equipped with an optical aperture-exposure coding mechanism. The aperture-exposure coding is applied to each exposure time, enabling the embedding of the information of a light field video (a 5-D volume) into a single observed image (a 2-D measurement). Temporally-successive images obtained from the camera are used to computationally reconstruct the light field video at a faster frame rate than that of the camera. We also developed a hardware prototype to validate our method on real 3-D time-varying scenes. Using our method, we can obtain a light field video with 5 × 5 viewpoints over 4 temporal sub-frames (100 views in total) per observed image. By repeating the capture and reconstruction processes over time, we can acquire a light field video of arbitrary length at 4 × the frame rate of the camera. To the best of our knowledge, we are the first to propose a method of joint angular-temporal compression for light-field acquisition, achieving a finer temporal resolution than that of the camera. A supplementary video is available from https://youtu.be/FAujrak8Dok.

    DOI: 10.3169/mta.12.22

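The aperture-exposure coding described above can be sketched as a forward model: every sub-frame and viewpoint of the light-field video contributes to one observed image through two binary codes. The shapes and the use of random codes below are illustrative assumptions (the paper learns the codes jointly with the reconstruction network).

```python
import numpy as np

rng = np.random.default_rng(0)
T, V, H, W = 4, 25, 8, 8                   # sub-frames, views (5 x 5), image size
L = rng.random((T, V, H, W))               # ground-truth light-field video block

aperture = rng.integers(0, 2, (T, V)).astype(float)     # per-sub-frame aperture code
exposure = rng.integers(0, 2, (T, H, W)).astype(float)  # per-pixel exposure code

# single observed image: every sub-frame/view contributes through its two codes
I = np.einsum('tv,thw,tvhw->hw', aperture, exposure, L)

print(I.shape)   # a 5-D volume (T, V, H, W over pixels) folded into one 2-D measurement
```

Reconstruction then inverts this heavily underdetermined map, which is where the trained CNN in the paper comes in.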

  13. Time-Multiplexed Coded Aperture and Coded Focal Stack: A Comparative Study on Snapshot Compressive Light Field Imaging

    立石 航平, 都竹 千尋, 高橋 桂太, 藤井 俊彰

    IEICE Information and Systems Society Journal   Vol. 28 ( 3 ) page: 7 - 7   2023.11


    Language:Japanese   Publisher:The Institute of Electronics, Information and Communication Engineers  

    DOI: 10.1587/ieiceissjournal.28.3_7


  14. Convex-Optimization-Based Phase-Retrieval Holography via the PhaseMax Method

    和田 達希, 都竹 千尋, 高橋 桂太, 藤井 俊彰

    Proceedings of the 3D Image Conference   Vol. 31 ( 0 ) page: 13 - 16   2023.7


    Language:Japanese   Publisher:3D Image Conference Executive Committee  

    DOI: 10.60374/sanjigen.31.0_13


  15. A Fundamental Study on Hologram Compression Based on Deep Learning

    渡部 義貴, 和田 達希, 都竹 千尋, 高橋 桂太, 藤井 俊彰

    Proceedings of the 3D Image Conference   Vol. 31 ( 0 ) page: 31 - 34   2023.7


    Language:Japanese   Publisher:3D Image Conference Executive Committee  

    DOI: 10.60374/sanjigen.31.0_31


  16. Reconstructing Continuous Light Field From Single Coded Image Open Access

    Ishikawa, Y; Takahashi, K; Tsutake, C; Fujii, T

    IEEE ACCESS   Vol. 11   page: 99387 - 99396   2023


    Publisher:IEEE Access  

    We propose a method for reconstructing a continuous light field of a target scene from a single observed image. Our method takes the best of two worlds: joint aperture-exposure coding for compressive light-field acquisition, and a neural radiance field (NeRF) for view synthesis. Joint aperture-exposure coding implemented in a camera enables effective embedding of 3-D scene information into an observed image, but in previous works, it was used only for reconstructing discretized light-field views. NeRF-based neural rendering enables high quality view synthesis of a 3-D scene from continuous viewpoints, but when only a single image is given as the input, it struggles to achieve satisfactory quality. Our method integrates these two techniques into an efficient and end-to-end trainable pipeline. Trained on a wide variety of scenes, our method can reconstruct continuous light fields accurately and efficiently without any test time optimization. To our knowledge, this is the first work to bridge two worlds: camera design for efficiently acquiring 3-D information and neural rendering.

    DOI: 10.1109/ACCESS.2023.3314340


  17. Direct Super Resolution for Multiplane Images

    Sato, C; Tsutake, C; Takahashi, K; Fujii, T

    ITE TRANSACTIONS ON MEDIA TECHNOLOGY AND APPLICATIONS   Vol. 11 ( 2 ) page: 34 - 42   2023


  18. Compressing Sign Information in DCT-based Image Coding via Deep Sign Retrieval

    Suzuki, K; Tsutake, C; Takahashi, K; Fujii, T

    ITE TRANSACTIONS ON MEDIA TECHNOLOGY AND APPLICATIONS   Vol. 12 ( 1 ) page: 110 - 122   2024


  19. Compressing Light Field as Multiplane Image

    Kawakami, M; Tsutake, C; Takahashi, K; Fujii, T

    ITE TRANSACTIONS ON MEDIA TECHNOLOGY AND APPLICATIONS   Vol. 11 ( 2 ) page: 27 - 33   2023


  20. [Paper] Compressing Light Field as Multiplane Image Open Access

    Kawakami Masaki, Tsutake Chihiro, Takahashi Keita, Fujii Toshiaki

    ITE Transactions on Media Technology and Applications   Vol. 11 ( 2 ) page: 27 - 33   2023


    Language:English   Publisher:The Institute of Image Information and Television Engineers  

A light field contains a large amount of data because it is represented as a dense set of multi-view images. We propose a method of compressing a light field as a newly emerging representation called a multiplane image (MPI), a graphics-oriented representation composed of a stack of semi-transparent images. Our method was constructed on a deep convolutional neural network (CNN), which was trained to generate an MPI from the given light field. To draw out the potential of this representation, we trained the CNN to be overfitted to the target light field. We also encouraged spatial smoothness to make the MPI easier to compress. Despite being in the early stage of development, our method has already achieved promising rate-distortion performance.

    DOI: 10.3169/MTA.11.27


  21. [Paper] Direct Super Resolution for Multiplane Images Open Access

    Sato Chisaki, Tsutake Chihiro, Takahashi Keita, Fujii Toshiaki

    ITE Transactions on Media Technology and Applications   Vol. 11 ( 2 ) page: 34 - 42   2023


    Language:English   Publisher:The Institute of Image Information and Television Engineers  

A multiplane image (MPI) is a useful 3-D representation composed of a stack of semi-transparent images, from which arbitrary views can be rendered with little computational cost. In this paper, we tackled the problem of super-resolution for MPIs, where a high-resolution MPI is inferred from a lower resolution one. By analyzing the anti-aliasing condition for the light field that would be produced from an MPI, we clarified that such a high-resolution MPI should have smaller sampling intervals over not only the spatial dimension but also the depth dimension. On the basis of this analysis, we constructed a learning-based method to transform a low-resolution MPI into a higher resolution one with depth resolution enhancement. Tested on the BasicLFSR dataset, our method achieved 30.54 dB on average, which was 1.29 dB higher than the case without depth resolution enhancement. Visual results indicated that our method can accurately restore high-frequency components. Although super-resolution techniques have been studied extensively for images, videos, and light fields, this is the first work to address the problem of direct super-resolution for MPIs.

    DOI: 10.3169/MTA.11.34

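The claim that arbitrary views can be rendered from an MPI with little computational cost follows from back-to-front "over" compositing of the shifted planes. Below is a minimal grayscale sketch; the linear disparity model, the sizes, and the random plane contents are illustrative assumptions, not the paper's data.

```python
import numpy as np

D, H, W = 3, 4, 6                                    # depth planes, height, width
rgb = np.random.default_rng(1).random((D, H, W))     # per-plane color (grayscale toy)
alpha = np.random.default_rng(2).random((D, H, W))   # per-plane transparency

def render(rgb, alpha, dx):
    """Back-to-front 'over' compositing; each plane shifts with its disparity."""
    out = np.zeros((H, W))
    for d in range(D):                 # plane 0 = farthest from the viewer
        shift = dx * (D - 1 - d)       # nearer planes move more with the viewpoint
        c = np.roll(rgb[d], shift, axis=1)
        a = np.roll(alpha[d], shift, axis=1)
        out = c * a + out * (1.0 - a)
    return out

center = render(rgb, alpha, 0)         # central view
left = render(rgb, alpha, 1)           # novel view: just shift and re-composite
```

Each extra view costs only a few rolls and multiplications, which is what makes the MPI attractive as a rendering-side representation.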

  22. Time-Multiplexed Coded Aperture and Coded Focal Stack -Comparative Study on Snapshot Compressive Light Field Imaging Open Access

    TATEISHI Kohei, TSUTAKE Chihiro, TAKAHASHI Keita, FUJII Toshiaki

    IEICE Transactions on Information and Systems   Vol. E105D ( 10 ) page: 1679 - 1690   2022.10


    Language:English   Publisher:The Institute of Electronics, Information and Communication Engineers  

A light field (LF), which is represented as a set of dense, multi-view images, has been used in various 3D applications. To make LF acquisition more efficient, researchers have investigated compressive sensing methods by incorporating certain coding functionalities into a camera. In this paper, we focus on a challenging case called snapshot compressive LF imaging, in which an entire LF is reconstructed from only a single acquired image. To embed a large amount of LF information in a single image, we consider two promising methods based on rapid optical control during a single exposure: time-multiplexed coded aperture (TMCA) and coded focal stack (CFS), which were proposed individually in previous works. Both TMCA and CFS can be interpreted in a unified manner as extensions of the coded aperture (CA) and focal stack (FS) methods, respectively. By developing a unified algorithm pipeline for TMCA and CFS, based on deep neural networks, we evaluated their performance with respect to other possible imaging methods. We found that both TMCA and CFS can achieve better reconstruction quality than the other snapshot methods, and they also perform reasonably well compared to methods using multiple acquired images. To our knowledge, we are the first to present an overall discussion of TMCA and CFS and to compare and validate their effectiveness in the context of compressive LF imaging.

    DOI: 10.1587/transinf.2022PCP0003


  23. Unrolled Network for Light Field Display Open Access

    MATSUURA Kotaro, TSUTAKE Chihiro, TAKAHASHI Keita, FUJII Toshiaki

    IEICE Transactions on Information and Systems   Vol. E105D ( 10 ) page: 1721 - 1725   2022.10


    Language:English   Publisher:The Institute of Electronics, Information and Communication Engineers  

Inspired by the framework of algorithm unrolling, we propose a scalable network architecture that computes layer patterns for light field displays, enabling control of the trade-off between the display quality and the computational cost on a single pre-trained network.

    DOI: 10.1587/transinf.2022PCL0002

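Algorithm unrolling, as used above, treats a fixed number of iterative update steps as network stages, so truncating the stages trades quality for compute. The sketch below runs plain (un-learned) unrolled gradient steps on a toy two-layer multiplicative display; the display model, step size, and stage count are illustrative assumptions, not the paper's trained network.

```python
import numpy as np

# Toy layered display: two multiplicative layers, three viewpoints u in {-1, 0, 1}.
# A view is the product of the rear layer and the front layer shifted by u.
rng = np.random.default_rng(0)
N, us = 16, [-1, 0, 1]
target = rng.random((3, N)) * 0.5 + 0.25     # target light field (one row per view)

front = np.full(N, 0.5)
rear = np.full(N, 0.5)

def views(front, rear):
    return np.stack([np.roll(front, u) * rear for u in us])

mse0 = np.mean((views(front, rear) - target) ** 2)

# "Unrolled" optimization: a fixed number of gradient steps, each step playing
# the role of one network stage (more stages = better quality, more compute)
lr = 0.5
for _ in range(200):
    err = views(front, rear) - target
    g_front = np.stack([np.roll(err[i] * rear, -u) for i, u in enumerate(us)]).sum(0)
    g_rear = (err * np.stack([np.roll(front, u) for u in us])).sum(0)
    front = np.clip(front - lr * g_front / 3, 0, 1)
    rear = np.clip(rear - lr * g_rear / 3, 0, 1)

mse = np.mean((views(front, rear) - target) ** 2)
```

The paper's contribution is to learn such stages and make their number adjustable at run time on one pre-trained network.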

  24. Pixel-density enhanced integral three-dimensional display with two-dimensional image synthesis Open Access

    Watanabe, H; Rai, J; Tsutake, C; Takahashi, K; Fujii, T

    OPTICS EXPRESS   Vol. 30 ( 20 ) page: 36038 - 36054   2022.9


    Publisher:Optics Express  

    Integral three-dimensional (3D) displays can display naturally viewable 3D images. However, displaying 3D images with high pixel density is difficult because the maximum pixel number is restricted by the number of lenses of a lens array. Therefore, we propose a method for increasing the maximum pixel density of 3D images by optically synthesizing the displayed images of an integral 3D display and high-definition two-dimensional display using a half mirror. We evaluated the improvements in 3D image resolution characteristics through simulation analysis of the modulation transfer function. We developed a prototype display system that can display 3D images with a maximum resolution of 4K and demonstrated the effectiveness of the proposed method.

    DOI: 10.1364/OE.469045


  25. Capturing Dynamic Light Fields with an SHW Sensor and 3D Display on a Layered Display

    松岡 恒希, 佐藤 千幸, 都竹 千尋, 高橋 桂太, 藤井 俊彰

    Proceedings of the 3D Image Conference   Vol. 30 ( 0 ) page: 75 - 78   2022.7


    Language:Japanese   Publisher:3D Image Conference Executive Committee  

    DOI: 10.60374/sanjigen.30.0_75


  26. Depth Completion Based on Multi-View Stereo and Monocular Depth Estimation and Its Application to Free-Viewpoint Video Generation

    木舩 涼太, 都竹 千尋, 高橋 桂太, 藤井 俊彰

    Proceedings of the 3D Image Conference   Vol. 30 ( 0 ) page: 39 - 42   2022.7


    Language:Japanese   Publisher:3D Image Conference Executive Committee  

    DOI: 10.60374/sanjigen.30.0_39


  27. A Study on Simultaneous Stereoscopic Display of Multiple Scenes on a Layered Display

    佐藤 千幸, 松浦 孝太朗, 都竹 千尋, 高橋 桂太, 藤井 俊彰

    Proceedings of the 3D Image Conference   Vol. 30 ( 0 ) page: 71 - 74   2022.7


    Language:Japanese   Publisher:3D Image Conference Executive Committee  

    DOI: 10.60374/sanjigen.30.0_71


  28. Acquiring a Dynamic Light Field through a Single-Shot Coded Image Open Access

    Mizuno, R; Takahashi, K; Yoshida, M; Tsutake, C; Fujii, T; Nagahara, H

    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022)   Vol. 2022-June   page: 19798 - 19808   2022


    Publisher:Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition  

We propose a method for compressively acquiring a dynamic light field (a 5-D volume) through a single-shot coded image (a 2-D measurement). We designed an imaging model that synchronously applies aperture coding and pixel-wise exposure coding within a single exposure time. This coding scheme enables us to effectively embed the original information into a single observed image. The observed image is then fed to a convolutional neural network (CNN) for light-field reconstruction, which is jointly trained with the camera-side coding patterns. We also developed a hardware prototype to capture a real 3-D scene moving over time. We succeeded in acquiring a dynamic light field with 5 × 5 viewpoints over 4 temporal sub-frames (100 views in total) from a single observed image. Repeating capture and reconstruction processes over time, we can acquire a dynamic light field at 4 × the frame rate of the camera. To our knowledge, our method is the first to achieve a finer temporal resolution than the camera itself in compressive light-field acquisition. Our software is available from our project webpage: https://www.fujii.nuee.nagoya-u.ac.jp/Research/CompCam2

    DOI: 10.1109/CVPR52688.2022.01921


  29. Restoration of JPEG Compressed Image with Narrow Quantization Constraint Set without Parameter Optimization

    Tsutake, C; Yoshida, T

    ITE TRANSACTIONS ON MEDIA TECHNOLOGY AND APPLICATIONS   Vol. 10 ( 3 ) page: 130 - 139   2022


  30. Displaying Multiple 3D Scenes with a Single Layered Display

    Sato C., Tsutake C., Takahashi K., Fujii T.

    Proceedings of the International Display Workshops   Vol. 29   page: 596 - 599   2022


    Publisher:Proceedings of the International Display Workshops  

    We propose a method of displaying two different 3D scenes on a single layered light-field display, where the layer patterns are optimized for the two scenes simultaneously. We demonstrate that both scenes can be displayed in high quality when the viewing zones for them are separated sufficiently.


  31. Image Generation Method Using Weight Maps for Subjective Quality Improvement in Two-Dimensional Image Synthetic Integral Three-Dimensional Display

    Watanabe H., Arai J., Tsutake C., Takahashi K., Fujii T.

    Proceedings of the International Display Workshops   Vol. 29   page: 521 - 524   2022


    Publisher:Proceedings of the International Display Workshops  

    We propose an image generation method to display three-dimensional (3D) images with high maximum pixel density and improved subjective quality on a two-dimensional image synthetic integral 3D display. In addition to the target light field image, weight maps obtained from the depth information were used to generate the images.


  32. The Multi Plane Image Representation Viewed as a Layered Display

    佐藤 千幸, 川上 真生, 都竹 千尋, 高橋 桂太, 藤井 俊彰

    Proceedings of the 3D Image Conference   Vol. 29 ( 0 ) page: 97 - 100   2021.7


    Language:Japanese   Publisher:3D Image Conference Executive Committee  

    DOI: 10.60374/sanjigen.29.0_97


  33. A Study on Time-Division Multiplexing and Peripheral-Viewing-Zone Image Quality toward Widening the Viewing Zone of Layered Displays

    松浦 孝太朗, 都竹 千尋, 高橋 桂太, 藤井 俊彰, 伊達 宗和, 志水 信哉

    Proceedings of the 3D Image Conference   Vol. 29 ( 0 ) page: 93 - 96   2021.7


    Language:Japanese   Publisher:3D Image Conference Executive Committee  

    DOI: 10.60374/sanjigen.29.0_93


  34. AN EFFICIENT IMAGE COMPRESSION METHOD BASED ON NEURAL NETWORK: AN OVERFITTING APPROACH

    Mikami, Y; Tsutake, C; Takahashi, K; Fujii, T

    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP)   Vol. 2021-September   page: 2084 - 2088   2021


    Publisher:Proceedings - International Conference on Image Processing, ICIP  

    Over the past decade, nonlinear image compression techniques based on neural networks have been rapidly developed to achieve more efficient storage and transmission of images compared with conventional linear techniques. A typical nonlinear technique is implemented as a neural network trained on a vast set of images, and the latent representation of a target image is transmitted. In contrast to the previous nonlinear techniques, we propose a new image compression method in which a neural network model is trained exclusively on a single target image, rather than a set of images. Such an overfitting strategy enables us to embed fine image features in not only the latent representation but also the network parameters, which helps reduce the reconstruction error against the target image. The effectiveness of our method is validated through a comparison with conventional image compression techniques in terms of a rate-distortion criterion.

    DOI: 10.1109/ICIP42928.2021.9506367

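The overfitting idea above (embedding fine image features in parameters fitted to a single target image) can be sketched with a linear stand-in for the neural network: random Fourier features with a least-squares readout fitted to one image only, where the readout weights play the role of the transmitted code. Everything below is an illustrative assumption, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 16
yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing='ij')
img = np.sin(xx / 3.0) * np.cos(yy / 4.0)            # stand-in "target image"

# Tiny coordinate model: fixed random Fourier features + a linear readout,
# fitted (overfitted) to this single image only
coords = np.stack([xx.ravel(), yy.ravel()], 1) / H   # (256, 2) normalized coordinates
B = rng.standard_normal((2, 32)) * 4.0               # fixed random frequencies
feats = np.concatenate([np.sin(coords @ B), np.cos(coords @ B)], 1)  # (256, 64)

w, *_ = np.linalg.lstsq(feats, img.ravel(), rcond=None)
recon = (feats @ w).reshape(H, W)

mse = np.mean((recon - img) ** 2)
ratio = w.size / img.size                            # 64 parameters vs 256 pixels
```

The point of the paper is that a model trained only on the target image can spend all of its capacity on that image, so the parameters themselves act as a compact code.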

  35. Video Denoising by BM3D Technique with an Improved Cube Construction and SURE Shrinkage Techniques

    Yamada, R; Tsutake, C; Yoshida, T

    INTERNATIONAL WORKSHOP ON ADVANCED IMAGING TECHNOLOGY (IWAIT) 2021   Vol. 11766   2021


    Publisher:Proceedings of SPIE - The International Society for Optical Engineering  

This paper attempts to improve the denoising efficiency of the BM3D technique for videos, i.e., VBM3D. VBM3D constructs 3-D cubes from target video frames by a block-matching algorithm that minimizes the residual matching error. However, such a cube formation sacrifices pixel correlation in the temporal direction. This paper thus modifies this step to preserve sub-pixel alignment, which makes the Fourier coefficients of each cube lie in the vicinity of a certain plane in the 3-D Fourier domain. The SURE-shrinkage technique is then applied separately to the inside and the outside of that vicinity to denoise each cube. The experimental results given in this paper demonstrate the validity of our approach.

    DOI: 10.1117/12.2591104

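The shrinkage step central to this line of work can be illustrated in 1-D: transform, soft-threshold, inverse-transform. The sketch below uses a DCT and the universal threshold rather than the paper's SURE-optimized, plane-split shrinkage, so it is a simplified stand-in with assumed constants.

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
n, sigma = 256, 0.5
t = np.linspace(0, 4 * np.pi, n)
clean = np.sin(t)                            # smooth "frame"
noisy = clean + sigma * rng.standard_normal(n)

c = dct(noisy, norm='ortho')
thr = sigma * np.sqrt(2 * np.log(n))         # universal threshold (SURE would tune this)
denoised = idct(np.sign(c) * np.maximum(np.abs(c) - thr, 0), norm='ortho')

mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
```

Because the smooth signal concentrates its energy in a few large coefficients while the noise spreads evenly, thresholding removes most of the noise at a small cost in signal energy.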

  36. FACTORIZED MODULATION FOR SINGLE-SHOT LIGHT-FIELD ACQUISITION

    Tateishi, K; Sakai, K; Tsutake, C; Takahashi, K; Fujii, T

    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP)   Vol. 2021-September   page: 3253 - 3257   2021


    Publisher:Proceedings - International Conference on Image Processing, ICIP  

A light field (LF), which is represented as a set of dense multi-view images, has been utilized in various 3D applications. To make LF acquisition more efficient, researchers have investigated compressive sensing methods by incorporating modulation or coding functions into the camera. In this work, we investigate a challenging case of compressive LF acquisition in which an entire LF should be reconstructed from only a single coded image. To achieve this goal, we propose a new modulation scheme called factorized modulation that can approximate arbitrary 4-D modulation patterns in a factorized manner. Our method can be hardware-implemented by combining the architectures for coded aperture and pixel-wise coded exposure imaging. The modulation pattern is jointly optimized with a CNN-based reconstruction algorithm. Our method is validated through extensive evaluations against other modulation schemes.

    DOI: 10.1109/ICIP42928.2021.9506797


  37. Acceleration of Sparse Representation Based Coded Exposure Photography Reviewed

    Chihiro Tsutake, Toshiyuki Yoshida

    The Journal of The Institute of Image Information and Television Engineers   Vol. 74 ( 1 ) page: 198-207   2020


    Authorship:Lead author   Language:Japanese   Publishing type:Research paper (scientific journal)  

    DOI: 10.3169/itej.74.198

  38. Vaguelette-Wavelet Deconvolution via Compressive Sampling Reviewed

    Chihiro Tsutake, Toshiyuki Yoshida

    IEEE Access   Vol. 7   page: 54533-54541   2019


    Language:English   Publishing type:Research paper (scientific journal)  

    DOI: 10.1109/access.2019.2913024

  39. Block-Matching-Based Implementation of Affine Motion Estimation for HEVC Reviewed

    Chihiro Tsutake, Toshiyuki Yoshida

    IEICE Transactions on Information and Systems   Vol. E101.D ( 4 ) page: 1151-1158   2018


    Authorship:Lead author   Language:English   Publishing type:Research paper (scientific journal)  

    DOI: 10.1587/transinf.2017EDP7201

  40. Fast Mode Decision Technique for HEVC Intra Prediction Based on Reliability Metric for Motion Vectors Reviewed

    Chihiro Tsutake, Yutaka Nakano, Toshiyuki Yoshida

    IEICE Transactions on Information and Systems   Vol. E99.D ( 4 ) page: 1193-1201   2016


    Authorship:Lead author   Language:English   Publishing type:Research paper (scientific journal)  

    DOI: 10.1587/transinf.2015EDP7244


KAKENHI (Grants-in-Aid for Scientific Research) 2

  1. Fundamental Research on the Acquisition and Processing of Dynamic Light Fields Using a Shack-Hartmann Wavefront Sensor

    Grant number:22K17909  2022.4 - 2025.3

    Grants-in-Aid for Scientific Research  Grant-in-Aid for Early-Career Scientists

    都竹 千尋


    Authorship:Principal investigator 

    Grant amount: ¥4,550,000 ( Direct Cost: ¥3,500,000, Indirect Cost: ¥1,050,000 )

    A Shack-Hartmann wavefront sensor is an optical instrument consisting of a microlens array and an image sensor placed at its back focal length. Previous studies have not explored acquisition and processing methods that actively exploit this camera-like structure (a microlens array plus an image sensor), and the device has not been applied as a camera that records the brightness of light rays. This research therefore proposes a camera system that captures, in real time, the brightness of rays incident from various positions and directions (a light field). Specifically, we implement on the wavefront sensor an optical system that can capture time-varying light fields (dynamic light fields) and conduct fundamental research on processing the captured dynamic light fields.
    Focusing on the fact that the optics of a Shack-Hartmann wavefront sensor consist of a microlens array and an image sensor, this research aims to build a camera system that records the brightness of dynamic rays traveling through real space (dynamic light fields), and to systematize 3-D signal processing for the rich spatio-temporal information contained in those rays. Although the proposed camera samples sparsely in the spatial direction, it samples densely in the angular and temporal directions, the exact opposite of a conventional plenoptic camera. With applications of dynamic light fields captured by such a camera in mind, in FY2023 we worked on displaying dynamic light fields on real hardware using a holographic stereogram. The hardware system consists of Fourier optics combining convex lenses with focal lengths of 75 mm and 35 mm. A Fourier hologram computed from the dynamic light field is displayed on a spatial light modulator and illuminated with coherent reference light, so that a 3-D image is formed in mid-air. Observing the 3-D image with a camera equipped with a 25 mm lens, we confirmed depth-dependent parallax and defocus blur.
    Since the FY2023 plan was to work on the processing and application of dynamic light fields captured with the SH wavefront sensor, the project is judged to be progressing smoothly.
    In FY2024, the final year, we aim to improve the accuracy of dynamic light-field acquisition and display. Specifically, because the optical system built in FY2023 suffers from various distortions, aberrations in particular, we will build a system that feeds the 3-D image observed by a camera back to the hologram so as to cancel those distortions. Such a system is called camera-in-the-loop and is known to be effective for static holography; since our system must feed back dynamic 3-D images, we will extend the approach so that it applies to such dynamic settings.
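Treating a microlens-array-plus-sensor device as a light-field camera, as proposed above, amounts to regrouping sensor pixels: the pixel at the same offset under every lenslet sees the scene from one direction and so forms one angular view. The toy extraction below assumes a plenoptic-style pixel layout; the lenslet and pixel counts are illustrative, not the actual sensor's.

```python
import numpy as np

S, P = 8, 5                                # lenslets per side, pixels per lenslet
raw = np.arange((S * P) ** 2, dtype=float).reshape(S * P, S * P)  # toy sensor image

# Gathering pixel (u, v) under every lenslet yields one directional view
lf = raw.reshape(S, P, S, P).transpose(1, 3, 0, 2)   # (u, v, y, x)

center = lf[P // 2, P // 2]                # central viewpoint, S x S pixels
print(lf.shape, center.shape)
```

The sparse spatial / dense angular sampling noted in the abstract is visible here: each view has only S × S pixels, but there are P × P views per frame.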

  2. A Study on Image Capturing and Processing System using Optical Control Device and Event Camera

    Grant number:21H03464  2021.4 - 2024.3

    Grants-in-Aid for Scientific Research  Grant-in-Aid for Scientific Research (B)

    FUJII Toshiaki


    Authorship:Coinvestigator(s) 

    This research proposes a new camera system that "obtains information by actively moving the eyes," combining visual feedback with an event camera, a sensor that detects and outputs the "temporal changes in luminance values" of each pixel. The system acquires event information by controlling, at high speed, the optical system that forms an image on the sensor, and then provides feedback to the optical system based on the analysis results to acquire information on the scene. In this study, we investigated a spatial light modulator that can be controlled at high speed, analyzed event signals in detail, investigated information processing methods, and fabricated and evaluated a prototype to verify the principle of this new image acquisition and processing system.

 

Teaching Experience (On-campus) 1

  1. Discrete Mathematics and Exercises

    2020