Updated: 2024/03/28

ツタケ チヒロ
都竹 千尋
TSUTAKE Chihiro
Affiliation
Graduate School of Engineering, Department of Information and Communication Engineering (Information and Communication), Assistant Professor
Graduate school appointment
Graduate School of Engineering
Undergraduate appointment
School of Engineering, Department of Electrical, Electronic and Information Engineering
Position
Assistant Professor

Degree 1

  1. Doctor of Engineering (March 2020, University of Fukui)

Research Keywords 4

  1. Compressed sensing

  2. Convex optimization

  3. Video compression coding

  4. Image processing

Research Areas 1

  1. Manufacturing technology (mechanical, electrical/electronic, chemical engineering) / Communication engineering

 

Papers 22

  1. Compressive Acquisition of Light Field Video Using Aperture-Exposure-Coded Camera

    Mizuno, R; Takahashi, K; Yoshida, M; Tsutake, C; Fujii, T; Nagahara, H

    ITE TRANSACTIONS ON MEDIA TECHNOLOGY AND APPLICATIONS   Vol. 12 (1), pp. 22-35, 2024

  2. Time-Multiplexed Coded Aperture and Coded Focal Stack: A Comparative Study of Snapshot Compressive Imaging Methods for Light Fields

    立石 航平, 都竹 千尋, 高橋 桂太, 藤井 俊彰

    情報・システムソサイエティ誌 (IEICE Information and Systems Society Journal)   Vol. 28 (3), p. 7, November 2023

    Language: Japanese   Publisher: The Institute of Electronics, Information and Communication Engineers (IEICE)

    DOI: 10.1587/ieiceissjournal.28.3_7

    CiNii Research

  3. Compressing Light Field as Multiplane Image

    Kawakami, M; Tsutake, C; Takahashi, K; Fujii, T

    ITE TRANSACTIONS ON MEDIA TECHNOLOGY AND APPLICATIONS   Vol. 11 (2), pp. 27-33, 2023

  4. Reconstructing Continuous Light Field From Single Coded Image

    Ishikawa, Y; Takahashi, K; Tsutake, C; Fujii, T

    IEEE ACCESS   Vol. 11, pp. 99387-99396, 2023

    Publisher: IEEE Access

    We propose a method for reconstructing a continuous light field of a target scene from a single observed image. Our method takes the best of two worlds: joint aperture-exposure coding for compressive light-field acquisition, and a neural radiance field (NeRF) for view synthesis. Joint aperture-exposure coding implemented in a camera enables effective embedding of 3-D scene information into an observed image, but in previous works, it was used only for reconstructing discretized light-field views. NeRF-based neural rendering enables high quality view synthesis of a 3-D scene from continuous viewpoints, but when only a single image is given as the input, it struggles to achieve satisfactory quality. Our method integrates these two techniques into an efficient and end-to-end trainable pipeline. Trained on a wide variety of scenes, our method can reconstruct continuous light fields accurately and efficiently without any test time optimization. To our knowledge, this is the first work to bridge two worlds: camera design for efficiently acquiring 3-D information and neural rendering.

    DOI: 10.1109/ACCESS.2023.3314340

    Web of Science

    Scopus

  5. Direct Super Resolution for Multiplane Images

    Sato, C; Tsutake, C; Takahashi, K; Fujii, T

    ITE TRANSACTIONS ON MEDIA TECHNOLOGY AND APPLICATIONS   Vol. 11 (2), pp. 34-42, 2023

  6. Compressing Sign Information in DCT-based Image Coding via Deep Sign Retrieval

    Suzuki, K; Tsutake, C; Takahashi, K; Fujii, T

    ITE TRANSACTIONS ON MEDIA TECHNOLOGY AND APPLICATIONS   Vol. 12 (1), pp. 110-122, 2023

  7. Time-Multiplexed Coded Aperture and Coded Focal Stack: Comparative Study on Snapshot Compressive Light Field Imaging

    Tateishi Kohei, Tsutake Chihiro, Takahashi Keita, Fujii Toshiaki

    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS   Vol. E105D (10), pp. 1679-1690, October 2022

    Language: English   Publisher: IEICE Transactions on Information and Systems

    A light field (LF), which is represented as a set of dense, multi-view images, has been used in various 3D applications. To make LF acquisition more efficient, researchers have investigated compressive sensing methods by incorporating certain coding functionalities into a camera. In this paper, we focus on a challenging case called snapshot compressive LF imaging, in which an entire LF is reconstructed from only a single acquired image. To embed a large amount of LF information in a single image, we consider two promising methods based on rapid optical control during a single exposure: time-multiplexed coded aperture (TMCA) and coded focal stack (CFS), which were proposed individually in previous works. Both TMCA and CFS can be interpreted in a unified manner as extensions of the coded aperture (CA) and focal stack (FS) methods, respectively. By developing a unified algorithm pipeline for TMCA and CFS, based on deep neural networks, we evaluated their performance with respect to other possible imaging methods. We found that both TMCA and CFS can achieve better reconstruction quality than the other snapshot methods, and they also perform reasonably well compared to methods using multiple acquired images. To our knowledge, we are the first to present an overall discussion of TMCA and CFS and to compare and validate their effectiveness in the context of compressive LF imaging.

    DOI: 10.1587/transinf.2022PCP0003

    Web of Science

    Scopus

    CiNii Research
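
    A minimal NumPy sketch of the focal-stack half of the comparison above: a light field is refocused by shift-and-add, and a coded focal stack collapses several refocused images into a single measurement through pixel-wise codes. Integer shifts, wrap-around borders, and the random codes are simplifying assumptions for illustration only, not the model used in the paper.

    import numpy as np

    def refocus(L, d):
        """Shift-and-add refocusing. L: (U, V, H, W) light field; d: disparity
        (pixels per view index) of the desired focal plane."""
        U, V, H, W = L.shape
        uc, vc = (U - 1) / 2.0, (V - 1) / 2.0
        out = np.zeros((H, W))
        for u in range(U):
            for v in range(V):
                dy, dx = int(round((u - uc) * d)), int(round((v - vc) * d))
                out += np.roll(L[u, v], shift=(dy, dx), axis=(0, 1))  # wraps at borders
        return out / (U * V)

    def coded_focal_stack(L, disparities, codes):
        """codes: (T, H, W) pixel-wise exposure codes, one per focal setting."""
        return sum(codes[t] * refocus(L, d) for t, d in enumerate(disparities))

    # Usage (hypothetical): a single coded image from three focal settings.
    rng = np.random.default_rng(0)
    L = rng.random((5, 5, 48, 48))                       # 5x5 views
    codes = (rng.random((3, 48, 48)) > 0.5).astype(float)
    y = coded_focal_stack(L, disparities=[-1, 0, 1], codes=codes)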

  8. Unrolled Network for Light Field Display

    Matsuura Kotaro, Tsutake Chihiro, Takahashi Keita, Fujii Toshiaki

    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS   Vol. E105D (10), pp. 1721-1725, October 2022

    Language: English   Publisher: IEICE Transactions on Information and Systems

    Inspired by the framework of algorithm unrolling, we propose a scalable network architecture that computes layer patterns for light field displays, enabling control of the trade-off between the display quality and the computational cost on a single pre-trained network.

    DOI: 10.1587/transinf.2022PCL0002

    Web of Science

    Scopus

    CiNii Research
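
    A hypothetical PyTorch sketch of the algorithm-unrolling idea summarized above: a chain of identical refinement stages updates the layer patterns, and truncating the chain at inference time trades display quality for computation. The stage structure, channel counts, and initialization are assumptions, not the architecture of the paper.

    import torch
    import torch.nn as nn

    class Stage(nn.Module):
        """One unrolled iteration: refine the layer patterns (hypothetical form)."""
        def __init__(self, n_layers, ch=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(n_layers + 3, ch, 3, padding=1), nn.ReLU(),
                nn.Conv2d(ch, n_layers, 3, padding=1),
            )

        def forward(self, layers, target_view):
            upd = self.net(torch.cat([layers, target_view], dim=1))
            return (layers + upd).clamp(0.0, 1.0)   # residual update, kept in [0, 1]

    class UnrolledLayerNet(nn.Module):
        def __init__(self, n_layers=3, n_stages=8):
            super().__init__()
            self.n_layers = n_layers
            self.stages = nn.ModuleList(Stage(n_layers) for _ in range(n_stages))

        def forward(self, target_view, n_active=None):
            # n_active selects how many stages to run: fewer stages = cheaper, coarser.
            b, _, h, w = target_view.shape
            layers = torch.full((b, self.n_layers, h, w), 0.5, device=target_view.device)
            for stage in self.stages[: n_active or len(self.stages)]:
                layers = stage(layers, target_view)
            return layers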

  9. Pixel-density enhanced integral three-dimensional display with two-dimensional image synthesis

    Watanabe Hayato, Arai Jun, Tsutake Chihiro, Takahashi Keita, Fujii Toshiaki

    OPTICS EXPRESS   Vol. 30 (20), pp. 36038-36054, September 2022

    Publisher: Optics Express

    Integral three-dimensional (3D) displays can display naturally viewable 3D images. However, displaying 3D images with high pixel density is difficult because the maximum pixel number is restricted by the number of lenses of a lens array. Therefore, we propose a method for increasing the maximum pixel density of 3D images by optically synthesizing the displayed images of an integral 3D display and high-definition two-dimensional display using a half mirror. We evaluated the improvements in 3D image resolution characteristics through simulation analysis of the modulation transfer function. We developed a prototype display system that can display 3D images with a maximum resolution of 4K and demonstrated the effectiveness of the proposed method.

    DOI: 10.1364/OE.469045

    Web of Science

    Scopus

  10. Denoising multi-view images by soft thresholding: A short-time DFT approach

    Tomita Keigo, Tsutake Chihiro, Takahashi Keita, Fujii Toshiaki

    SIGNAL PROCESSING-IMAGE COMMUNICATION   Vol. 105, July 2022

    Publisher: Signal Processing: Image Communication

    Short-time discrete Fourier transform (ST-DFT) is known as a promising technique for image and video denoising. The seminal work by Saito and Komatsu hypothesized that natural video sequences can be represented by sparse ST-DFT coefficients and noisy video sequences can be denoised on the basis of statistical modeling and shrinkage of the ST-DFT coefficients. Motivated by their theory, we develop an application of ST-DFT for denoising multi-view images. We first show that multi-view images have sparse ST-DFT coefficients as well and then propose a new statistical model, which we call the multi-block Laplacian model, based on the block-wise sparsity of ST-DFT coefficients. We finally utilize this model to carry out denoising by solving a convex optimization problem, referred to as the least absolute shrinkage and selection operator. A closed-form solution can be computed by soft thresholding, and the optimal threshold value is derived by minimizing the error function in the ST-DFT domain. We demonstrate through experiments the effectiveness of our denoising method compared with several previous denoising techniques. Our method implemented in Python language is available from https://github.com/ctsutake/mviden.

    DOI: 10.1016/j.image.2022.116710

    Web of Science

    Scopus
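
    A minimal NumPy sketch of the shrinkage step described above: block-wise DFT coefficients of a noisy view are soft-thresholded and inverted. Non-overlapping blocks, an untouched DC term, and a fixed threshold are placeholders; the paper derives the threshold from its multi-block Laplacian model (see the linked repository for the actual implementation).

    import numpy as np

    def soft_threshold(c, tau):
        """Complex soft thresholding: shrink magnitudes by tau, keep phases."""
        mag = np.abs(c)
        return np.where(mag > tau, (1.0 - tau / np.maximum(mag, 1e-12)) * c, 0.0)

    def stdft_denoise(img, tau, block=8):
        """Denoise one grayscale view; borders that do not fit a block are left as-is."""
        out = img.astype(float).copy()
        h, w = img.shape
        for i in range(0, h - block + 1, block):
            for j in range(0, w - block + 1, block):
                coef = np.fft.fft2(img[i:i + block, j:j + block])
                dc = coef[0, 0]
                coef = soft_threshold(coef, tau)
                coef[0, 0] = dc                      # keep the mean of the block
                out[i:i + block, j:j + block] = np.fft.ifft2(coef).real
        return out

    # Usage (hypothetical): denoised = stdft_denoise(noisy_view, tau=25.0)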

  11. Acquiring a Dynamic Light Field through a Single-Shot Coded Image

    Mizuno Ryoya, Takahashi Keita, Yoshida Michitaka, Tsutake Chihiro, Fujii Toshiaki, Nagahara Hajime

    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022)   Vol. 2022-June, pp. 19798-19808, 2022

    Publisher: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition

    We propose a method for compressively acquiring a dynamic light field (a 5-D volume) through a single-shot coded image (a 2-D measurement). We designed an imaging model that synchronously applies aperture coding and pixel-wise exposure coding within a single exposure time. This coding scheme enables us to effectively embed the original information into a single observed image. The observed image is then fed to a convolutional neural network (CNN) for light-field reconstruction, which is jointly trained with the camera-side coding patterns. We also developed a hardware prototype to capture a real 3-D scene moving over time. We succeeded in acquiring a dynamic light field with 5x5 viewpoints over 4 temporal sub-frames (100 views in total) from a single observed image. Repeating capture and reconstruction processes over time, we can acquire a dynamic light field at 4x the frame rate of the camera. To our knowledge, our method is the first to achieve a finer temporal resolution than the camera itself in compressive light-field acquisition. Our software is available from our project webpage: https://www.fujii.nuee.nagoya-u.ac.jp/Research/CompCam2

    DOI: 10.1109/CVPR52688.2022.01921

    Web of Science

    Scopus
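
    A minimal NumPy sketch of the imaging model described above: a single 2-D measurement is formed from a dynamic light field by weighting the views of each temporal sub-frame with an aperture code and gating pixels with a pixel-wise exposure code. Array shapes, random codes, and the normalization are illustrative assumptions, not the exact forward model or the learned codes of the paper.

    import numpy as np

    def coded_measurement(L, a, e):
        """L: (T, U, V, H, W) dynamic light field, a: (T, U, V) aperture codes,
        e: (T, H, W) pixel-wise exposure codes. Returns one (H, W) image."""
        T, U, V, H, W = L.shape
        y = np.zeros((H, W))
        for t in range(T):
            view_sum = np.tensordot(a[t], L[t], axes=([0, 1], [0, 1]))  # aperture coding
            y += e[t] * view_sum                                        # exposure coding
        return y / (T * U * V)

    # Usage (hypothetical random light field and codes):
    rng = np.random.default_rng(0)
    L = rng.random((4, 5, 5, 64, 64))             # 4 sub-frames, 5x5 views
    a = rng.random((4, 5, 5))
    e = (rng.random((4, 64, 64)) > 0.5).astype(float)
    img = coded_measurement(L, a, e)              # single observed image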

  12. Restoration of JPEG Compressed Image with Narrow Quantization Constraint Set without Parameter Optimization

    Tsutake Chihiro, Yoshida Toshiyuki

    ITE TRANSACTIONS ON MEDIA TECHNOLOGY AND APPLICATIONS   Vol. 10 (3), pp. 130-139, 2022

  13. Displaying Multiple 3D Scenes with a Single Layered Display

    Sato C., Tsutake C., Takahashi K., Fujii T.

    Proceedings of the International Display Workshops   Vol. 29, pp. 596-599, 2022

    Publisher: Proceedings of the International Display Workshops

    We propose a method of displaying two different 3D scenes on a single layered light-field display, where the layer patterns are optimized for the two scenes simultaneously. We demonstrate that both scenes can be displayed in high quality when the viewing zones for them are separated sufficiently.

    Scopus
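
    A hypothetical NumPy sketch of the rendering model commonly assumed for stacked (layered) light-field displays, to make the short abstract above more concrete: a view is the product of relatively shifted layer transmittances, and the shift depends on the viewing direction. The two-layer multiplicative model and the shift rule are assumptions, not necessarily the formulation used in the paper.

    import numpy as np

    def render_view(back, front, du, dv):
        """back, front: (H, W) layer transmittances in [0, 1];
        (du, dv): view-dependent relative shift in pixels."""
        return back * np.roll(front, shift=(dv, du), axis=(0, 1))

    # Two scenes on one layer stack: optimize (back, front) so that views rendered
    # for shifts in zone A match scene A and views for shifts in zone B match
    # scene B, e.g. by minimizing
    #   sum over zone A of ||render_view(back, front, du, dv) - A_view(du, dv)||^2
    # + sum over zone B of ||render_view(back, front, du, dv) - B_view(du, dv)||^2,
    # which works best when the two viewing zones are well separated.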

  14. Image Generation Method Using Weight Maps for Subjective Quality Improvement in Two-Dimensional Image Synthetic Integral Three-Dimensional Display

    Watanabe H., Arai J., Tsutake C., Takahashi K., Fujii T.

    Proceedings of the International Display Workshops   Vol. 29, pp. 521-524, 2022

    Publisher: Proceedings of the International Display Workshops

    We propose an image generation method to display three-dimensional (3D) images with high maximum pixel density and improved subjective quality on a two-dimensional image synthetic integral 3D display. In addition to the target light field image, weight maps obtained from the depth information were used to generate the images.

    Scopus

  15. AN EFFICIENT COMPRESSION METHOD FOR SIGN INFORMATION OF DCT COEFFICIENTS VIA SIGN RETRIEVAL

    Tsutake Chihiro, Takahashi Keita, Fujii Toshiaki

    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP)   Vol. 2021-September, pp. 2024-2028, 2021

    Publisher: Proceedings - International Conference on Image Processing, ICIP

    Compression of the sign information of discrete cosine transform coefficients is an intractable problem in image compression schemes due to the equiprobable occurrence of the sign bits. To overcome this difficulty, we propose an efficient compression method for such sign information based on phase retrieval, which is a classical signal restoration problem attempting to find the phase information of discrete Fourier transform coefficients from their magnitudes. In our compression strategy, the sign bits of all the AC components in the cosine domain are excluded from a bitstream at the encoder and are complemented at the decoder by solving a sign recovery problem, which we call sign retrieval. The experimental results demonstrate that the proposed method outperforms previous techniques for sign compression in terms of a rate-distortion criterion. Our method implemented in Python language is available from https://github.com/ctsutake/sr.

    DOI: 10.1109/ICIP42928.2021.9506155

    Web of Science

    Scopus
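
    A toy illustration of the problem described above: the encoder keeps only the magnitudes of the AC DCT coefficients, and the decoder must recover the missing signs. Brute-force search over a smoothness criterion on a tiny block is used purely to show what "sign retrieval" has to solve; the paper's phase-retrieval-style algorithm is in the linked repository.

    import itertools
    import numpy as np
    from scipy.fft import dctn, idctn

    def smoothness(block):
        # Total variation of the reconstructed block (smaller = smoother).
        return np.abs(np.diff(block, axis=0)).sum() + np.abs(np.diff(block, axis=1)).sum()

    def retrieve_signs(mag, dc_sign):
        """mag: magnitudes of a quantized DCT block; dc_sign: transmitted DC sign."""
        ac = [(i, j) for i in range(mag.shape[0]) for j in range(mag.shape[1])
              if (i, j) != (0, 0) and mag[i, j] != 0]
        best, best_cost = None, np.inf
        for signs in itertools.product((-1.0, 1.0), repeat=len(ac)):
            coef = mag.copy()
            coef[0, 0] *= dc_sign
            for (i, j), s in zip(ac, signs):
                coef[i, j] *= s
            cost = smoothness(idctn(coef, norm="ortho"))
            if cost < best_cost:
                best, best_cost = coef, cost
        return idctn(best, norm="ortho")

    # Usage (hypothetical 4x4 block so the exhaustive search stays tiny):
    rng = np.random.default_rng(1)
    coef = dctn(rng.random((4, 4)), norm="ortho")
    coef[np.abs(coef) < 0.2] = 0.0               # crude stand-in for quantization
    rec = retrieve_signs(np.abs(coef), np.sign(coef[0, 0]))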

  16. Video Denoising by BM3D Technique with an Improved Cube Construction and SURE Shrinkage Techniques

    Yamada Ryoya, Tsutake Chihiro, Yoshida Toshiyuki

    INTERNATIONAL WORKSHOP ON ADVANCED IMAGING TECHNOLOGY (IWAIT) 2021   Vol. 11766, 2021

    Publisher: Proceedings of SPIE - The International Society for Optical Engineering

    This paper attempts to improve the denoising efficiency of the BM3D technique for videos, i.e., VBM3D. VBM3D constructs 3D cubes from target video frames by a block matching algorithm that minimizes the residual matching error. However, such a cube formation sacrifices the pixel correlation in the temporal direction. This paper thus modifies this step to preserve the sub-pixel alignment, which makes the Fourier coefficients of each cube lie in the vicinity of a certain plane in the 3-D Fourier domain. Then, the SURE-shrinkage technique is applied separately to the inside and outside of this vicinity to denoise each cube. The experimental results given in this paper demonstrate the validity of our approach.

    DOI: 10.1117/12.2591104

    Web of Science

    Scopus
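
    A small NumPy sketch of what "SURE shrinkage" refers to in the entry above: the soft-threshold level is chosen by minimizing Stein's Unbiased Risk Estimate over candidate thresholds (Donoho-Johnstone style), assuming real-valued transform coefficients and a known noise level. The separate treatment of coefficients inside and outside the plane-like region used in the paper is not reproduced here.

    import numpy as np

    def sure(x, t, sigma):
        """SURE of the soft-threshold estimator at threshold t for coefficients x."""
        x = x.ravel()
        return (x.size * sigma**2
                - 2.0 * sigma**2 * np.count_nonzero(np.abs(x) <= t)
                + np.sum(np.minimum(np.abs(x), t) ** 2))

    def sure_threshold(x, sigma):
        # Candidate thresholds: the coefficient magnitudes themselves.
        return min(np.unique(np.abs(x)), key=lambda t: sure(x, t, sigma))

    def soft(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    # Usage (hypothetical): apply to the real and imaginary parts of each cube's
    # Fourier coefficients separately, e.g. t = sure_threshold(coeffs, sigma=20.0).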

  17. FACTORIZED MODULATION FOR SINGLE-SHOT LIGHT-FIELD ACQUISITION

    Tateishi Kohei, Sakai Kohei, Tsutake Chihiro, Takahashi Keita, Fujii Toshiaki

    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP)   Vol. 2021-September, pp. 3253-3257, 2021

    Publisher: Proceedings - International Conference on Image Processing, ICIP

    A light field (LF), which is represented as a set of dense multi-view images, has been utilized in various 3D applications. To make LF acquisition more efficient, researchers have investigated compressive sensing methods by incorporating modulation or coding functions into the camera. In this work, we investigate a challenging case of compressive LF acquisition in which an entire LF should be reconstructed from only a single coded image. To achieve this goal, we propose a new modulation scheme called factorized modulation that can approximate arbitrary 4D modulation patterns in a factorized manner. Our method can be hardware-implemented by combining the architectures for coded aperture and pixel-wise coded exposure imaging. The modulation pattern is jointly optimized with a CNN-based reconstruction algorithm. Our method is validated through extensive evaluations against other modulation schemes.

    DOI: 10.1109/ICIP42928.2021.9506797

    Web of Science

    Scopus
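
    A NumPy sketch of what "factorized" means in the entry above: a 4-D modulation pattern M[u, v, y, x] is approximated by a sum over sub-exposures of an aperture code times a pixel-wise exposure code, which is what a coded aperture combined with pixel-wise coded exposure can physically realize. A truncated SVD with clipping is used here as a stand-in for the learned, jointly optimized factorization of the paper.

    import numpy as np

    def factorize_modulation(M, T):
        """Approximate M (U, V, H, W) as sum over t of a[t] (U, V) * e[t] (H, W)."""
        U, V, H, W = M.shape
        Uf, s, Vt = np.linalg.svd(M.reshape(U * V, H * W), full_matrices=False)
        a = (Uf[:, :T] * s[:T]).T.reshape(T, U, V)   # aperture codes
        e = Vt[:T].reshape(T, H, W)                  # pixel-wise exposure codes
        # Physical codes must be non-negative; clipping is a crude surrogate for
        # the constrained optimization used in practice.
        return np.clip(a, 0, None), np.clip(e, 0, None)

    # Usage (hypothetical random target pattern):
    rng = np.random.default_rng(2)
    M = rng.random((5, 5, 32, 32))
    a, e = factorize_modulation(M, T=4)
    M_hat = np.einsum('tuv,tyx->uvyx', a, e)         # rank-T approximation of M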

  18. AN EFFICIENT IMAGE COMPRESSION METHOD BASED ON NEURAL NETWORK: AN OVERFITTING APPROACH

    Mikami Yu, Tsutake Chihiro, Takahashi Keita, Fujii Toshiaki

    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP)   Vol. 2021-September, pp. 2084-2088, 2021

    Publisher: Proceedings - International Conference on Image Processing, ICIP

    Over the past decade, nonlinear image compression techniques based on neural networks have been rapidly developed to achieve more efficient storage and transmission of images compared with conventional linear techniques. A typical nonlinear technique is implemented as a neural network trained on a vast set of images, and the latent representation of a target image is transmitted. In contrast to the previous nonlinear techniques, we propose a new image compression method in which a neural network model is trained exclusively on a single target image, rather than a set of images. Such an overfitting strategy enables us to embed fine image features in not only the latent representation but also the network parameters, which helps reduce the reconstruction error against the target image. The effectiveness of our method is validated through a comparison with conventional image compression techniques in terms of a rate-distortion criterion.

    DOI: 10.1109/ICIP42928.2021.9506367

    Web of Science

    Scopus
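
    A minimal PyTorch sketch of the overfitting strategy described above: a small decoder and a latent code are optimized against a single target image, so the transmitted payload would be the (quantized) latent plus the network weights. The architecture, the loss, and the omission of quantization and entropy coding are simplifications, not the method of the paper.

    import torch
    import torch.nn as nn

    decoder = nn.Sequential(
        nn.ConvTranspose2d(8, 32, 4, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(32, 32, 4, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
    )

    target = torch.rand(1, 3, 128, 128)              # stand-in for the single target image
    latent = torch.randn(1, 8, 16, 16, requires_grad=True)

    opt = torch.optim.Adam([latent, *decoder.parameters()], lr=1e-3)
    for step in range(2000):                         # overfit to this one image
        opt.zero_grad()
        loss = nn.functional.mse_loss(decoder(latent), target)
        loss.backward()
        opt.step()

    # "Bitstream" = quantized latent + quantized decoder weights (omitted here);
    # the receiver reconstructs the image as decoder(latent).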

  19. Acceleration of Coded Exposure Photography Based on Sparse Representation (Refereed)

    都竹 千尋,吉田 俊之

    映像情報メディア学会誌 (The Journal of the Institute of Image Information and Television Engineers)   Vol. 74 (1), pp. 198-207, 2020

    Role: Lead author   Language: Japanese   Publication type: Research paper (academic journal)

    DOI: 10.3169/itej.74.198

  20. Vaguelette-Wavelet Deconvolution via Compressive Sampling (Refereed)

    Chihiro Tsutake, Toshiyuki Yoshida

    IEEE Access   Vol. 7, pp. 54533-54541, 2019

    Language: English   Publication type: Research paper (academic journal)

    DOI: 10.1109/access.2019.2913024

  21. Block-Matching-Based Implementation of Affine Motion Estimation for HEVC (Refereed)

    Chihiro Tsutake, Toshiyuki Yoshida

    IEICE Transactions on Information and Systems   Vol. E101.D (4), pp. 1151-1158, 2018

    Role: Lead author   Language: English   Publication type: Research paper (academic journal)

    DOI: 10.1587/transinf.2017EDP7201

  22. Fast Mode Decision Technique for HEVC Intra Prediction Based on Reliability Metric for Motion Vectors (Refereed)

    Chihiro Tsutake, Yutaka Nakano, Toshiyuki Yoshida

    IEICE Transactions on Information and Systems   Vol. E99.D (4), pp. 1193-1201, 2016

    Role: Lead author   Language: English   Publication type: Research paper (academic journal)

    DOI: 10.1587/transinf.2015EDP7244

KAKENHI (Grants-in-Aid for Scientific Research) 2

  1. Fundamental Research on Capturing and Processing Dynamic Light Fields Using a Shack-Hartmann Wavefront Sensor

    Project number: 22K17909   April 2022 - March 2025

    Grants-in-Aid for Scientific Research (KAKENHI)   Early-Career Scientists

    都竹 千尋

    Role: Principal investigator

    Allocated amount: 4,550,000 JPY (direct: 3,500,000 JPY; indirect: 1,050,000 JPY)

    A Shack-Hartmann wavefront sensor is an optical device consisting of a microlens array and an image sensor placed at the array's back focal distance. Previous work has not explored acquisition and processing methods that actively exploit this camera-like structure (microlens array plus image sensor), and the device has not been applied as a camera that records the brightness of ray bundles. This project therefore proposes a camera system that captures in real time the brightness of rays arriving from various positions and directions (the light field). Specifically, we implement on the wavefront sensor an optical system capable of capturing time-varying light fields (dynamic light fields) and conduct fundamental research on processing the captured dynamic light fields.
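
    For background on the sensor geometry mentioned above: with the image sensor at the back focal plane of the microlens array, the displacement of each spot from its lenslet axis is proportional to the local ray (wavefront) slope, i.e. displacement divided by focal length. The sketch below shows only this standard relation; turning the sensor into a light-field (ray-brightness) camera is the subject of the project and is not covered here.

    import numpy as np

    def spot_slopes(centroids_px, lenslet_centers_px, pixel_pitch_m, focal_length_m):
        """centroids_px, lenslet_centers_px: (N, 2) pixel coordinates.
        Returns (N, 2) local ray slopes (small-angle approximation)."""
        disp_m = (np.asarray(centroids_px) - np.asarray(lenslet_centers_px)) * pixel_pitch_m
        return disp_m / focal_length_m

    # Usage (hypothetical numbers: 5 um pixels, 3 mm lenslet focal length):
    slopes = spot_slopes([[10.3, 20.1], [42.8, 20.0]],
                         [[10.0, 20.0], [42.0, 20.0]],
                         pixel_pitch_m=5e-6, focal_length_m=3e-3)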

  2. Research on a New Image-Information Acquisition and Processing System Using Controlled Optics and an Event Camera

    Project number: 21H03464   April 2021 - March 2024

    Grants-in-Aid for Scientific Research (KAKENHI)   Scientific Research (B)

    藤井 俊彰, 都竹 千尋

    Role: Co-investigator

    By combining an event camera, which asynchronously outputs only the (temporal) change information of each pixel's intensity, with a controllable optical system, we develop a new camera system with visual feedback that actively "moves its eye" to gather information. We study a system that rapidly controls the optics forming the image on the sensor to acquire event information, and that acquires scene information while feeding the analysis results back to the optics. In this project we verify the principle of this camera system, build a prototype, and evaluate it, establishing it as a new image acquisition and processing system.
    The goal of this project is thus to study a new camera system with visual feedback that actively "moves its eye" to gather information, that is, a system that rapidly controls the image-forming optics to acquire event information and acquires scene information while feeding the analysis results back to the optics. This year we captured data with a camera that records frames and events on the same sensor, and studied interpolating viewpoints from the captured images and event data to reconstruct a light field.
    We compared the view-interpolation accuracy obtained when only two captured frames were fed to the network with the accuracy obtained when event data were added to the two frames. This comparison confirmed that adding event data to the input improves accuracy over interpolation from images alone.
    Next, referring to the TimeLens work by S. Tulyakov et al., we built a reconstruction network that interpolates viewpoints from captured images and event data. We conducted experiments on real hardware, placing objects on a desk and capturing them while translating the camera on a motorized stage, and compared the view-interpolation results with those of the prior work. In these real-device experiments, view interpolation did not reach sufficient accuracy compared with the prior work. From these results we learned that event data are useful for view interpolation, but that simply using the raw event data is not enough to improve the accuracy of the interpolated images, and some further refinement is required.
    The original first-year plan was to examine available spatial light modulators. However, because the TimeLens work by S. Tulyakov et al. emerged as the state of the art in event-camera research, we first took up light-field interpolation. In doing so, we built a CNN (convolutional neural network) for interpolation and compared the cases with and without event information, which confirmed the usefulness of event information.
    Going forward, we will work on information-processing techniques for event signals. At the low-level signal stage there are two problems: noise in the event signal and timestamp simultaneity. An event is generated when the change in a pixel's intensity exceeds a threshold, and the signal becomes very noisy, especially when the threshold is small, so a method is needed to extract the truly meaningful events of interest. We will assess the applicability and limits of conventional denoising methods and develop new denoising methods specialized for event data. The other problem, timestamp simultaneity, is that when intensity changes occur at many pixels at once, the timestamps attached to those events are not exactly identical, so the event histogram over time shows a long-tailed distribution. Referring also to the deep-learning approaches attempted in event-based vision, we will investigate both analytical approaches, such as optimization of an objective function, and deep-learning-based approaches. Finally, we will design a system that feeds the processing results back to the optical system and verify it experimentally.
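
    As context for the frame-plus-event interpolation described above, the sketch below builds one common CNN input representation for event data, a time-binned voxel grid of signed event polarities. This representation is a standard choice in event-based vision and is only an assumed way of feeding events to a network, not a description of the project's actual pipeline.

    import numpy as np

    def event_voxel_grid(events, H, W, bins):
        """events: (N, 4) array of [x, y, t, p] with polarity p in {-1, +1}."""
        x = events[:, 0].astype(int)
        y = events[:, 1].astype(int)
        t = events[:, 2]
        p = events[:, 3]
        b = np.clip(((t - t.min()) / max(t.max() - t.min(), 1e-9) * bins).astype(int),
                    0, bins - 1)
        grid = np.zeros((bins, H, W))
        np.add.at(grid, (b, y, x), p)     # signed accumulation per time bin and pixel
        return grid

    # Usage (hypothetical events):
    rng = np.random.default_rng(3)
    ev = np.column_stack([rng.integers(0, 64, 1000), rng.integers(0, 48, 1000),
                          np.sort(rng.random(1000)), rng.choice([-1.0, 1.0], 1000)])
    grid = event_voxel_grid(ev, H=48, W=64, bins=5)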

 

Courses Taught (at this university) 1

  1. Discrete Mathematics and Exercises

    2020