Updated 2024/04/01


モリ ケンサク
森 健策
MORI, Kensaku
Affiliation
Professor, Intelligent Systems and Informatics, Department of Intelligent Systems, Graduate School of Informatics
Graduate school appointments
Graduate School of Information Science
Graduate School of Informatics
Undergraduate appointment
School of Informatics, Department of Computer Science
Title
Professor
Contact
Email address
Homepage
External links

Degree 1

  1. Doctor of Engineering

Research Keywords 2

  1. Image processing, advanced image processing, pattern recognition, machine learning, medical image processing, computer-assisted surgery, computer-aided diagnosis of medical images, artificial intelligence

  2. artificial intelligence

Research Fields 3

  1. Others / Others / Perceptual information processing

  2. Others / Others / Medical systems

  3. Others / Others / Intelligent informatics

Current Research Topics and SDGs 1

  1. Research on three-dimensional image processing and its medical applications

Career 11

  1. Nagoya University   Information and Communications Headquarters (administrative support organization)   Deputy Director-General

    April 2017 - Present

  2. Nagoya University   Information Technology Center   Director

    April 2016 - Present

  3. Nagoya University   Information Strategy Office, Information and Communications Headquarters (administrative support organization)   Head

    April 2016 - Present

  4. Nagoya University   Graduate School of Information Science, Department of Media Science (Intelligent Media Engineering) / School of Engineering   Professor

    October 2009 - March 2017

  5. Associate Professor, Department of Media Science, Graduate School of Information Science, Nagoya University

    April 2007

    Country: Japan

  6. Associate Professor, Department of Media Science, Graduate School of Information Science, Nagoya University

    April 2003 - March 2007

    Country: Japan

  7. Associate Professor, Research Center for Advanced Waste and Emission Management, Nagoya University

    April 2001 - March 2003

    Country: Japan

  8. Lecturer, Department of Computational Science and Engineering, Graduate School of Engineering, Nagoya University

    April 2000 - March 2001

    Country: Japan

  9. Research Associate, Department of Computational Science and Engineering, Graduate School of Engineering, Nagoya University

    April 1997 - March 2000

    Country: Japan

  10. JSPS Research Fellow (PD), Japan Society for the Promotion of Science

    October 1996

    Country: Japan

  11. JSPS Research Fellow (DC1), Japan Society for the Promotion of Science

    April 1994

    Country: Japan


Education 3

  1. Nagoya University   Graduate School of Engineering   Information Engineering

    April 1994 - September 1996

    Country: Japan

  2. Nagoya University   Graduate School of Engineering   Information Engineering

    April 1992 - March 1994

    Country: Japan

  3. Nagoya University   School of Engineering   Department of Electronic Engineering

    April 1988 - March 1992

    Country: Japan

Academic Society Memberships 38

  1. Japanese Society of Medical Imaging Technology (JAMIT)

    2020 - Present

  2. IEICE   Secretary, Technical Committee on Medical Imaging

    April 2007 - Present

  3. Japanese Society of Medical Imaging Technology (JAMIT)   Secretary

    April 2009 - Present

  4. Japanese Society for Medical and Biological Engineering   Delegate

    April 2007 - March 2010

  5. IEEE

  6. SPIE   SPIE Medical Imaging Program Committee

    April 2005 - Present

  7. Japanese Society of CT Screening

    April 2020 - Present

  8. Tokai-Section Joint Conference

    April 2020 - Present

  9. Information Processing Society of Japan

    April 2020 - Present

  10. Japanese Society of Cancer Screening and Diagnosis

    April 2020 - Present

  11. Japan Society for Respiratory Endoscopy

    April 2020 - Present

  12. Japanese Society for Medical and Biological Engineering

    April 2020 - Present

  13. Japan Gastroenterological Endoscopy Society

    April 2020 - Present

  14. Japanese Society of Medical Virtual Reality   Councilor

    April 2009 - March 2011

  15. Japan Society of Computer Aided Surgery   Board Member

    April 2008 - Present

  16. IEICE   Secretary, Technical Committee on Medical Imaging

    April 2007 - Present

  17. MICCAI (Medical Image Computing and Computer Assisted Intervention)   Program Committee

    April 2007 - Present

  18. SPIE Medical Imaging Conference 2007, Computer Aided Diagnosis   Program Committee

    April 2007 - March 2009

  19. SPIE Medical Imaging Conference 2007, Image Processing   Program Committee

    April 2007 - March 2009

  20. IEICE   Member, Ad Hoc Technical Committee on Medical Information and Communication Technology

    April 2006 - March 2008

  21. SPIE Medical Imaging Conference 2007, Image Processing   Program Committee

    April 2006 - March 2008

  22. SPIE Medical Imaging Conference 2007, Computer Aided Diagnosis   Program Committee

    April 2006 - March 2008

  23. Japanese Society of Computer Aided Diagnosis of Medical Images   Councilor

    April 2005 - Present

  24. International Journal of Computer Assisted Radiology and Surgery   Editorial Board

    April 2005 - Present

  25. Journal Medical Image Analysis   Editorial Board

    April 2005 - Present

  26. IEICE   Assistant Secretary, Technical Committee on Medical Imaging

    April 2005 - March 2008

  27. MICCAI (Medical Image Computing and Computer Assisted Intervention)   Area Chair

    April 2005 - March 2007

  28. SPIE Medical Imaging Conference 2006, Image Processing   Program Committee

    April 2005 - March 2007

  29. MICCAI (Medical Image Computing and Computer Assisted Intervention)   Reviewer

    April 2004 - March 2006

  30. IEICE   Member, Technical Committee on Medical Imaging

    April 2003 - March 2005

  31. Japan Society of Medical Electronics and Biological Engineering   Member, Research Encouragement Award Selection Committee

    April 2003 - March 2005

  32. MICCAI (Medical Image Computing and Computer Assisted Intervention)   Referee

    April 2003 - March 2004

  33. Japan Society of Computer Aided Surgery   Editorial Board Member

    April 2002 - Present

  34. CARS (Computer Assisted Radiology and Surgery)   Program Committee

    April 2001 - March 2008

  35. IEEE Transactions on Medical Imaging   Ad hoc Reviewer

  36. Medical Image Analysis   Ad hoc Reviewer

  37. International Journal of Computer Assisted Radiology and Surgery

  38. CARS (Computer Assisted Radiology and Surgery)


Committee Memberships 27

  1. On campus   Member, TMI Excellent Graduate School Public Relations Committee

    December 2020 - Present

  2. Research Organization for Information Science and Technology (RIST)   HPCI Collaborative Service Committee Member

    May 2020 - March 2021

  3. On campus   Higashiyama Campus Ethics Review Committee Member

    April 2020 - Present

  4. On campus   Member, Operations Meeting of the Information and Communications Headquarters

    April 2020 - Present

  5. On campus   Information Media Education System Steering Council

    April 2020 - Present

  6. On campus   Member, Information Security Organization Liaison Council

    April 2020 - Present

  7. On campus   Information Media Education System Expert Committee Member

    April 2020 - Present

  8. On campus   Member, Project and Operations Expert Committee

    April 2020 - Present

  9. On campus   Security Expert Committee Member

    April 2020 - Present

  10. On campus   Nationwide Joint-Use System Expert Committee Member

    April 2020 - Present

  11. On campus   Information and Communications Headquarters Meeting

    April 2020 - Present

  12. On campus   Information Strategy Office, Information and Communications Headquarters

    April 2020 - Present

  13. On campus   Member, Information and Communications Headquarters Meeting

    April 2020 - Present

  14. On campus   Education Expert Committee Member, Mathematical and Data Science Education and Research Center

    April 2020 - Present

  15. On campus   Steering Committee Member, Mathematical and Data Science Education and Research Center

    April 2020 - Present

  16. On campus   Technical Support Office Member, Steering Expert Committee of the University-wide Technical Center

    April 2020 - Present

  17. On campus   Steering Expert Member, Steering Committee of the University-wide Technical Center

    April 2020 - Present

  18. On campus   Personnel Committee Member, Steering Committee of the University-wide Technical Center

    April 2020 - Present

  19. On campus   Member, Federation Group 2 Meeting

    April 2020 - Present

  20. On campus   Member, Future Vision Subcommittee

    April 2020 - Present

  21. On campus   Member, Education Subcommittee (University-wide Education Committee)

    April 2020 - Present

  22. On campus   Security Committee Member

    April 2020 - Present

  23. Terumo Life Science Foundation   Research and Development Grant Selection Committee Member

    April 2020 - March 2022

  24. National Institutes of Biomedical Innovation, Health and Nutrition   SIP Evaluation Committee Member

    April 2020 - March 2022

  25. On campus   CIBoG Steering Committee Member

    April 2019 - Present

  26. On campus   CIBoG Curriculum Committee Member

    April 2019 - Present

  27. Japan Society of Computer Aided Surgery   International Committee Member

    December 2017 - September 2019


Awards 21

  1. FY2022 Commendation for Science and Technology by the Minister of Education, Culture, Sports, Science and Technology: Prize for Science and Technology

    April 2022   Ministry of Education, Culture, Sports, Science and Technology (MEXT)   For contributions to research on AI-based automated diagnosis on a medical big-data cloud platform

    佐藤真一、合田憲人、原田達也、森健策

    Award category: International academic award (domestic and international)   Country: Japan

    The Minister of Education, Culture, Sports, Science and Technology presents these awards to raise the motivation of people engaged in science and technology and to contribute to advancing the level of science and technology in Japan.

  2. JAMIT Best Paper Award

    August 2009   Japanese Society of Medical Imaging Technology (JAMIT)

    Country: Japan

    Development of a common platform system for computer-aided medical imaging

  3. Achievement Award

    October 2021   Japanese Society of Medical Imaging Technology (JAMIT)

  4. Commendation by the Minister of Education, Culture, Sports, Science and Technology: Young Scientists' Prize

    April 2006   MEXT

    Country: Japan

    Awarded for contributions to the development of virtualized endoscopy systems.

  5. Certificate of Merit, Education Exhibit, Radiological Society of North America

    November 2009   RSNA (Radiological Society of North America)

  6. Japan Society of Computer Aided Surgery FY2016 Conference Paper Award

    October 2017   Japan Society of Computer Aided Surgery

    Award category: Award from a domestic society, conference, or symposium   Country: Japan

  7. Fellow

    October 2015   MICCAI

    Country: Germany

  8. Magna Cum Laude Education Exhibit

    December 2014   Radiological Society of North America

    Country: United States

  9. Conference 8315, Honorable Mention Poster Award

    February 2012   SPIE Medical Imaging 2012: Computer-Aided Diagnosis

    Country: United States

  10. Honorable Mention Poster Award

    February 2011   SPIE Medical Imaging 2011: Image Processing

    Country: United States

  11. FY2009 CAS Young Investigator Award, Gold Prize (Hitachi Medico Prize)

    November 2010   19th Annual Meeting of the Japan Society of Computer Aided Surgery

    Country: Japan

  12. Honorable Mention Poster Award

    February 2010   SPIE Medical Imaging 2010: Computer-Aided Diagnosis

    Country: United States

  13. Certificate of Merit, Education Exhibit

    November 2009   RSNA (Radiological Society of North America)

  14. Certificate of Merit

    November 2004   Radiological Society of North America

  15. Best Poster Award (1st Prize)

    June 2004   International Society of Computer Assisted Surgery

  16. IEICE Society Best Paper Award

    2000

    Country: Japan

  17. Japan Society for Bronchology Best Paper Award

    2000

    Country: Japan

  18. Niwa Memorial Award

    1998

    Country: Japan

  19. JAMIT Encouragement Award

    1997

    Country: Japan

  20. Japan Society of Medical Electronics and Biological Engineering Best Paper Award

    1997

    Country: Japan

  21. JAMIT Encouragement Award

    1995

    Country: Japan


 

Papers 1772

  1. Label cleaning and propagation for improved segmentation performance using fully convolutional networks   Refereed

    Takaaki Sugino, Yutaro Suzuki, Taichi Kin, Nobuhito Saito, Shinya Onogi, Toshihiro Kawase, Kensaku Mori, Yoshikazu Nakajima

    International Journal of Computer Assisted Radiology and Surgery     March 2021

    Language: English   Type: Research paper (academic journal)

    DOI: https://doi.org/10.1007/s11548-021-02312-5

  2. Predicting Violence Rating Based on Pairwise Comparison.   Refereed

    Ying JI, Yu WANG, Jien KATO, Kensaku MORI

    IEICE Transactions on Information and Systems   Vol. E103.D ( 12 )   pp. 2578 - 2589   December 2020

    Role: Last author   Language: English   Type: Research paper (academic journal)

    DOI: https://doi.org/10.1587/transinf.2020EDP7056

  3. Anatomical attention can help to segment the dilated pancreatic duct in abdominal CT.   Invited, Refereed

    Shen C, Roth HR, Hayashi Y, Oda M, Sato G, Miyamoto T, Rueckert D, Mori K

    International journal of computer assisted radiology and surgery   Vol. 19 ( 4 )   pp. 655 - 664   April 2024

    Role: Corresponding author   Language: English   Type: Research paper (academic journal)   Publisher: International Journal of Computer Assisted Radiology and Surgery

    Purpose: Pancreatic duct dilation is associated with an increased risk of pancreatic cancer, the most lethal malignancy with the lowest 5-year relative survival rate. Automatic segmentation of the dilated pancreatic duct from contrast-enhanced CT scans would facilitate early diagnosis. However, pancreatic duct segmentation poses challenges due to its small anatomical structure and poor contrast in abdominal CT. In this work, we investigate an anatomical attention strategy to address this issue. Methods: Our proposed anatomical attention strategy consists of two steps: pancreas localization and pancreatic duct segmentation. The coarse pancreatic mask segmentation is used to guide the fully convolutional networks (FCNs) to concentrate on the pancreas’ anatomy and disregard unnecessary features. We further apply a multi-scale aggregation scheme to leverage the information from different scales. Moreover, we integrate the tubular structure enhancement as an additional input channel of FCN. Results: We performed extensive experiments on 30 cases of contrast-enhanced abdominal CT volumes. To evaluate the pancreatic duct segmentation performance, we employed four measurements, including the Dice similarity coefficient (DSC), sensitivity, normalized surface distance, and 95 percentile Hausdorff distance. The average DSC achieves 55.7%, surpassing other pancreatic duct segmentation methods on single-phase CT scans only. Conclusions: We proposed an anatomical attention-based strategy for the dilated pancreatic duct segmentation. Our proposed strategy significantly outperforms earlier approaches. The attention mechanism helps to focus on the pancreas region, while the enhancement of the tubular structure enables FCNs to capture the vessel-like structure. The proposed technique might be applied to other tube-like structure segmentation tasks within targeted anatomies.

    DOI: 10.1007/s11548-023-03049-z

    Scopus

    PubMed
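
    A minimal illustration of the anatomical attention idea summarized above (a coarse pancreas mask guiding an FCN to concentrate on the organ region) is sketched below in PyTorch. The block name, the simple mask-multiplication gating, and the tensor shapes are illustrative assumptions, not the authors' implementation.

      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      class MaskGatedBlock(nn.Module):
          """Gate feature maps with a coarse organ-probability map (illustrative)."""
          def __init__(self, channels):
              super().__init__()
              self.conv = nn.Conv3d(channels, channels, kernel_size=3, padding=1)

          def forward(self, feats, coarse_mask):
              # Resize the coarse pancreas probability map to the feature resolution
              mask = F.interpolate(coarse_mask, size=feats.shape[2:], mode="trilinear",
                                   align_corners=False)
              # Suppress features outside the organ region, then refine
              gated = feats * (0.1 + 0.9 * mask)   # keep a little background context
              return self.conv(gated)

      # Toy example: stage features of a 3D FCN and a coarse mask from the first step
      feats = torch.randn(1, 32, 24, 48, 48)        # (B, C, D, H, W)
      coarse_mask = torch.rand(1, 1, 96, 192, 192)  # pancreas probability in [0, 1]
      print(MaskGatedBlock(32)(feats, coarse_mask).shape)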

  4. Artificial intelligence-based diagnostic imaging system with virtual enteroscopy and virtual unfolded views to evaluate small bowel lesions in Crohn's disease.   Invited, Refereed

    Furukawa K, Oda M, Watanabe O, Nakamura M, Yamamura T, Maeda K, Mori K, Kawashima H

    Revista espanola de enfermedades digestivas     March 2024

    Role: Corresponding author   Language: English   Type: Research paper (academic journal)

    DOI: 10.17235/reed.2024.10405/2024

    PubMed

  5. Development of real-time navigation system for laparoscopic hepatectomy using magnetic micro sensor.   Invited, Refereed

    Igami T, Hayashi Y, Yokyama Y, Mori K, Ebata T

    Minimally invasive therapy & allied technologies : MITAT : official journal of the Society for Minimally Invasive Therapy     pp. 1 - 11   January 2024

    Role: Corresponding author   Language: English   Type: Research paper (academic journal)   Publisher: Minimally Invasive Therapy and Allied Technologies

    Background: We report a new real-time navigation system for laparoscopic hepatectomy (LH), which resembles a car navigation system. Material and methods: Virtual three-dimensional liver and body images were reconstructed using the “New-VES” system, which worked as roadmap during surgery. Several points of the patient’s body were registered in virtual images using a magnetic position sensor (MPS). A magnetic transmitter, corresponding to an artificial satellite, was placed about 40 cm above the patient’s body. Another MPS, corresponding to a GPS antenna, was fixed on the handling part of the laparoscope. Fiducial registration error (FRE, an error between real and virtual lengths) was utilized to evaluate the accuracy of this system. Results: Twenty-one patients underwent LH with this system. Mean FRE of the initial five patients was 17.7 mm. Mean FRE of eight patients in whom MDCT was taken using radiological markers for registration of body parts as first improvement, was reduced to 10.2 mm (p =.014). As second improvement, a new MPS as an intraoperative body position sensor was fixed on the right-sided chest wall for automatic correction of postural gap. The preoperative and postoperative mean FREs of 8 patients with both improvements were 11.1 mm and 10.1 mm (p =.250). Conclusions: Our system may provide a promising option that virtually guides LH.

    DOI: 10.1080/13645706.2023.2301594

    Scopus

    PubMed
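
    Fiducial registration error (FRE), used above to evaluate the navigation accuracy, is the residual of a point-based rigid registration between CT and tracker coordinates. The numpy sketch below (Kabsch/SVD alignment on synthetic fiducials) is a generic illustration under assumed data, not the clinical system's code.

      import numpy as np

      def rigid_register(src, dst):
          """Least-squares rigid transform (R, t) mapping src points onto dst (Kabsch)."""
          src_c, dst_c = src.mean(0), dst.mean(0)
          H = (src - src_c).T @ (dst - dst_c)
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
          R = Vt.T @ D @ U.T
          return R, dst_c - R @ src_c

      def fre(src, dst, R, t):
          """Root-mean-square distance between mapped fiducials and their targets."""
          return np.sqrt(np.mean(np.sum((src @ R.T + t - dst) ** 2, axis=1)))

      # Synthetic fiducials in CT coordinates vs. (noisy) tracker coordinates
      ct_pts = np.random.rand(6, 3) * 200.0    # mm
      tracker_pts = ct_pts + np.array([10.0, -5.0, 30.0]) + np.random.normal(0, 2.0, ct_pts.shape)
      R, t = rigid_register(ct_pts, tracker_pts)
      print("FRE [mm]:", fre(ct_pts, tracker_pts, R, t))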

  6. Deep learning model for extensive smartphone-based diagnosis and triage of cataracts and multiple corneal diseases.   Invited, Refereed

    Ueno Y, Oda M, Yamaguchi T, Fukuoka H, Nejima R, Kitaguchi Y, Miyake M, Akiyama M, Miyata K, Kashiwagi K, Maeda N, Shimazaki J, Noma H, Mori K, Oshika T

    The British journal of ophthalmology     January 2024

    Role: Corresponding author   Language: English   Type: Research paper (academic journal)   Publisher: British Journal of Ophthalmology

    Aim To develop an artificial intelligence (AI) algorithm that diagnoses cataracts/corneal diseases from multiple conditions using smartphone images. Methods This study included 6442 images that were captured using a slit-lamp microscope (6106 images) and smartphone (336 images). An AI algorithm was developed based on slit-lamp images to differentiate 36 major diseases (cataracts and corneal diseases) into 9 categories. To validate the AI model, smartphone images were used for the testing dataset. We evaluated AI performance that included sensitivity, specificity and receiver operating characteristic (ROC) curve for the diagnosis and triage of the diseases. Results The AI algorithm achieved an area under the ROC curve of 0.998 (95% CI, 0.992 to 0.999) for normal eyes, 0.986 (95% CI, 0.978 to 0.997) for infectious keratitis, 0.960 (95% CI, 0.925 to 0.994) for immunological keratitis, 0.987 (95% CI, 0.978 to 0.996) for cornea scars, 0.997 (95% CI, 0.992 to 1.000) for ocular surface tumours, 0.993 (95% CI, 0.984 to 1.000) for corneal deposits, 1.000 (95% CI, 1.000 to 1.000) for acute angle-closure glaucoma, 0.992 (95% CI, 0.985 to 0.999) for cataracts and 0.993 (95% CI, 0.985 to 1.000) for bullous keratopathy. The triage of referral suggestion using the smartphone images exhibited high performance, in which the sensitivity and specificity were 1.00 (95% CI, 0.478 to 1.00) and 1.00 (95% CI, 0.976 to 1.000) for 'urgent', 0.867 (95% CI, 0.683 to 0.962) and 1.00 (95% CI, 0.971 to 1.000) for 'semi-urgent', 0.853 (95% CI, 0.689 to 0.950) and 0.983 (95% CI, 0.942 to 0.998) for 'routine' and 1.00 (95% CI, 0.958 to 1.00) and 0.896 (95% CI, 0.797 to 0.957) for 'observation', respectively. Conclusions The AI system achieved promising performance in the diagnosis of cataracts and corneal diseases.

    DOI: 10.1136/bjo-2023-324488

    Scopus

    PubMed

  7. Endoscope Automation Framework with Hierarchical Control and Interactive Perception for Multi-Tool Tracking in Minimally Invasive Surgery.   Invited, Refereed

    Fozilov K, Colan J, Davila A, Misawa K, Qiu J, Hayashi Y, Mori K, Hasegawa Y

    Sensors (Basel, Switzerland)   Vol. 23 ( 24 )   December 2023

    Role: Corresponding author   Language: English   Type: Research paper (academic journal)   Publisher: Sensors

    In the context of Minimally Invasive Surgery, surgeons mainly rely on visual feedback during medical operations. In common procedures such as tissue resection, the automation of endoscopic control is crucial yet challenging, particularly due to the interactive dynamics of multi-agent operations and the necessity for real-time adaptation. This paper introduces a novel framework that unites a Hierarchical Quadratic Programming controller with an advanced interactive perception module. This integration addresses the need for adaptive visual field control and robust tool tracking in the operating scene, ensuring that surgeons and assistants have optimal viewpoint throughout the surgical task. The proposed framework handles multiple objectives within predefined thresholds, ensuring efficient tracking even amidst changes in operating backgrounds, varying lighting conditions, and partial occlusions. Empirical validations in scenarios involving single, double, and quadruple tool tracking during tissue resection tasks have underscored the system’s robustness and adaptability. The positive feedback from user studies, coupled with the low cognitive and physical strain reported by surgeons and assistants, highlight the system’s potential for real-world application.

    DOI: 10.3390/s23249865

    Scopus

    PubMed

  8. Artificial intelligence for evaluating the risk of gastric cancer: reliable detection and scoring of intestinal metaplasia with deep learning algorithms.   Invited, Refereed

    Iwaya M, Hayashi Y, Sakai Y, Yoshizawa A, Iwaya Y, Uehara T, Kitagawa M, Fukayama M, Mori K, Ota H

    Gastrointestinal endoscopy   Vol. 98 ( 6 )   pp. 925 - 933.e1   December 2023

    Role: Corresponding author   Language: English   Type: Research paper (academic journal)   Publisher: Gastrointestinal Endoscopy

    Background and Aims: Gastric cancer (GC) is associated with chronic gastritis. To evaluate the risk, the Operative Link on Gastric Intestinal Metaplasia Assessment (OLGIM) system was constructed and showed a higher GC risk in stage III or IV patients, determined by the degree of intestinal metaplasia (IM). Although the OLGIM system is useful, evaluating the degree of IM requires substantial experience to produce precise scoring. Whole-slide imaging is becoming routine, but most artificial intelligence (AI) systems in pathology are focused on neoplastic lesions. Methods: Hematoxylin and eosin–stained slides were scanned. Images were divided into each gastric biopsy tissue sample and labeled with an IM score. IM was scored as follows: 0 (no IM), 1 (mild IM), 2 (moderate IM), and 3 (severe IM). Overall, 5753 images were prepared. A deep convolutional neural network (DCNN) model, ResNet50, was used for classification. Results: ResNet50 classified images with and without IM with a sensitivity of 97.7% and specificity of 94.6%. IM scores 2 and 3, involved as criteria of stage III or IV in the OLGIM system, were classified by ResNet50 in 18%. The respective sensitivity and specificity values of classifying IM between scores 0 and 1 and 2 and 3 were 98.5% and 94.9%, respectively. The IM scores classified by pathologists and the AI system were different in only 438 images (7.6%), and we found that ResNet50 tended to miss small foci of IM but successfully identified minimal IM areas that pathologists missed during the review. Conclusions: Our findings suggested that this AI system would contribute to evaluating the risk of GC accuracy, reliability, and repeatability with worldwide standardization.

    DOI: 10.1016/j.gie.2023.06.056

    Scopus

    PubMed

  9. Deep learning-based prediction model for postoperative complications of cervical posterior longitudinal ligament ossification.   Invited, Refereed

    Ito S, Nakashima H, Yoshii T, Egawa S, Sakai K, Kusano K, Tsutui S, Hirai T, Matsukura Y, Wada K, Katsumi K, Koda M, Kimura A, Furuya T, Maki S, Nagoshi N, Nishida N, Nagamoto Y, Oshima Y, Ando K, Takahata M, Mori K, Nakajima H, Murata K, Miyagi M, Kaito T, Yamada K, Banno T, Kato S, Ohba T, Inami S, Fujibayashi S, Katoh H, Kanno H, Oda M, Mori K, Taneichi H, Kawaguchi Y, Takeshita K, Matsumoto M, Yamazaki M, Okawa A, Imagama S

    European spine journal : official publication of the European Spine Society, the European Spinal Deformity Society, and the European Section of the Cervical Spine Research Society   Vol. 32 ( 11 )   pp. 3797 - 3806   November 2023

    Role: Corresponding author   Language: English   Type: Research paper (academic journal)   Publisher: European Spine Journal

    Purpose: Postoperative complication prediction helps surgeons to inform and manage patient expectations. Deep learning, a model that finds patterns in large samples of data, outperform traditional statistical methods in making predictions. This study aimed to create a deep learning-based model (DLM) to predict postoperative complications in patients with cervical ossification of the posterior longitudinal ligament (OPLL). Methods: This prospective multicenter study was conducted by the 28 institutions, and 478 patients were included in the analysis. Deep learning was used to create two predictive models of the overall postoperative complications and neurological complications, one of the major complications. These models were constructed by learning the patient's preoperative background, clinical symptoms, surgical procedures, and imaging findings. These logistic regression models were also created, and these accuracies were compared with those of the DLM. Results: Overall complications were observed in 127 cases (26.6%). The accuracy of the DLM was 74.6 ± 3.7% for predicting the overall occurrence of complications, which was comparable to that of the logistic regression (74.1%). Neurological complications were observed in 48 cases (10.0%), and the accuracy of the DLM was 91.7 ± 3.5%, which was higher than that of the logistic regression (90.1%). Conclusion: A new algorithm using deep learning was able to predict complications after cervical OPLL surgery. This model was well calibrated, with prediction accuracy comparable to that of regression models. The accuracy remained high even for predicting only neurological complications, for which the case number is limited compared to conventional statistical methods.

    DOI: 10.1007/s00586-023-07562-2

    Scopus

    PubMed

  10. Social Relation Atmosphere Recognition with Relevant Visual Concepts   Invited

    Ying J.I., Wang Y., Mori K., Kato J.

    IEICE Transactions on Information and Systems   Vol. E106.D ( 10 )   pp. 1638 - 1649   October 2023

    Role: Corresponding author   Language: English   Type: Research paper (academic journal)   Publisher: IEICE Transactions on Information and Systems

    Social relationships (e.g., couples, opponents) are the foundational part of society. Social relation atmosphere describes the overall interaction environment between social relationships. Discovering social relation atmosphere can help machines better comprehend human behaviors and improve the performance of social intelligent applications. Most existing research mainly focuses on investigating social relationships, while ignoring the social relation atmosphere. Due to the complexity of the expressions in video data and the uncertainty of the social relation atmosphere, it is even difficult to define and evaluate. In this paper, we innovatively analyze the social relation atmosphere in video data. We introduce a Relevant Visual Concept (RVC) from the social relationship recognition task to facilitate social relation atmosphere recognition, because social relationships contain useful information about human interactions and surrounding environments, which are crucial clues for social relation atmosphere recognition. Our approach consists of two main steps: (1) we first generate a group of visual concepts that preserve the inherent social relationship information by utilizing a 3D explanation module; (2) the extracted relevant visual concepts are used to supplement the social relation atmosphere recognition. In addition, we present a new dataset based on the existing Video Social Relation Dataset. Each video is annotated with four kinds of social relation atmosphere attributes and one social relationship. We evaluate the proposed method on our dataset. Experiments with various 3D ConvNets and fusion methods demonstrate that the proposed method can effectively improve recognition accuracy compared to end-to-end ConvNets. The visualization results also indicate that essential information in social relationships can be discovered and used to enhance social relation atmosphere recognition.

    DOI: 10.1587/transinf.2023PCP0008

    Scopus

    CiNii Research

  11. Automated Detection and Diagnosis of Spinal Schwannomas and Meningiomas Using Deep Learning and Magnetic Resonance Imaging.   Invited, Refereed

    Ito S, Nakashima H, Segi N, Ouchida J, Oda M, Yamauchi I, Oishi R, Miyairi Y, Mori K, Imagama S

    Journal of clinical medicine   Vol. 12 ( 15 )   August 2023

    Role: Corresponding author   Language: English   Type: Research paper (academic journal)   Publisher: Journal of Clinical Medicine

    Spinal cord tumors are infrequently identified spinal diseases that are often difficult to diagnose even with magnetic resonance imaging (MRI) findings. To minimize the probability of overlooking these tumors and improve diagnostic accuracy, an automatic diagnostic system is needed. We aimed to develop an automated system for detecting and diagnosing spinal schwannomas and meningiomas based on deep learning using You Only Look Once (YOLO) version 4 and MRI. In this retrospective diagnostic accuracy study, the data of 50 patients with spinal schwannomas, 45 patients with meningiomas, and 100 control cases were reviewed, respectively. Sagittal T1-weighted (T1W) and T2-weighted (T2W) images were used for object detection, classification, training, and validation. The object detection and diagnosis system was developed using YOLO version 4. The accuracies of the proposed object detections based on T1W, T2W, and T1W + T2W images were 84.8%, 90.3%, and 93.8%, respectively. The accuracies of the object detection for two spine surgeons were 88.9% and 90.1%, respectively. The accuracies of the proposed diagnoses based on T1W, T2W, and T1W + T2W images were 76.4%, 83.3%, and 84.1%, respectively. The accuracies of the diagnosis for two spine surgeons were 77.4% and 76.1%, respectively. We demonstrated an accurate, automated detection and diagnosis of spinal schwannomas and meningiomas using the developed deep learning-based method based on MRI. This system could be valuable in supporting radiological diagnosis of spinal schwannomas and meningioma, with a potential of reducing the radiologist’s overall workload.

    DOI: 10.3390/jcm12155075

    Scopus

    PubMed

  12. Activities of National Institute of Informatics in Japan   Invited, Refereed

    Kitsuregawa M., Urushidani S., Yamaji K., Takakura H., Hasuo I., Sato I., Ishikawa F., Echizen I., Mori K.

    Communications of the ACM   Vol. 66 ( 7 )   pp. 58 - 63   June 2023

    Role: Corresponding author   Language: English   Type: Research paper (academic journal)   Publisher: Communications of the ACM

    DOI: 10.1145/3589736

    Scopus

  13. Speedometer for withdrawal time monitoring during colonoscopy: a clinical implementation trial.   Invited, Refereed

    Barua I, Misawa M, Glissen Brown JR, Walradt T, Kudo SE, Sheth SG, Nee J, Iturrino J, Mukherjee R, Cheney CP, Sawhney MS, Pleskow DK, Mori K, Løberg M, Kalager M, Wieszczy P, Bretthauer M, Berzin TM, Mori Y

    Scandinavian journal of gastroenterology   Vol. 58 ( 6 )   pp. 664 - 670   June 2023

    Role: Corresponding author   Language: English   Type: Research paper (academic journal)   Publisher: Scandinavian Journal of Gastroenterology

    Objectives: Meticulous inspection of the mucosa during colonoscopy, represents a lengthier withdrawal time, but has been shown to increase adenoma detection rate (ADR). We investigated if artificial intelligence-aided speed monitoring can improve suboptimal withdrawal time. Methods: We evaluated the implementation of a computer-aided speed monitoring device during colonoscopy at a large academic endoscopy center. After informed consent, patients ≥18 years undergoing colonoscopy between 5 March and 29 April 2021 were examined without the use of the speedometer, and with the speedometer between 29 April and 30 June 2021. All colonoscopies were recorded, and withdrawal time was assessed based on the recordings in a blinded fashion. We compared mean withdrawal time, percentage of withdrawal time ≥6 min, and ADR with and without the speedometer. Results: One hundred sixty-six patients in each group were eligible for analyses. Mean withdrawal time was 9 min and 6.6 s (95% CI: 8 min and 34.8 s to 9 min and 39 s) without the use of the speedometer, and 9 min and 9 s (95% CI: 8 min and 45 s to 9 min and 33.6 s) with the speedometer; difference 2.3 s (95% CI: −42.3–37.7, p = 0.91). The ADRs were 45.2% (95% CI: 37.6–52.8) without the speedometer as compared to 45.8% (95% CI: 38.2–53.4) with the speedometer (p = 0.91). The proportion of colonoscopies with withdrawal time ≥6 min without the speedometer was 85.5% (95% CI: 80.2–90.9) versus 86.7% (95% CI: 81.6–91.9) with the speedometer (p = 0.75). Conclusions: Use of speed monitoring during withdrawal did not increase withdrawal time or ADR in colonoscopy. ClinicalTrials.gov Identifier: NCT04710251.

    DOI: 10.1080/00365521.2022.2154616

    Scopus

    PubMed

  14. Correction to: Gaussian affinity and GIoU-based loss for perforation detection and localization from colonoscopy videos.   Invited, Refereed

    Jiang K, Itoh H, Oda M, Okumura T, Mori Y, Misawa M, Hayashi T, Kudo SE, Mori K

    International journal of computer assisted radiology and surgery   Vol. 18 ( 5 )   pp. 807   May 2023

    Role: Corresponding author   Language: English   Type: Research paper (academic journal)   Publisher: International Journal of Computer Assisted Radiology and Surgery

    The original version of this article unfortunately contained a mistake. The incorrect notations were given in the author’s affiliations.

    DOI: 10.1007/s11548-023-02899-x

    Scopus

    PubMed

  15. Gaussian affinity and GIoU-based loss for perforation detection and localization from colonoscopy videos.   Invited, Refereed

    Jiang K, Itoh H, Oda M, Okumura T, Mori Y, Misawa M, Hayashi T, Kudo SE, Mori K

    International journal of computer assisted radiology and surgery   Vol. 18 ( 5 )   pp. 795 - 805   May 2023

    Role: Corresponding author   Language: English   Type: Research paper (academic journal)   Publisher: International Journal of Computer Assisted Radiology and Surgery

    Purpose: Endoscopic submucosal dissection (ESD) is a minimally invasive treatment for early gastric cancer. However, perforations may happen and cause peritonitis during ESD. Thus, there is a potential demand for a computer-aided diagnosis system to support physicians in ESD. This paper presents a method to detect and localize perforations from colonoscopy videos to avoid perforation ignoring or enlarging by ESD physicians. Method: We proposed a training method for YOLOv3 by using GIoU and Gaussian affinity losses for perforation detection and localization in colonoscopic images. In this method, the object functional contains the generalized intersection over Union loss and Gaussian affinity loss. We propose a training method for the architecture of YOLOv3 with the presented loss functional to detect and localize perforations precisely. Results: To qualitatively and quantitatively evaluate the presented method, we created a dataset from 49 ESD videos. The results of the presented method on our dataset revealed a state-of-the-art performance of perforation detection and localization, which achieved 0.881 accuracy, 0.869 AUC, and 0.879 mean average precision. Furthermore, the presented method is able to detect a newly appeared perforation in 0.1 s. Conclusions: The experimental results demonstrated that YOLOv3 trained by the presented loss functional were very effective in perforation detection and localization. The presented method can quickly and precisely remind physicians of perforation happening in ESD. We believe a future CAD system can be constructed for clinical applications with the proposed method.

    DOI: 10.1007/s11548-022-02821-x

    Scopus

    PubMed
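
    The generalized IoU term in the loss described above has a standard closed form. The short numpy sketch below shows GIoU and the corresponding loss for two axis-aligned boxes given as [x1, y1, x2, y2]; it is the generic formula, not the paper's training code.

      import numpy as np

      def giou_loss(box_a, box_b):
          """1 - GIoU for two axis-aligned boxes [x1, y1, x2, y2]."""
          # Intersection area
          ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
          ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
          inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
          # Union area
          area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
          area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
          union = area_a + area_b - inter
          iou = inter / union
          # Smallest enclosing box C penalizes distant, non-overlapping boxes
          cx1, cy1 = min(box_a[0], box_b[0]), min(box_a[1], box_b[1])
          cx2, cy2 = max(box_a[2], box_b[2]), max(box_a[3], box_b[3])
          area_c = (cx2 - cx1) * (cy2 - cy1)
          giou = iou - (area_c - union) / area_c
          return 1.0 - giou

      print(giou_loss(np.array([10, 10, 50, 50]), np.array([30, 30, 80, 90])))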

  16. Class-wise confidence-aware active learning for laparoscopic images segmentation   Invited, Refereed

    Qiu, J; Hayashi, Y; Oda, M; Kitasaka, T; Mori, K

    INTERNATIONAL JOURNAL OF COMPUTER ASSISTED RADIOLOGY AND SURGERY   Vol. 18 ( 3 )   pp. 473 - 482   March 2023

    Role: Corresponding author   Language: English   Type: Research paper (academic journal)   Publisher: International Journal of Computer Assisted Radiology and Surgery

    Purpose: Segmentation tasks are important for computer-assisted surgery systems as they provide the shapes of organs and the locations of instruments. What prevents the most powerful segmentation approaches from becoming practical applications is the requirement for annotated data. Active learning provides strategies to dynamically select the most informative samples to reduce the annotation workload. However, most previous active learning literature has failed to select the frames that containing low-appearing frequency classes, even though the existence of these classes is common in laparoscopic videos, resulting in poor performance in segmentation tasks. Furthermore, few previous works have explored the unselected data to improve active learning. Therefore, in this work, we focus on these classes to improve the segmentation performance. Methods: We propose a class-wise confidence bank that stores and updates the confidence scores for each class and a new acquisition function based on a confidence bank. We apply confidence scores to explore an unlabeled dataset by combining it with a class-wise data mixture method to exploit unlabeled datasets without any annotation. Results: We validated our proposal on two open-source datasets, CholecSeg8k and RobSeg2017, and observed that its performance surpassed previous active learning studies with about 10 % improvement on CholecSeg8k, especially for classes with a low-appearing frequency. For robSeg2017, we conducted experiments with a small and large annotation budgets to validate situation that shows the effectiveness of our proposal. Conclusions: We presented a class-wise confidence score to improve the acquisition function for active learning and explored unlabeled data with our proposed class-wise confidence score, which results in a large improvement over the compared methods. The experiments also showed that our proposal improved the segmentation performance for classes with a low-appearing frequency.

    DOI: 10.1007/s11548-022-02773-2

    Web of Science

    Scopus

    PubMed
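
    The class-wise confidence bank described above can be pictured as an exponential-moving-average record of per-class confidence, with an acquisition score that favors frames containing classes the model is least sure about. The update rule, weighting, and names in this numpy sketch are illustrative assumptions, not the paper's exact formulation.

      import numpy as np

      class ConfidenceBank:
          """Keep an EMA of mean softmax confidence per class (illustrative)."""
          def __init__(self, num_classes, momentum=0.9):
              self.conf = np.ones(num_classes)          # optimistic start
              self.momentum = momentum

          def update(self, probs, pred):
              # probs: (C, H, W) softmax output, pred: (H, W) argmax labels
              for c in np.unique(pred):
                  mean_c = probs[c][pred == c].mean()
                  self.conf[c] = self.momentum * self.conf[c] + (1 - self.momentum) * mean_c

          def acquisition_score(self, probs, pred):
              # Frames that contain low-confidence (often rare) classes score highest
              class_weight = 1.0 - self.conf
              present = np.bincount(pred.ravel(), minlength=len(self.conf)) > 0
              frame_uncertainty = 1.0 - probs.max(axis=0).mean()
              return frame_uncertainty + class_weight[present].sum()

      # Toy usage: score one unlabeled frame, then send the top-scoring frames for annotation
      bank = ConfidenceBank(num_classes=3)
      probs = np.random.dirichlet(np.ones(3), size=(64, 64)).transpose(2, 0, 1)
      pred = probs.argmax(axis=0)
      bank.update(probs, pred)
      print(bank.acquisition_score(probs, pred))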

  17. A skeleton context-aware 3D fully convolutional network for abdominal artery segmentation   Invited, Refereed

    Zhu, RY; Oda, M; Hayashi, Y; Kitasaka, T; Misawa, K; Fujiwara, M; Mori, K

    INTERNATIONAL JOURNAL OF COMPUTER ASSISTED RADIOLOGY AND SURGERY   Vol. 18 ( 3 )   pp. 461 - 472   March 2023

    Role: Corresponding author   Language: English   Type: Research paper (academic journal)   Publisher: International Journal of Computer Assisted Radiology and Surgery

    Purpose: This paper aims to propose a deep learning-based method for abdominal artery segmentation. Blood vessel structure information is essential to diagnosis and treatment. Accurate blood vessel segmentation is critical to preoperative planning. Although deep learning-based methods perform well on large organs, segmenting small organs such as blood vessels is challenging due to complicated branching structures and positions. We propose a 3D deep learning network from a skeleton context-aware perspective to improve segmentation accuracy. In addition, we propose a novel 3D patch generation method which could strengthen the structural diversity of a training data set. Method: The proposed method segments abdominal arteries from an abdominal computed tomography (CT) volume using a 3D fully convolutional network (FCN). We add two auxiliary tasks to the network to extract the skeleton context of abdominal arteries. In addition, our skeleton-based patch generation (SBPG) method further enables the FCN to segment small arteries. SBPG generates a 3D patch from a CT volume by leveraging artery skeleton information. These methods improve the segmentation accuracies of small arteries. Results: We used 20 cases of abdominal CT volumes to evaluate the proposed method. The experimental results showed that our method outperformed previous segmentation accuracies. The averaged precision rate, recall rate, and F-measure were 95.5%, 91.0%, and 93.2%, respectively. Compared to a baseline method, our method improved 1.5% the averaged recall rate and 0.7% the averaged F-measure. Conclusions: We present a skeleton context-aware 3D FCN to segment abdominal arteries from an abdominal CT volume. In addition, we propose a 3D patch generation method. Our fully automated method segmented most of the abdominal artery regions. The method produced competitive segmentation performance compared to previous methods.

    DOI: 10.1007/s11548-022-02767-0

    Web of Science

    Scopus

    PubMed

  18. Deep Learning-Based Seminal Vesicle and Vas Deferens Recognition in the Posterior Approach of Robot-Assisted Radical Prostatectomy.   Invited, Refereed

    Takeshita N, Sakamoto S, Kitaguchi D, Takeshita N, Yajima S, Koike T, Ishikawa Y, Matsuzaki H, Mori K, Masuda H, Ichikawa T, Ito M

    Urology   Vol. 173   pp. 98 - 103   March 2023

    Role: Corresponding author   Language: English   Type: Research paper (academic journal)   Publisher: Urology

    Objective: To develop a convolutional neural network to recognize the seminal vesicle and vas deferens (SV-VD) in the posterior approach of robot-assisted radical prostatectomy (RARP) and assess the performance of the convolutional neural network model under clinically relevant conditions. Methods: Intraoperative videos of robot-assisted radical prostatectomy performed by the posterior approach from 3 institutions were obtained between 2019 and 2020. Using SV-VD dissection videos, semantic segmentation of the seminal vesicle-vas deferens area was performed using a convolutional neural network-based approach. The dataset was split into training and test data in a 10:3 ratio. The average time required by 6 novice urologists to correctly recognize the SV-VD was compared using intraoperative videos with and without segmentation masks generated by the convolutional neural network model, which was evaluated with the test data using the Dice similarity coefficient. Training and test datasets were compared using the Mann–Whitney U-test and chi-square test. Time required to recognize the SV-VD was evaluated using the Mann–Whitney U-test. Results: From 26 patient videos, 1 040 images were created (520 SV-VD annotated images and 520 SV-VD non-displayed images). The convolutional neural network model had a Dice similarity coefficient value of 0.73 in the test data. Compared with original videos, videos with the generated segmentation mask promoted significantly faster seminal vesicle and vas deferens recognition (P < .001). Conclusion: The convolutional neural network model provides accurate recognition of the SV-VD in the posterior approach RARP, which may be helpful, especially for novice urologists.

    DOI: 10.1016/j.urology.2022.12.006

    Scopus

    PubMed

  19. Database-driven patient-specific registration error compensation method for image-guided laparoscopic surgery.   Invited, Refereed

    Hayashi Y, Misawa K, Mori K

    International journal of computer assisted radiology and surgery   Vol. 18 ( 1 )   pp. 63 - 69   January 2023

    Role: Corresponding author   Language: English   Type: Research paper (academic journal)   Publisher: International Journal of Computer Assisted Radiology and Surgery

    Purpose: A surgical navigation system helps surgeons understand anatomical structures in the operative field during surgery. Patient-to-image registration, which aligns coordinate systems between the CT volume and a positional tracker, is vital for accurate surgical navigation. Although a point-based rigid registration method using fiducials on the body surface is often utilized for laparoscopic surgery navigation, precise registration is difficult due to such factors as soft tissue deformation. We propose a method that compensates a transformation matrix computed using fiducials on the body surface based on the analysis of positional information in the database. Methods: We built our database by measuring the positional information of the fiducials and the guidance targets in both the CT volume and positional tracker coordinate systems through previous surgeries. We computed two transformation matrices: using only the fiducials and using only the guidance targets in all the data in the database. We calculated the differences between the two transformation matrices in each piece of data. The compensation transformation matrix was computed by averaging these difference matrices. In this step, we selected the data from the database based on the similarity of the fiducials and the configuration of the guidance targets. Results: We evaluated our proposed method using 20 pieces of data acquired during laparoscopic gastrectomy for gastric cancer. The locations of blood vessels were used as guidance targets for computing target registration error. The mean target registration errors significantly decreased from 33.0 to 17.1 mm before and after the compensation. Conclusion: This paper described a registration error compensation method using a database for image-guided laparoscopic surgery. Since our proposed method reduced registration error without additional intraoperative measurements during surgery, it increases the accuracy of surgical navigation for laparoscopic surgery.

    DOI: 10.1007/s11548-022-02804-y

    Scopus

    PubMed
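
    The database-driven compensation described above can be viewed as averaging, over similar past cases, the difference between the fiducial-based transform and the target-based transform, and applying that average as a correction. The sketch below uses 4x4 homogeneous matrices and a naive matrix average whose rotation part is re-projected onto a rotation by SVD; this averaging and the synthetic cases are simplifying assumptions, not the paper's exact case-selection or compensation scheme.

      import numpy as np

      def difference_transform(T_fiducial, T_target):
          """Delta that corrects the fiducial-based transform toward the target-based one."""
          return T_target @ np.linalg.inv(T_fiducial)

      def average_transforms(deltas):
          """Naive mean of 4x4 transforms; rotation re-projected onto SO(3) via SVD."""
          M = np.mean(deltas, axis=0)
          U, _, Vt = np.linalg.svd(M[:3, :3])
          R = U @ Vt
          if np.linalg.det(R) < 0:                   # guard against reflections
              R = U @ np.diag([1.0, 1.0, -1.0]) @ Vt
          out = np.eye(4)
          out[:3, :3], out[:3, 3] = R, M[:3, 3]
          return out

      def rot_z(theta, t):
          c, s = np.cos(theta), np.sin(theta)
          T = np.eye(4)
          T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
          T[:3, 3] = t
          return T

      # Two synthetic past cases: (fiducial-based, target-based) transform pairs
      cases = [(rot_z(0.10, [5.0, 0.0, 0.0]), rot_z(0.12, [6.0, 1.0, 0.0])),
               (rot_z(0.20, [4.0, 2.0, 0.0]), rot_z(0.23, [5.0, 3.0, 0.0]))]
      deltas = [difference_transform(Tf, Tt) for Tf, Tt in cases]
      print(average_transforms(deltas).round(3))     # correction to apply to a new case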

  20. Computer-Aided Size Estimation of Colorectal Polyps   Invited, Refereed

    Hotta K., Itoh H., Mori Y., Misawa M., Mori K., Kudo S.e.

    Techniques and Innovations in Gastrointestinal Endoscopy   Vol. 25 ( 2 )   pp. 186 - 188   January 2023

    Role: Corresponding author   Language: English   Type: Research paper (academic journal)   Publisher: Techniques and Innovations in Gastrointestinal Endoscopy

    DOI: 10.1016/j.tige.2022.11.004

    Scopus

  21. Surgical area recognition from laparoscopic images in laparoscopic gastrectomy for gastric cancer using label smoothing and uncertainty   Invited, Refereed

    Hayashi Y., Misawa K., Mori K.

    Progress in Biomedical Optics and Imaging - Proceedings of SPIE   Vol. 12466   2023

    Role: Corresponding author   Language: English   Type: Research paper (academic journal)   Publisher: Progress in Biomedical Optics and Imaging - Proceedings of SPIE

    This paper presents surgical area recognition from laparoscopic images in laparoscopic gastrectomy. Laparoscopic gastrectomy is performed as a minimally invasive procedure for removing gastric cancer. In this surgery, surgeons should cut the blood vessels around the stomach before resecting cancer. Since this type of surgery requires higher surgical skill, a surgical assistance system has been developed to enhance surgeons' abilities. Recognition of the surgical area related to the blood vessels from laparoscopic videos provides essential information in the operative field to the surgical assistance system. Therefore, we develop a method for recognizing laparoscopic images into the surgical area. The proposed method classifies the laparoscopic images into seven scenes using deep neural networks. We introduce the label smoothing in time direction to obtain a soft label. Bayesian neural networks are used to classify the laparoscopic images and estimate the uncertainty. After the classification, we modify the predictions on each laparoscopic image using the estimated uncertainty and temporal information. We evaluated the proposed method using 10,818 images from 10 videos recorded during laparoscopic gastrectomy for gastric cancer. Five-fold cross-validation was performed for the performance evaluation. The mean classification accuracy was 84.0%. The experimental results showed that the proposed method could recognize the surgical area from laparoscopic images.

    DOI: 10.1117/12.2654775

    Scopus
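
    Label smoothing "in the time direction", as described above, can be read as blending each frame's one-hot scene label with the labels of neighboring frames to obtain a soft target. The window size and weights in this numpy sketch are assumed values for illustration, not the paper's settings.

      import numpy as np

      def temporal_label_smoothing(labels, num_classes, window=2, decay=0.5):
          """Blend each frame's one-hot label with nearby frames' labels (illustrative)."""
          T = len(labels)
          one_hot = np.eye(num_classes)[labels]                 # (T, C)
          soft = np.zeros_like(one_hot)
          for t in range(T):
              lo, hi = max(0, t - window), min(T, t + window + 1)
              w = np.array([decay ** abs(t - i) for i in range(lo, hi)])
              soft[t] = (w[:, None] * one_hot[lo:hi]).sum(axis=0)
              soft[t] /= soft[t].sum()                          # normalize to a distribution
          return soft

      scene_labels = np.array([0, 0, 0, 1, 1, 2, 2, 2])         # per-frame surgical-area labels
      print(temporal_label_smoothing(scene_labels, num_classes=7).round(2))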

  22. A semantic segmentation method for laparoscopic images using semantically similar groups   Invited, Refereed

    Uramoto L., Hayashi Y., Oda M., Kitasaka T., Misawa K., Mori K.

    Progress in Biomedical Optics and Imaging - Proceedings of SPIE   Vol. 12466   2023

    Role: Corresponding author   Language: English   Type: Research paper (academic journal)   Publisher: Progress in Biomedical Optics and Imaging - Proceedings of SPIE

    In this paper, we present a segmentation method for laparoscopic images using semantically similar groups for multi-class semantic segmentation. Accurate semantic segmentation is a key problem for computer assisted surgeries. Common segmentation models do not explicitly learn similarities between classes. We propose a model that, in addition to learning to segment an image into classes, also learns to segment it into human-defined semantically similar groups. We modify the LinkNet34 architecture by adding a second decoder with an auxiliary task of segmenting the image into these groups. The feature maps of the second decoder are merged into the final decoder. We validate our method against our base model LinkNet34 and a larger LinkNet50. We find that our proposed modification increased the performance both with mean Dice (average +1.5%) and mean Intersection over Union metrics (average +2.8%) on two laparoscopic datasets.

    DOI: 10.1117/12.2654636

    Scopus

  23. Tumor detection using tiny-YOLO in cystoscopic images   Invited, Refereed

    牟田口 淳, 小田 昌宏, 猪口 淳一, 森 健策, 江藤 正俊

    Transactions of the Japanese Society for Medical and Biological Engineering   Vol. Annual61 ( Abstract )   pp. 255_2 - 255_2   2023

    Role: Corresponding author   Language: Japanese   Type: Research paper (academic journal)   Publisher: Japanese Society for Medical and Biological Engineering

    [Background] Bladder cancer recurs frequently after transurethral surgery, and tumors missed during cystoscopy are considered to be the cause. Endoscopic observation uses narrow-band imaging (NBI) in addition to conventional white-light imaging (WLI), but tumor detection accuracy with either modality depends on the examiner's skill and experience, so the reproducibility and objectivity of the examination are limited. In recent years, artificial intelligence (AI) has been applied in many medical fields, and AI-based examination may achieve expert-level diagnostic performance while providing objectivity and reproducibility. In this study, the accuracy of AI-based tumor detection was verified using WLI/NBI cystoscopic images. [Methods] Cystoscopic images were created from surgical videos of cases observed with WLI/NBI during transurethral resection of bladder tumor (TURBT) from 2019 to 2021; images containing tumors were defined as tumor images and images without tumors as normal images. Bladder tumors in the tumor images were annotated with bounding boxes, and the sensitivity, specificity, and positive predictive value of the AI were evaluated on test images. Object detection used tiny-YOLO, and its tumor detection accuracy was verified. [Results] tiny-YOLO was trained separately on WLI and NBI tumor images, and accuracy was verified on tumor images (WLI: 525, NBI: 219) and normal images (WLI: 98, NBI: 108). The sensitivity/specificity/positive predictive value of AI-based detection were 87.8%/88.8%/97.7% for WLI and 82.2%/81.4%/90.0% for NBI. [Conclusion] AI enabled relatively good tumor detection in cystoscopic images. Further accuracy improvements and issues toward real-time detection are discussed with reference to the literature.

    DOI: 10.11239/jsmbe.annual61.255_2

    CiNii Research

  24. Early detection of occupational presenteeism (physical and mental ill-health) in nurses using IoT sensing   Invited, Refereed

    山下 佳子, 大山 慎太郎, 鈴木 輝彦, 坂本 祐二, 出野 義則, 山下 暁士, 赤川 里美, 藤井 晃子, 白鳥 義宗, 森 健策

    Transactions of the Japanese Society for Medical and Biological Engineering   Vol. Annual61 ( Abstract )   pp. 225_1 - 225_1   2023

    Role: Corresponding author   Language: Japanese   Type: Research paper (academic journal)   Publisher: Japanese Society for Medical and Biological Engineering

    Nursing staff show more latent loss of labor productivity due to physical and mental ill-health (presenteeism) than other occupations, and irregular work schedules and strong physical and mental stress also lead to high turnover. Related studies have focused on low back pain and predicted the onset of presenteeism from muscle stiffness measured with wearable devices, but prediction is reported to be difficult because such measurements strongly reflect individual characteristics such as sex and exercise history. This study therefore aims to extract and classify, with Internet of Things (IoT) sensing, the nursing activities that are major causes of presenteeism and of musculoskeletal pain. Nursing-activity data were collected with accelerometer devices using existing BLE indoor-positioning (AoA) antennas, and label data for nursing activities (activity, place, and start/end times) were created by direct observation. Nursing activities were classified and defined by physical presenteeism risk (low back pain, knee pain, and neck-shoulder-arm pain). Machine learning was performed with per-minute activity data (mean position and the maximum, minimum, mean, and standard deviation of acceleration) plus time of day as input and the label data as supervision, and an activity recognition model was built. The proportions of nursing activities were 43.6% for low back pain risk, 3.3% for knee pain risk, and 7.4% for neck-shoulder-arm pain risk. The results suggest that presenteeism can be prevented by monitoring, with IoT sensing, the individual activities that develop into presenteeism.

    DOI: 10.11239/jsmbe.annual61.225_1

    CiNii Research

  25. Medical imaging and AI   Invited

    森 健策

    Current Therapy (カレントテラピー)   Vol. 41   pp. 79 - 79   2023

    Role: Corresponding author   Language: English   Type: Research paper (academic journal)

    CiNii Research

  26. TriMix: A General Framework for Medical Image Segmentation from Limited Supervision   Invited, Refereed

    Zheng Z., Hayashi Y., Oda M., Kitasaka T., Mori K.

    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)   Vol. 13846 LNCS   pp. 185 - 202   2023

    Role: Corresponding author   Language: English   Type: Research paper (academic journal)   Publisher: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

    We present a general framework for medical image segmentation from limited supervision, reducing the reliance on fully and densely labeled data. Our method is simple, jointly trains triple diverse models, and adopts a mix augmentation scheme, and thus is called TriMix. TriMix imposes consistency under a more challenging perturbation, i.e., combining data augmentation and model diversity on the tri-training framework. This straightforward strategy enables TriMix to serve as a strong and general learner learning from limited supervision using different kinds of imperfect labels. We conduct extensive experiments to show TriMix’s generic purpose for semi- and weakly-supervised segmentation tasks. Compared to task-specific state-of-the-arts, TriMix achieves competitive performance and sometimes surpasses them by a large margin. The code is available at https://github.com/MoriLabNU/TriMix.

    DOI: 10.1007/978-3-031-26351-4_12

    Scopus
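
    The mix-augmentation consistency at the core of the framework above can be illustrated with a CutMix-style sketch: two unlabeled images are mixed, and a model's prediction on the mixed input is pushed toward the same mix of another model's pseudo-labels. The two-model simplification, the square mask, and the soft-target cross-entropy below are assumptions for illustration; the actual TriMix training (three jointly trained models and its specific augmentations) is in the linked repository.

      import torch
      import torch.nn.functional as F

      def cutmix(img_a, img_b, lab_a, lab_b, size=64):
          """Paste a random square region from image/label B into image/label A."""
          _, _, H, W = img_a.shape
          y = torch.randint(0, H - size, (1,)).item()
          x = torch.randint(0, W - size, (1,)).item()
          mixed_img, mixed_lab = img_a.clone(), lab_a.clone()
          mixed_img[..., y:y + size, x:x + size] = img_b[..., y:y + size, x:x + size]
          mixed_lab[..., y:y + size, x:x + size] = lab_b[..., y:y + size, x:x + size]
          return mixed_img, mixed_lab

      def mix_consistency_loss(student, teacher, img_a, img_b):
          with torch.no_grad():                      # pseudo-labels on the clean images
              pl_a = teacher(img_a).softmax(dim=1)
              pl_b = teacher(img_b).softmax(dim=1)
          mixed_img, mixed_pl = cutmix(img_a, img_b, pl_a, pl_b)
          return F.cross_entropy(student(mixed_img), mixed_pl)   # soft-target CE

      # Toy usage with 1x1-conv "models" and random unlabeled image batches
      student, teacher = torch.nn.Conv2d(1, 3, 1), torch.nn.Conv2d(1, 3, 1)
      img_a, img_b = torch.randn(2, 1, 128, 128), torch.randn(2, 1, 128, 128)
      print(mix_consistency_loss(student, teacher, img_a, img_b))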

  27. Thrombosis region extraction and quantitative analysis in confocal laser scanning microscopic image sequence in in-vivo imaging   Invited, Refereed

    Wu Y., Oda M., Hayashi Y., Kawamura S., Takebe T., Mori K.

    Progress in Biomedical Optics and Imaging - Proceedings of SPIE   Vol. 12468   2023

    Role: Corresponding author   Language: English   Type: Research paper (academic journal)   Publisher: Progress in Biomedical Optics and Imaging - Proceedings of SPIE

    In this paper, we propose a scheme that includes automated extraction of thrombus regions and quantitative analysis of thrombosis in confocal laser scanning microscope (CLSM) blood flow image sequence. Making thrombosis model in animal models play an important role in the development of antithrombotic drugs and ascertaining thrombosis mechanisms. Making thrombosis model in cerebral cortex of mice is usually observed using a CLSM in the fluorescence mode. However, some small changes of thrombus regions are not easily observed in CLSM blood flow image sequences. In addition, it is not easy for researchers to quantitatively analyze the degree of thrombosis. Therefore, we propose a scheme to achieve automatic thrombosis region extraction and quantitative analysis. In which, our thrombosis region extraction method uses analysis of changing pattern of thrombosis regions in CLSM blood flow image sequence. Experimental results showed that our scheme can help biological researchers observe and analyze the changes of thrombosis in animal models and reduced the use of fluorescent thrombus markers.

    DOI: 10.1117/12.2654632

    Scopus

  28. Real Bronchoscopic Images-based Bronchial Nomenclature: a Preliminary Study   Invited, Refereed

    Wang C., Hayashi Y., Oda M., Kitasaka T., Takabatake H., Mori M., Honma H., Natori H., Mori K.

    Progress in Biomedical Optics and Imaging - Proceedings of SPIE   Vol. 12466   2023

    Role: Corresponding author   Language: English   Type: Research paper (academic journal)   Publisher: Progress in Biomedical Optics and Imaging - Proceedings of SPIE

    This article describes a method for bronchial nomenclature using real bronchoscopic (RB) images and pre-built knowledge base of branches. The bronchus has a complex tree-like structure, which increases the difficulty of bronchoscopy. Therefore, a bronchoscopic navigation system is used to help physicians during examination. Conventional navigation system used preoperative CT images and real bronchoscopic images to obtain the camera pose for navigation, whose accuracy is influenced by organ deformation. We propose a bronchial nomenclature method to estimate branch names for bronchoscopic navigation. This method consists of a bronchus knowledge base construction model, a camera motion estimation module, an anatomical structure tracking module, and a branch name estimation module. The knowledge base construction module is used to find the relationship of each branch. The anatomical tracking module is used to track the bronchial orifice (BO) extracted in each RB frame. The camera motion estimation module is used to estimate the camera motion between two frames. The branch name estimation module uses the pre-built bronchus knowledge base and BO tracking results to find the name of each branch. Experimental results showed that it is possible to estimate branch names using only RB images and the pre-built knowledge base of branches.

    DOI: 10.1117/12.2654508

    Scopus

  29. Priority attention network with Bayesian learning for fully automatic segmentation of substantia nigra from neuromelanin MRI   Invited, Refereed

    Hu T., Itoh H., Oda M., Saiki S., Hattori N., Kamagata K., Aoki S., Mori K.

    Progress in Biomedical Optics and Imaging - Proceedings of SPIE   Vol. 12464   2023

    Role: Corresponding author   Language: English   Type: Research paper (academic journal)   Publisher: Progress in Biomedical Optics and Imaging - Proceedings of SPIE

    Neuromelanin magnetic resonance imaging (NM-MRI) has been widely used in the diagnosis of Parkinson’s disease (PD) for its significantly enhanced contrast between the PD-related structure, the substantia nigra (SN) and surrounding tissues. To develop the computer-aided diagnosis (CAD) system of PD and reduce the labor burden of clinicians, precise and automatic segmentation of SN is becoming more and more desired. This paper proposes a novel network combining the priority gating attention and Bayesian learning for improving the accuracy of fully automatic SN segmentation from NM-MRI. Different from the conventional gated attention model, the proposed network uses the prior SN probability map for guiding the attention computation and reducing the potential disruptions introduced by the background. Additionally, to lower the risks of over-fitting and estimate the confidence scores for the segmentation results, Bayesian learning with Monte Carlo dropout is applied in the training and testing phases. The quantitative results showed that the proposed network acquired the averaged Dice score of 79.46% in comparison with the baseline model 77.93%.

    DOI: 10.1117/12.2655112

    Scopus
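
    Monte Carlo dropout, used above to estimate confidence scores, keeps dropout active at test time and aggregates several stochastic forward passes. The toy model and the entropy-based uncertainty in this PyTorch sketch are generic illustrations, not the proposed priority-attention network.

      import torch
      import torch.nn as nn

      model = nn.Sequential(                         # toy segmentation head with dropout
          nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
          nn.Dropout2d(p=0.5),
          nn.Conv2d(16, 2, 1),
      )

      def mc_dropout_predict(model, x, n_samples=20):
          model.train()                              # keep dropout stochastic at test time
          with torch.no_grad():
              probs = torch.stack([model(x).softmax(dim=1) for _ in range(n_samples)])
          mean = probs.mean(dim=0)                   # averaged prediction
          entropy = -(mean * mean.clamp_min(1e-8).log()).sum(dim=1)  # pixelwise uncertainty
          return mean, entropy

      x = torch.randn(1, 1, 64, 64)                  # stand-in for an NM-MRI slice
      mean, unc = mc_dropout_predict(model, x)
      print(mean.shape, unc.shape)                   # (1, 2, 64, 64) and (1, 64, 64)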

  30. Octree Cube Constraints in PBD Method for High Resolution Surgical Simulation   Invited, Refereed

    Miyazaki R., Hayashi Y., Oda M., Mori K.

    Progress in Biomedical Optics and Imaging - Proceedings of SPIE   Vol. 12464   2023

    Role: Corresponding author   Language: English   Type: Research paper (academic journal)   Publisher: Progress in Biomedical Optics and Imaging - Proceedings of SPIE

    This paper proposes a deformable tissue model that introduces octree lattice vertex layout and cubic constraints to the orthodox PBD (Position Based Dynamics) method. Surgical simulation is expected to provide a safe method for training in surgery, which is especially useful for preoperative education of inexperienced surgeons and/or for the case a prior attempt is required. To build a surgical simulator, it is necessary to develop organ models with deformations and interaction algorithms between surgical instruments and organ models, all of which must be performed in real time. Since existing surgical simulators focus on real-time performance, the resolution of organ models is limited. The proposed method restricts the vertex locations of the PBD method to the vertices of the octree lattice to save computation time while maintaining a high deformation resolution. To obtain appropriate results even for large deformations, three-dimensional constraints are applied to each octree cube as the constraints of the PBD method. In the simulations, we tested the overall deformation by dropping a liver model and the local deformation scene by laparoscopic clipping. As a result, we achieved deformation simulations at 26.5 fps for the model with approximately 2,672 cube elements and 20,659 vertices.

    DOI: 10.1117/12.2654092

    Scopus
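
    The building block of the PBD method referenced above is the constraint-projection step that nudges vertex positions back toward a constraint's rest state. The numpy sketch below shows only the standard distance-constraint projection; the octree lattice and cubic constraints of the paper are omitted, so this is a generic illustration rather than the proposed model.

      import numpy as np

      def project_distance_constraint(p1, p2, rest_len, w1=1.0, w2=1.0, stiffness=1.0):
          """Standard PBD distance constraint: move p1 and p2 toward their rest length."""
          d = p2 - p1
          dist = np.linalg.norm(d)
          if dist < 1e-9:
              return p1, p2
          correction = stiffness * (dist - rest_len) * d / dist
          p1 = p1 + (w1 / (w1 + w2)) * correction    # inverse-mass-weighted updates
          p2 = p2 - (w2 / (w1 + w2)) * correction
          return p1, p2

      # One stretched edge of a lattice cube, relaxed over a few solver iterations
      a, b = np.array([0.0, 0.0, 0.0]), np.array([1.5, 0.0, 0.0])
      for _ in range(5):
          a, b = project_distance_constraint(a, b, rest_len=1.0)
      print(a, b)                                    # converges toward a unit-length edge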

  31. Multi-view Guidance for Self-supervised Monocular Depth Estimation on Laparoscopic Images via Spatio-Temporal Correspondence   Invited, Refereed

    Li W., Hayashi Y., Oda M., Kitasaka T., Misawa K., Mori K.

    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)   Vol. 14228 LNCS   pp. 429 - 439   2023

    Role: Corresponding author   Language: English   Type: Research paper (academic journal)   Publisher: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

    This work proposes an innovative self-supervised approach to monocular depth estimation in laparoscopic scenarios. Previous methods independently predicted depth maps ignoring spatial coherence in local regions and temporal correlation between adjacent images. The proposed approach leverages spatio-temporal coherence to address the challenges of textureless areas and homogeneous colors in such scenes. This approach utilizes a multi-view depth estimation model to guide monocular depth estimation when predicting depth maps. Moreover, the minimum reprojection error is extended to construct a cost volume for the multi-view model using adjacent images. Additionally, a 3D consistency of the point cloud back-projected from predicted depth maps is optimized for the monocular depth estimation model. To benefit from spatial coherence, deformable patch-matching is introduced to the monocular and multi-view models to smooth depth maps in local regions. Finally, a cycled prediction learning for view synthesis and relative poses is designed to exploit the temporal correlation between adjacent images fully. Experimental results show that the proposed method outperforms existing methods in both qualitative and quantitative evaluations. Our code is available at https://github.com/MoriLabNU/MGMDepthL.

    DOI: 10.1007/978-3-031-43996-4_41

    Scopus

  32. Masked Frequency Consistency for Domain-Adaptive Semantic Segmentation of Laparoscopic Images 招待有り 査読有り

    Zhao X., Hayashi Y., Oda M., Kitasaka T., Mori K.

    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)   14220 LNCS 巻   頁: 663 - 673   2023年

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)  

    Semantic segmentation of laparoscopic images is an important issue for intraoperative guidance in laparoscopic surgery. However, acquiring and annotating laparoscopic datasets is labor-intensive, which limits the research on this topic. In this paper, we tackle the Domain-Adaptive Semantic Segmentation (DASS) task, which aims to train a segmentation network using only computer-generated simulated images and unlabeled real images. To bridge the large domain gap between generated and real images, we propose a Masked Frequency Consistency (MFC) module that encourages the network to learn frequency-related information of the target domain as additional cues for robust recognition. Specifically, MFC randomly masks some high-frequency information of the image to improve the consistency of the network’s predictions for low-frequency images and real images. We conduct extensive experiments on existing DASS frameworks with our MFC module and show performance improvements. Our approach achieves comparable results to fully supervised learning method on the CholecSeg8K dataset without using any manual annotation. The code is available at github.com/MoriLabNU/MFC.
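As an illustrative aside (not the authors' MFC implementation), the sketch below shows how high-frequency components of an image can be masked in the Fourier domain so that a network can be trained for consistent predictions between the low-frequency image and the original image. This is a deterministic low-pass variant for clarity; the paper masks high-frequency information randomly, and the radius value and names here are assumptions.

```python
# Illustrative sketch only: masking high-frequency components with a
# centered low-pass mask in the Fourier domain.
import numpy as np

def mask_high_frequencies(img, keep_radius=0.1):
    """Zero out frequencies outside a centered disc of relative radius keep_radius."""
    h, w = img.shape
    f = np.fft.fftshift(np.fft.fft2(img))           # centered spectrum
    yy, xx = np.ogrid[:h, :w]
    cy, cx = h / 2.0, w / 2.0
    r = np.sqrt(((yy - cy) / h) ** 2 + ((xx - cx) / w) ** 2)
    f[r > keep_radius] = 0.0                        # drop high frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))

low_freq = mask_high_frequencies(np.random.rand(224, 224), keep_radius=0.08)
```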

    DOI: 10.1007/978-3-031-43907-0_63

    Scopus

  33. Improved method for COVID-19 Classification of Complex-Architecture CNN from Chest CT volumes using Orthogonal Ensemble Networks 招待有り 査読有り

    Toda R., Oda M., Hayashi Y., Otake Y., Hashimoto M., Akashi T., Aoki S., Mori K.

    Progress in Biomedical Optics and Imaging - Proceedings of SPIE   12465 巻   2023年

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Progress in Biomedical Optics and Imaging - Proceedings of SPIE  

This paper introduces an improved method for COVID-19 classification from computed tomography (CT) volumes using a combination of a complex-architecture convolutional neural network (CNN) and orthogonal ensemble networks (OEN). The novel coronavirus disease reported in 2019 (COVID-19) is still spreading worldwide. Early and accurate diagnosis of COVID-19 is required in such a situation, and the CT scan is an essential examination. Various computer-aided diagnosis (CAD) methods have been developed to assist and accelerate doctors' diagnoses. Although ensemble learning is one of the effective methods, existing methods combine several major models that do not specialize in COVID-19. In this study, we attempted to improve the performance of a CNN for COVID-19 classification based on chest CT volumes. The CNN model specializes in feature extraction from anisotropic chest CT volumes. We adopt the OEN, an ensemble learning method considering inter-model diversity, to boost its feature extraction ability. For the experiment, we used chest CT volumes of 1283 cases acquired at multiple medical institutions in Japan. The classification result on 257 test cases indicated that the combination could improve the classification performance.

    DOI: 10.1117/12.2653792

    Scopus

  34. Classification of COVID-19 cases from chest CT volumes using hybrid model of 3D CNN and 3D MLP-Mixer 招待有り 査読有り

    Oda M., Zheng T., Hayashi Y., Otake Y., Hashimoto M., Akashi T., Aoki S., Mori K.

    Progress in Biomedical Optics and Imaging - Proceedings of SPIE   12465 巻   2023年

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Progress in Biomedical Optics and Imaging - Proceedings of SPIE  

This paper proposes an automated classification method of COVID-19 chest CT volumes using an improved 3D MLP-Mixer. Novel coronavirus disease 2019 (COVID-19) has spread over the world, causing a large number of infected patients and deaths. A sudden increase in the number of COVID-19 patients causes a manpower shortage in medical institutions. A computer-aided diagnosis (CAD) system provides quick and quantitative diagnosis results. A CAD system for COVID-19 enables an efficient diagnosis workflow and contributes to reducing such manpower shortages. In image-based diagnosis of viral pneumonia cases including COVID-19, both local and global image features are important because viral pneumonia causes many ground glass opacities and consolidations in large areas of the lung. This paper proposes an automated classification method of chest CT volumes for COVID-19 diagnosis assistance. MLP-Mixer is a recent image classification method that uses a Vision Transformer-like architecture and performs classification using both local and global image features. To classify 3D CT volumes, we developed a hybrid classification model that consists of both a 3D convolutional neural network (CNN) and a 3D version of the MLP-Mixer. The classification accuracy of the proposed method was evaluated using a dataset that contains 1205 CT volumes, and the method achieved a classification accuracy of 79.5%. The accuracy was higher than that of conventional 3D CNN models consisting of 3D CNN layers and simple MLP layers.

    DOI: 10.1117/12.2654706

    Scopus

  35. Automated Detection of the Thoracic Ossification of the Posterior Longitudinal Ligament Using Deep Learning and Plain Radiographs. 招待有り 査読有り

    Ito S, Nakashima H, Segi N, Ouchida J, Oda M, Yamauchi I, Oishi R, Miyairi Y, Mori K, Imagama S

    BioMed research international   2023 巻   頁: 8495937   2023年

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:BioMed Research International  

    Ossification of the ligaments progresses slowly in the initial stages, and most patients are unaware of the disease until obvious myelopathy symptoms appear. Consequently, treatment and clinical outcomes are not satisfactory. This study is aimed at developing an automated system for the detection of the thoracic ossification of the posterior longitudinal ligament (OPLL) using deep learning and plain radiography. We retrospectively reviewed the data of 146 patients with thoracic OPLL and 150 control cases without thoracic OPLL. Plain lateral thoracic radiographs were used for object detection, training, and validation. Thereafter, an object detection system was developed, and its accuracy was calculated. The performance of the proposed system was compared with that of two spine surgeons. The accuracy of the proposed object detection model based on plain lateral thoracic radiographs was 83.4%, whereas the accuracies of spine surgeons 1 and 2 were 80.4% and 77.4%, respectively. Our findings indicate that our automated system, which uses a deep learning-based method based on plain radiographs, can accurately detect thoracic OPLL. This system has the potential to improve the diagnostic accuracy of thoracic OPLL.

    DOI: 10.1155/2023/8495937

    Scopus

    PubMed

  36. AI画像解析による内視鏡外科手術手技のビデオ評価及び手術支援システムの構築 招待有り 査読有り

    安井 昭洋, 内田 広夫, 森 健策, 石田 昇平, 出家 亨一, 檜 顕成, 城田 千代栄, 小田 昌宏, 林 雄一郎

    生体医工学   Annual61 巻 ( Abstract ) 頁: 127_2 - 127_2   2023年

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(学術雑誌)   出版者・発行元:公益社団法人 日本生体医工学会  

[Introduction] Minimally invasive surgery is extremely important for pediatric patients, who continue to grow and develop after surgery. Because the number of such patients is limited, off-the-job training (OJT) is important for performing reliable operations, and efficient skill acquisition in OJT requires a system that objectively evaluates surgical technique and provides feedback. In addition, safe and efficient endoscopic surgery requires an understanding of the positional relationships among organs, so intraoperative navigation is an important requirement. To address these issues, we have begun building an AI-based system for endoscopic skill assessment and surgical support, and we report the current results. [Methods and Results] Subjects performed an anastomosis task on an esophageal atresia model, and each subject's technique was first evaluated by human raters using a checklist, error items, and time. Next, an AI model was trained on the relationship between forceps motion detected from the video and the human-judged skill level, enabling automatic discrimination of skill level with accuracies of 88% for the higher-skill group and 95% for the lower-skill group. Analysis of these results showed that skill level can be judged by checking only 7 items instead of the more than 50 visual checklist items previously required. Using surgical images of esophageal atresia cases, we are now applying deep learning to the esophagus, vagus nerve, and trachea to enable automatic recognition of these structures. [Summary] AI-based image analysis made it possible to judge endoscopic surgical skill from video, and from these results we were able to define new, efficient criteria for skill assessment. We are currently working to further improve the accuracy of intraoperative navigation.

    DOI: 10.11239/jsmbe.annual61.127_2

    CiNii Research

  37. Anatomy aware-based 2.5D bronchoscope tracking for image-guided bronchoscopic navigation 招待有り 査読有り

Wang Cheng, Oda Masahiro, Hayashi Yuichiro, Kitasaka Takayuki, Itoh Hayato, Honma Hirotoshi, Takabatake Hirotsugu, Mori Masaki, Natori Hiroshi, Mori Kensaku

    COMPUTER METHODS IN BIOMECHANICS AND BIOMEDICAL ENGINEERING-IMAGING AND VISUALIZATION     2022年12月

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Computer Methods in Biomechanics and Biomedical Engineering: Imaging and Visualization  

Physicians use an endoscopic navigation system during bronchoscopy to decrease the risk of getting lost in the complex tree-like structure of the bronchus. Most existing navigation systems are based on the camera pose estimated from bronchoscope tracking and/or deep learning. However, bronchoscope tracking-based methods suffer from tracking error, and pre-training of deep learning models needs massive data. This paper describes an improved bronchoscope tracking procedure that adopts an image domain translation technique to improve tracking performance. Specifically, our scheme consists of three modules: an RGB-D image domain translation module, an anatomical structure classification module, and a structure-aware bronchoscope tracking module. The RGB-D image domain translation module translates a real bronchoscope (RB) image into its corresponding virtual bronchoscope image and depth image. The anatomical structure classification module classifies the current scene into two categories: structureless and rich structure. The bronchoscope tracking module uses a modified video-CT bronchoscope tracking approach to estimate the camera pose. Experimental results showed that the proposed method achieved higher tracking accuracy than current state-of-the-art bronchoscope tracking methods.

    DOI: 10.1080/21681163.2022.2152728

    Web of Science

    Scopus

  38. AIによる腹腔鏡下手術支援 招待有り 査読有り

    林 雄一郎, 森 健策

    Medical Imaging Technology   40 巻 ( 4 ) 頁: 164 - 169   2022年9月

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(学術雑誌)   出版者・発行元:日本医用画像工学会  

Laparoscopic surgery has become widely performed as one approach to surgical operations. Because laparoscopic surgery requires advanced skills, research on computer-assisted surgery, in which computers support surgeons, has been conducted; such systems are generally regarded as surgical navigation. With the recent development of AI technology, research that applies deep learning to analyze surgical video obtained from the laparoscope for laparoscopic surgery support has also begun. This article introduces research on AI-based analysis of laparoscopic video conducted by our research group.

    DOI: 10.11409/mit.40.164

    CiNii Research

  39. AI for VR 招待有り 査読有り

    森 健策

    日本VR医学会学術大会プログラム・抄録集   2022 巻 ( 0 ) 頁: 9 - 9   2022年8月

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(学術雑誌)   出版者・発行元:日本VR医学会  

    DOI: 10.24764/jsmvr.2022.0_9

    CiNii Research

  40. BONE MARROW EDEMA SCORE IN HAND X-RAY FILM BY AI DEEP LEARNING ASSOCIATE WITH MRI BONE EDEMA IN RHEUMATOID ARTHRITIS. 招待有り 査読有り

    Katayama, K; Pan, D; Oda, M; Okubo, T; Mori, K

    ANNALS OF THE RHEUMATIC DISEASES   81 巻   頁: 1773 - 1774   2022年6月

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)  

    DOI: 10.1136/annrheumdis-2022-eular.933

    Web of Science

  41. 30年間の医用画像研究経験を振り返り未来を考える 招待有り 査読有り

    森 健策

    情報・システムソサイエティ誌   27 巻 ( 1 ) 頁: 16 - 17   2022年5月

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(学術雑誌)   出版者・発行元:一般社団法人電子情報通信学会  

    DOI: 10.1587/ieiceissjournal.27.1_16

    CiNii Research

  42. Depth Estimation from Single-shot Monocular Endoscope Image Using Image Domain Adaptation And Edge-Aware Depth Estimation 査読有り

    Masahiro Oda, Hayato Itoh, Kiyohito Tanaka, Hirotsugu Takabatake, Masaki Mori, Hiroshi Natori, Kensaku Mori

    Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization   10 巻 ( 3 ) 頁: 266 - 273   2022年5月

     詳細を見る

  43. Uncertainty meets 3D-spatial feature in colonoscopic polyp-size determination 招待有り 査読有り

    Hayato Itoh, Masahiro Oda, Kai Jiang, Yuichi Mori, Masashi Misawa, Shin-Ei Kudo, Kenichiro Imai, Sayo Ito, Kinichi Hotta, Kensaku Mori

    Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization   10 巻 ( 3 ) 頁: 289 - 298   2022年5月

     詳細を見る

  44. Spatially variant biases considered self-supervised depth estimation based on laparoscopic videos 査読有り

    Wenda Li, Yuichiro Hayashi, Masahiro Oda, Takayuki Kitasaka, Kazunari Misawa, Kensaku Mori

    Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization   10 巻 ( 3 ) 頁: 274 - 282   2022年5月

     詳細を見る

  45. SR-CycleGAN: super-resolution of clinical CT to micro-CT level with multi-modality super-resolution loss 査読有り

    Tong Zheng, Hirohisa Oda, Yuichiro Hayashi, Takayasu Moriya, Shota Nakamura, Masaki Mori, Hirotsugu Takabatake, Hiroshi Natori, Masahiro Oda, Kensaku Mori

    Journal of Medical Imaging   9 巻 ( 2 ) 頁: 024003-1 - 28   2022年4月

     詳細を見る

  46. Artificial Intelligence for Computer Vision in Surgery: A Call for Developing Reporting Guidelines. 招待有り 査読有り

    Kitaguchi D, Watanabe Y, Madani A, Hashimoto DA, Meireles OR, Takeshita N, Mori K, Ito M, Computer Vision in Surgery International Collaborative.

    Annals of surgery   275 巻 ( 4 ) 頁: e609 - e611   2022年4月

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Annals of surgery  

    DOI: 10.1097/SLA.0000000000005319

    Scopus

    PubMed

  47. Evaluation in real-time use of artificial intelligence during colonoscopy to predict relapse of ulcerative colitis: a prospective study 招待有り 査読有り

    Yasuharu Maeda, Shin-ei Kudo, Noriyuki Ogata, Masashi Misawa, Marietta Iacucci, Mayumi Homma, Tetsuo Nemoto, Kazumi Takishima, Kentaro Mochida, Hideyuki Miyachi, Toshiyuki Baba, Kensaku Mori, Kazuo Ohtsuka, Yuichi Mori

    Gastrointestinal Endoscopy     2022年4月

     詳細を見る

  48. Artificial Intelligence-Based Total Mesorectal Excision Plane Navigation in Laparoscopic Colorectal Surgery. 招待有り 査読有り

    Igaki T, Kitaguchi D, Kojima S, Hasegawa H, Takeshita N, Mori K, Kinugasa Y, Ito M

    Diseases of the colon and rectum     2022年2月

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)  

    DOI: 10.1097/DCR.0000000000002393

    PubMed

  49. A cascaded fully convolutional network framework for dilated pancreatic duct segmentation 招待有り 査読有り

    Shen, C; Roth, HR; Hayashi, Y; Oda, M; Miyamoto, T; Sato, G; Mori, K

    INTERNATIONAL JOURNAL OF COMPUTER ASSISTED RADIOLOGY AND SURGERY   17 巻 ( 2 ) 頁: 343 - 354   2022年2月

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:International Journal of Computer Assisted Radiology and Surgery  

    Purpose: Pancreatic duct dilation can be considered an early sign of pancreatic ductal adenocarcinoma (PDAC). However, there is little existing research focused on dilated pancreatic duct segmentation as a potential screening tool for people without PDAC. Dilated pancreatic duct segmentation is difficult due to the lack of readily available labeled data and strong voxel imbalance between the pancreatic duct region and other regions. To overcome these challenges, we propose a two-step approach for dilated pancreatic duct segmentation from abdominal computed tomography (CT) volumes using fully convolutional networks (FCNs). Methods: Our framework segments the pancreatic duct in a cascaded manner. The pancreatic duct occupies a tiny portion of abdominal CT volumes. Therefore, to concentrate on the pancreas regions, we use a public pancreas dataset to train an FCN to generate an ROI covering the pancreas and use a 3D U-Net-like FCN for coarse pancreas segmentation. To further improve the dilated pancreatic duct segmentation, we deploy a skip connection on each corresponding resolution level and an attention mechanism in the bottleneck layer. Moreover, we introduce a combined loss function based on Dice loss and Focal loss. Random data augmentation is adopted throughout the experiments to improve the generalizability of the model. Results: We manually created a dilated pancreatic duct dataset with semi-automated annotation tools. Experimental results showed that our proposed framework is practical for dilated pancreatic duct segmentation. The average Dice score and sensitivity were 49.9% and 51.9%, respectively. These results show the potential of our approach as a clinical screening tool. Conclusions: We investigate an automated framework for dilated pancreatic duct segmentation. The cascade strategy effectively improved the segmentation performance of the pancreatic duct. Our modifications to the FCNs together with random data augmentation and the proposed combined loss function facilitate automated segmentation.
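As an illustrative aside (not the authors' code), the sketch below shows a combined Dice and Focal loss of the general kind described above for a binary segmentation setting. The weighting, alpha, gamma, and reduction choices are assumptions and may differ from those used in the paper.

```python
# Illustrative sketch only: combined Dice + Focal loss for binary segmentation.
import torch
import torch.nn.functional as F

def dice_focal_loss(logits, target, alpha=0.25, gamma=2.0, lam=1.0, eps=1e-6):
    """logits, target: tensors of shape (N, 1, D, H, W); target is a 0/1 float tensor."""
    prob = torch.sigmoid(logits)

    # Soft Dice loss on the foreground.
    inter = (prob * target).sum()
    dice = 1.0 - (2.0 * inter + eps) / (prob.sum() + target.sum() + eps)

    # Binary focal loss down-weights easy voxels.
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    pt = torch.exp(-bce)                      # p if y=1, (1-p) if y=0
    focal = (alpha * (1.0 - pt) ** gamma * bce).mean()

    return dice + lam * focal
```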

    DOI: 10.1007/s11548-021-02530-x

    Web of Science

    Scopus

    PubMed

  50. Impact of the clinical use of artificial intelligence-assisted neoplasia detection for colonoscopy: a large-scale prospective, propensity score-matched study (with video) 査読有り

    Misaki Ishiyama, Shin-ei Kudo, Masashi Misawa, Yuichi Mori, Yasuharu Maeda, Katsuro Ichimasa, Toyoki Kudo, Takemasa Hayashi, Kunihiko Wakamura, Hideyuki Miyachi, Fumio Ishida, Hayato Itoh, Masahiro Oda, Kensaku Mori

Gastrointestinal Endoscopy   95 巻 ( 1 ) 頁: 155 - 163   2022年1月

     詳細を見る

    担当区分:筆頭著者  

    DOI: 10.1016/j.gie.2021.07.022

  51. Joint Multi Organ and Tumor Segmentation from Partial Labels Using Federated Learning 招待有り 査読有り

    Shen, C; Wang, PC; Yang, D; Xu, DG; Oda, M; Chen, PT; Liu, KL; Liao, WC; Fuh, CS; Mori, K; Wang, WC; Roth, HR

    DISTRIBUTED, COLLABORATIVE, AND FEDERATED LEARNING, AND AFFORDABLE AI AND HEALTHCARE FOR RESOURCE DIVERSE GLOBAL HEALTH, DECAF 2022, FAIR 2022   13573 巻   頁: 58 - 67   2022年

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)  

Segmentation studies in medical image analysis are always associated with a particular task scenario. However, building datasets to train models to segment multiple types of organs and pathologies is challenging. For example, a dataset annotated for the pancreas and pancreatic tumors will result in a model that cannot segment other organs, such as the liver and spleen, visible in the same abdominal computed tomography image. The lack of well-annotated datasets is one limitation behind the lack of universal segmentation models. Federated learning (FL) is ideally suited for addressing this issue in a real-world context. In this work, we show that each medical center can use training data for distinct tasks to collaboratively build more generalizable segmentation models for multiple segmentation tasks without the requirement to centralize datasets in one place. The main challenge of this research is the heterogeneity of training data from various institutions and segmentation tasks. In this paper, we propose a multi-task segmentation framework using FL to learn segmentation models from several independent datasets with different annotations of organs or tumors. We include experiments on four publicly available single-task datasets, including MSD liver (w/ tumor), MSD spleen, MSD pancreas (w/ tumor), and KITS19. Experimental results on an external validation set highlight the advantages of employing FL in multi-task organ and tumor segmentation.

    DOI: 10.1007/978-3-031-18523-6_6

    Web of Science

    Scopus

  52. Depth-based branching level estimation for bronchoscopic navigation 招待有り 査読有り

    Wang, C; Hayashi, Y; Oda, M; Kitasaka, T; Takabatake, H; Mori, M; Honma, H; Natori, H; Mori, K

    INTERNATIONAL JOURNAL OF COMPUTER ASSISTED RADIOLOGY AND SURGERY   16 巻 ( 10 ) 頁: 1795 - 1804   2021年10月

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:International Journal of Computer Assisted Radiology and Surgery  

    Purpose: Bronchoscopists rely on navigation systems during bronchoscopy to reduce the risk of getting lost in the complex bronchial tree-like structure and the homogeneous bronchus lumens. We propose a patient-specific branching level estimation method for bronchoscopic navigation because it is vital to identify the branches being examined in the bronchus tree during examination. Methods: We estimate the branching level by integrating the changes in the number of bronchial orifices and the camera motions among the frames. We extract the bronchial orifice regions from a depth image, which is generated using a cycle generative adversarial network (CycleGAN) from real bronchoscopic images. We calculate the number of orifice regions using the vertical and horizontal projection profiles of the depth images and obtain the camera-moving direction using the feature point-based camera motion estimation. The changes in the number of bronchial orifices are combined with the camera-moving direction to estimate the branching level. Results: We used three in vivo and one phantom case to train the CycleGAN model and four in vivo cases to validate the proposed method. We manually created the ground truth of the branching level. The experimental results showed that the proposed method can estimate the branching level with an average accuracy of 87.6%. The processing time per frame was about 61 ms. Conclusion: Experimental results show that it is feasible to estimate the branching level using the number of bronchial orifices and camera-motion estimation from real bronchoscopic images.
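As an illustrative aside (not the authors' implementation), the sketch below counts candidate bronchial orifice regions from a depth image by thresholding far pixels and counting connected runs in the column-wise projection profile, in the spirit of the orifice-counting step described above. All threshold values and names are assumptions.

```python
# Illustrative sketch only: counting candidate orifice regions from a
# depth image via a projection profile. Thresholds are made up.
import numpy as np

def count_orifices_from_depth(depth, far_thresh=0.7, min_run=5):
    """depth: 2D array normalised to [0, 1]; returns an orifice-count estimate."""
    far_mask = depth > far_thresh                 # pixels far from the camera
    col_profile = far_mask.sum(axis=0) > 0        # horizontal projection profile
    runs, length = 0, 0
    for occupied in np.append(col_profile, False):
        if occupied:
            length += 1
        else:
            if length >= min_run:                 # keep only wide-enough runs
                runs += 1
            length = 0
    return runs

n = count_orifices_from_depth(np.random.rand(128, 128))
```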

DOI: 10.1007/s11548-021-02460-8

    Web of Science

    Scopus

    PubMed

  53. Artificial intelligence and computer-aided diagnosis for colonoscopy: where do we stand now? 招待有り 査読有り

    Kudo Shin-Ei, Mori Yuichi, Abdel-aal Usama M., Misawa Masashi, Itoh Hayato, Oda Masahiro, Mori Kensaku

    TRANSLATIONAL GASTROENTEROLOGY AND HEPATOLOGY   6 巻   頁: 64   2021年10月

     詳細を見る

担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Translational Gastroenterology and Hepatology

Computer-aided diagnosis (CAD) for colonoscopy with the use of artificial intelligence (AI) is attracting increased attention from endoscopists. CAD allows automated detection and pathological prediction, namely optical biopsy, of colorectal polyps during real-time endoscopy, which helps endoscopists avoid missing and/or misdiagnosing colorectal lesions. With the increased number of publications in this field and the emergence of AI medical devices that have already secured regulatory approval, CAD in colonoscopy is now being implemented into clinical practice. On the other hand, the drawbacks and weak points of CAD in colonoscopy have not been thoroughly discussed. In this review, we provide an overview of CAD for optical biopsy of colorectal lesions with a particular focus on its clinical applications and limitations.

    DOI: 10.21037/tgh.2019.12.14

    Web of Science

    Scopus

    PubMed

  54. Deep learning system for automatic detection of bladder tumors in cystoscopic images 招待有り 査読有り

    Mutaguchi J., Oda M., Ueda S., Kinoshita F., Naganuma H., Matsumoto T., Lee K., Monji K., Kashiwagi K., Takeuchi A., Shiota M., Inokuchi J., Mori K., Eto M.

    EUROPEAN UROLOGY   79 巻   頁: S1022 - S1023   2021年6月

     詳細を見る

担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)

    Web of Science

  55. COVID-19診断支援AI開発における名古屋大学の取り組み 招待有り 査読有り

    小田 昌宏, 鄭 通, 林 雄一郎, 森 健策

    Medical Imaging Technology   39 巻 ( 1 ) 頁: 13 - 19   2021年1月

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(学術雑誌)   出版者・発行元:日本医用画像工学会  

This article introduces Nagoya University's efforts to develop an AI system for COVID-19 diagnosis support from CT images. The novel coronavirus disease (COVID-19) has spread rapidly around the world, producing many infected patients and deaths. In such a situation, where a large number of patients must be diagnosed quickly, we consider AI-based diagnosis support to be effective. We developed an AI system that automatically classifies the typicality of COVID-19 from a patient's CT images based on imaging findings. As the three component technologies required for this automatic classification, we developed lung field segmentation, lung region clustering, and COVID-19 typicality estimation, and constructed the entire processing flow of the automatic classification AI from CT images. In developing this AI, we made use of a medical-image big-data cloud platform that stores a huge number of images acquired at many medical institutions, and confirmed that highly accurate automatic classification is possible for CT images of actual COVID-19 patients.

    DOI: 10.11409/mit.39.13

    CiNii Research

  56. Automated Detection of Spinal Schwannomas Utilizing Deep Learning Based on Object Detection From Magnetic Resonance Imaging 招待有り 査読有り 国際共著

    Ito Sadayuki, Ando Kei, Kobayashi Kazuyoshi, Nakashima Hiroaki, Oda Masahiro, Machino Masaaki, Kanbara Shunsuke, Inoue Taro, Yamaguchi Hidetoshi, Koshimizu Hiroyuki, Mori Kensaku, Ishiguro Naoki, Imagama Shiro

    SPINE   46 巻 ( 2 ) 頁: 95 - 100   2021年1月

     詳細を見る

担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Spine

STUDY DESIGN: A retrospective analysis of magnetic resonance imaging (MRI) was conducted. OBJECTIVE: This study aims to develop an automated system for the detection of spinal schwannoma by employing deep learning-based object detection from MRI. The performance of the proposed system was verified and compared with that of spine surgeons. SUMMARY OF BACKGROUND DATA: Several MRI scans are conducted for the diagnosis of patients suspected to suffer from spinal diseases. Typically, spinal diseases do not involve tumors of the spinal cord, although a few tumors may exist at an unexpected level or without symptoms, by chance. It is difficult to recognize these tumors, and in some cases they may be overlooked. Hence, a deep learning approach based on object detection can minimize the probability of overlooking these tumors. METHODS: Data from 50 patients with spinal schwannoma who had undergone MRI were retrospectively reviewed. Sagittal T1- and T2-weighted magnetic resonance images (T1WI and T2WI) were used for object detection training and for validation. You Only Look Once version 3 was used to develop the object detection system, and its accuracy was calculated. The performance of the proposed system was compared with that of two doctors. RESULTS: The accuracies of the proposed object detection based on T1WI, T2WI, and both T1WI and T2WI were 80.3%, 91.0%, and 93.5%, respectively. The accuracies of the doctors were 90.2% and 89.3%. CONCLUSION: Automated object detection of spinal schwannoma was achieved. The proposed system yielded a high accuracy that was comparable to that of the doctors. Level of Evidence: 4.

    DOI: 10.1097/BRS.0000000000003749

    Web of Science

    Scopus

    PubMed

  57. Depth estimation from single-shot monocular endoscope image using image domain adaptation and edge-aware depth estimation 招待有り 査読有り

    Oda M., Itoh H., Tanaka K., Takabatake H., Mori M., Natori H., Mori K.

    Computer Methods in Biomechanics and Biomedical Engineering: Imaging and Visualization     2021年

     詳細を見る

担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Computer Methods in Biomechanics and Biomedical Engineering: Imaging and Visualization

We propose a depth estimation method from a single-shot monocular endoscopic image using Lambertian surface translation by domain adaptation and depth estimation using a multi-scale edge loss. We employ a two-step estimation process including Lambertian surface translation from unpaired data and depth estimation. The texture and specular reflection on the surface of an organ reduce the accuracy of depth estimation. We apply Lambertian surface translation to an endoscopic image to remove these textures and reflections. Then, we estimate the depth by using a fully convolutional network (FCN). During the training of the FCN, improvement of the object edge similarity between an estimated image and a ground truth depth image is important for getting better results. We introduced a multi-scale edge loss function to improve the accuracy of depth estimation. We quantitatively evaluated the proposed method using real colonoscopic images. The estimated depth values were proportional to the real depth values. Furthermore, we applied the estimated depth images to automated anatomical location identification of colonoscopic images using a convolutional neural network. The identification accuracy of the network improved from 69.2% to 74.1% by using the estimated depth images.
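As an illustrative aside (not the paper's loss), the sketch below compares finite-difference gradients of predicted and ground-truth depth maps at several scales, a simple form of the multi-scale edge loss idea described above. The scale count, pooling, and L1 comparison are assumptions.

```python
# Illustrative sketch only: a multi-scale edge-aware loss on depth maps.
import torch
import torch.nn.functional as F

def grad_xy(d):
    """Finite-difference image gradients of a (N, 1, H, W) tensor."""
    gx = d[..., :, 1:] - d[..., :, :-1]
    gy = d[..., 1:, :] - d[..., :-1, :]
    return gx, gy

def multiscale_edge_loss(pred, gt, num_scales=3):
    """pred, gt: (N, 1, H, W) depth maps; compares edges at several scales."""
    loss = 0.0
    for _ in range(num_scales):
        px, py = grad_xy(pred)
        gx, gy = grad_xy(gt)
        loss = loss + F.l1_loss(px, gx) + F.l1_loss(py, gy)
        pred = F.avg_pool2d(pred, kernel_size=2)   # move to the next coarser scale
        gt = F.avg_pool2d(gt, kernel_size=2)
    return loss / num_scales
```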

    DOI: 10.1080/21681163.2021.2012835

    Scopus

  58. Context encoder guided self-supervised siamese depth estimation based on stereo laparoscopic images 招待有り 査読有り

    Li W., Hayashi Y., Oda M., Kitasaka T., Misawa K., Mori K.

    Progress in Biomedical Optics and Imaging - Proceedings of SPIE   11598 巻   2021年

     詳細を見る

担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Progress in Biomedical Optics and Imaging - Proceedings of SPIE

This paper proposes a novel self-supervised depth estimation method guided by a context encoder. Depth estimation from stereo laparoscopic images is essential to robotic surgical navigation systems and robotic surgical platforms. Recent work has shown that depth estimation of stereo image pairs can be formulated as a self-supervised learning task without ground truth. However, most architectures based on convolutional neural networks lose some spatial information because of the consecutive pooling and convolution operations. To tackle this problem, we add a contextual encoding module to the previous method. The context encoder module is formed by a dense atrous convolution block and a spatial pyramid pooling block, which are used to extract and merge features at different scales. We also add edge-aware smoothness for the predicted disparity maps. In addition, we output multi-scale disparity predictions and the corresponding image reconstructions for loss calculation. In the experiments, we showed that the proposed method has about a 7.79% improvement in SSIM and about a 17.76% improvement in PSNR for stereo image pairs compared with the previous method. The disparity maps and reconstructed images given by the proposed method also show significant enhancements compared with the previous method.

    DOI: 10.1117/12.2582348

    Scopus

  59. Bronchial orifice segmentation on bronchoscopic video frames based on generative adversarial depth estimation 招待有り 査読有り

    Wang C., Hayashi Y., Oda M., Kitasaka T., Honma H., Takabatake H., Mori M., Natori H., Mori K.

    Progress in Biomedical Optics and Imaging - Proceedings of SPIE   11598 巻   頁: 115980N-1 - 7   2021年

     詳細を見る

担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Progress in Biomedical Optics and Imaging - Proceedings of SPIE

This paper describes a bronchial orifice (BO) segmentation method for real bronchoscopic video frames using depth images. The BO is one of the anatomical characteristics of the bronchus and is critical in clinical applications such as bronchus scene description and navigation path generation. Previous work used image appearance and the gradation of the real bronchoscopic image to segment orifice regions, which behaved poorly in complex scenes including bubbles or changes in illumination. To obtain a better segmentation result of the BO even in complex scenes, we propose a BO segmentation method using the distance between the bronchoscope camera and the bronchus lumen, which is represented by a depth image. Since the depth image is unavailable due to device limitations, we use an image-to-image domain translation network named cycle generative adversarial network (CycleGAN) to estimate depth images from real bronchoscopic images. The BO regions are considered as the regions whose distances are larger than a distance threshold. We decide the distance threshold according to the projection profiles of the depth images. Experimental results showed that the proposed method can find BO regions in real bronchoscopic videos in real time. We manually labeled BO regions as ground truth to evaluate the proposed method. The average Dice score of the proposed method was 77.0%.

    DOI: 10.1117/12.2582341

    Scopus

  60. COVID-19 Infection Segmentation from Chest CT Images Based on Scale Uncertainty 招待有り 査読有り

    Oda M., Zheng T., Hayashi Y., Otake Y., Hashimoto M., Akashi T., Aoki S., Mori K.

    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)   12969 LNCS 巻   頁: 88 - 97   2021年

     詳細を見る

担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

    This paper proposes a segmentation method of infection regions in the lung from CT volumes of COVID-19 patients. COVID-19 spread worldwide, causing many infected patients and deaths. CT image-based diagnosis of COVID-19 can provide quick and accurate diagnosis results. An automated segmentation method of infection regions in the lung provides a quantitative criterion for diagnosis. Previous methods employ whole 2D image or 3D volume-based processes. Infection regions have a considerable variation in their sizes. Such processes easily miss small infection regions. Patch-based process is effective for segmenting small targets. However, selecting the appropriate patch size is difficult in infection region segmentation. We utilize the scale uncertainty among various receptive field sizes of a segmentation FCN to obtain infection regions. The receptive field sizes can be defined as the patch size and the resolution of volumes where patches are clipped from. This paper proposes an infection segmentation network (ISNet) that performs patch-based segmentation and a scale uncertainty-aware prediction aggregation method that refines the segmentation result. We design ISNet to segment infection regions that have various intensity values. ISNet has multiple encoding paths to process patch volumes normalized by multiple intensity ranges. We collect prediction results generated by ISNets having various receptive field sizes. Scale uncertainty among the prediction results is extracted by the prediction aggregation method. We use an aggregation FCN to generate a refined segmentation result considering scale uncertainty among the predictions. In our experiments using 199 chest CT volumes of COVID-19 cases, the prediction aggregation method improved the dice similarity score from 47.6% to 62.1%.

    DOI: 10.1007/978-3-030-90874-4_9

    Scopus

  61. Attention-Guided Pancreatic Duct Segmentation from Abdominal CT Volumes 招待有り 査読有り

Shen C., Roth H.R., Hayashi Y., Oda M., Miyamoto T., Sato G., Mori K.

    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)   12969 LNCS 巻   頁: 46 - 55   2021年

     詳細を見る

担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Pancreatic duct dilation indicates a high risk of pancreatic ductal adenocarcinoma (PDAC), the deadliest cancer with a poor prognosis. Segmentation of the dilated pancreatic duct from CT of patients without PDAC shows potential to assist the early detection of PDAC. Most current research includes pancreatic duct segmentation as one additional class for patients in whom PDAC has already been detected. However, the dilated pancreatic duct in people who have not yet developed PDAC is typically much smaller, making the segmentation difficult. Deep learning-based segmentation of tiny components is challenging because of the large imbalance between the target object and irrelevant regions. In this work, we explore an attention-guided approach for dilated pancreatic duct segmentation as a screening tool for pre-PDAC patients, concentrating on the pancreas regions and ignoring unnecessary features. We employ multi-scale aggregation to combine the information at different scales and further improve the segmentation performance. Our proposed multi-scale pancreatic attention-guided approach achieved a Dice score of 54.16% on a dilated pancreatic duct dataset, which shows a significant improvement over prior techniques.

    DOI: 10.1007/978-3-030-90874-4_5

    Scopus

  62. Current status and future perspective on artificial intelligence for lower endoscopy 招待有り 査読有り

    Misawa M., Kudo S.E., Mori Y., Maeda Y., Ogawa Y., Ichimasa K., Kudo T., Wakamura K., Hayashi T., Miyachi H., Baba T., Ishida F., Itoh H., Oda M., Mori K.

    Gastroenterological Endoscopy   63 巻 ( 7 ) 頁: 1402 - 1416   2021年

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Gastroenterological Endoscopy  

The global incidence and mortality rate of colorectal cancer remains high. Colonoscopy is regarded as the gold standard examination for detecting and eradicating neoplastic lesions. However, there are some uncertainties in colonoscopy practice that are related to limitations in human performance. First, approximately one-fourth of colorectal neoplasms are missed on a single colonoscopy. Second, it is still difficult for nonexperts to perform adequately regarding optical biopsy. Third, recording of some quality indicators (e.g., cecal intubation, bowel preparation, and withdrawal speed) which are related to the adenoma detection rate is sometimes incomplete. With recent improvements in machine learning techniques and advances in computer performance, artificial intelligence-assisted computer-aided diagnosis is being increasingly utilized by endoscopists. In particular, the emergence of deep learning, a data-driven machine learning technique, has made the development of computer-aided systems easier than with conventional machine learning techniques, and deep learning is currently considered the standard artificial intelligence engine of computer-aided diagnosis for colonoscopy. To date, computer-aided detection systems seem to have improved the rate of detection of neoplasms. Additionally, computer-aided characterization systems may have the potential to improve diagnostic accuracy in real-time clinical practice. Furthermore, some artificial intelligence-assisted systems that aim to improve the quality of colonoscopy have been reported. The implementation of computer-aided systems in clinical practice may provide additional benefits such as helping educate poorly performing endoscopists and supporting real-time clinical decision-making. In this review, we have focused on computer-aided diagnosis during colonoscopy reported by gastroenterologists and discussed its status, limitations, and future prospects.

    DOI: 10.11280/gee.63.1402

    Scopus

    CiNii Research

  63. Clinical and Genetic Characteristics of 18 Patients from 13 Japanese Families with CRX-associated retinal disorder: Identification of Genotype-phenotype Association 招待有り 査読有り

    Fujinami-Yokokawa Y., Fujinami K., Kuniyoshi K., Hayashi T., Ueno S., Mizota A., Shinoda K., Arno G., Pontikos N., Yang L., Liu X., Sakuramoto H., Katagiri S., Mizobuchi K., Kominami T., Terasaki H., Nakamura N., Kameya S., Yoshitake K., Miyake Y., Kurihara T., Tsubota K., Miyata H., Iwata T., Tsunoda K., Nishimura T., Hayashizaki Y., Kondo M., Shimozawa N., Horiguchi M., Yamamoto S., Kuze M., Naoi N., Machida S., Shimada Y., Nakamura M., Fujikado T., Hotta Y., Takahashi M., Mochizuki K., Murakami A., Kondo H., Ishida S., Nakazawa M., Hatase T., Matsunaga T., Maeda A., Noda K., Tanikawa A., Yamamoto S., Yamamoto H., Araie M., Aihara M., Nakazawa T., Sekiryu T., Kashiwagi K., Kosaki K., Piero C., Fukuchi T., Hayashi A., Hosono K., Mori K., Tanaka K., Furuya K., Suzuki K., Kohata R., Yanagi Y., Minegishi Y., Iejima D., Suga A., Rossmiller B.P., Pan Y., Oshima T., Nakayama M., Teruyama Y., Yamamoto M., Minematsu N., Sanbe H., Mori D., Kijima Y., Mawatari G., Kurata K., Yamada N., Itoh M., Kawaji H., Murakawa Y.

    Scientific Reports   10 巻 ( 1 )   2020年12月

     詳細を見る

担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Scientific Reports

    Inherited retinal disorder (IRD) is a leading cause of blindness, and CRX is one of a number of genes reported to harbour autosomal dominant (AD) and recessive (AR) causative variants. Eighteen patients from 13 families with CRX-associated retinal disorder (CRX-RD) were identified from 730 Japanese families with IRD. Ophthalmological examinations and phenotype subgroup classification were performed. The median age of onset/latest examination was 45.0/62.5 years (range, 15–77/25–94). The median visual acuity in the right/left eye was 0.52/0.40 (range, −0.08–2.00/−0.18–1.70) logarithm of the minimum angle of resolution (LogMAR) units. There was one family with macular dystrophy, nine with cone-rod dystrophy (CORD), and three with retinitis pigmentosa. In silico analysis of CRX variants was conducted for genotype subgroup classification based on inheritance and the presence of truncating variants. Eight pathogenic CRX variants were identified, including three novel heterozygous variants (p.R43H, p.P145Lfs*42, and p.P197Afs*22). A trend of a genotype-phenotype association was revealed between the phenotype and genotype subgroups. A considerably high proportion of CRX-RD in ADCORD was determined in the Japanese cohort (39.1%), often showing the mild phenotype (CORD) with late-onset disease (sixth decade). Frequently found heterozygous missense variants located within the homeodomain underlie this mild phenotype. This large cohort study delineates the disease spectrum of CRX-RD in the Japanese population.

    DOI: 10.1038/s41598-020-65737-z

    Scopus

  64. Clinical characteristics in patients with ossification of the posterior longitudinal ligament: A prospective multi-institutional cross-sectional study 招待有り 査読有り

    Hirai T., Yoshii T., Ushio S., Mori K., Maki S., Katsumi K., Nagoshi N., Takeuchi K., Furuya T., Watanabe K., Nishida N., Watanabe K., Kaito T., Kato S., Nagashima K., Koda M., Ito K., Imagama S., Matsuoka Y., Wada K., Kimura A., Ohba T., Katoh H., Matsuyama Y., Ozawa H., Haro H., Takeshita K., Watanabe M., Matsumoto M., Nakamura M., Yamazaki M., Okawa A., Kawaguchi Y.

    Scientific Reports   10 巻 ( 1 )   2020年12月

     詳細を見る

担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Scientific Reports

    Ossification of the posterior longitudinal ligament (OPLL) can occur throughout the entire spine and can sometimes lead to spinal disorder. Although patients with OPLL sometimes develop physical limitations because of pain, the characteristics of pain and effects on activities of daily living (ADL) have not been precisely evaluated in OPLL patients. Therefore, we conducted a multi-center prospective study to assess whether the symptoms of cervical OPLL are different from those of cervical spondylosis (CS). A total of 263 patients with a diagnosis of cervical OPLL and 50 patients with a diagnosis of CS were enrolled and provided self-reported outcomes, including responses to the Japanese Orthopaedic Association (JOA) Cervical Myelopathy Evaluation Questionnaire (JOACMEQ), JOA Back Pain Evaluation Questionnaire (JOABPEQ), visual analog scale (VAS), and SF-36 scores. The severity of myelopathy was significantly correlated with each domain of the JOACMEQ and JOABPEQ. There was a negative correlation between the VAS score for each domain and the JOA score. There were significantly positive correlations between the JOA score and the Mental Health, Bodily Pain, Physical Functioning, Role Emotional, and Role Physical domains of the SF-36. One-to-one matching resulted in 50 pairs of patients with OPLL and CS. Although there was no significant between-group difference in scores in any of the domains of the JOACMEQ or JOABPEQ, the VAS scores for pain or numbness in the buttocks or limbs were significantly higher in the CS group; however, there was no marked difference in low back pain, chest tightness, or numbness below the chest between the two study groups. The scores for the Role Physical and Body Pain domains of the SF-36 were significantly higher in the OPLL group than in the CS group, and the mean scores for the other domains was similar between the two groups. The results of this study revealed that patients with OPLL were likely to have neck and low back pain and restriction in ADL. No specific type of pain was found in patients with OPLL when compared with those who had CS.

    DOI: 10.1038/s41598-020-62278-3

    Scopus

  65. Synthetic laparoscopic video generation for machine learning-based surgical instrument segmentation from real laparoscopic video and virtual surgical instruments 査読有り 国際共著

    Takuya Ozawa,Yuichiro Hayashi,Hirohisa Oda, Masahiro Oda, Takayuki Kitasaka, Nobuyoshi Takeshita, Masaaki Ito, Kensaku Mori

    Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization     2020年11月

     詳細を見る

    担当区分:最終著者, 責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)  

DOI: 10.1080/21681163.2020.1835560

  66. Clinical impact of Endoscopic Surgical Skill Qualification System (ESSQS) by Japan Society for Endoscopic Surgery (JSES) for laparoscopic distal gastrectomy and low anterior resection based on the National Clinical Database (NCD) registry. 招待有り 査読有り

    Akagi T, Endo H, Inomata M, Yamamoto H, Mori T, Kojima K, Kuroyanagi H, Sakai Y, Nakajima K, Shiroshita H, Etoh T, Saida Y, Yamamoto S, Hasegawa H, Ueno H, Kakeji Y, Miyata H, Kitagawa Y, Watanabe M

    Annals of gastroenterological surgery   4 巻 ( 6 ) 頁: 721 - 734   2020年11月

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)  

    DOI: 10.1002/ags3.12384

    PubMed

  67. A visual SLAM-based bronchoscope tracking scheme for bronchoscopic navigation 招待有り 査読有り

    Wang, C; Oda, M; Hayashi, Y; Villard, B; Kitasaka, T; Takabatake, H; Mori, M; Honma, H; Natori, H; Mori, K

    INTERNATIONAL JOURNAL OF COMPUTER ASSISTED RADIOLOGY AND SURGERY   15 巻 ( 10 ) 頁: 1619 - 1630   2020年10月

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:International Journal of Computer Assisted Radiology and Surgery  

Purpose: Due to the complex anatomical structure of the bronchi and the resembling inner surfaces of airway lumina, bronchoscopic examinations require additional 3D navigational information to assist physicians. A bronchoscopic navigation system provides the position of the endoscope in CT images with augmented anatomical information. To overcome the shortcomings of previous navigation systems, we propose using a technique known as visual simultaneous localization and mapping (SLAM) to improve bronchoscope tracking in navigation systems. Methods: We propose an improved version of the visual SLAM algorithm and use it to estimate the bronchoscope camera pose with patient-specific bronchoscopic video as input. We improve the tracking procedure by adding narrower criteria in feature matching to avoid mismatches. For validation, we collected several trials of bronchoscopic videos with a bronchoscope camera by exploring synthetic rubber bronchus phantoms. We simulated breathing by adding a periodic force to deform the phantom. We compared the camera positions from visual SLAM with the manually created ground truth of the camera pose. The number of successfully tracked frames was also compared between the original SLAM and the proposed method. Results: We successfully tracked 29,559 frames at a speed of 80 ms per frame. This corresponds to 78.1% of all acquired frames. The average root mean square error of our technique was 3.02 mm, while that of the original was 3.61 mm. Conclusion: We present a novel methodology using visual SLAM for bronchoscope tracking. Our experimental results showed that it is feasible to use visual SLAM for the estimation of the bronchoscope camera pose during bronchoscopic navigation. Our proposed method tracked more frames and showed higher accuracy than the original technique. Future work will include combining the tracking results with virtual bronchoscopy and validation with in vivo cases.
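As an illustrative aside (not the authors' SLAM code), the sketch below shows ORB feature matching with Lowe's ratio test, a common example of a "narrower" matching criterion that rejects ambiguous matches. The synthetic images and all parameter values are placeholders.

```python
# Illustrative sketch only: stricter feature matching via the ratio test.
import cv2
import numpy as np

# Two synthetic textured frames standing in for consecutive bronchoscopic images.
rng = np.random.default_rng(0)
img1 = (rng.random((240, 320)) * 255).astype(np.uint8)
img2 = np.roll(img1, shift=5, axis=1)          # img2 is img1 shifted a few pixels

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
pairs = matcher.knnMatch(des1, des2, k=2)

good = []
for pair in pairs:
    # Keep a match only if it is clearly better than its runner-up.
    if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
        good.append(pair[0])
print(f"{len(good)} matches kept out of {len(pairs)}")
```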

    DOI: 10.1007/s11548-020-02241-9

    Web of Science

    Scopus

    PubMed

  68. Station number assignment to abdominal lymph node for assisting gastric cancer surgery 査読有り

    Yuichiro Hayashi, Kazunari Misawa, Kensaku Mori

    Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization     2020年10月

     詳細を見る

    担当区分:最終著者, 責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)  

DOI: 10.1080/21681163.2020.1835543

  69. A deformable model for navigated laparoscopic gastrectomy based on finite elemental method 招待有り 査読有り

    Chen Tao, Wei Guodong, Xu Lili, Shi Weili, Xu Yikai, Zhu Yongyi, Hayashi Yuichiro, Oda Hirohisa, Oda Masahiro, Hu Yanfeng, Yu Jiang, Jiang Zhengang, Li Guoxin, Mori Kensaku

    MINIMALLY INVASIVE THERAPY & ALLIED TECHNOLOGIES   29 巻 ( 4 ) 頁: 210 - 216   2020年7月

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Minimally Invasive Therapy and Allied Technologies  

    Background: Accurate registration for surgical navigation of laparoscopic surgery is highly challenging due to vessel deformation. Here, we describe the design of a deformable model with improved matching accuracy by applying the finite element method (FEM). Material and methods: ANSYS software was used to simulate an FEM model of the vessel after pull-up based on laparoscopic gastrectomy requirements. The central line of the FEM model and the central line of the ground truth were drawn and compared. Based on the material and parameters determined from the animal experiment, a perigastric vessel FEM model of a gastric cancer patient was created, and its accuracy in a laparoscopic gastrectomy surgical scene was evaluated. Results: In the animal experiment, the FEM model created with Ogden foam material exhibited better results. The average distance between the two central lines was 6.5mm, and the average distance between their closest points was 3.8 mm. In the laparoscopic gastrectomy surgical scene, the FEM model and the true artery deformation demonstrated good coincidence. Conclusion: In this study, a deformable vessel model based on FEM was constructed using preoperative CT images to improve matching accuracy and to supply a reference for further research on deformation matching to facilitate laparoscopic gastrectomy navigation.

    DOI: 10.1080/13645706.2019.1625926

    Web of Science

    Scopus

    PubMed

  70. Automated laparoscopic colorectal surgery workflow recognition using artificial intelligence: Experimental research 招待有り 査読有り

    Kitaguchi Daichi, Takeshita Nobuyoshi, Matsuzaki Hiroki, Oda Tatsuya, Watanabe Masahiko, Mori Kensaku, Kobayashi Etsuko, Ito Masaaki

    INTERNATIONAL JOURNAL OF SURGERY   79 巻   頁: 88 - 94   2020年7月

     詳細を見る

担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:International Journal of Surgery

Background: Identifying laparoscopic surgical videos using artificial intelligence (AI) facilitates the automation of several currently time-consuming manual processes, including video analysis, indexing, and video-based skill assessment. This study aimed to construct a large annotated dataset comprising laparoscopic colorectal surgery (LCRS) videos from multiple institutions and evaluate the accuracy of automatic recognition of surgical phase, action, and tool by combining this dataset with AI. Materials and methods: A total of 300 intraoperative videos were collected from 19 high-volume centers. A series of surgical workflows were classified into 9 phases and 3 actions, and the areas of 5 tools were annotated by painting. More than 82 million frames were annotated for the phase and action classification task, and 4000 frames were annotated for the tool segmentation task. Of these frames, 80% were used for the training dataset and 20% for the test dataset. A convolutional neural network (CNN) was used to analyze the videos. Intersection over union (IoU) was used as the evaluation metric for tool recognition. Results: The overall accuracies for the automatic surgical phase and action classification tasks were 81.0% and 83.2%, respectively. The mean IoU for the automatic tool segmentation task for the 5 tools was 51.2%. Conclusions: A large annotated dataset of LCRS videos was constructed, and the phase, action, and tool were recognized with high accuracy using AI. Our dataset has potential uses in medical applications such as automatic video indexing and surgical skill assessment. Open research will assist in improving CNN models by making our dataset available in the field of computer vision.
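As an illustrative aside (not the authors' evaluation code), the intersection-over-union (IoU) metric mentioned above can be computed for a pair of binary masks as follows; the mask shapes and values are placeholders.

```python
# Illustrative sketch only: IoU between a predicted and a ground-truth mask.
import numpy as np

def iou(pred_mask, gt_mask, eps=1e-9):
    """pred_mask, gt_mask: boolean 2D arrays of the same shape."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return float(inter) / (float(union) + eps)

pred = np.zeros((480, 640), dtype=bool); pred[100:200, 100:300] = True
gt = np.zeros((480, 640), dtype=bool);   gt[120:220, 150:350] = True
print(f"IoU = {iou(pred, gt):.3f}")
```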

    DOI: 10.1016/j.ijsu.2020.05.015

    Web of Science

    Scopus

  71. Artificial intelligence for magnifying endoscopy, endocytoscopy, and confocal laser endomicroscopy of the colorectum 招待有り 査読有り

    Mori, Y; Kudo, SE; Misawa, M; Itoh, H; Oda, M; Mori, K

    TECHNIQUES AND INNOVATIONS IN GASTROINTESTINAL ENDOSCOPY   22 巻 ( 2 ) 頁: 56 - 60   2020年4月

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Techniques and Innovations in Gastrointestinal Endoscopy  

    Because magnifying endoscopy is considered to be more accurate at predicting the histology of colorectal polyps than nonmagnifying endoscopy, it has been attracting a lot of attention, especially in Japan. However, use of magnifying endoscopy is not yet widespread because of its limited availability and the difficulty in interpreting the acquired images. Application of artificial intelligence (AI) is now changing this situation because it helps less-skilled endoscopists to accurately interpret magnified images. Research in this field initially focused on magnifying endoscopy with narrow-band imaging as the target of AI. Most previously published retrospective studies have reported over 90% sensitivity in differentiation of neoplastic lesions; however, automatically indicating the region of interest (ROI) of the polyps that AI should analyze has been found to be challenging. To address this practical problem, some researchers have started to adopt contact endomicroscopy as a target for AI. Contact endomicroscopy includes endocytoscopy (520-fold magnification, Olympus, Tokyo, Japan) and confocal laser endomicroscopy (1000-fold magnification, Mauna Kea, Paris, France). These forms of contact endomicroscopy provide ultramagnified images that make it unnecessary to manually select the ROI because the entire image acquired by contact endomicroscopy is the ROI of the targeted polyps. This strength of contact endomicroscopy has contributed to early implementation of this technology into clinical practice, which may change the utility of magnifying endoscopy in clinical settings and help increase its use globally in the near future.

    DOI: 10.1016/j.tgie.2019.150632

    Web of Science

    Scopus

  72. Clinical application of a surgical navigation system based on virtual thoracoscopy for lung cancer patients: real time visualization of area of lung cancer before induction therapy and optimal resection line for obtaining a safe surgical margin during surgery 招待有り 査読有り

    Nakamura Shota, Hayashi Yuichiro, Kawaguchi Koji, Fukui Takayuki, Hakiri Shuhei, Ozeki Naoki, Mori Shunsuke, Goto Masaki, Mori Kensaku, Yokoi Kohei

    JOURNAL OF THORACIC DISEASE   12 巻 ( 3 ) 頁: 672 - 679   2020年3月

     詳細を見る

担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Journal of Thoracic Disease

    Background: We have developed a surgical navigation system that presents virtual thoracoscopic images using computed tomography (CT) image data, as if you are observing intra-thoracic cavity in synchronization with the real thoracoscopic view. Using this system, we made it possible to simultaneously visualize the ‘area of lung cancer before induction therapy’ and the ‘optimal resection line for obtaining a safe surgical margin’ as a virtual thoracoscopic view. We applied this navigation system in the clinical setting in operations for lung cancer patients with chest wall invasion after induction chemoradiotherapy. Methods: The proposed surgical navigation system consisted of a three-dimensional (3D) positional tracker and a virtual thoracoscopy system. The 3D positional tracker was used to recognize the positional information of the real thoracoscope. The virtual thoracoscopy system generated virtual thoracoscopic views based on CT image data. Combined with these two technologies, patient-to-image registration was performed in two patients, and the results generated a virtual thoracoscopic view that was synchronized with the real thoracoscopic view. Results: The operations were started with video-assisted thoracic surgery (VATS), and the navigation system was activated at the same time. The virtual thoracoscopic view was synchronized with the real thoracoscopic view, which also simultaneously indicated the ‘area of lung cancer before induction therapy’ and the ‘optimal resection lines for obtaining a safe surgical margin’. We marked the optimal lines using an electric scalpel, and then performed lobectomy and chest wall resection with a sufficient surgical margin using these landmarks. Pathological examinations confirmed that the surgical margin was negative. No complications related to the navigation system were encountered during or after the procedures. Conclusions: Using this proposed navigation system, we could obtain a ‘CT-derived virtual intrathoracic 3D view of the patient’ that was aligned with the thoracoscopic view during surgery. The accurate identification of areas of cancer invasion before induction therapy using this system might be a useful for determining optimal surgical resection lines.

    DOI: 10.21037/jtd.2019.12.108

    Web of Science

    Scopus

  73. Tensor-cut: A tensor-based graph-cut blood vessel segmentation method and its application to renal artery segmentation 査読有り 国際共著

    Chenglong Wang, Masahiro Oda, Yuichiro Hayashi, Yasushi Yoshino, Tokunori Yamamoto, Alejandro F. Frangi, Kensaku Mori

    Medical Image Analysis   60 巻   頁: 101623   2020年2月

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)  

    DOI: 10.1016/j.media.2019.101623

  74. [Pharmacological action and clinical effect of tedizolid phosphate (SIVEXTRO® Tablets 200 mg, for iv infusion 200 mg), a novel oxazolidinone-class antibacterial drug]. 招待有り 査読有り

    Mori M, Takase A

    Nihon yakurigaku zasshi. Folia pharmacologica Japonica   155 巻 ( 5 ) 頁: 332 - 339   2020年

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(学術雑誌)  

    DOI: 10.1254/fpj.20013

    PubMed

  75. 3Dプリンティングの最新動向 招待有り

    森 健策

    インナービジョン   35 巻   頁: 36 - 37   2020年

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)  

    CiNii Research

  76. Automated eye disease classification method from anterior eye image using anatomical structure focused image classification technique 招待有り 査読有り

    Oda Masahiro, Yamaguchi Takefumi, Fukuoka Hideki, Ueno Yuta, Mori Kensaku

    MEDICAL IMAGING 2020: COMPUTER-AIDED DIAGNOSIS   11314 巻   2020年

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Progress in Biomedical Optics and Imaging - Proceedings of SPIE  

    This paper presents an automated method for classifying infective and non-infective diseases from anterior eye images. Treatments for infective and non-infective diseases are different, so distinguishing them from anterior eye images is important for deciding a treatment plan. Ophthalmologists distinguish them empirically; quantitative, computer-assisted classification is therefore needed. We propose an automated classification method that assigns anterior eye images to cases of infective or non-infective disease. Anterior eye images show large variations in eye position and illumination brightness, which makes the classification difficult. If we focus on the cornea, the positions of opacified areas in the cornea differ between infective and non-infective diseases. Therefore, we solve the anterior eye image classification task by using an object detection approach targeting the cornea. This approach can be described as "anatomical structure focused image classification". We use the YOLOv3 object detection method to detect corneas of infective disease and corneas of non-infective disease, and the detection result is used to define the classification result of an image. In our experiments using anterior eye images, 88.3% of images were correctly classified by the proposed method.
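
    The following minimal sketch illustrates the "anatomical structure focused image classification" idea described above, i.e. turning cornea detections into an image-level label; the class names, tuple layout and decision rule are illustrative assumptions rather than the paper's implementation.

        def classify_from_detections(detections):
            """Derive an image-level label from object-detection output: the class of
            the most confident detected cornea decides the image class.
            detections: list of (class_name, confidence, bbox) tuples, e.g. from YOLOv3."""
            if not detections:
                return "undetermined"
            best_class, _, _ = max(detections, key=lambda d: d[1])
            return best_class

        # Example with two hypothetical cornea detections
        dets = [("infective_cornea", 0.91, (10, 20, 120, 140)),
                ("noninfective_cornea", 0.40, (15, 25, 110, 130))]
        print(classify_from_detections(dets))  # infective_cornea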

    DOI: 10.1117/12.2549951

    Web of Science

    Scopus

  77. Automated Pancreas Segmentation Using Multi-institutional Collaborative Deep Learning 招待有り 査読有り

    Wang P., Shen C., Roth H.R., Yang D., Xu D., Oda M., Misawa K., Chen P.T., Liu K.L., Liao W.C., Wang W., Mori K.

    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)   12444 LNCS 巻   頁: 192 - 200   2020年

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)  

    The performance of deep learning based methods strongly relies on the number of datasets used for training. Many efforts have been made to increase the data in the medical image analysis field. However, unlike photography images, it is hard to generate centralized databases to collect medical images because of numerous technical, legal, and privacy issues. In this work, we study the use of federated learning between two institutions in a real-world setting to collaboratively train a model without sharing the raw data across national boundaries. We quantitatively compare the segmentation models obtained with federated learning and local training alone. Our experimental results show that federated learning models have higher generalizability than standalone training.
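
    As a rough illustration of the federated-averaging idea summarised in the abstract above, the sketch below averages model weights from two institutions in proportion to their local data sizes; the function and variable names are illustrative and are not taken from the paper.

        import numpy as np

        def federated_average(client_weights, client_sizes):
            """FedAvg-style weighted average of parameters from multiple institutions.
            client_weights: list of dicts mapping layer name -> np.ndarray
            client_sizes: number of training samples held by each institution."""
            total = float(sum(client_sizes))
            return {name: sum(w[name] * (n / total)
                              for w, n in zip(client_weights, client_sizes))
                    for name in client_weights[0]}

        # Example: two institutions with different dataset sizes
        clients = [{"conv1": np.ones((3, 3))}, {"conv1": np.zeros((3, 3))}]
        print(federated_average(clients, [300, 100])["conv1"][0, 0])  # 0.75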

    DOI: 10.1007/978-3-030-60548-3_19

    Scopus

  78. Abdominal artery segmentation method from CT volumes using fully convolutional neural network 招待有り 査読有り

    Oda, M; Roth, HR; Kitasaka, T; Misawa, K; Fujiwara, M; Mori, K

    INTERNATIONAL JOURNAL OF COMPUTER ASSISTED RADIOLOGY AND SURGERY   14 巻 ( 12 ) 頁: 2069 - 2081   2019年12月

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:International Journal of Computer Assisted Radiology and Surgery  

    Purpose : The purpose of this paper is to present a fully automated abdominal artery segmentation method from a CT volume. Three-dimensional (3D) blood vessel structure information is important for diagnosis and treatment. Information about blood vessels (including arteries) can be used in patient-specific surgical planning and intra-operative navigation. Since blood vessels have large inter-patient variations in branching patterns and positions, a patient-specific blood vessel segmentation method is necessary. Even though deep learning-based segmentation methods provide good segmentation accuracy among large organs, small organs such as blood vessels are not well segmented. We propose a deep learning-based abdominal artery segmentation method from a CT volume. Because the artery is one of small organs that is difficult to segment, we introduced an original training sample generation method and a three-plane segmentation approach to improve segmentation accuracy. Method : Our proposed method segments abdominal arteries from an abdominal CT volume with a fully convolutional network (FCN). To segment small arteries, we employ a 2D patch-based segmentation method and an area imbalance reduced training patch generation (AIRTPG) method. AIRTPG adjusts patch number imbalances between patches with artery regions and patches without them. These methods improved the segmentation accuracies of small artery regions. Furthermore, we introduced a three-plane segmentation approach to obtain clear 3D segmentation results from 2D patch-based processes. In the three-plane approach, we performed three segmentation processes using patches generated on axial, coronal, and sagittal planes and combined the results to generate a 3D segmentation result. Results : The evaluation results of the proposed method using 20 cases of abdominal CT volumes show that the averaged F-measure, precision, and recall rates were 87.1%, 85.8%, and 88.4%, respectively. This result outperformed our previous automated FCN-based segmentation method. Our method offers competitive performance compared to the previous blood vessel segmentation methods from 3D volumes. Conclusions : We developed an abdominal artery segmentation method using FCN. The 2D patch-based and AIRTPG methods effectively segmented the artery regions. In addition, the three-plane approach generated good 3D segmentation results.
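
    A minimal sketch of the three-plane combination step described above, assuming a simple voxel-wise voting rule (the paper's exact fusion rule may differ):

        import numpy as np

        def fuse_three_plane_predictions(axial, coronal, sagittal, min_votes=2):
            """Combine binary artery segmentations predicted on axial, coronal and
            sagittal patches into one 3D result by voxel-wise voting.
            All inputs are 0/1 (or boolean) arrays of identical shape."""
            votes = (axial.astype(np.uint8) + coronal.astype(np.uint8)
                     + sagittal.astype(np.uint8))
            return votes >= min_votes

        shape = (4, 4, 4)
        a, c, s = np.ones(shape, bool), np.ones(shape, bool), np.zeros(shape, bool)
        print(fuse_three_plane_predictions(a, c, s).sum())  # 64 voxels labelled artery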

    DOI: 10.1007/s11548-019-02062-5

    Web of Science

    Scopus

    PubMed

  79. A view of three dimensional unit structures of alveoli in peripheral lung 招待有り 査読有り

    Natori Hiroshi, Takabatake Hirotsugu, Mori Masaki, Oda Masahiro, Mori Kensaku, Koba Hiroyuki, Takahashi Hiroki

    EUROPEAN RESPIRATORY JOURNAL   54 巻   2019年9月

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(学術雑誌)  

    DOI: 10.1183/13993003.congress-2019.PA3168

    Web of Science

  80. ARTIFICIAL INTELLIGENCE-ASSISTED POLYP DETECTION SYSTEM FOR COLONOSCOPY, BASED ON THE LARGEST AVAILABLE COLLECTION OF CLINICAL VIDEO DATA FOR MACHINE LEARNING 招待有り 査読有り

    Misawa Masashi, Kudo Shinei, Mori Yuichi, Cho Tomonari, Kataoka Shinichi, Maeda Yasuharu, Ogawa Yushi, Takeda Kenichi, Nakamura Hiroki, Ichimasa Katsuro, Toyoshima Naoya, Ogata Noriyuki, Kudo Toyoki, Hisayuki Tomokazu, Hayashi Takemasa, Wakamura Kunihiko, Baba Toshiyuki, Ishida Fumio, Itoh Hayato, Oda Masahiro, Mori Kensaku

    GASTROINTESTINAL ENDOSCOPY   89 巻 ( 6 ) 頁: AB646 - AB647   2019年6月

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(学術雑誌)  

    Web of Science

  81. Development of a new laparoscopic detection system for gastric cancer using near-infrared light-emitting clips with glass phosphor 招待有り 査読有り

    Inada S., Nakanishi H., Oda M., Mori K., Ito A., Hasegawa J., Misawa K., Fuchi S.

    Micromachines   10 巻 ( 2 )   2019年1月

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Micromachines  

    Laparoscopic surgery is now a standard treatment for gastric cancer. Currently, the location of the gastric cancer is identified during laparoscopic surgery via the preoperative endoscopic injection of charcoal ink around the primary tumor; however, the wide spread of injected charcoal ink can make it difficult to accurately visualize the specific site of the tumor. To precisely identify the locations of gastric tumors, we developed a fluorescent detection system comprising clips with glass phosphor (Yb 3+ , Nd 3+ doped to Bi 2 O 3 -B 2 O 3 -based glasses, size: 2 mm × 1 mm × 3 mm) fixed in the stomach and a laparoscopic fluorescent detection system for clip-derived near-infrared (NIR) light (976 nm). We conducted two ex vivo experiments to evaluate the performance of this fluorescent detection system in an extirpated pig stomach and a freshly resected human stomach and were able to successfully detect NIR fluorescence emitted from the clip in the stomach through the stomach wall by the irradiation of excitation light (λ: 808 nm). These results suggest that the proposed combined NIR light-emitting clip and laparoscopic fluorescent detection system could be very useful in clinical practice for accurately identifying the location of a primary gastric tumor during laparoscopic surgery.

    DOI: 10.3390/mi10020081

    Scopus

  82. Automated Hand Eye Calibration in Laparoscope Holding Robot for Robot Assisted Surgery 招待有り 査読有り

    Jiang Shuai, Hayashi Yuichiro, Wang Cheng, Oda Masahiro, Kitasaka Takayuki, Misawa Kazunari, Mori Kensaku

    INTERNATIONAL WORKSHOP ON ADVANCED IMAGE TECHNOLOGY (IWAIT) 2019   11049 巻   2019年

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(学術雑誌)  

    DOI: 10.1117/12.2521618

    Web of Science

  83. Colonoscope tracking method based on shape estimation network 招待有り 査読有り

    Oda Masahiro, Roth Holger R., Kitasaka Takayuki, Furukawa Kazuhiro, Miyahara Ryoji, Hirooka Yoshiki, Navab Nassir, Mori Kensaku

    MEDICAL IMAGING 2019: IMAGE-GUIDED PROCEDURES, ROBOTIC INTERVENTIONS, AND MODELING   10951 巻   2019年

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Progress in Biomedical Optics and Imaging - Proceedings of SPIE  

    This paper presents a colonoscope tracking method that utilizes a colon shape estimation method. CT colonography is used as a less-invasive colon diagnosis method. If colonic polyps or early-stage cancers are found, they are removed in a colonoscopic examination. During the colonoscopic examination, understanding where the colonoscope is running in the colon is difficult. A colonoscope navigation system is therefore necessary to reduce the overlooking of polyps. We propose a colonoscope tracking method for navigation systems. Previous colonoscope tracking methods caused large tracking errors because they did not consider deformations of the colon during colonoscope insertion. We utilize the shape estimation network (SEN), which estimates the deformed colon shape during colonoscope insertion. The SEN is a neural network containing a long short-term memory (LSTM) layer. To perform colon shape estimation suitable for the real clinical situation, we trained the SEN using data obtained during colonoscope operations performed by physicians. The proposed tracking method maps the colonoscope tip position to a position in the colon using the estimation results of the SEN. We evaluated the proposed method in a phantom study and confirmed that its tracking errors were small enough to perform navigation in the ascending, transverse, and descending colons.
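
    A minimal sketch of an SEN-like model as described above, i.e. an LSTM mapping a sequence of sensor readings to colon shape parameters; the layer sizes, input dimension and output parameterisation are illustrative assumptions, not the values used in the paper.

        import torch
        import torch.nn as nn

        class ShapeEstimationNet(nn.Module):
            """LSTM that maps a colonoscope sensor sequence to a set of 3D shape points."""
            def __init__(self, sensor_dim=6, hidden_dim=64, shape_points=10):
                super().__init__()
                self.lstm = nn.LSTM(sensor_dim, hidden_dim, batch_first=True)
                self.head = nn.Linear(hidden_dim, shape_points * 3)  # one 3D point per node

            def forward(self, x):             # x: (batch, time, sensor_dim)
                out, _ = self.lstm(x)
                return self.head(out[:, -1])  # shape estimate from the last time step

        dummy = torch.randn(2, 50, 6)             # 2 sequences of 50 time steps
        print(ShapeEstimationNet()(dummy).shape)  # torch.Size([2, 30])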

    DOI: 10.1117/12.2512729

    Web of Science

    Scopus

  84. Automatic segmentation of eyeball structures from micro-CT images based on sparse annotation 招待有り 査読有り

    Sugino Takaaki, Roth Holger R., Oda Masahiro, Omata Seiji, Sakuma Shinya, Arai Fumihito, Mori Kensaku

    MEDICAL IMAGING 2018: BIOMEDICAL APPLICATIONS IN MOLECULAR, STRUCTURAL, AND FUNCTIONAL IMAGING   10578 巻   頁: 105780V-1-105780V-6   2018年

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(学術雑誌)  

    DOI: 10.1117/12.2293431

    Web of Science

  85. Cascade classification of endocytoscopic images of colorectal lesions for automated pathological diagnosis 招待有り 査読有り

    Itoh Hayato, Mori Yuichi, Misawa Masashi, Oda Masahiro, Kudo Shin-ei, Mori Kensaku

    MEDICAL IMAGING 2018: COMPUTER-AIDED DIAGNOSIS   10575 巻   頁: 1057516-1-1057516-6   2018年

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(学術雑誌)  

    DOI: 10.1117/12.2293495

    Web of Science

  86. 3Dプリンタの最新動向 招待有り

    森 健策

    インナービジョン   33 巻   頁: 35 - 36   2018年

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)  

    CiNii Research

  87. BESNet: Boundary-Enhanced Segmentation of Cells in Histopathological Images 招待有り 査読有り

    Oda Hirohisa, Roth Holger R., Chiba Kosuke, Sokolic Jure, Kitasaka Takayuki, Oda Masahiro, Hinoki Akinari, Uchida Hiroo, Schnabel Julia A., Mori Kensaku

    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2018, PT II   11071 巻   頁: 228 - 236   2018年

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(学術雑誌)  

    DOI: 10.1007/978-3-030-00934-2_26

    Web of Science

  88. Develop and Validate a Finite Element Method Model for Deformation Matching of Laparoscopic Gastrectomy Navigation 招待有り 査読有り

    Chen Tao, Wei Guodong, Shi Weili, Hayashi Yuichiro, Oda Masahiro, Jiang Zhengang, Li Guoxin, Mori Kensaku

    MEDICAL IMAGING 2018: IMAGE-GUIDED PROCEDURES, ROBOTIC INTERVENTIONS, AND MODELING   10576 巻   2018年

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(学術雑誌)  

    DOI: 10.1117/12.2293288

    Web of Science

  89. Deep Learning and Its Application to Medical Image Segmentation 招待有り 査読有り

    ROTH Holger R., SHEN Chen, ODA Hirohisa, ODA Masahiro, HAYASHI Yuichiro, MISAWA Kazunari, MORI Kensaku

    Medical Imaging Technology   36 巻 ( 2 ) 頁: 63 - 71   2018年

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(学術雑誌)   出版者・発行元:The Japanese Society of Medical Imaging Technology  

    One of the most common tasks in medical imaging is semantic segmentation. Achieving this segmentation automatically has been an active area of research, but the task has been proven very challenging due to the large variation of anatomy across different patients. However, recent advances in deep learning have made it possible to significantly improve the performance of image recognition and semantic segmentation methods in the field of computer vision. Due to the data driven approaches of hierarchical feature learning in deep learning frameworks, these advances can be translated to medical images without much difficulty. Several variations of deep convolutional neural networks have been successfully applied to medical images. Especially fully convolutional architectures have been proven efficient for segmentation of 3D medical images. In this article, we describe how to build a 3D fully convolutional network (FCN) that can process 3D images in order to produce automatic semantic segmentations. The model is trained and evaluated on a clinical computed tomography (CT) dataset and shows stateof-the-art performance in multi-organ segmentation.
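
    The following toy example illustrates the fully convolutional idea described above, i.e. voxel-wise labelling of a 3D CT patch; the network is deliberately tiny and is not the 3D U-Net architecture evaluated in the article.

        import torch
        import torch.nn as nn

        class Tiny3DFCN(nn.Module):
            """Very small 3D fully convolutional network for voxel-wise multi-organ labelling."""
            def __init__(self, in_ch=1, n_classes=5):
                super().__init__()
                self.encode = nn.Sequential(
                    nn.Conv3d(in_ch, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv3d(16, 32, 3, padding=1), nn.ReLU())
                self.classify = nn.Conv3d(32, n_classes, 1)  # 1x1x1 conv -> per-voxel logits

            def forward(self, x):                            # x: (B, C, D, H, W)
                return self.classify(self.encode(x))

        ct_patch = torch.randn(1, 1, 32, 64, 64)
        print(Tiny3DFCN()(ct_patch).shape)                   # torch.Size([1, 5, 32, 64, 64])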

    DOI: 10.11409/mit.36.63

    CiNii Research

  90. Colon Shape Estimation Method for Colonoscope Tracking Using Recurrent Neural Networks 招待有り 査読有り

    Oda Masahiro, Roth Holger R., Kitasaka Takayuki, Furukawa Kasuhiro, Miyahara Ryoji, Hirooka Yoshiki, Goto Hidemi, Navab Nassir, Mori Kensaku

    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2018, PT IV   11073 巻   頁: 176 - 184   2018年

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(学術雑誌)  

    DOI: 10.1007/978-3-030-00937-3_21

    Web of Science

  91. Airway Segmentation from 3D Chest CT Volumes Based on Volume of Interest Using Gradient Vector Flow 招待有り 査読有り

    MENG Qier, KITASAKA Takayuki, ODA Masahiro, UENO Junji, MORI Kensaku

    Medical Imaging Technology   36 巻 ( 3 ) 頁: 133 - 146   2018年

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(学術雑誌)   出版者・発行元:The Japanese Society of Medical Imaging Technology  

    In this paper, we propose a new airway segmentation algorithm from 3D chest CT volumes based on the volume of interest (VOI). The algorithm segments each bronchial branch by recognizing the airway regions from the trachea using the VOIs to segment each branch. A VOI is placed to envelop the branch currently being processed. Then a cavity enhancement filter is performed only inside the current VOI so that each branch is extracted. At the same time, we perform a leakage detection scheme to avoid any leakage regions inside the VOI. Next the gradient vector flow magnitude map and a tubular-likeness function are computed in each VOI. This assists the predictions of both the position and direction of the next child VOIs to detect the next child branches to continue the tracking algorithm. Finally, we unify all of the extracted airway regions to form a complete airway tree. We used a dataset that includes 50 standard-dose human chest CT volumes to evaluate our proposed algorithm. The average extraction rate was approximately 78.1% with a significantly decreased false positive rate compared to the previous method.

    DOI: 10.11409/mit.36.133

    CiNii Research

  92. 3Dプリンターの基礎と医療応用 招待有り 査読有り

    森 健策

    心臓   49 巻 ( 11 ) 頁: 1104 - 1113   2017年11月

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(学術雑誌)   出版者・発行元:公益財団法人 日本心臓財団  

    DOI: 10.11281/shinzo.49.1104

    CiNii Research

  93. Automated mediastinal lymph node detection from CT volumes based on intensity targeted radial structure tensor analysis 招待有り

    Hirohisa Oda, Kanwal K. Bhatia, Masahiro Oda, Takayuki Kitasaka, Shingo Iwano, Hirotoshi Homma, Hirotsugu Takabatake, Masaki Mori, Hiroshi Natori, Julia A. Schnabel, Kensaku Mori

    Journal of Medical Imaging   4 巻 ( 04 ) 頁: 1   2017年11月

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:SPIE-Intl Soc Optical Eng  

    DOI: 10.1117/1.jmi.4.4.044502

    CiNii Research

  94. Automatic segmentation of head anatomical structures from sparsely-Annotated images 招待有り 査読有り

    Sugino T., Roth H.R., Eshghi M., Oda M., Chung M.S., Mori K.

    2017 IEEE International Conference on Cyborg and Bionic Systems, CBS 2017   2018-January 巻   頁: 145 - 149   2017年7月

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:2017 IEEE International Conference on Cyborg and Bionic Systems, CBS 2017  

    Bionic humanoid systems, which are elaborate human models with sensors, have been developed as a tool for quantitative evaluation of doctors' psychomotor skills and medical device performances. For creation of the elaborate human models, this study presents automated segmentation of head sectioned images using sparsely-Annotated data based on deep convolutional neural network. We applied the following fully convolutional networks (FCNs) to the sparse-Annotation-based segmentation: a standard FCN and a dilated convolution based FCN. To validate the availability of FCNs for segmentation of head structures from sparse annotation, we performed 8- and 243-label segmentation experiments using different two sets of head sectioned images in the Visible Korean Human project. In the segmentation experiments, only 10% of all images in each data set were used for training data. Both of the FCNs could achieve the mean segmentation accuracy of more than 85% in the 8-label segmentation. In the 243-label segmentation, though the mean segmentation accuracy was about 50%, the results suggested that the FCNs, especially the dilated convolution based FCNs, had potential to achieve accurate segmentation of anatomical structures, except for small-sized and complex-shaped tissues, even from sparse annotation.

    DOI: 10.1109/CBS.2017.8266085

    Scopus

  95. Artificial Intelligence for Endocytoscopy Provides Fully Automated Diagnosis of Histological Remission in Ulcerative Colitis 招待有り 査読有り

    Yasuharu Maeda, Kudo Shinei, Mori Yuichi, Misawa Masashi, Wakamura Kunihiko, Hayashi Seiko, Ogata Noriyuki, Takeda Kenichi, Kudo Toyoki, Hayashi Takemasa, Katagiri Atsushi, Ishida Fumio, Ohtsuka Kazuo, Oda Masahiro, Mori Kensaku

    GASTROINTESTINAL ENDOSCOPY   85 巻 ( 5 ) 頁: AB248 - AB248   2017年5月

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(学術雑誌)  

    Web of Science

  96. Can Artificial Intelligence Correctly Diagnose Sessile Serrated Adenomas/Polyps? 招待有り 査読有り

    Mori Yuichi, Kudo Shinei, Ogawa Yushi, Misawa Masashi, Takeda Kenichi, Kudo Toyoki, Wakamura Kunihiko, Hayashi Takemasa, Ichimasa Katsuro, Maeda Yasuharu, Toyoshima Naoya, Nakamura Hiroki, Katagiri Atsushi, Baba Toshiyuki, Ishida Fumio, Oda Masahiro, Mori Kensaku, Inoue Haruhiro

    GASTROINTESTINAL ENDOSCOPY   85 巻 ( 5 ) 頁: AB510 - AB510   2017年5月

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(学術雑誌)  

    Web of Science

  97. Diagnostic Ability of Automated Diagnosis System Using Endocytoscopy for Invasive Colorectal Cancer 招待有り 査読有り

    Takeda Kenichi, Kudo Shinei, Mori Yuichi, Kataoka Shinichi, Yasuharu Maeda, Ogawa Yushi, Nakamura Hiroki, Misawa Masashi, Kudo Toyoki, Wakamura Kunihiko, Hayashi Takemasa, Katagiri Atsushi, Baba Toshiyuki, Hidaka Eiji, Ishida Fumio, Inoue Haruhiro, Oda Masahiro, Mori Kensaku

    GASTROINTESTINAL ENDOSCOPY   85 巻 ( 5 ) 頁: AB408 - AB408   2017年5月

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(学術雑誌)  

    Web of Science

  98. Computer-Aided Diagnosis Based on Endocytoscopy With Narrow-Band Imaging Allows Accurate Diagnosis of Diminutive Colorectal Lesions 招待有り 査読有り

    Misawa Masashi, Kudo Shinei, Mori Yuichi, Takeda Kenichi, Kataoka Shinichi, Nakamura Hiroki, Maeda Yasuharu, Ogawa Yushi, Yamauchi Akihiro, Igarashi Kenta, Hayashi Takemasa, Kudo Toyoki, Wakamura Kunihiko, Katagiri Atsushi, Baba Toshiyuki, Ishida Fumio, Oda Masahiro, Mori Kensaku

    GASTROINTESTINAL ENDOSCOPY   85 巻 ( 5 ) 頁: AB57 - AB57   2017年5月

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(学術雑誌)  

    Web of Science

  99. 3Dイメージと3D印刷されたオブジェクトの利用が熟達者と初心者の空間的メンタルモデル形成に及ぼす影響 招待有り 査読有り

    前東 晃礼, 三輪 和久, 小田 昌宏, 中村 嘉彦, 森 健策, 伊神 剛

    人工知能学会研究会資料 先進的学習科学と工学研究会   79 巻 ( 0 ) 頁: 08   2017年3月

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(学術雑誌)   出版者・発行元:一般社団法人 人工知能学会  

    DOI: 10.11517/jsaialst.79.0_08

    CiNii Research

  100. 3D FCN Feature Driven Regression Forest-Based Pancreas Localization and Segmentation 招待有り 査読有り

    Oda Masahiro, Shimizu Natsuki, Roth Holger R., Karasawa Ken'ichi, Kitasaka Takayuki, Misawa Kazunari, Fujiwara Michitaka, Rueckert Daniel, Mori Kensaku

    DEEP LEARNING IN MEDICAL IMAGE ANALYSIS AND MULTIMODAL LEARNING FOR CLINICAL DECISION SUPPORT   10553 巻   頁: 222 - 230   2017年

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)  

    This paper presents a fully automated atlas-based pancreas segmentation method from CT volumes utilizing 3D fully convolutional network (FCN) feature-based pancreas localization. Segmentation of the pancreas is difficult because it has larger inter-patient spatial variations than other organs. Previous pancreas segmentation methods failed to deal with such variations. We propose a fully automated pancreas segmentation method that contains novel localization and segmentation. Since the pancreas neighbors many other organs, its position and size are strongly related to the positions of the surrounding organs. We estimate the position and the size of the pancreas (localization) from global features by regression forests. As global features, we use intensity differences and 3D FCN deep learned features, which include automatically extracted essential features for segmentation. We chose 3D FCN features from a trained 3D U-Net, which is trained to perform multi-organ segmentation. The global features include both the pancreas and surrounding organ information. After localization, a patient-specific probabilistic atlas-based pancreas segmentation is performed. In evaluation results with 146 CT volumes, we achieved 60.6% of the Jaccard index and 73.9% of the Dice overlap.
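
    A minimal sketch of the localisation step described above, i.e. a regression forest mapping global features to a pancreas bounding box; the feature layout and training data below are synthetic placeholders, not the intensity-difference and 3D FCN features used in the paper.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        features = rng.normal(size=(100, 64))        # 100 training volumes, 64 global features
        boxes = rng.uniform(0, 200, size=(100, 6))   # bounding boxes: (x, y, z, w, h, d)

        forest = RandomForestRegressor(n_estimators=50, random_state=0).fit(features, boxes)
        estimated_box = forest.predict(features[:1]) # localisation for one volume
        print(estimated_box.shape)                   # (1, 6)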

    DOI: 10.1007/978-3-319-67558-9_26

    Web of Science

    Scopus

  101. 3Dプリンタ・ユーザーインターフェイス等の最新動向 招待有り

    森 健策

    インナービジョン   32 巻   頁: 44 - 45   2017年

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)  

    CiNii Research

  102. 3Dプリンタの医療応用 招待有り

    森 健策

    医用画像情報学会雑誌   34 巻 ( 1 ) 頁: 1 - 6   2017年

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(学術雑誌)   出版者・発行元:医用画像情報学会  

    DOI: 10.11318/mii.34.1

    CiNii Research

  103. 3Dプリンタの医療応用 招待有り

    森 健策

    医用画像情報学会雑誌   34 巻   頁: 1 - 6   2017年

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)  

    CiNii Research

  104. 3Dプリンターの基礎と医療応用 招待有り

    森 健策

    月刊心臓   49 巻   頁: 1104 - 1113   2017年

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)  

    CiNii Research

  105. Automatic Segmentation of Head Anatomical Structures from Sparsely-annotated Images 招待有り 査読有り

    Sugino Takaaki, Roth Holger R., Eshghi Mohammad, Oda Masahiro, Chung Min Suk, Mori Kensaku

    2017 IEEE INTERNATIONAL CONFERENCE ON CYBORG AND BIONIC SYSTEMS (CBS)     頁: 145 - 149   2017年

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(学術雑誌)  

    Web of Science

  106. Computer-Aided Diagnosis of Mammographic Masses Using Geometric Verification-Based Image Retrieval 招待有り 査読有り

    Li Qingliang, Shi Weili, Yang Huamin, Zhang Huimao, Li Guoxin, Chen Tao, Mori Kensaku, Jiang Zhengang

    MEDICAL IMAGING 2017: COMPUTER-AIDED DIAGNOSIS   10134 巻   2017年

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(学術雑誌)  

    DOI: 10.1117/12.2255799

    Web of Science

  107. Comparison of the Deep-Learning-Based Automated Segmentation Methods for the Head Sectioned Images of the Virtual Korean Human Project 招待有り 査読有り

    Eshghi Mohammad, Roth Holger R., Oda Masahiro, Chung Min Suk, Mori Kensaku

    PROCEEDINGS OF THE FIFTEENTH IAPR INTERNATIONAL CONFERENCE ON MACHINE VISION APPLICATIONS - MVA2017     頁: 290 - 293   2017年

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(学術雑誌)  

    Web of Science

  108. A study on improvement of airway segmentation using Hybrid method 招待有り 査読有り

    Qier M., Kitasaka T., Nimura Y., Oda M., Mori K.

    Proceedings - 3rd IAPR Asian Conference on Pattern Recognition, ACPR 2015     頁: 549 - 553   2016年6月

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Proceedings - 3rd IAPR Asian Conference on Pattern Recognition, ACPR 2015  

    This paper presents a method for extracting an airway region from 3D chest CT volumes that uses a combination of tube enhancement filters, voxel classification based on machine learning methods and graph-cut algorithm. Lots of previous methods utilize region growing or level set algorithms without any prior knowledge of bronchi, which always fail when they reach to the peripheral bronchi. In this paper, a method of extraction based on airway shape and machine learning is proposed. The proposed method detects candidate voxels of bronchial regions by using two types of enhancement filters, and a classifier model is built for selecting the proper candidates regions based on intensity and shape features and finally the selected candidate voxels are connected by graph-cut algorithm. We applied this method on six cases of 3D chest CT volumes. The results show that this method can extract the smaller airway branches without leaking into the lung parenchyma areas.

    DOI: 10.1109/ACPR.2015.7486563

    Scopus

  109. Characterization of Colorectal Lesions Using a Computer-Aided Diagnostic System for Narrow-Band Imaging Endocytoscopy 招待有り 査読有り

    Misawa Masashi, Kudo Shin-ei, Mori Yuichi, Nakamura Hiroki, Kataoka Shinichi, Maeda Yasuharu, Kudo Toyoki, Hayashi Takemasa, Wakamura Kunihiko, Miyachi Hideyuki, Katagiri Atsushi, Baba Toshiyuki, Ishida Fumio, Inoue Haruhiro, Nimura Yukitaka, Mori Kensaku

    GASTROENTEROLOGY   150 巻 ( 7 ) 頁: 1531 - +   2016年6月

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(学術雑誌)  

    DOI: 10.1053/j.gastro.2016.04.004

    Web of Science

  110. 3Dイメージと3D印刷されたオブジェクトの利用が肝構造のメンタルモデル形成に与える影響 招待有り 査読有り

    前東 晃礼, 三輪 和久, 小田 昌宏, 中村 嘉彦, 森 健策, 伊神 剛

    人工知能学会研究会資料 先進的学習科学と工学研究会   76 巻 ( 0 ) 頁: 14   2016年3月

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(学術雑誌)   出版者・発行元:一般社団法人 人工知能学会  

    DOI: 10.11517/jsaialst.76.0_14

    CiNii Research

  111. Cascade registration of micro CT volumes taken in multiple resolutions 招待有り 査読有り

    Nagara K., Oda H., Nakamura S., Oda M., Homma H., Takabatake H., Mori M., Natori H., Rueckert D., Mori K.

    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)   9805 LNCS 巻   頁: 269 - 280   2016年

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)  

    In this paper, we present a preliminary report of a multiscale registration method between micro-focus X-ray CT (micro CT) volumes taken in different scales. 3D fine structures of target objects can be observed on micro CT volumes, which are difficult to observe on clinical CT volumes. Micro CT scanners can scan specimens in various resolutions. In their high resolution volumes, ultra fine structures of specimens can be observed, while scanned areas are limited to very small. On the other hand, in low resolution volumes, large areas can be captured, while fine structures of specimens are difficult to observe. The fusion volume of the high and low resolution volumes will have benefits of both. Because the difference of resolutions between the high and low resolution volumes may vary greatly, an intermediate resolution volume is required for successful fusion of volumes. To perform such volume fusion, a cascade multi-resolution registration technique is required. To register micro CT volumes that have quite different resolutions, we employ a cascade co-registration technique. In the cascade co-registration process, intermediate resolution volumes are used in a registration process of the high and low resolution volumes. In the registration between two volumes, we apply two steps registration techniques. In the first step, a block division is used to register two resolution volumes. Afterward, we estimate the fine spatial positions relating the registered two volumes using the Powell method. The registration result can be used to generate a fusion volume of the high and low resolution volumes.
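
    A minimal sketch of the Powell-based refinement step described above, restricted here to a pure translation with a sum-of-squared-differences cost; both simplifications are illustrative assumptions.

        import numpy as np
        from scipy.ndimage import shift
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        fixed = rng.normal(size=(32, 32, 32))
        moving = shift(fixed, (1.5, -2.0, 0.5), order=1)   # known offset to be recovered

        def cost(t):
            # intensity mismatch between the translated moving volume and the fixed volume
            return np.mean((shift(moving, t, order=1) - fixed) ** 2)

        result = minimize(cost, x0=np.zeros(3), method="Powell")
        print(np.round(result.x, 1))                        # approx. [-1.5, 2.0, -0.5]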

    DOI: 10.1007/978-3-319-43775-0_24

    Scopus

  112. 3Dプリンタ・ユーザーインターフェイス等の最新動向 招待有り

    森 健策

    インナービジョン   - 巻   頁: 44 - 45   2016年

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)  

    CiNii Research

  113. 3Dプリンティングのハンドリングのノウハウ 招待有り

    森 健策

    インナービジョン   31 巻   頁: 20 - 24   2016年

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)  

    CiNii Research

  114. Automated anatomical labeling of abdominal arteries and hepatic portal system extracted from abdominal CT volumes 招待有り 査読有り

    Matsuzaki Tetsuro, Oda Masahiro, Kitasaka Takayuki, Hayashi Yuichiro, Misawa Kazunari, Mori Kensaku

    MEDICAL IMAGE ANALYSIS   20 巻 ( 1 ) 頁: 152 - 161   2015年2月

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(学術雑誌)  

    DOI: 10.1016/j.media.2014.11.002

    Web of Science

  115. Automated Torso Organ Segmentation from 3D CT Images using Conditional Random Field 招待有り 査読有り

    Nimura Yukitaka, Hayashi Yuichiro, Kitasaka Takayuki, Misawa Kazunari, Mori Kensaku

    MEDICAL IMAGING 2016: COMPUTER-AIDED DIAGNOSIS   9785 巻   2015年

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(学術雑誌)  

    DOI: 10.1117/12.2214845

    Web of Science

  116. 3Dプリンティングの現状と将来展望:医用画像処理と3Dプリンタによる臓器実体モデル作成とその利用 招待有り

    森 健策

    光技術コンタクト   53 巻   頁: 20 - 27   2015年

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)  

    CiNii Research

  117. A study on improvement of airway segmentation using Hybrid method 招待有り 査読有り

    Qier Meng, Kitasaka Takayuki, Nimura Yukitaka, Oda Masahiro, Mori Kensaku

    PROCEEDINGS 3RD IAPR ASIAN CONFERENCE ON PATTERN RECOGNITION ACPR 2015     頁: 549 - 553   2015年

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(学術雑誌)  

    Web of Science

  118. Automated branching pattern report generation for laparoscopic surgery assistance 招待有り 査読有り

    Oda Masahiro, Matsuzaki Tetsuro, Hayashi Yuichiro, Kitasaka Takayuki, Misawa Kazunari, Mori Kensaku

    MEDICAL IMAGING 2015: COMPUTER-AIDED DIAGNOSIS   9414 巻   2015年

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(学術雑誌)  

    DOI: 10.1117/12.2082488

    Web of Science

  119. Development of a new detection device using a glass clip emitting infrared fluorescence for laparoscopic surgery of gastric cancer 招待有り 査読有り

    Inada Shunko Albano, Fuchi Shingo, Mori Kensaku, Hasegawa Junichi, Misawa Kazunari, Nakanishi Hayao

    6TH INTERNATIONAL CONFERENCE ON OPTICAL, OPTOELECTRONIC AND PHOTONIC MATERIALS AND APPLICATIONS (ICOOPMA) 2014   619 巻   2015年

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(学術雑誌)  

    DOI: 10.1088/1742-6596/619/1/012033

    Web of Science

  120. Development and clinical application of surgical navigation system for laparoscopic hepatectomy 招待有り 査読有り

    Hayashi Yuichiro, Igami Tsuyoshi, Hirose Tomoaki, Nagino Masato, Mori Kensaku

    MEDICAL IMAGING 2015: IMAGE-GUIDED PROCEDURES, ROBOTIC INTERVENTIONS, AND MODELING   9415 巻   2015年

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(学術雑誌)  

    DOI: 10.1117/12.2082690

    Web of Science

  121. Connection method of separated luminal regions of intestine from CT volumes 招待有り 査読有り

    Oda Masahiro, Kitasaka Takayuki, Furukawa Kazuhiro, Watanabe Osamu, Ando Takafumi, Hirooka Yoshiki, Goto Hidemi, Mori Kensaku

    MEDICAL IMAGING 2015: COMPUTER-AIDED DIAGNOSIS   9414 巻   2015年

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(学術雑誌)  

    DOI: 10.1117/12.2081977

    Web of Science

  122. Automated Torso Organ Segmentation from 3D CT Images using Structured Perceptron and Dual Decompostion 招待有り 査読有り

    Nimura Yukitaka, Hayashi Yuichiro, Kitasaka Takayuki, Mori Kensaku

    MEDICAL IMAGING 2015: COMPUTER-AIDED DIAGNOSIS   9414 巻   2015年

     詳細を見る

    担当区分:責任著者   記述言語:日本語   掲載種別:研究論文(学術雑誌)  

    DOI: 10.1117/12.2081774

    Web of Science

  123. SGSR: style-subnets-assisted generative latent bank for large-factor super-resolution with registered medical image dataset. 招待有り 査読有り 国際誌

    Zheng T, Oda H, Hayashi Y, Nakamura S, Mori M, Takabatake H, Natori H, Oda M, Mori K

    International journal of computer assisted radiology and surgery   19 巻 ( 3 ) 頁: 493 - 506   2024年3月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:International Journal of Computer Assisted Radiology and Surgery  

    Purpose: We propose a large-factor super-resolution (SR) method for performing SR on registered medical image datasets. Conventional SR approaches use low-resolution (LR) and high-resolution (HR) image pairs to train a deep convolutional neural network (DCN). However, LR–HR images in medical imaging are commonly acquired from different imaging devices, and acquiring LR–HR image pairs needs registration. Registered LR–HR images have registration errors inevitably. Using LR–HR images with registration error for training an SR DCN causes collapsed SR results. To address these challenges, we introduce a novel SR approach designed specifically for registered LR–HR medical images. Methods: We propose style-subnets-assisted generative latent bank for large-factor super-resolution (SGSR) trained with registered medical image datasets. Pre-trained generative models named generative latent bank (GLB), which stores rich image priors, can be applied in SR to generate realistic and faithful images. We improve GLB by newly introducing style-subnets-assisted GLB (S-GLB). We also propose a novel inter-uncertainty loss to boost our method’s performance. Introducing more spatial information by inputting adjacent slices further improved the results. Results: SGSR outperforms state-of-the-art (SOTA) supervised SR methods qualitatively and quantitatively on multiple datasets. SGSR achieved higher reconstruction accuracy than recently supervised baselines by increasing peak signal-to-noise ratio from 32.628 to 34.206 dB. Conclusion: SGSR performs large-factor SR while given a registered LR–HR medical image dataset with registration error for training. SGSR’s results have both realistic textures and accurate anatomical structures due to favorable quantitative and qualitative results. Experiments on multiple datasets demonstrated SGSR’s superiority over other SOTA methods. SR medical images generated by SGSR are expected to improve the accuracy of pre-surgery diagnosis and reduce patient burden.
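
    For reference, the peak signal-to-noise ratio quoted above (32.628 dB vs. 34.206 dB) can be computed as in the following sketch; the images here are synthetic.

        import numpy as np

        def psnr(reference, estimate, data_range=255.0):
            """Peak signal-to-noise ratio in dB between a reference and an estimated image."""
            mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
            return float("inf") if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

        hr = np.full((64, 64), 120.0)
        sr = hr + np.random.default_rng(0).normal(0.0, 2.0, hr.shape)
        print(round(psnr(hr, sr), 2))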

    DOI: 10.1007/s11548-023-03037-3

    Scopus

    PubMed

  124. YOLOv7-RepFPN: Improving real-time performance of laparoscopic tool detection on embedded systems 招待有り 査読有り

    Liu Y., Hayashi Y., Oda M., Kitasaka T., Mori K.

    Healthcare Technology Letters     2024年

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Healthcare Technology Letters  

    This study focuses on enhancing the inference speed of laparoscopic tool detection on embedded devices. Laparoscopy, a minimally invasive surgery technique, markedly reduces patient recovery times and postoperative complications. Real-time laparoscopic tool detection helps assist laparoscopy by providing information for surgical navigation, and its implementation on embedded devices is gaining interest due to the portability, network independence and scalability of the devices. However, embedded devices often face computation resource limitations, potentially hindering inference speed. To mitigate this concern, the work introduces a two-fold modification to the YOLOv7 model: the feature channels are halved and RepBlock is integrated, yielding the YOLOv7-RepFPN model. This configuration leads to a significant reduction in computational complexity. Additionally, the focal EIoU (efficient intersection over union) loss function is employed for bounding box regression. Experimental results on an embedded device demonstrate that for frame-by-frame laparoscopic tool detection, the proposed YOLOv7-RepFPN achieved an mAP of 88.2% (with IoU set to 0.5) on a custom dataset based on EndoVis17, and an inference speed of 62.9 FPS. Contrasting with the original YOLOv7, which garnered an 89.3% mAP and 41.8 FPS under identical conditions, the methodology enhances the speed by 21.1 FPS while maintaining detection accuracy. This emphasizes the effectiveness of the work.
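
    For reference, the IoU threshold underlying the mAP figure above is the standard box intersection over union, sketched below:

        def box_iou(a, b):
            """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
            ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
            ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
            inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
            area_a = (a[2] - a[0]) * (a[3] - a[1])
            area_b = (b[2] - b[0]) * (b[3] - b[1])
            return inter / (area_a + area_b - inter)

        print(box_iou((0, 0, 10, 10), (5, 0, 15, 10)))  # ~0.333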

    DOI: 10.1049/htl2.12072

    Scopus

  125. Towards better laparoscopic video segmentation: A class-wise contrastive learning approach with multi-scale feature extraction 招待有り 査読有り

    Zhang L., Hayashi Y., Oda M., Mori K.

    Healthcare Technology Letters     2024年

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Healthcare Technology Letters  

    The task of segmentation is integral to computer-aided surgery systems. Given the privacy concerns associated with medical data, collecting a large amount of annotated data for training is challenging. Unsupervised learning techniques, such as contrastive learning, have shown powerful capabilities in learning image-level representations from unlabelled data. This study leverages classification labels to enhance the accuracy of the segmentation model trained on limited annotated data. The method uses a multi-scale projection head to extract image features at various scales. The partitioning method for positive sample pairs is then improved to perform contrastive learning on the extracted features at each scale to effectively represent the differences between positive and negative samples in contrastive learning. Furthermore, the model is trained simultaneously with both segmentation labels and classification labels. This enables the model to extract features more effectively from each segmentation target class and further accelerates the convergence speed. The method was validated using the publicly available CholecSeg8k dataset for comprehensive abdominal cavity surgical segmentation. Compared to select existing methods, the proposed approach significantly enhances segmentation performance, even with a small labelled subset (1–10%) of the dataset, showcasing a superior intersection over union (IoU) score.
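
    A generic InfoNCE-style contrastive loss is sketched below to illustrate the contrastive-learning ingredient described above; it is the standard formulation and not necessarily the exact class-wise, multi-scale loss used in the paper.

        import torch
        import torch.nn.functional as F

        def info_nce(anchor, positive, negatives, temperature=0.1):
            """Pull the anchor towards its positive and push it away from negatives.
            anchor, positive: (D,) feature vectors; negatives: (N, D)."""
            anchor = F.normalize(anchor, dim=0)
            positive = F.normalize(positive, dim=0)
            negatives = F.normalize(negatives, dim=1)
            logits = torch.cat([(anchor * positive).sum().view(1),
                                negatives @ anchor]) / temperature
            # the positive pair sits at index 0, so the target class is 0
            return F.cross_entropy(logits.unsqueeze(0), torch.zeros(1, dtype=torch.long))

        print(info_nce(torch.randn(128), torch.randn(128), torch.randn(16, 128)))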

    DOI: 10.1049/htl2.12069

    Scopus

  126. Revisiting instrument segmentation: Learning from decentralized surgical sequences with various imperfect annotations 招待有り 査読有り

    Zheng Z., Hayashi Y., Oda M., Kitasaka T., Mori K.

    Healthcare Technology Letters     2024年

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Healthcare Technology Letters  

    This paper focuses on a new and challenging problem related to instrument segmentation. This paper aims to learn a generalizable model from distributed datasets with various imperfect annotations. Collecting a large-scale dataset for centralized learning is usually impeded due to data silos and privacy issues. Besides, local clients, such as hospitals or medical institutes, may hold datasets with diverse and imperfect annotations. These datasets can include scarce annotations (many samples are unlabelled), noisy labels prone to errors, and scribble annotations with less precision. Federated learning (FL) has emerged as an attractive paradigm for developing global models with these locally distributed datasets. However, its potential in instrument segmentation has yet to be fully investigated. Moreover, the problem of learning from various imperfect annotations in an FL setup is rarely studied, even though it presents a more practical and beneficial scenario. This work rethinks instrument segmentation in such a setting and propose a practical FL framework for this issue. Notably, this approach surpassed centralized learning under various imperfect annotation settings. This method established a foundational benchmark, and future work can build upon it by considering each client owning various annotations and aligning closer with real-world complexities.

    DOI: 10.1049/htl2.12068

    Scopus

  127. Artificial intelligence in a prediction model for postendoscopic retrograde cholangiopancreatography pancreatitis. 査読有り

    Takahashi H, Ohno E, Furukawa T, Yamao K, Ishikawa T, Mizutani Y, Iida T, Shiratori Y, Oyama S, Koyama J, Mori K, Hayashi Y, Oda M, Suzuki T, Kawashima H

    Digestive endoscopy : official journal of the Japan Gastroenterological Endoscopy Society     2023年7月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Digestive Endoscopy  

    Objectives: In this study we aimed to develop an artificial intelligence-based model for predicting postendoscopic retrograde cholangiopancreatography (ERCP) pancreatitis (PEP). Methods: We retrospectively reviewed ERCP patients at Nagoya University Hospital (NUH) and Toyota Memorial Hospital (TMH). We constructed two prediction models, a random forest (RF), one of the machine-learning algorithms, and a logistic regression (LR) model. First, we selected features of each model from 40 possible features. Then the models were trained and validated using three fold cross-validation in the NUH cohort and tested in the TMH cohort. The area under the receiver operating characteristic curve (AUROC) was used to assess model performance. Finally, using the output parameters of the RF model, we classified the patients into low-, medium-, and high-risk groups. Results: A total of 615 patients at NUH and 544 patients at TMH were enrolled. Ten features were selected for the RF model, including albumin, creatinine, biliary tract cancer, pancreatic cancer, bile duct stone, total procedure time, pancreatic duct injection, pancreatic guidewire-assisted technique without a pancreatic stent, intraductal ultrasonography, and bile duct biopsy. In the three fold cross-validation, the RF model showed better predictive ability than the LR model (AUROC 0.821 vs. 0.660). In the test, the RF model also showed better performance (AUROC 0.770 vs. 0.663, P = 0.002). Based on the RF model, we classified the patients according to the incidence of PEP (2.9%, 10.0%, and 23.9%). Conclusion: We developed an RF model. Machine-learning algorithms could be powerful tools to develop accurate prediction models.
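
    A minimal sketch of the modelling recipe described above (a random forest evaluated by three-fold cross-validation with AUROC), using synthetic data in place of the ten selected clinical features:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(615, 10))              # 615 patients, 10 features (synthetic)
        y = (X[:, 0] + rng.normal(size=615)) > 1.0  # synthetic post-ERCP pancreatitis labels

        model = RandomForestClassifier(n_estimators=200, random_state=0)
        aurocs = cross_val_score(model, X, y, cv=3, scoring="roc_auc")
        print(aurocs.mean())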

    DOI: 10.1111/den.14622

    Scopus

    PubMed

  128. DEVELOPMENT OF A MACHINE-LEARNING MODEL FOR PREDICTING POST-ERCP PANCREATITIS 招待有り 査読有り

    Takahashi Hidekazu, Eizaburo Ohno, Taiki Furukawa, Kentaro Yamao, Takuya Ishikawa, Yasuyuki Mizutani, Tadashi Iida, Yoshimune Shiratori, Shintaro Oyama, Junji Koyama, Kensaku Mori, Yuichiro Hayashi, Masahiro Oda, Takahisa Suzuki, Hiroki Kawashima

    Gastrointestinal Endoscopy   97 巻 ( 6 ) 頁: AB656 - AB656   2023年6月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Elsevier BV  

    DOI: 10.1016/j.gie.2023.04.1087

  129. Development of panorama vision ring for thoracoscopy. 招待有り 査読有り 国際誌

    Kitasaka T, Nakamura S, Hayashi Y, Nakai T, Nakai Y, Mori K, Chen-Yoshikawa TF

    International journal of computer assisted radiology and surgery   18 巻 ( 5 ) 頁: 945 - 952   2023年5月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:International Journal of Computer Assisted Radiology and Surgery  

    Purpose: Minimally invasive surgery (MIS) using a thoraco- or laparoscope is becoming a more common surgical technique. In MIS, a magnified view from a thoracoscope helps surgeons conduct precise operations. However, there is a risk of the visible area becoming narrow. To confirm that the operation field is safe, the surgeon will draw the thoracoscope back to check the marginal area of the target and insert it again many times during MIS. To reduce the surgeon’s load, we aim to visualize the entire thoracic cavity using a newly developed device called “panorama vision ring” (PVR). Method: The PVR is used instead of a wound retractor or a trocar. It is a ring-type socket with one big hole for the thoracoscope and four small holes for tiny cameras placed around the big hole. The views from the tiny cameras are fused into one wider view that visualizes the entire thoracic cavity. A surgeon can proceed with an operation by checking what exists outside of the thoracoscopic view. Also, she/he can check whether or not bleeding has occurred from the image of the entire cavity. Results: We evaluated the view-expansion ability of the PVR by using a three-dimensional full-scale thoracic model. The experimental results showed that the entire thoracic cavity could be visible in a panoramic view generated by the PVR. We also demonstrated pulmonary lobectomy in virtual MIS using the PVR. Surgeons could perform a pulmonary lobectomy while checking the entire cavity. Conclusion: We developed the PVR, which uses tiny auxiliary cameras to create a panoramic view of the entire thoracic cavity during MIS. We aim to make MIS safer for patients and more comfortable for surgeons through the development of the PVR.

    DOI: 10.1007/s11548-023-02859-5

    Scopus

    PubMed

  130. CT像からの腸閉塞検出における中心線を用いた後処理手法

    陳 思睿, 小田 紘久, 安 芹, 林 雄一郎, 北坂 孝幸, 滝本 愛太朗, 檜 顕成, 内田 広夫, 鈴木 耕次郎, 小田 昌宏, 森 健策

    電子情報通信学会技術研究報告(MI)信学技報   122 巻 ( 417 ) 頁: 223 - 228   2023年3月

  131. 医用画像とAI 査読有り

    カレントテラピー   41 巻 ( 39 ) 頁: 79 - 79   2023年3月

     詳細を見る

    記述言語:日本語  

  132. グラフニューラルネットワークを用いた血管名自動命名における臓器特徴の有効性の調査

    出口 智也, 林 雄一郎, 北坂 孝幸, 小田 昌宏, 三澤 一成, 森 健策

    電子情報通信学会技術研究報告(MI), 信学技報   122 巻 ( 417 ) 頁: 105 - 110   2023年3月

  133. Comparative study of the Small Intestine segmentation based on 2D and 3D U-Nets

    Qin An, Hirohisa Oda, Sirui Chen, Yuichiro Hayashi, Takayuki Kitasaka, Hiroo Uchida, Akinari Hinoki, Kojiro Suzuki, Aitaro Takimoto, Masahiro Oda, Kensaku Mori

    電子情報通信学会技術研究報告(MI) 信学技報   122 巻 ( 417 ) 頁: 46 - 51   2023年3月

  134. Average Templateを使ったCT像からのCOVID-19異常陰影領域セグメンテーション手法

    柳 凱, 小田 昌宏, 鄭 通, 林 雄一郎, 大竹 義人, 橋本 正弘, 明石 敏昭, 青木 茂樹, 森 健策

    電子情報通信学会技術研究報告(MI) 信学技報   122 巻 ( 47 ) 頁: 40 - 45   2023年3月

  135. 術前画像情報を用いた腹腔鏡映像からの血管位置予測の検討

    榎本 圭吾, 林 雄一郎, 北坂 孝幸, 小田 昌宏, 三澤 一成, 森 健策

    電子情報通信学会技術研究報告(MI) 信学技報   122 巻 ( 417 ) 頁: 63 - 68   2023年3月

  136. L-former : a lightweight transformer for realistic medical image generation and its application to super-resolution 査読有り

    Tong Zheng, Hirohisa Oda, Yuichiro Hayashi, Shota Nakamura, Masaki Mori, Hirotsugu Takabatake, Hiroshi Natori, Masahiro Oda, Kensaku Mori

    Proc.SPIE     2023年2月

  137. Priority attention network with Bayesian learning for fully automatic segmentation of substantia nigra from neuromelanin MRI 招待有り 査読有り

    Tao Hu, Hayato Itoh, Masahiro Oda, Shinji Saiki, Nobutaka Hattori, Koji Kamagata, Shigeki Aoki, Kensaku Mori

    Proc.SPIE12464     2023年2月

  138. Thrombosis region extraction and quantitative analysis in confocal laser scanning microscopic image sequence in in-vivo imaging 招待有り 査読有り

    Yunheng Wu, Masahiro Oda, Yuichiro Hayashi, Shuntaro Kawamura, Takanori Takebe, Kensaku Mori

    Proc.SPIE     2023年2月

  139. Real bronchoscopic images-based bronchial nomenclature: a preliminary study 招待有り 査読有り

    Cheng Wang, Yuichiro Hayashi, Masahiro Oda, Takayuki Kitasaka, Hirotsugu Takabatake, Masaki Mori, Hiroshi Natori, Kensaku Mori

    Proc.SPIE     2023年2月

  140. Improved method for COVID-19 classification of complex-architecture CNN from chest CT volumes using orthogonal ensemble networks 招待有り 査読有り

    Ryo Toda, Masahiro Oda, Yuichiro Hayashi, Yoshito Otake, Masahiro Hashimoto, Toshiaki Akashi, Shigeki Aoki, Kensaku Mori

    Proc.SPIE     2023年2月

  141. Octree cube constraints in PBD method for high resolution surgical simulation 査読有り

    Rintaro Miyazaki, Yuichiro Hayashi, Masahiro Oda, Kensaku Mori

    Proc.SPIE     2023年2月

  142. Classification of COVID-19 cases from chest CT volumes using hybrid model of 3D CNN and 3D MLP-mixer 招待有り 査読有り

    Masahiro Oda, Tong Zheng, Yuichiro Hayashi, Yoshito Otake, Masahiro Hashimoto, Toshiaki Akashi, Shigeki Aoki, Kensaku Mori

    Proc.SPIE     2023年2月

  143. A semantic segmentation method for laparoscopic images using semantically similar groups 査読有り

    Leo Uramoto, Yuichiro Hayashi, Masahiro Oda, Takayuki Kitasaka, Kazunari Misawa, Kensaku Mori

    Proc.SPIE     2023年2月

  144. Oesophagus Achalasia Diagnosis from Esophagoscopy Based on a Serial Multi-scale Network 招待有り

    Jiang K., Oda M., Hayashi Y., Shiwaku H., Misawa M., Mori K.

    Computer Methods in Biomechanics and Biomedical Engineering: Imaging and Visualization   11 巻 ( 4 ) 頁: 1 - 10   2023年2月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Computer Methods in Biomechanics and Biomedical Engineering: Imaging and Visualization  

    Oesophageal achalasia is a primary oesophageal motility disorder disease. To diagnose oesophagus achalasia, physicians recommend endoscopic evaluation of the oesophagus. However, a low sensitivity still accompanies esophagoscopy on oesophagus achalasia diagnosis. Thus, a quantitative diagnosis system is needed to support physicians diagnose achalasia from the esophagoscopy video. This paper proposes a Serial Multi-scale Network for classifying achalasia images from the esophagoscopy video. The proposed method contains two main components, a Dense-pooling Net, and a Serial Multi-scale Dilated encoder. We construct the Dense-pooling Net using a convolution neural network with dense mixed-pooling connections to extract features. We design the Serial Multi-scale Dilated encoder based on a residual-style dilated encoder. We combine the dilated encoder and spatial attention modules to focus on features we need. We trained and evaluated our method with a dataset that was extracted from several esophagoscopy videos of achalasia patients. The evaluation results reveal a state-of-the-art accuracy of achalasia diagnosis. Furthermore, we developed a real-time computer-aided achalasia diagnosis system with the trained network. In the real-time test, the achalasia diagnosis system can stably output the diagnosis results in only (Formula presented.) seconds. The extended experiments demonstrate that the constructed diagnosis system can diagnose achalasia from esophagoscopy videos.

    DOI: 10.1080/21681163.2022.2159534

    Scopus

    CiNii Research

  145. KST-Mixer: kinematic spatio-temporal data mixer for colon shape estimation 招待有り

    Oda M., Furukawa K., Navab N., Mori K.

    Computer Methods in Biomechanics and Biomedical Engineering: Imaging and Visualization   11 巻 ( 4 ) 頁: 1 - 7   2023年1月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Computer Methods in Biomechanics and Biomedical Engineering: Imaging and Visualization  

    We propose a spatio-temporal mixing kinematic data estimation method to estimate the shape of the colon with deformations caused by colonoscope insertion. Endoscope tracking or a navigation system that navigates physicians to target positions is needed to reduce such complications as organ perforations. Although many previous methods focused to track bronchoscopes and surgical endoscopes, few number of colonoscope tracking methods were proposed because the colon largely deforms during colonoscope insertion. The deformation causes significant tracking errors. Colon deformation should be considered in the tracking process. We propose a colon shape estimation method using a Kinematic Spatio-Temporal data Mixer (KST-Mixer) that can be used during colonoscope insertions to the colon. Kinematic data of a colonoscope and the colon, including positions and directions of their centerlines, are obtained using electromagnetic and depth sensors. The proposed method separates the data into sub-groups along the spatial and temporal axes. The KST-Mixer extracts kinematic features and mix them along the axes multiple times. We evaluated colon shape estimation accuracies in phantom studies. The proposed method achieved 11.92 mm mean Euclidean distance error, the smallest of the previous methods. Statistical analysis indicated that the proposed method significantly reduced the error compared to the previous methods.
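
    For reference, the mean Euclidean distance error reported above (11.92 mm) can be computed as in the following sketch, assuming one-to-one corresponding shape points:

        import numpy as np

        def mean_euclidean_error(estimated, reference):
            """Mean Euclidean distance between corresponding 3D points, both (N, 3) arrays."""
            return float(np.linalg.norm(estimated - reference, axis=1).mean())

        est = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
        ref = np.array([[3.0, 4.0, 0.0], [10.0, 0.0, 5.0]])
        print(mean_euclidean_error(est, ref))  # (5 + 5) / 2 = 5.0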

    DOI: 10.1080/21681163.2022.2151938

    Scopus

    CiNii Research

  146. 人工知能(AI)の最新動向 RSNA2022におけるAI関連セッション 査読有り

    インナービジョン   38 巻 ( 2 ) 頁: 33 - 34   2023年1月

     詳細を見る

    記述言語:日本語  

  147. L-former: A Lightweight Transformer for Realistic Medical Image Generation and its Application to Super-resolution 招待有り 査読有り

    Zheng T., Oda H., Hayashi Y., Nakamura S., Mori M., Takabatake H., Natori H., Oda M., Mori K.

    Progress in Biomedical Optics and Imaging - Proceedings of SPIE   12464 巻   2023年

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)   出版者・発行元:Progress in Biomedical Optics and Imaging - Proceedings of SPIE  

    Medical image analysis approaches such as data augmentation and domain adaption need huge amounts of realistic medical images. Generating realistic medical images by machine learning is a feasible approach. We propose L-former, a lightweight Transformer for realistic medical image generation. L-former can generate more reliable and realistic medical images than recent generative adversarial networks (GANs). Meanwhile, L-former does not consume as high computational cost as conventional Transformer-based generative models. L-former uses Transformers to generate low-resolution feature vectors at shallow layers, and uses convolutional neural networks to generate high-resolution realistic medical images at deep layers. Experimental results showed that L-former outperformed conventional GANs by FID scores 33.79 and 76.85 on two datasets, respectively. We further conducted a downstream study by using the images generated by L-former to perform a super-resolution task. A high PSNR score of 27.87 proved L-former’s ability to generate reliable images for super-resolution and showed its potential for applications in medical diagnosis.

    DOI: 10.1117/12.2653776

    Web of Science

    Scopus

  148. Boundary-aware Feature and Prediction Refinement for Polyp Segmentation 査読有り

    Jie Qiu, Yuichiro Hayashi, Masahiro Oda, Takayuki Kitasaka, Kensaku Mori

    Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization     2022年12月

     詳細を見る

  149. Classification and visual explanation for COVID-19 pneumonia from CT images using triple learning. 招待有り 査読有り

    Kato S, Oda M, Mori K, Shimizu A, Otake Y, Hashimoto M, Akashi T, Hotta K

    Scientific reports   12 巻 ( 1 ) 頁: 20840   2022年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Scientific Reports  

    This study presents a novel framework for classifying and visualizing pneumonia induced by COVID-19 from CT images. Although many image classification methods using deep learning have been proposed, standard classification methods cannot always be used in the medical image field because medical images that belong to the same category vary depending on the progression of the symptoms and the size of the inflamed area. In addition, it is essential that the models used be transparent and explainable, allowing health care providers to trust the models and avoid mistakes. In this study, we propose a classification method using contrastive learning and an attention mechanism. Contrastive learning is able to reduce the distance between images of the same category and generate a better feature space for classification. An attention mechanism is able to emphasize an important area in the image and visualize the location related to classification. Through experiments on two types of classification using three-fold cross-validation, we confirmed that the classification accuracy was significantly improved and that a more detailed visual explanation was achieved in comparison with conventional methods.

    DOI: 10.1038/s41598-022-24936-6

    Scopus

    PubMed

    その他リンク: https://www.nature.com/articles/s41598-022-24936-6
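
    The exact "triple learning" formulation is not reproduced here. As a minimal sketch of a generic NT-Xent-style contrastive loss, which illustrates how contrastive learning pulls embeddings of two views of the same image together and pushes other images apart (a stand-in assumption, not the paper's loss):

    # Generic NT-Xent-style contrastive loss; illustrative stand-in only.
    import torch
    import torch.nn.functional as F

    def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
        """z1, z2: (N, D) embeddings of two augmented views of the same N images."""
        z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit norm
        sim = z @ z.t() / tau                                # cosine similarities
        n = z1.size(0)
        mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
        sim.masked_fill_(mask, float("-inf"))                # exclude self-similarity
        # the positive for sample i is its other view: i + n (or i - n)
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
        return F.cross_entropy(sim, targets)

    if __name__ == "__main__":
        a, b = torch.randn(8, 128), torch.randn(8, 128)
        print(nt_xent_loss(a, b).item())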

  150. Positive-gradient-weighted object activation mapping: visual explanation of object detector towards precise colorectal-polyp localisation 招待有り 査読有り 国際誌

    Itoh, H; Misawa, M; Mori, Y; Kudo, SE; Oda, M; Mori, K

    INTERNATIONAL JOURNAL OF COMPUTER ASSISTED RADIOLOGY AND SURGERY   17 巻 ( 11 ) 頁: 2051 - 2063   2022年11月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:International Journal of Computer Assisted Radiology and Surgery  

    Purpose: Precise polyp detection and localisation are essential for colonoscopy diagnosis. Statistical machine learning with a large-scale data set can contribute to the construction of a computer-aided diagnosis system for the prevention of overlooking and mis-localisation of a polyp in colonoscopy. We propose new visual explaining methods for a well-trained object detector, which achieves fast and accurate polyp detection with a bounding box, towards precise automated polyp localisation. Method: We refine gradient-weighted class activation mapping for more accurate highlighting of important patterns in the processing of a convolutional neural network. Extending the refined mapping into multiscaled processing, we define object activation mapping that highlights important object patterns in an image for a detection task. Finally, we define polyp activation mapping to achieve precise polyp localisation by integrating adaptive local thresholding into object activation mapping. We experimentally evaluate the proposed visual explaining methods with four publicly available databases. Results: The refined mapping visualises important patterns in each convolutional layer more accurately than the original gradient-weighted class activation mapping. The object activation mapping clearly visualises important patterns in colonoscopic images for polyp detection. The polyp activation mapping localises the detected polyps in the ETIS-Larib, CVC-Clinic and Kvasir-SEG databases with mean Dice scores of 0.76, 0.72 and 0.72, respectively. Conclusions: We developed new visual explaining methods for a convolutional neural network by refining and extending gradient-weighted class activation mapping. Experimental results demonstrated the validity of the proposed methods, showing accurate visualisation of important patterns and localisation of polyps in colonoscopic images. The proposed visual explaining methods are useful for interpreting and applying a trained polyp detector.

    DOI: 10.1007/s11548-022-02696-y

    Web of Science

    Scopus

    PubMed
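
    The refined, object, and polyp activation mappings are the paper's own contributions and are not reproduced here. As background, a minimal sketch of plain gradient-weighted class activation mapping (Grad-CAM), the baseline that the paper refines, using torchvision's resnet18 purely as a stand-in model:

    # Plain Grad-CAM sketch (baseline only; the refined/object/polyp mappings are not shown).
    import torch
    import torch.nn.functional as F
    from torchvision.models import resnet18

    def grad_cam(model, layer, image, class_idx):
        acts, grads = {}, {}
        h1 = layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
        h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))
        score = model(image)[0, class_idx]                    # class score for one image
        model.zero_grad()
        score.backward()
        h1.remove()
        h2.remove()
        weights = grads["g"].mean(dim=(2, 3), keepdim=True)   # global-average-pooled gradients
        cam = F.relu((weights * acts["a"]).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
        return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

    if __name__ == "__main__":
        net = resnet18(weights=None).eval()
        img = torch.randn(1, 3, 224, 224, requires_grad=True)
        heatmap = grad_cam(net, net.layer4, img, class_idx=0)
        print(heatmap.shape)                                  # torch.Size([1, 1, 224, 224])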

  151. Pattern Analysis of Substantia Nigra in Parkinson Disease by Fifth-Order Tensor Decomposition and Multi-sequence MRI 査読有り

    Hayato Itoh, Tao Hu, Masahiro Oda, Shinji Saiki, Koji Kamagata, Nobutaka Hattori, Shigeki Aoki, Kensaku Mori

    LNCS13594     2022年10月

     詳細を見る

  152. Enhancing Model Generalization for Substantia Nigra Segmentation Using a Test-time Normalization-Based Method 査読有り

    Tao Hu, Hayato Itoh, Masahiro Oda, Yuichiro Hayashi, Zhongyang Lu, Shinji Saiki, Nobutaka Hattori, Koji Kamagata, Shigeki Aoki, Kanako K. Kumamaru, Toshiaki Akashi, Kensaku Mori

    LNCS13437     頁: 736 - 744   2022年9月

     詳細を見る

  153. Geometric Constraints for Self-supervised Monocular Depth Estimation on Laparoscopic Images with Dual-task Consistency 査読有り

    Wenda Li, Yuichiro Hayashi, Masahiro Oda, Takayuki Kitasaka, Kazunari Misawa, Kensaku Mori

    LNCS13434     頁: 467 - 477   2022年9月

  154. GNN による血管名自動命名手法における臓器特徴の利用に関する検討 招待有り 査読有り

    出口 智也, 林 雄一郎, 北坂 孝幸, 小田 昌宏, 三澤 一成, 森 健策

     第41回日本医用画像工学会大会予稿集     2022年7月

  155. Confocal Laser Scanning Microscope Image Super Resolution for Biomedical Research Based on Two-Stage Generative Adversarial Network 招待有り 査読有り

    Yunheng WU, Masahiro ODA, Yuichiro HAYASHI, Takanori TAKEBE, Shogo NAGATA, Shuntaro KAWAMURA, Kensaku MORI

        頁: 138 - 139   2022年7月

  156. Co-Training for Semi-Supervised CT Segmentation of COVID-19 招待有り 査読有り

    Kai LIU, Masahiro ODA, Tong ZHENG, Yuichiro HAYASHI, Yoshito OTAKE, Masahiro HASHIMOTO, Toshiaki AKASHI, Shigeki AOKI, Kensaku MORI

        頁: 114 - 115   2022年7月

  157. A Novel Centroid-attention based Hybrid Model for Subarachnoid Hemorrhage Classification on Imbalanced Data 招待有り 査読有り

    Zhongyang LU, Masahiro ODA, Yuichiro HAYASHI, Tao Hu, Hayato ITOH, Takeyuki WATADANI, Osamu ABE, Kensaku MORI

        頁: 104 - 105   2022年7月

  158. テンソル分解を用いた黒質緻密部の3 次元パターン表現に関する初期的検討 招待有り 査読有り

    伊東 隼人, 小田 昌宏, 斉木 臣二, 服部 信孝, 鎌形 康司, 青木 茂樹, 森 健策

    第41回日本医用画像工学会大会予稿集     頁: 124 - 125   2022年7月

  159. 境界情報を考慮する損失関数を用いたFCN による腹部 CT 像からの臓器領域抽出に関する研究 招待有り 査読有り

    大野 真奈, 申 忱, Holger R. Roth, 小田 昌宏, 林 雄一郎, 三澤 一成, 森 健策

    第41回日本医用画像工学会大会予稿集     頁: 106 - 107   2022年7月

  160. 大腸外科領域における情報支援内視鏡外科手術システムの開発 招待有り 査読有り

    長谷川 寛, 北口 大地, 小島 成浩, 竹下 修由, 森 健策, 伊藤 雅昭

    日本コンピュータ外科学会誌 第31回日本コンピュータ外科学会大会特集号   24 巻 ( 2 )   2022年6月

  161. nnU‒Netによる肺マイクロCT像からの小葉間隔壁抽出 招待有り 査読有り

    深井 大輔,小田 紘久,椎名 健,林 雄一郎,鄭 通,中村 彰太, 小田 昌宏,森 健策

    日本コンピュータ外科学会誌 第31回日本コンピュータ外科学会大会特集号   24 巻 ( 2 ) 頁: 22(5)-6   2022年6月

  162. コンピュータ外科におけるAIとVisionのドッキング―知能と知覚の結合による新たなコンピュータ外科 招待有り 査読有り

    森 健策

    日本コンピュータ外科学会誌 第30回日本コンピュータ外科学会大会特集号   24 巻 ( 2 )   2022年6月

  163. 腹腔鏡下胃切除術支援のための腹腔鏡映像からの膵臓領域抽出の検討 招待有り 査読有り

    林 雄一郎, 辻 真治, 丘 杰, 小田 昌宏, 三澤 一成,森 健策

    日本コンピュータ外科学会誌 第31回日本コンピュータ外科学会大会特集号   24 巻 ( 2 ) 頁: 22(5)-4   2022年6月

  164. 腹腔鏡映像からの血管領域自動抽出におけるDilated U‒Netの段数が抽出精度に与える影響 招待有り 査読有り

    榎本 圭吾 ,林 雄一郎,北坂 孝幸,小田 昌宏, 三澤 一成, 森 健策

    日本コンピュータ外科学会誌 第31回日本コンピュータ外科学会大会特集号   24 巻 ( 2 ) 頁: 22(5)-3   2022年6月

  165. Laparoscopic image classification based on surgical areas in laparoscopic gastrectomy 招待有り 査読有り

    Y. Hayashi, K. Misawa, K. Mori

    International Journal of Computer Assisted Radiology and Surgery   17 巻   頁: s57 - 58   2022年6月

  166. 3D bronchus anatomical structure measurement on real bronchoscopic images based on depth images estimated by deep neural network 査読有り

    C. Wang, Y. Hayashi, M. Oda, T. Kitasaka, H. Takabatake, M. Mori, H. Honma, K. Mori

    International Journal of Computer Assisted Radiology and Surgery   17 巻   頁: s48 - s49   2022年6月

  167. Extraction of respiratory bronchioles and alveolar ducts from micro-CT volumes with distance-based tubular structure filter 招待有り 査読有り

    T. Shiina, H. Oda, T. Zheng, S. Nakamura, Y. Hayashi, M. Oda, K. Mori

    International Journal of Computer Assisted Radiology and Surgery   17 巻   頁: s117 - s118   2022年6月

  168. 2D+3D registration in deformation-adaptive super-resolution for medical images 招待有り 査読有り

    T. Zheng, H. Oda, T. Hu, Y. Hayashi, S. Nakamura, M. Mori, H. Takabatake, H. Natori, M. Oda, K. Mori

    International Journal of Computer Assisted Radiology and Surgery   17 巻   頁: s105 - s106   2022年6月

  169. Automatic Detection of Bladder Tumors in Narrow-Band Imaging Cystoscopic Images by tiny-YOLO 招待有り 査読有り

    J. Mutaguchi, M. Oda, E. Kashiwagi, J. Inokuchi, K. Mori, M. Eto

    International Journal of Computer Assisted Radiology and Surgery   17 巻   頁: s86 - s86   2022年6月

  170. 30年間の医用画像研究経験を振り返り未来を考える 査読有り

    情報・システムソサイエティ誌   27 巻 ( 1 )   2022年5月

     詳細を見る

    記述言語:日本語  

  171. Automated classification method of COVID-19 cases from chest CT volumes using 2D and 3D hybrid CNN for anisotropic volumes 査読有り

    Masahiro Oda, Tong Zheng, Yuichiro Hayashi, Yoshito Otake, Masahiro Hashimoto, Toshiaki Akashi, Shigeki Aoki, Kensaku Mori

    Proc. SPIE 12033     2022年3月

     詳細を見る

  172. Size-reweighted cascaded fully convolutional network for substantia nigra segmentation from T2 MRI 査読有り

    Tao Hu, Hayato Itoh, Masahiro Oda, Shinji Saiki, Nobutaka Hattori, Koji Kamagata, Shigeki Aoki, Kensaku Mori

    Proc. SPIE 12032     2022年3月

     詳細を見る

  173. Substantia nigra analysis by tensor decomposition of T2-weighted images for Parkinson’s disease diagnosis 招待有り 査読有り

    Hayato Itoh, Masahiro Oda, Shinji Saiki, Nobutaka Hattori, Koji Kamagata, Shigeki Aoki, Kensaku Mori

    Proc. SPIE 12032     2022年3月

     詳細を見る

  174. Self-supervised depth estimation with uncertainty-weight joint loss function based on laparoscopic videos 査読有り

    Wenda Li, Yuichiro Hayashi, Masahiro Oda, Takayuki Kitasaka, Kazunari Misawa, Kensaku Mori

    Proc. SPIE 12034     2022年3月

     詳細を見る

  175. Spatial label smoothing via aleatoric uncertainty for bleeding region segmentation from laparoscopic videos 査読有り

    Jie Qiu, Yuichiro Hayashi, Masahiro Oda, Takayuki Kitasaka, Nobuyoshi Takeshita, Masaaki Ito, Kensaku Mori

    Proc. SPIE 12032     2022年3月

     詳細を見る

  176. Effective hyperparameter optimization with proxy data for multi-organ segmentation 査読有り

    Chen Shen, Holger R. Roth, Vishwesh Nath, Yuichiro Hayashi, Masahiro Oda, Kazunari Misawa, Kensaku Mori

    Proc. SPIE 12032     2022年3月

     詳細を見る

  177. Coarse-to-fine cascade framework for cross-modality super-resolution on clinical/micro CT dataset 査読有り

    Tong Zheng, Hirohisa Oda, Yuichiro Hayashi, Shota Nakamura, Masaki Mori, Hirotsugu Takabatake, Hiroshi Natori, Masahiro Oda, Kensaku Mori

    Proc. SPIE 12032     2022年3月

     詳細を見る

  178. Bronchial orifice tracking-based branch level estimation for bronchoscopic navigation 査読有り

    Cheng Wang, Yuichiro Hayashi, Masahiro Oda, Takayuki Kitasaka, Hirotsugu Takabatake, Masaki Mori, Hirotoshi Honma, Hiroshi Natori, Kensaku Mori

    Proc. SPIE 12034     2022年3月

     詳細を見る

  179. Taking full advantage of uncertainty estimation: an uncertainty-assisted two-stage pipeline for multi-organ segmentation 査読有り

    Zhou Zheng, Masahiro Oda, Kazunari Misawa, Kensaku Mori

    Proc. SPIE 12033     2022年3月

     詳細を見る

  180. Multiclass prediction for improving intestine segmentation on non-fecal-tagged CT volume 査読有り

    Hirohisa Oda, Yuichiro Hayashi, Takayuki Kitasaka, Aitaro Takimoto, Akinari Hinoki, Hiroo Uchida, Kojiro Suzuki, Masahiro Oda, Kensaku Mori

    Proc. SPIE 12033     2022年3月

     詳細を見る

  181. 人工知能(AI)最新動向 学会発表を中心に(2)-Digital Poster発表を中心に

    森 健策

    インナービジョン   37 巻 ( 2 ) 頁: 30 - 31   2022年2月

  182. Aorta-aware GAN for non-contrast to artery contrasted CT translation and its application to abdominal aortic aneurysm detection 招待有り 査読有り 国際誌

    Hu, T; Oda, M; Hayashi, Y; Lu, ZY; Kumamaru, KK; Akashi, T; Aoki, S; Mori, K

    INTERNATIONAL JOURNAL OF COMPUTER ASSISTED RADIOLOGY AND SURGERY   17 巻 ( 1 ) 頁: 97 - 105   2022年1月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:International Journal of Computer Assisted Radiology and Surgery  

    Purpose: Artery contrasted computed tomography (CT) enables accurate observation of the arteries and surrounding structures and is thus widely used for the diagnosis of diseases such as aneurysm. To avoid the complications caused by contrast agent, this paper proposes an aorta-aware deep learning method to synthesize an artery contrasted CT volume from a non-contrast CT volume. Methods: By introducing auxiliary multi-resolution segmentation tasks in the generator, we force the proposed network to focus on the regions of the aorta and the other vascular structures. The segmentation results produced by the auxiliary tasks were then used to extract the aorta. The detection of abnormal CT images containing aneurysm was implemented by estimating the maximum axial radius of the aorta. Results: In comparison with the baseline models, the proposed network with auxiliary tasks achieved better performance with higher peak signal-to-noise ratio values. In aorta regions, which are supposed to be the main region of interest in many clinical scenarios, the average improvement was up to 0.33 dB. Using the synthesized artery contrasted CT, the F-score of aneurysm detection reached 0.58 at the slice level and 0.85 at the case level. Conclusion: This study addresses the problem of non-contrast to artery contrasted CT modality translation by employing a deep learning model with aorta awareness. The auxiliary tasks help the proposed model focus on aorta regions and synthesize results with clearer boundaries. Additionally, the synthesized artery contrasted CT shows potential in identifying slices with abdominal aortic aneurysm, and may provide an option for patients with contrast agent allergy.

    DOI: 10.1007/s11548-021-02492-0

    Web of Science

    Scopus

    PubMed
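
    As a minimal sketch of the general idea of attaching auxiliary segmentation supervision to an image-translation generator so that it stays "aorta-aware" (the loss terms, weights, and shapes below are assumptions, not the published formulation):

    # Hypothetical combined generator objective: synthesis + auxiliary segmentation + adversarial.
    import torch
    import torch.nn.functional as F

    def aorta_aware_generator_loss(fake_ct, real_ct, seg_logits_list, seg_labels_list,
                                   adv_logits, lambda_seg=1.0, lambda_adv=0.1):
        """fake_ct/real_ct: (N, 1, H, W); seg_* lists hold multi-resolution outputs and labels."""
        recon = F.l1_loss(fake_ct, real_ct)                       # synthesis fidelity
        seg = sum(F.cross_entropy(logits, labels)                 # auxiliary aorta/vessel masks
                  for logits, labels in zip(seg_logits_list, seg_labels_list))
        adv = F.binary_cross_entropy_with_logits(                 # fool the discriminator
            adv_logits, torch.ones_like(adv_logits))
        return recon + lambda_seg * seg + lambda_adv * adv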

  183. 医用画像処理による人体構造の解析とその診断治療への応用 ~ 30年間の医用画像研究経験を振り返り未来を考える ~

    森 健策

    電子情報通信学会技術研究報告(MI), MI2021-74   121 巻 ( 347 ) 頁: 127 - 132   2022年1月

  184. 胸部CT像からのCOVID-19に関連した所見文の自動生成の検討

    岡崎 真治, 林 雄一郎, 小田 昌宏, 橋本 正弘, 陣崎 雅弘, 明石 敏昭, 青木 茂樹, 森 健策

    電子情報通信学会技術研究報告(MI), MI2021-57   121 巻 ( 347 ) 頁: 49 - 54   2022年1月

  185. 高精度な大腸ポリープ検出に向けた物体検出モデルの解析

    伊東 隼人, 三澤 将史, 森 悠一, 工藤 進英, 小田 昌宏, 森 健策

    電子情報通信学会技術研究報告(MI), MI2021-63   121 巻 ( 347 ) 頁: 86 - 87   2022年1月

  186. 大規模腹腔鏡動画像データベース構築に向けたオンラインアノテーションツールの開発

    伊東 隼人, 潘 冬平, 小澤 卓也, 小田 昌宏, 竹下修由, 伊藤 雅昭, 森 健策

    電子情報通信学会技術研究報告(MI), MI2021-65   121 巻 ( 347 ) 頁: 86 - 87   2022年1月

  187. 深層学習に基づくマウスのクラニアルウィンドウ画像における血管セグメンテーションの考察 招待有り 査読有り

    呉 運恒, 小田 昌宏, 林 雄一郎, 武部 貴則, 森 健策

    電子情報通信学会技術研究報告(MI)   121 巻 ( 347 ) 頁: 174 - 179   2022年1月

  188. Performance improvement of weakly supervised fully convolutional networks by skip connections for brain structure segmentation 査読有り

    Takaaki Sugino, Holger R. Roth, Masahiro Oda, Taichi Kin, Nobuhito Saito, Yoshikazu Nakajima, Kensaku Mori

    Medical Physics   48 巻 ( 11 ) 頁: 7215 - 7227   2021年11月

     詳細を見る

    担当区分:最終著者  

    DOI: 10.1002/mp.15192

  189. 胸腔鏡下手術におけるパノラマビジョンリングの基礎開発 査読有り

    北坂 孝幸,林 雄一郎,中村 彰太,芳川 豊史,森 健策,中井 剛,中井 康博

    日本コンピュータ外科学会誌 第30回日本コンピュータ外科学会大会特集号   23 巻 ( 4 ) 頁: 318   2021年11月

  190. 自己教師あり学習による腹腔鏡動画像の手術器具セグメンテーション 査読有り

    丘 傑,林 雄一郎,小澤 卓也,小田 昌宏, 北坂 孝幸,三澤 一成, 竹下 修由, 伊藤 雅昭, 森 健策

    日本コンピュータ外科学会誌 第30回日本コンピュータ外科学会大会特集号   23 巻 ( 4 ) 頁: 217 - 218   2021年11月

  191. 距離変換と管状構造フィルタによる肺マイクロCT画像からの細気管支・肺胞管抽出手法の検討 査読有り

    椎名 健, 小田 紘久, 鄭 通, 中村 彰太, 林 雄一郎, 小田 昌宏, 森 健策

    日本コンピュータ外科学会誌 第30回日本コンピュータ外科学会大会特集号   23 巻 ( 4 ) 頁: 219 - 220   2021年11月

  192. 大規模腹腔鏡動画像データベース構築に向けたアノテーションツール開発 査読有り

    伊東 隼人,潘 冬平,小澤 卓也,小田 昌宏, 竹下 修由, 伊藤 雅昭, 森 健策

    日本コンピュータ外科学会誌 第30回日本コンピュータ外科学会大会特集号   23 巻 ( 4 ) 頁: 243 - 244   2021年11月

  193. 腹腔鏡下胃切除術支援のための腹腔鏡映像からの術中操作の予測に関する初期検討 査読有り

    林 雄一郎, 三澤 一成, 森 健策

    日本コンピュータ外科学会誌 第30回日本コンピュータ外科学会大会特集号   23 巻 ( 4 ) 頁: 248   2021年11月

  194. CT像の非等方性を考慮した3D CNNによるCOVID‒19症例の自動分類手法 査読有り

    小田 昌宏, 鄭 通, 林 雄一郎, 大竹 義人, 橋本 正弘, 明石 敏昭, 青木 茂樹, 森 健策

    日本コンピュータ外科学会誌 第30回日本コンピュータ外科学会大会特集号   23 巻 ( 4 ) 頁: 265 - 266   2021年11月

  195. MRI 画像からの大脳基底核のAIセグメンテーション―Skip Connection による抽出精度向上の検討 査読有り

    杉野貴明,金 太一,斎藤 季,川瀬 利弘,小野木 真哉,齊藤 延人,森 健策, 中島 義和

    日本コンピュータ外科学会誌 第30回日本コンピュータ外科学会大会特集号   23 巻 ( 4 ) 頁: 286   2021年11月

  196. CT 像からの腸管領域抽出改善に関する基礎的検討 査読有り

    小田紘久,林 雄一郎,北坂 孝幸,滝本 愛太朗,檜 顕成,内田 広夫,鈴木 耕次郎, 小田 昌宏, 森 健策

    日本コンピュータ外科学会誌 第30回日本コンピュータ外科学会大会特集号   23 巻 ( 4 ) 頁: 287 - 288   2021年11月

  197. 複数の畳み込み範囲を持つグラフニューラルネットワークによる血管名自動命名手法の検討 査読有り

    出口 智也,林 雄一郎,北坂 孝幸,小田 昌宏, 三澤 一成,森 健策

    日本コンピュータ外科学会誌 第30回日本コンピュータ外科学会大会特集号   23 巻 ( 4 ) 頁: 299 - 300   2021年11月

  198. 腹腔鏡下胃切除術の手術ナビゲーションにおける位置合わせ誤差の補正に関する検討 査読有り

    林 雄一郎, 三澤 一成,森 健策

    日本コンピュータ外科学会誌 第30回日本コンピュータ外科学会大会特集号   23 巻 ( 4 ) 頁: 301 - 302   2021年11月

  199. Binary polyp-size classification based on deep-learned spatial information 招待有り 査読有り

    Itoh, H; Oda, M; Jiang, K; Mori, Y; Misawa, M; Kudo, SE; Imai, K; Ito, S; Hotta, K; Mori, K

    INTERNATIONAL JOURNAL OF COMPUTER ASSISTED RADIOLOGY AND SURGERY   16 巻 ( 10 ) 頁: 1817 - 1828   2021年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:International Journal of Computer Assisted Radiology and Surgery  

    Purpose: The size information of detected polyps is an essential factor for diagnosis in colon cancer screening. For example, adenomas and sessile serrated polyps that are ≥ 10 mm are considered advanced, and shorter surveillance intervals are recommended for smaller polyps. However, sometimes the subjective estimations of endoscopists are incorrect and overestimate the sizes. To circumvent these difficulties, we developed a method for automatic binary polyp-size classification between two polyp sizes: from 1 to 9 mm and ≥ 10 mm. Method: We introduce a binary polyp-size classification method that estimates a polyp’s three-dimensional spatial information. This estimation comprises polyp localisation and depth estimation. The combination of location and depth information expresses a polyp’s three-dimensional shape. In experiments, we quantitatively and qualitatively evaluate the proposed method using 787 polyps of both protruded and flat types. Results: The proposed method’s best classification accuracy outperformed the fine-tuned state-of-the-art image classification methods. Post-processing by sequential voting increased the classification accuracy and achieved accuracies of 0.81 and 0.88 for polyps ranging from 1 to 9 mm and those ≥ 10 mm, respectively. Qualitative analysis revealed the importance of polyp localisation even in polyp-size classification. Conclusions: We developed a binary polyp-size classification method by utilising the estimated three-dimensional shape of a polyp. Experiments demonstrated accurate classification for both protruded- and flat-type polyps, even though flat-type polyps have an ambiguous boundary between the polyp and the colon wall.

    DOI: 10.1007/s11548-021-02477-z

    Web of Science

    Scopus

    PubMed

    その他リンク: https://link.springer.com/article/10.1007/s11548-021-02477-z/fulltext.html

  200. 胸部 CT 像からの COVID-19 症例の自動分類手法

    小田 昌宏, 鄭 通, 林 雄一郎, 大竹 義人, 橋本 正弘, 明石 敏昭, 森 健策

    第40回日本医用画像工学会大会予稿集     頁: 65 - 67   2021年10月

  201. VR Organ Puzzle: A Virtual Reality Application for the Education of Human Anatomy

    Siqi LI, Yuichiro HAYASHI,Michitaka FUJIWARA, Masahiro ODA, Kensaku MORI

        頁: 489 - 491   2021年10月

  202. Non-contrast to Artery Contrast CT Translation Via Representation-Aligned Generative Model

    Tao HU, Masahiro ODA, Yuichiro HAYASHI, Zhongyang LU, Toshiaki AKASHI, Shigeki AOKI, Kensaku MORI

        2021年10月

     詳細を見る

    担当区分:最終著者  

  203. Clinical CT Super-resolution Utilizing Registered Clinical – Micro CT Database

    Tong ZHENG, Hirohisa ODA, Yuichiro HAYASHI, Shota NAKAMURA, Masaki MORI, Hirotsugu TAKABATAKE, Hiroshi NATORI, Masahiro ODA, Kensaku MORI

        頁: 394 - 400   2021年10月

  204. 距離マップを利用した肺マイクロ CT 像からの肺胞抽出

    椎名 健, 小田 紘久, 鄭 通, 中村 彰太, 林 雄一郎, 小田 昌宏, 森 健策

    第40回日本医用画像工学会大会予稿集     頁: 374 - 377   2021年10月

  205. ピットパターン特徴量の解析に向けた超拡大内視鏡画像の再構成法に関する初期的検討

    伊東 隼人, 小田 昌宏, 森 悠一, 三澤 将史, 工藤 進英, 森 健策

    第40回日本医用画像工学会大会予稿集     頁: 309 - 317   2021年10月

  206. 深層学習とディジタルファントムを用いた骨陰影低減技術の開発

    五島 風汰, 田中 利恵, 小田 昌宏, 森 健策, 高田 宗尚, 田村 昌也, 松本 勲

    第40回日本医用画像工学会大会予稿集     頁: 270 - 272   2021年10月

  207. 3D Kidney Tumor Semantic Segmentation using Cascaded Convolutional Networks 招待有り 査読有り

    第40回日本医用画像工学会大会予稿集     頁: 243 - 248   2021年10月

  208. Attention 機構を導入したグラフニューラルネットワークによる,

    出口 智也, 林 雄一郎, 北坂 孝幸, 小田 昌宏, 三澤 一成, 森 健策

    第40回日本医用画像工学会大会予稿集     頁: 239 - 241   2021年10月

  209. 深度情報を利用した FCN による腹腔鏡映像からの血管領域自動抽出の検討

    榎本 圭吾, 林 雄一郎, 北坂 孝幸, 小田 昌宏, 伊藤 雅昭, 竹下 修由, 三澤 一成, 森 健策

    第40回日本医用画像工学会大会予稿集     頁: 235 - 241   2021年10月

  210. Vascular Structure Segmentation in Stereomicroscope Image

    Yunheng WU, Masahiro ODA, Yuichiro HAYASHI, Takanori TAKEBE, Kensaku MORI

        頁: 229 - 234   2021年10月

  211. Synthesized Perforation Detection from Endoscopy Videos Using Model Training with Synthesized Images by GAN

    Kai Jiang, Hayato Itoh, Masahiro Oda, Taishi Okumura, Yuichi Mori, Masashi Misawa, Takemasa Hayashi, Shin-Ei Kudo, Kensaku Mori

        頁: 199 - 201   2021年10月

  212. Improving Classification Accuracy of Hands' Bone Marrow Edema by Transfer Learning

    Dongping PAN, Masahiro ODA, Kou KATAYAMA, Takanobu Okubo, Kensaku MORI

        頁: 150 - 157   2021年10月

  213. 腸閉塞・イレウスの病変箇所特定における診断支援システムの精度評価

    小田 紘久, 林 雄一郎, 北坂 孝幸, 玉田 雄大, 滝本 愛太朗, 檜 顕成, 内田 広夫, 鈴木 耕次郎, 小田 昌宏, 森 健策

    第40回日本医用画像工学会大会予稿集     頁: 129 - 131   2021年10月

  214. Self-attention Class Balanced DenseNet_LSTM framework for Subarachnoid Hemorrhage CT image Classification on Extremely Imbalanced Brain CT Dataset

    第40回日本医用画像工学会大会予稿集     頁: 69 - 75   2021年10月

  215. VR Organ Puzzle: A Virtual Reality Application for the Education of Human Anatomy

    Siqi LI, Yuichiro HAYASHI,Michitaka FUJIWARA, Masahiro ODA, Kensaku MORI

        頁: 493 - 498   2021年10月

  216. Multi-task Federated Learning for Heterogeneous Pancreas Segmentation 招待有り 査読有り 国際共著

    Chen Shen, Pochuan Wang, Holger R. Roth, Dong Yang, Daguang Xu, Masahiro Oda, Weichung Wang, Chiou-Shann Fuh, Po-Ting Chen, Kao-Lang Liu, Wei-Chih Liao, Kensaku Mori

    LNCS12969     頁: 101 - 110   2021年9月

     詳細を見る

  217. Super-Resolution by Latent Space Exploration: Training with Poorly-Aligned Clinical and Micro CT Image Dataset 招待有り 査読有り

    Tong Zheng, Hirohisa Oda, Yuichiro Hayashi, Shota Nakamura, Masahiro Oda, Kensaku Mori

    LNCS12965     頁: 24 - 33   2021年9月

  218. Intestine segmentation with small computational cost for diagnosis assistance of ileus and intestinal obstruction 招待有り 査読有り

    Hirohisa Oda, Yuichiro Hayashi, Takayuki Kitasaka, Aitaro Takimoto, Akinari Hinoki, Hiroo Uchida, Kojiro Suzuki, Masahiro Oda, Kensaku Mori

    LNCS 12969     頁: 3 - 12   2021年9月

     詳細を見る

  219. 泌尿器科領域における画像処理

    森 健策

    泌尿器科   14 巻 ( 2 ) 頁: 213 - 220   2021年8月

  220. Can artificial intelligence help to detect dysplasia in patients with ulcerative colitis? 招待有り 国際誌

    Maeda Yasuharu, Kudo Shin-ei, Ogata Noriyuki, Misawa Masashi, Mori Yuichi, Mori Kensaku, Ohtsuka Kazuo

    ENDOSCOPY   53 巻 ( 07 ) 頁: E273 - E274   2021年7月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Endoscopy  

    DOI: 10.1055/a-1261-2944

    Web of Science

    Scopus

    PubMed

  221. Micro-CT-assisted cross-modality super-resolution of clinical CT: utilization of synthesized training dataset 査読有り

    T. Zheng, H. Oda, S. Nakamura, M. Mori, H. Takabatake, H. Natori, M. Oda, K. Mori

    International Journal of Computer Assisted Radiology and Surgery   16 巻 ( sup.1 ) 頁: 12 - 14   2021年6月

  222. Unsupervised colonoscopic depth estimation by domain translations with a Lambertian-reflection keeping auxiliary task 招待有り 査読有り

    Itoh, H; Oda, M; Mori, Y; Misawa, M; Kudo, SE; Imai, K; Ito, S; Hotta, K; Takabatake, H; Mori, M; Natori, H; Mori, K

    INTERNATIONAL JOURNAL OF COMPUTER ASSISTED RADIOLOGY AND SURGERY   16 巻 ( 6 ) 頁: 989 - 1001   2021年6月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:International Journal of Computer Assisted Radiology and Surgery  

    Purpose: A three-dimensional (3D) structure extraction technique viewed from a two-dimensional image is essential for the development of a computer-aided diagnosis (CAD) system for colonoscopy. However, a straightforward application of existing depth-estimation methods to colonoscopic images is impossible or inappropriate due to several limitations of colonoscopes. In particular, the absence of ground-truth depth for colonoscopic images hinders the application of supervised machine learning methods. To circumvent these difficulties, we developed an unsupervised and accurate depth-estimation method. Method: We propose a novel unsupervised depth-estimation method by introducing a Lambertian-reflection model as an auxiliary task to domain translation between real and virtual colonoscopic images. This auxiliary task contributes to accurate depth estimation by maintaining the Lambertian-reflection assumption. In our experiments, we qualitatively evaluate the proposed method by comparing it with state-of-the-art unsupervised methods. Furthermore, we present two quantitative evaluations of the proposed method using a measuring device, as well as a new 3D reconstruction technique and measured polyp sizes. Results: Our proposed method achieved accurate depth estimation with an average estimation error of less than 1 mm for regions close to the colonoscope in both of two types of quantitative evaluations. Qualitative evaluation showed that the introduced auxiliary task reduces the effects of specular reflections and colon wall textures on depth estimation and our proposed method achieved smooth depth estimation without noise, thus validating the proposed method. Conclusions: We developed an accurate depth-estimation method with a new type of unsupervised domain translation with the auxiliary task. This method is useful for analysis of colonoscopic images and for the development of a CAD system since it can extract accurate 3D information.

    DOI: 10.1007/s11548-021-02398-x

    Web of Science

    Scopus

    PubMed

  223. AIによる内視鏡外科手術支援と開発基盤としての手術動画データベース構築 招待有り 査読有り

    竹下 修由, 森 健策, 伊藤 雅昭

    消化器外科   44 巻 ( 7 ) 頁: 1159 - 1166   2021年6月

  224. Three-dimensional surgical plan printing for assisting liver surgery, 査読有り

    Y. Hayashi, T. Igami, Y. Nakamura, K. Mori

    International Journal of Computer Assisted Radiology and Surgery   16 巻 ( sup.1 ) 頁: 104 - 106   2021年6月

  225. COVID-19 lung infection and normal region segmentation from CT volumes using FCN with local and global spatial feature encoder 査読有り

    M. Oda, Y. Hayashi, Y. Otake, M. Hashimoto, T. Akashi, K. Mori

    International Journal of Computer Assisted Radiology and Surgery   16 巻 ( sup.1 ) 頁: 19 - 20   2021年6月

  226. Experimental evaluation of loss functions in YOLO-v3 training for the perforation detection and localization in colonoscopic videos 査読有り

    K. Jiang, H. Itoh, M. Oda, T. Okumura, Y. Mori, M. Misawa, T. Hayashi, S. E. Kudo, K. Mori

    International Journal of Computer Assisted Radiology and Surgery   16 巻 ( sup.1 ) 頁: 74 - 75   2021年6月

  227. Blood vessel regions segmentation from laparoscopic videos using fully convolutional networks with multi field of view input 査読有り

    K. Mori, S. Morimitsu, S. Yamamoto, T. Ozawa, T. Kitasaka, Y. Hayashi, M. Oda, M. Ito, N. Takeshita, K. Misawa

    International Journal of Computer Assisted Radiology and Surgery   16 巻 ( sup.1 ) 頁: 56 - 57   2021年6月

  228. Intestine segmentation combining Watershed transformation and machine learning-based distance map estimation 査読有り

    H. Oda, Y. Hayashi, T. Kitasaka, Y. Tamada, A. Takimoto, A. Hinoki, H. Uchida, K. Suzuki, H. Itoh, M. Oda, K. Mori

    International Journal of Computer Assisted Radiology and Surgery   16 巻 ( sup.1 ) 頁: 89 - 90   2021年6月

  229. Real-time deformation simulation of hollow organs based on XPBD with small time steps and air mesh for surgical simulation 査読有り

    S. Li, Y. Hayashi, M. Oda, K. Misawa, K. Mori

    International Journal of Computer Assisted Radiology and Surgery   16 巻 ( sup.1 ) 頁: 21 - 25   2021年6月

  230. Spatial Information Considered Module based on Attention Mechanism for Self-Supervised Depth Estimation from Laparoscopic Image Pairs 査読有り

    W. Li, Y. Hayashi, M. Oda, T. Kitasaka, K. Misawa, K. Mori

    International Journal of Computer Assisted Radiology and Surgery   16 巻 ( sup.1 ) 頁: 45 - 46   2021年6月

  231. 機械学習によるCOVID-19症例CT画像の診断支援 査読有り

    森 健策

    映像情報メディア学会誌   75 巻 ( 3 ) 頁: 326 - 329   2021年5月

     詳細を見る

    担当区分:筆頭著者  

  232. Development of a computer-aided detection system for colonoscopy and a publicly accessible large colonoscopy video database (with video) 招待有り 査読有り 国際誌

    Misawa M., Kudo S.e., Mori Y., Hotta K., Ohtsuka K., Matsuda T., Saito S., Kudo T., Baba T., Ishida F., Itoh H., Oda M., Mori K.

    Gastrointestinal Endoscopy   93 巻 ( 4 ) 頁: 960 - 967   2021年4月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Gastrointestinal Endoscopy  

    Background and Aims: Artificial intelligence (AI)–assisted polyp detection systems for colonoscopic use are currently attracting attention because they may reduce the possibility of missed adenomas. However, few systems have the necessary regulatory approval for use in clinical practice. We aimed to develop an AI-assisted polyp detection system and to validate its performance using a large colonoscopy video database designed to be publicly accessible. Methods: To develop the deep learning–based AI system, 56,668 independent colonoscopy images were obtained from 5 centers for use as training images. To validate the trained AI system, consecutive colonoscopy videos taken at a university hospital between October 2018 and January 2019 were searched to construct a database containing polyps with unbiased variance. All images were annotated by endoscopists according to the presence or absence of polyps and the polyps’ locations with bounding boxes. Results: A total of 1405 videos acquired during the study period were identified for the validation database, 797 of which contained at least 1 polyp. Of these, 100 videos containing 100 independent polyps and 13 videos negative for polyps were randomly extracted, resulting in 152,560 frames (49,799 positive frames and 102,761 negative frames) for the database. The AI showed 90.5% sensitivity and 93.7% specificity for frame-based analysis. The per-polyp sensitivities for all, diminutive, protruded, and flat polyps were 98.0%, 98.3%, 98.5%, and 97.0%, respectively. Conclusions: Our trained AI system was validated with a new large publicly accessible colonoscopy database and could identify colorectal lesions with high sensitivity and specificity. (Clinical trial registration number: UMIN 000037064.)

    DOI: 10.1016/j.gie.2020.07.060

    Scopus

    PubMed

  233. Contrastive Learningを用いた肺野CT画像からCOVID-19の自動判定 招待有り 査読有り

    加藤 聡太, 堀田 一弘, 小田 昌宏, 森 健策, 大竹 義人, 橋本 正弘, 明石 敏昭

    電子情報通信学会技術研究報告(MI)   120 巻 ( 432 ) 頁: 82 - 86   2021年3月

  234. カスケードCNNによる腹腔鏡動画からの出血領域自動抽出 招待有り 査読有り

    山本 翔太, 林 雄一郎, 盛満 慎太郎, 北坂 孝幸, 小田 昌宏, 竹下 修由, 伊藤 雅昭, 森 健策

    電子情報通信学会技術研究報告(MI)   120 巻 ( 431 ) 頁: 172 - 175   2021年3月

  235. Spectral-based Convolutional Graph Neural Networksを用いた腹部動脈領域の血管名自動命名に関する研究 招待有り 査読有り

    日比 裕太, 林 雄一郎, 北坂 孝幸, 伊東 隼人, 小田 昌宏, 三澤 一成, 森 健策

    電子情報通信学会技術研究報告(MI)   120 巻 ( 431 ) 頁: 176 - 181   2021年3月

  236. Artificial Intelligence System to Determine Risk of T1 Colorectal Cancer Metastasis to Lymph Node. 招待有り 査読有り 国際誌

    Kudo SE, Ichimasa K, Villard B, Mori Y, Misawa M, Saito S, Hotta K, Saito Y, Matsuda T, Yamada K, Mitani T, Ohtsuka K, Chino A, Ide D, Imai K, Kishida Y, Nakamura K, Saiki Y, Tanaka M, Hoteya S, Yamashita S, Kinugasa Y, Fukuda M, Kudo T, Miyachi H, Ishida F, Itoh H, Oda M, Mori K

    Gastroenterology   160 巻 ( 4 ) 頁: 1075 - 1084.e2   2021年3月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Gastroenterology  

    Background & Aims: In accordance with guidelines, most patients with T1 colorectal cancers (CRC) undergo surgical resection with lymph node dissection, despite the low incidence (∼10%) of metastasis to lymph nodes. To reduce unnecessary surgical resections, we used artificial intelligence to build a model to identify T1 colorectal tumors at risk for metastasis to lymph node and validated the model in a separate set of patients. Methods: We collected data from 3134 patients with T1 CRC treated at 6 hospitals in Japan from April 1997 through September 2017 (training cohort). We developed a machine-learning artificial neural network (ANN) using data on patients’ age and sex, as well as tumor size, location, morphology, lymphatic and vascular invasion, and histologic grade. We then conducted the external validation on the ANN model using independent 939 patients at another hospital during the same period (validation cohort). We calculated areas under the receiver operator characteristics curves (AUCs) for the ability of the model and US guidelines to identify patients with lymph node metastases. Results: Lymph node metastases were found in 319 (10.2%) of 3134 patients in the training cohort and 79 (8.4%) of /939 patients in the validation cohort. In the validation cohort, the ANN model identified patients with lymph node metastases with an AUC of 0.83, whereas the guidelines identified patients with lymph node metastases with an AUC of 0.73 (P < .001). When the analysis was limited to patients with initial endoscopic resection (n = 517), the ANN model identified patients with lymph node metastases with an AUC of 0.84 and the guidelines identified these patients with an AUC of 0.77 (P = .005). Conclusions: The ANN model outperformed guidelines in identifying patients with T1 CRCs who had lymph node metastases. This model might be used to determine which patients require additional surgery after endoscopic resection of T1 CRCs. UMIN Clinical Trials Registry no: UMIN000038609

    DOI: 10.1053/j.gastro.2020.09.027

    Scopus

    PubMed

  237. Label cleaning and propagation for improved segmentation performance using fully convolutional networks 招待有り 査読有り 国際誌

    Sugino, T; Suzuki, Y; Kin, T; Saito, N; Onogi, S; Kawase, T; Mori, K; Nakajima, Y

    INTERNATIONAL JOURNAL OF COMPUTER ASSISTED RADIOLOGY AND SURGERY   16 巻 ( 3 ) 頁: 349 - 361   2021年3月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:International Journal of Computer Assisted Radiology and Surgery  

    Purpose: In recent years, fully convolutional networks (FCNs) have been applied to various medical image segmentation tasks. However, it is difficult to generate a large amount of high-quality annotation data to train FCNs for medical image segmentation. Thus, it is desirable to achieve high segmentation performance even from incomplete training data. We aim to evaluate the performance of FCNs in cleaning noise and interpolating labels from noisy and sparsely given label images. Methods: To evaluate the label cleaning and propagation performance of FCNs, we used 2D and 3D FCNs to perform volumetric brain segmentation from magnetic resonance image volumes, based on network training on incomplete training datasets from noisy and sparse annotation. Results: The experimental results using pseudo-incomplete training data showed that both 2D and 3D FCNs could provide improved segmentation results from the incomplete training data, especially by using three orthogonal annotation images for network training. Conclusion: This paper presented a validation of label cleaning and propagation based on FCNs. FCNs might have the potential to achieve improved segmentation performance even from sparse annotation data including possible noise from manual annotation, which can be an important clue toward more efficient annotation.

    DOI: 10.1007/s11548-021-02312-5

    Web of Science

    Scopus

    PubMed

  238. Unsupervised segmentation of COVID-19 infected lung clinical CT volumes using image inpainting and representation learning 査読有り

    Tong Zheng, Masahiro Oda, Chenglong Wang, Takayasu Moriya, Yuichiro Hayashi, Yoshito Otake, Masahiro Hashimoto, Toshiaki Akashi, Masaki Mori, Hirotsugu Takabatake, Hiroshi Natori, Kensaku Mori

    Proc. SPIE 11596, Medical Imaging, 2021: Image Processing     頁: 115963F-1 - 6   2021年2月

     詳細を見る

    担当区分:最終著者, 責任著者   記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)  

    DOI: 10.1117/12.2580641

  239. Extremely imbalanced subarachnoid hemorrhage detection based on DenseNet-LSTM network with class-balanced loss and transfer learning 査読有り

    Zhongyang Lu, Masahiro Oda, Yuichiro Hayashi, Tao Hu, Hayato Itoh, Takeyuki Watadani, Osamu Abe, Masahiro Hashimoto, Masahiro Jinzaki, Kensaku Mori

    Proceedings Volume 11597, Medical Imaging 2021: Computer-Aided Diagnosis     頁: 115971Z   2021年2月

     詳細を見る

    担当区分:最終著者, 責任著者   記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)  

    DOI: 10.1117/12.2582088

  240. Single-shot three-dimensional reconstruction for colonoscopic image analysis 査読有り

    Hayato Itoh, Masahiro Oda, Yuichi Mori, Masashi Misawa, Shin-ei Kudo, Kinichi Hotta, Hirotsugu Takabatake, Masaki Mori, Hiroshi Natori, Kensaku Mori

    Proc. SPIE 11598, Medical Imaging 2021: Image-Guided Procedures, Robotic Interventions, and Modeling     頁: 115980E-1 - 6   2021年2月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)  

    DOI: 10.1117/12.2582660

  241. Intestinal region reconstruction of ileus cases from 3D CT images based on graphical representation and its visualization 査読有り

    Hirohisa Oda, Yuichiro Hayashi, Takayuki Kitasaka, Yudai Tamada, Aitaro Takimoto, Akinari Hinoki, Hiroo Uchida, Kojiro Suzuki, Hayato Itoh, Masahiro Oda, Kensaku Mori

    Proc.SPIE 11597, Medical Imaging 2021: Computer-Aided Diagnosis     2021年2月

     詳細を見る

    担当区分:最終著者, 責任著者   記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)  

    DOI: 10.1117/12.2581261

  242. Lung infection and normal region segmentation from CT volumes of COVID-19 cases 査読有り

    Masahiro Oda, Yuichiro Hayashi, Yoshito Otake, Masahiro Hashimoto, Toshiaki Akashi, Kensaku Mori

    Proc. SPIE 11597, Medical Imaging 2021: Computer-Aided Diagnosis     頁: 115972X-1 - 6   2021年2月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)  

    DOI: 10.1117/12.2582066

  243. Extraction of lung and lesion regions from COVID-19 CT volumes using 3D fully convolutional networks 査読有り

    Yuichiro Hayashi, Masahiro Oda, Chen Shen, Masahiro Hashimoto, Yoshito Otake, Toshiaki Akashi, Kensaku Mori

    Proc.SPIE 11597, Medical Imaging 2021: Computer-Aided Diagnosis     頁: 115972A-1 - 6   2021年2月

     詳細を見る

    担当区分:最終著者, 責任著者   記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)  

    DOI: 10.1117/12.2581818

  244. 人工知能(AI)最新動向 AI研究から見たRSNA-AIの広がりを感じる大会 招待有り

    森 健策

    インナービジョン   36 巻 ( 2 ) 頁: 25 - 26   2021年2月

     詳細を見る

    担当区分:筆頭著者   記述言語:日本語   掲載種別:研究論文(その他学術会議資料等)  

  245. CT画像XAI技術で「新型コロナウィルス肺炎」を85%精度で識別

    小田 昌宏, 森 健策

    C-press   120 巻   頁: 5 - 6   2021年2月

  246. COVID-19診断支援AI開発における名古屋大学の取り組み

    小田 昌宏, 鄭 通, 林 雄一郎, 森 健策

      39 巻 ( 1 ) 頁: 13 - 19   2021年2月

  247. New method for the assessment of perineural invasion from perihilar cholangiocarcinoma 査読有り

    Hiroshi Tanaka, Tsuyoshi Igami, Yoshie Shimoyama, Tomoki Ebata, Yukihiro Yokoyama, Kensaku Mori, Masato Nagino,

    Surgery Today   51 巻 ( 2 ) 頁: 136 - 143   2021年1月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)  

    DOI: 10.1007/s00595-020-02071-x

  248. Current status and future perspective on artificial intelligence for lower endoscopy 招待有り 国際誌

    Misawa Masashi, Kudo Shin-ei, Mori Yuichi, Maeda Yasuharu, Ogawa Yushi, Ichimasa Katsuro, Kudo Toyoki, Wakamura Kunihiko, Hayashi Takemasa, Miyachi Hideyuki, Baba Toshiyuki, Ishida Fumio, Itoh Hayato, Oda Masahiro, Mori Kensaku

    DIGESTIVE ENDOSCOPY   33 巻 ( 2 ) 頁: 273 - 284   2021年1月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Digestive Endoscopy  

    The global incidence and mortality rate of colorectal cancer remains high. Colonoscopy is regarded as the gold standard examination for detecting and eradicating neoplastic lesions. However, there are some uncertainties in colonoscopy practice that are related to limitations in human performance. First, approximately one-fourth of colorectal neoplasms are missed on a single colonoscopy. Second, it is still difficult for non-experts to perform optical biopsy adequately. Third, recording of some quality indicators (e.g. cecal intubation, bowel preparation, and withdrawal speed), which are related to adenoma detection rate, is sometimes incomplete. With recent improvements in machine learning techniques and advances in computer performance, artificial intelligence-assisted computer-aided diagnosis is being increasingly utilized by endoscopists. In particular, the emergence of deep-learning, data-driven machine learning techniques has made the development of computer-aided systems easier than with conventional machine learning techniques, and deep learning is currently considered the standard artificial intelligence engine of computer-aided diagnosis in colonoscopy. To date, computer-aided detection systems seem to have improved the rate of detection of neoplasms. Additionally, computer-aided characterization systems may have the potential to improve diagnostic accuracy in real-time clinical practice. Furthermore, some artificial intelligence-assisted systems that aim to improve the quality of colonoscopy have been reported. The implementation of computer-aided systems in clinical practice may provide additional benefits such as helping to educate poorly performing endoscopists and supporting real-time clinical decision-making. In this review, we have focused on computer-aided diagnosis during colonoscopy reported by gastroenterologists and discussed its status, limitations, and future prospects.

    DOI: 10.1111/den.13847

    Web of Science

    Scopus

    PubMed

  249. Dense-layer-based YOLO-v3 for Detection and Localization of Colon Perforations 招待有り 査読有り

    Jiang Kai, Itoh Hayato, Oda Masahiro, Okumura Taishi, Mori Yuichi, Misawa Masashi, Hayashi Takemasa, Kudo Shin-Ei, Mori Kensaku

    MEDICAL IMAGING 2021: COMPUTER-AIDED DIAGNOSIS   11597 巻   頁: 115971A-1 - 6   2021年

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)   出版者・発行元:Progress in Biomedical Optics and Imaging - Proceedings of SPIE  

    Endoscopic submucosal dissection is a minimally invasive treatment for early gastric cancer. In endoscopic submucosal dissection, a physician directly removes the mucosa around the lesion under endoscopic guidance by using a flush knife. However, the flush knife may accidentally pierce the colonic wall and create a perforation. If a physician overlooks a small perforation, the patient may need emergency open surgery, since a perforation can easily cause peritonitis. To prevent perforations from being overlooked, there is potential demand for a computer-aided diagnosis system. We believe an automatic perforation detection and localization function is very useful for the analysis of endoscopic submucosal dissection videos toward the development of such a computer-aided diagnosis system. At the current stage, research on perforation detection and localization is progressing slowly, and automatic image-based perforation detection remains challenging. We therefore focus on the detection and localization of perforations in colonoscopic videos. In this paper, we propose a supervised-learning method for perforation detection and localization in colonoscopic videos. This method uses dense layers in YOLO-v3 instead of residual units, and a combination of binary cross entropy and generalized intersection-over-union loss as the loss function in the training process. As an initial study, this method achieved 0.854 accuracy, a 0.850 AUC score, and 0.884 mean average precision for perforation detection and localization.

    DOI: 10.1117/12.2582300

    Web of Science

    Scopus
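
    A minimal sketch of a combined binary cross-entropy and generalized intersection-over-union objective of the kind mentioned in the abstract above (the box format, weighting, and function names are assumptions for illustration):

    # BCE objectness + generalized-IoU box regression; illustrative only.
    import torch
    import torch.nn.functional as F

    def giou_loss(pred, target, eps=1e-7):
        """pred, target: (N, 4) boxes given as (x1, y1, x2, y2)."""
        x1 = torch.max(pred[:, 0], target[:, 0])
        y1 = torch.max(pred[:, 1], target[:, 1])
        x2 = torch.min(pred[:, 2], target[:, 2])
        y2 = torch.min(pred[:, 3], target[:, 3])
        inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
        area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
        area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
        union = area_p + area_t - inter
        iou = inter / (union + eps)
        cx1 = torch.min(pred[:, 0], target[:, 0])                 # smallest enclosing box
        cy1 = torch.min(pred[:, 1], target[:, 1])
        cx2 = torch.max(pred[:, 2], target[:, 2])
        cy2 = torch.max(pred[:, 3], target[:, 3])
        c_area = (cx2 - cx1) * (cy2 - cy1)
        giou = iou - (c_area - union) / (c_area + eps)
        return (1.0 - giou).mean()

    def detection_loss(obj_logits, obj_labels, pred_boxes, gt_boxes, lambda_box=1.0):
        """BCE objectness + GIoU box regression as a single training objective."""
        bce = F.binary_cross_entropy_with_logits(obj_logits, obj_labels)
        return bce + lambda_box * giou_loss(pred_boxes, gt_boxes)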

  250. Robust endocytoscopic image classification based on higher-order symmetric tensor analysis and multi-scale topological statistics 招待有り 査読有り 国際誌

    Itoh, H; Nimura, Y; Mori, Y; Misawa, M; Kudo, SE; Hotta, K; Ohtsuka, K; Saito, S; Saito, Y; Ikematsu, H; Hayashi, Y; Oda, M; Mori, K

    INTERNATIONAL JOURNAL OF COMPUTER ASSISTED RADIOLOGY AND SURGERY   15 巻 ( 12 ) 頁: 2049 - 2059   2020年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:International Journal of Computer Assisted Radiology and Surgery  

    Purpose: An endocytoscope is a new type of endoscope that enables users to perform conventional endoscopic observation and ultramagnified observation at the cell level. Although endocytoscopy is expected to improve the cost-effectiveness of colonoscopy, endocytoscopic image diagnosis requires much knowledge and high-level experience for physicians. To circumvent this difficulty, we developed a robust endocytoscopic (EC) image classification method for the construction of a computer-aided diagnosis (CAD) system, since real-time CAD can resolve accuracy issues and reduce interobserver variability. Method: We propose a novel feature extraction method by introducing higher-order symmetric tensor analysis to the computation of multi-scale topological statistics on an image, and we integrate this feature extraction with EC image classification. We experimentally evaluate the classification accuracy of our proposed method by comparing it with three deep learning methods. We conducted this comparison by using our large-scale multi-hospital dataset of about 55,000 images of over 3800 patients. Results: Our proposed method achieved an average 90% classification accuracy for all the images in four hospitals even though the best deep learning method achieved 95% classification accuracy for images in only one hospital. In the case with a rejection option, the proposed method achieved expert-level accurate classification. These results demonstrate the robustness of our proposed method against pit pattern variations, including differences of colours, contrasts, shapes, and hospitals. Conclusions: We developed a robust EC image classification method with novel feature extraction. This method is useful for the construction of a practical CAD system, since it has sufficient generalisation ability.

    DOI: 10.1007/s11548-020-02255-3

    Web of Science

    Scopus

    PubMed

  251. [総論]AI時代を見据えた消化器外科手術 AIによる大腸T2癌リンパ節転移予測 招待有り 査読有り

    中原 健太, 石田 文生, 一政 克朗, 森 悠一, 三澤 将史, 澤田 成彦, 工藤 進英, Villard Ben, 伊東 隼人, 森 健策

    日本消化器外科学会総会   75回 巻   頁: WS15 - 6   2020年12月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)   出版者・発行元:(一社)日本消化器外科学会  

  252. Improving contrast and spatial resolution in crystal analyzer-based x-ray dark-field imaging 査読有り

    Masami Ando, Yuki Nakao, Ge Jin, Hiroshi Sugiyama, Naoki Sunaguchi, Yongjin Sung, Yoshifumi Suzuki, Yong Sun, Michio Tanimoto, Katsuhiro Kawashima, Tetsuya Yuasa, Kensaku Mori, Shu Ichihara, Rajiv Gupta

    Medical Physics   47 巻 ( 11 ) 頁: 5505 - 5513   2020年11月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)  

    DOI: 10.1002/mp.14442

  253. 多元計算解剖モデルと人工知能に基づく診断治療支援 招待有り 査読有り

    森 健策

    映像情報メディア学会誌   74 巻 ( 6 ) 頁: 909-915   2020年11月

     詳細を見る

    担当区分:筆頭著者   記述言語:日本語   掲載種別:研究論文(その他学術会議資料等)  

  254. SUN database : 大腸ポリープ自動検出器の精度評価に向けた試験用画像

    伊東 隼人, 三澤 将史,森 悠一,小田 昌宏,工藤 進英, 森 健策

    日本コンピュータ外科学会誌 第29回日本コンピュータ外科学会大会特集号   22 巻 ( 4 ) 頁: 346 - 347   2020年11月

     詳細を見る

    担当区分:最終著者, 責任著者   記述言語:日本語   掲載種別:研究論文(その他学術会議資料等)  

  255. Preliminary study of Loss-Function Design for Detection and Localization of Perforations with YOLO-v3 in Colonoscopic Images

      22 巻 ( 4 ) 頁: 348 - 349   2020年11月

     詳細を見る

    担当区分:最終著者   記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  256. 気管支鏡ナビゲーションのための敵対的生成による内視鏡画像深度推定の評価

    王 成, 小田 昌宏,林 雄一郎,北坂 孝幸,本間 裕敏,高畠 博嗣,森 雅樹,名取 博, 森 健策

    日本コンピュータ外科学会誌 第29回日本コンピュータ外科学会大会特集号   22 巻 ( 4 ) 頁: 338 - 339   2020年11月

     詳細を見る

    担当区分:最終著者, 責任著者   記述言語:日本語   掲載種別:研究論文(その他学術会議資料等)  

  257. 腸閉塞およびイレウスの診断支援システムにおける距離マップの導入

    小田 紘久, 林 雄一郎, 北坂 孝幸,玉田 雄大,滝本 愛太朗,檜 顕成, 内田 広夫,鈴木 耕次郎, 伊東 隼人,小田 昌宏,森 健策

    日本コンピュータ外科学会誌 第29回日本コンピュータ外科学会大会特集号   22 巻 ( 4 ) 頁: 282 - 284   2020年11月

     詳細を見る

    担当区分:最終著者, 責任著者   記述言語:日本語   掲載種別:研究論文(その他学術会議資料等)  

  258. 局所情報に注目した腹腔鏡動画像からの出血領域抽出

    山本 翔太, 盛満 慎太郎,林 雄一郎,北坂 孝幸,小田 昌宏,伊藤 雅昭,竹下 修由, 森 健策

    日本コンピュータ外科学会誌 第29回日本コンピュータ外科学会大会特集号   22 巻 ( 4 ) 頁: 285 - 286   2020年11月

     詳細を見る

    担当区分:最終著者, 責任著者   記述言語:日本語   掲載種別:研究論文(その他学術会議資料等)  

  259. Dilated convolution を用いた腹腔鏡動画像からの血管領域抽出における空間情報利用に関する検討

    盛満 慎太郎, 山本 翔太, 小澤 卓也, 北坂 孝幸, 林 雄一郎, 小田 昌宏, 伊藤 雅昭, 竹下 修由,三澤 一成, 森 健策

    日本コンピュータ外科学会誌 第29回日本コンピュータ外科学会大会特集号   22 巻 ( 4 ) 頁: 287 - 288   2020年11月

     詳細を見る

    担当区分:最終著者   記述言語:日本語   掲載種別:研究論文(その他学術会議資料等)  

  260. 表現学習に基づくクラスタリングによるCOVID-19 肺CT像からの病変部抽出手法

    鄭 通, 小田 昌宏, 王 成龍,林 雄一郎,橋本 正弘, 大竹 義人,明石 敏昭, 森 健策

    日本コンピュータ外科学会誌 第29回日本コンピュータ外科学会大会特集号   22 巻 ( 4 ) 頁: 294 - 295   2020年11月

     詳細を見る

    担当区分:最終著者, 責任著者   記述言語:日本語   掲載種別:研究論文(その他学術会議資料等)  

  261. 深層学習によるMRI画像からの神経鞘腫の自動位置検出

    小田 昌宏, 伊藤 定之, 今釜 史郎, 森 健策

    日本コンピュータ外科学会誌 第29回日本コンピュータ外科学会大会特集号   22 巻 ( 4 ) 頁: 296 - 297   2020年11月

     詳細を見る

    担当区分:最終著者, 責任著者   記述言語:日本語   掲載種別:研究論文(その他学術会議資料等)  

  262. 腹腔鏡動画像用オンラインアノテーションツールの開発

    屠 芸豪, 伊東 隼人,小澤 卓也,小田 昌宏, 竹下 修由,伊藤 雅昭,森 健策

    日本コンピュータ外科学会誌 第29回日本コンピュータ外科学会大会特集号   22 巻 ( 4 ) 頁: 306 - 307   2020年11月

     詳細を見る

    担当区分:最終著者, 責任著者   記述言語:日本語   掲載種別:研究論文(その他学術会議資料等)  

  263. 呼吸器外科における仮想胸腔鏡像による手術ナビゲーションシステムを用いた手術支援の検討

    林 雄一郎, 中村 彰太,森 健策

    日本コンピュータ外科学会誌 第29回日本コンピュータ外科学会大会特集号   22 巻 ( 4 ) 頁: 337   2020年11月

     詳細を見る

    担当区分:最終著者, 責任著者   記述言語:日本語   掲載種別:研究論文(その他学術会議資料等)  

  264. Cost savings in colonoscopy with artificial intelligence-aided polyp diagnosis: an add-on analysis of a clinical trial (with video) 招待有り 査読有り 国際誌

    Mori Y., Kudo S.e., East J.E., Rastogi A., Bretthauer M., Misawa M., Sekiguchi M., Matsuda T., Saito Y., Ikematsu H., Hotta K., Ohtsuka K., Kudo T., Mori K.

    Gastrointestinal Endoscopy   92 巻 ( 4 ) 頁: 905 - 911   2020年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Gastrointestinal Endoscopy  

    Background and Aims: Artificial intelligence (AI) is being implemented in colonoscopy practice, but no study has investigated whether AI is cost saving. We aimed to quantify the cost reduction using AI as an aid in the optical diagnosis of colorectal polyps. Methods: This study is an add-on analysis of a clinical trial that investigated the performance of AI for differentiating colorectal polyps (ie, neoplastic versus non-neoplastic). We included all patients with diminutive (≤5 mm) rectosigmoid polyps in the analyses. The average colonoscopy cost was compared for 2 scenarios: (1) a diagnose-and-leave strategy supported by the AI prediction (ie, diminutive rectosigmoid polyps were not removed when predicted as non-neoplastic), and (2) a resect-all-polyps strategy. Gross annual costs for colonoscopies were also calculated based on the number and reimbursement of colonoscopies conducted under public health insurances in 4 countries. Results: Overall, 207 patients with 250 diminutive rectosigmoid polyps (104 neoplastic, 144 non-neoplastic, and 2 indeterminate) were included. AI correctly differentiated neoplastic polyps with 93.3% sensitivity, 95.2% specificity, and 95.2% negative predictive value. Thus, 105 polyps were removed and 145 were left under the diagnose-and-leave strategy, which was estimated to reduce the average colonoscopy cost and the gross annual reimbursement for colonoscopies by 18.9% and US$149.2 million in Japan, 6.9% and US$12.3 million in England, 7.6% and US$1.1 million in Norway, and 10.9% and US$85.2 million in the United States, respectively, compared with the resect-all-polyps strategy. Conclusions: The use of AI to enable the diagnose-and-leave strategy results in substantial cost reductions for colonoscopy.

    DOI: 10.1016/j.gie.2020.03.3759

    Scopus

    PubMed

  265. Prediction of dose distribution from luminescence image of water using a deep convolutional neural network for particle therapy 査読有り

    Takuya Yabe, Seiichi Yamamoto, Masahiro Oda, Kensaku Mori, Toshiyuki Toshito, Takashi Akagi

    Medical Physics   47 巻 ( 9 ) 頁: 3882-3891   2020年9月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)  

    DOI: 10.1002/mp.14372

  266. 名古屋大学スーパーコンピュータ「不老」における医用画像処理

    大島 聡史, 小田 昌宏, 片桐 孝洋, 森 健策

    電子情報通信学会技術研究報告(MI)   120 巻 ( 156 ) 頁: 69 - 74   2020年9月

     詳細を見る

    担当区分:最終著者, 責任著者   記述言語:日本語   掲載種別:研究論文(研究会,シンポジウム資料等)  

  267. ニューラルネットワークとSpherical K-meansを用いた胃壁マイクロCT像からの層構造および腫瘍抽出の検討

    御手洗 翠, 小田 紘久, 杉野 貴明, 守谷 享泰, 伊東 隼人, 小田 昌宏, 小宮山 琢真, 古川 和宏, 宮原 良二, 藤城 光弘, 森 雅樹, 高畠 博嗣, 名取 博, 森 健策

    電子情報通信学会技術研究報告(MI)   120 巻 ( 156 ) 頁: 1 - 6   2020年9月

     詳細を見る

    担当区分:最終著者, 責任著者   記述言語:日本語   掲載種別:研究論文(研究会,シンポジウム資料等)  

  268. 腹腔鏡手術動画像データベース構築に向けたリモートアノテーションツールのプロトタイプ開発

    屠 芸豪, 伊東 隼人, 小澤 卓也, 小田 昌宏, 竹下 修由, 伊藤 雅昭, 森 健策

    第39回日本医用画像工学会大会予稿集     頁: 611 - 615   2020年9月

     詳細を見る

    担当区分:最終著者, 責任著者   記述言語:日本語   掲載種別:研究論文(その他学術会議資料等)  

  269. Preliminary Study of Perforation Detection and Localization for Colonoscopy Video

    Kai Jiang, Hayato Itoh, Masahiro Oda, Taishi Okumura, Yuichi Mori, Masashi Misawa, Takemasa Hayashi, Shin-Ei Kudo, Kensaku Mori

        頁: 142 - 147   2020年9月

     詳細を見る

    担当区分:最終著者, 責任著者   記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  270. 読影レポート解析を利用した医用画像データベースからのアノテーション付きデータセット作成に関する初期検討

    林 雄一郎, 鈴村 悠輝, 岡崎 真治, 小田 昌宏, 森 健策

    第39回日本医用画像工学会大会予稿集     頁: 163 - 167   2020年9月

     詳細を見る

    担当区分:最終著者, 責任著者   記述言語:日本語   掲載種別:研究論文(その他学術会議資料等)  

  271. A study on Subarachnoid Hemorrhage automatic detection utilized Transfer Learning on extremely imbalanced brain CT datasets

    Zhongyang Lu, Masahiro Oda, Yuichiro Hayashi, Tao Hu, Hayato Itoh, Takeyuki Watadani, Osamu Abe, Masahiro Hashimoto, Masahiro Jinzaki, Kensaku Mori

        頁: 168 - 172   2020年9月

     詳細を見る

    担当区分:最終著者, 責任著者   記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  272. COVID-19 症例の定量評価のためのCT 像からの肺野自動セグメンテーション

    小田 昌宏, 林 雄一郎, 大竹 義人, 橋本 正弘, 明石 敏昭, 森 健策

    第39回日本医用画像工学会大会予稿集     頁: 181 - 184   2020年9月

     詳細を見る

    担当区分:最終著者   記述言語:日本語   掲載種別:研究論文(その他学術会議資料等)  

  273. Preliminary Study on Classification of Interstitial Cystitis Using Cystoscopy Images

    Tao Chu, Masahiro Oda, Akira Furuta, Tokunori Yamamoto, Kensaku Mori

        頁: 186 - 191   2020年9月

     詳細を見る

    担当区分:最終著者, 責任著者   記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  274. Dilated convolution を用いた FCN による腹腔鏡動画像からの血管領域抽出

    盛満 慎太郎, 山本 翔太, 北坂 孝幸, 林 雄一郎, 小田 昌宏, 竹下 修由, 伊藤 雅昭, 三澤 一成, 森 健策

    第39回日本医用画像工学会大会予稿集     頁: 230 - 233   2020年9月

     詳細を見る

    担当区分:最終著者, 責任著者   記述言語:日本語   掲載種別:研究論文(その他学術会議資料等)  

  275. 大域及び局所情報を用いた深層学習による出血領域自動セグメンテーション

    山本 翔太, 盛満 慎太郎, 林 雄一郎, 北坂 孝幸, 小田 昌宏, 伊藤 雅昭, 竹下 修由, 森 健策

    第39回日本医用画像工学会大会予稿集     頁: 246 - 249   2020年9月

     詳細を見る

    担当区分:最終著者, 責任著者   記述言語:日本語   掲載種別:研究論文(その他学術会議資料等)  

  276. 広範囲の隣接関係を考慮したグラフニューラルネットワークを用いた腹部動脈血管名自動命名の検討

    日比 裕太, 林 雄一郎, 北坂 孝幸, 伊東 隼人, 小田 昌宏, 三澤 一成, 森 健策

    第39回日本医用画像工学会大会予稿集     頁: 268 - 271   2020年9月

     詳細を見る

    担当区分:最終著者, 責任著者   記述言語:日本語   掲載種別:研究論文(その他学術会議資料等)  

  277. Cross-phase CT Image Registration Using Convolutional Neural Network

    Tao Hu, Masahiro Oda, Yuichiro Hayashi, Zhongyang Lu, Kanako Kunishima Kumamaru, Shigeki Aoki, Kensaku Mori

        頁: 276 - 280   2020年9月

     詳細を見る

    担当区分:最終著者, 責任著者   記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  278. Unsupervised 3D Super-resolution of Clinical CT Volumes by Utilizing Multi-axis 2D Super-resolution

    第39回日本医用画像工学会大会予稿集     頁: 377 - 384   2020年9月

     詳細を見る

    担当区分:最終著者, 責任著者   記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  279. Preliminary Study on Classification of Hands' Bone Marrow Edema Using X-ray Images

    Dongping Pan, Masahiro Oda, Kou Katayama, Takanobu Okubo, Kensaku Mori

        頁: 488 - 493   2020年9月

     詳細を見る

    担当区分:最終著者, 責任著者   記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  280. 大腸内視鏡のための教師なし深度画像推定法における補助タスク検討

    伊東 隼人, 小田 昌宏, 森 悠一, 三澤 将史, 工藤 進英, 堀田 欣一, 高畠 博嗣, 森 雅樹, 名取 博, 森 健策

    第39回日本医用画像工学会大会予稿集     頁: 563 - 568   2020年9月

     詳細を見る

    担当区分:最終著者, 責任著者   記述言語:日本語   掲載種別:研究論文(その他学術会議資料等)  

  281. Spatial Information Considered Self-Supervised Depth Estimation Based on Image Pairs from Stereo Laparoscope

    Wenda Li, Yuichiro Hayashi, Masahiro Oda, Takayuki Kitasaka, Kazunari Misawa, Kensaku Mori

        頁: 602 - 606   2020年9月

     詳細を見る

    担当区分:最終著者, 責任著者   記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  282. 泌尿器科画像診断に用いられるAI技術とその応用

    森 健策

    泌尿器外科   33 巻 ( 6 ) 頁: 557-561   2020年6月

     詳細を見る

    記述言語:日本語  

  283. Detecting ganglion cells on virtual slide images: Macroscopic masking by superpixel 査読有り

    H. Oda, Y. Tamada, K. Nishio, T. Kitasaka, H. Amano, K. Chiba, A. Hinoki, H. Uchida, M. Oda, K. Mori

    International Journal of Computer Assisted Radiology and Surgery   15 巻 ( 1 ) 頁: S169 - S170   2020年6月

     詳細を見る

    担当区分:最終著者, 責任著者   記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)  

  284. An Application of Multi-organ Segmentation from Thick-slice Abdominal CT Volumes using Transfer Learning 査読有り

    C. Shen, M. Oda, H. Roth, H. Oda, Y. Hayashi, K. Misawa, K. Mori

    International Journal of Computer Assisted Radiology and Surgery   15 巻 ( 1 ) 頁: S17 - S18   2020年6月

     詳細を見る

    担当区分:最終著者   記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)  

  285. Virtual cleansing by unpaired image translation of intestines for detecting obstruction 査読有り

    K. Nishio, H. Oda, T. Kitasaka, Y. Tamada, H. Amano, A. Takimoto, K. Chiba, Y. Hayashi, H. Itoh, M. Oda, A. Hinoki, H. Uchida, K. Mori

    International Journal of Computer Assisted Radiology and Surgery   15 巻 ( 1 ) 頁: S21 - S22   2020年6月

     詳細を見る

    担当区分:最終著者   記述言語:英語   掲載種別:研究論文(学術雑誌)  

  286. TinyLoss: loss function for tiny image difference evaluation and its application to unpaired non-contrast to contrast abdominal CT estimation 査読有り

    M. Oda, T. Hu, K. K. Kumamaru, T. Akashi, S. Aoki, K. Mori

    International Journal of Computer Assisted Radiology and Surgery   15 巻 ( 1 ) 頁: S25 - S26   2020年6月

     詳細を見る

    担当区分:責任著者   記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)  

  287. SR-CycleGAN V2: CycleGAN-based unsupervised superresolution with pixel-shuffling 査読有り

    T. Zheng, H. Oda, T. Moriya, T. Sugino, S. Nakamura, M. Oda, M. Mori, H. Takabatake, H. Natori, K. Mori

    International Journal of Computer Assisted Radiology and Surgery   15 巻 ( 1 ) 頁: S27 - S28   2020年6月

     詳細を見る

    担当区分:最終著者, 責任著者   記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)  

  288. Blood vessel segmentation from laparoscopic video using ConvLSTM U-Net

    S. Morimitsu, S. Yamamoto, T. Ozawa, T. Kitasaka, Y. Hayashi, M. Oda, M. Ito, N. Takeshita, K. Misawa, K. Mori

    International Journal of Computer Assisted Radiology and Surgery   15 巻 ( 1 ) 頁: S63 - S64   2020年6月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)  

  289. Extraction of blood vessel regions in liver from CT volumes using fully convolutional networks for computer assisted liver surgery 査読有り

    Y. Hayashi, C. Shen, T. Igami, M. Nagino, K. Mori

    International Journal of Computer Assisted Radiology and Surgery   15 巻 ( 1 ) 頁: S152 - S153   2020年6月

     詳細を見る

    担当区分:最終著者, 責任著者   記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)  

  290. AIによるSSA/Pの超拡大内視鏡診断 招待有り 査読有り

    小川 悠史, 工藤 進英, 森 悠一, 三澤 将史, 片岡 伸一, 前田 康晴, 一政 克朗, 石垣 智之, 工藤 豊樹, 若村 邦彦, 林 武雅, 馬場 俊之, 石田 文生, 伊東 隼人, 小田 昌宏, 森 健策

    日本大腸検査学会雑誌   36 巻 ( 2 ) 頁: 125 - 125   2020年5月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)   出版者・発行元:(社)日本大腸検査学会  

  291. バーチャル・リアリティ手術シュミレーター(VRS)の意義と今後の展望

    藤原 道隆, 林 雄一郎, 高見 秀樹, 田中 千恵, 森 健策, 小寺 泰弘

    臨床外科   75 巻 ( 4 ) 頁: 476 - 482   2020年4月

  292. Cardiac fiber tracking on super high-resolution CT images: a comparative study 招待有り 国際誌

    Oda Hirohisa, Roth Holger R., Sugino Takaaki, Sunaguchi Naoki, Usami Noriko, Oda Masahiro, Shimao Daisuke, Ichihara Shu, Yuasa Tetsuya, Ando Masami, Akita Toshiaki, Narita Yuji, Mori Kensaku

    JOURNAL OF MEDICAL IMAGING   7 巻 ( 2 ) 頁: 026001 - 026001   2020年3月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Journal of Medical Imaging  

    Purpose: High-resolution cardiac imaging and fiber analysis methods are required to understand cardiac anatomy. Although refraction-contrast x-ray CT (RCT) has high soft-tissue contrast, it cannot be commonly used because it requires a synchrotron system. Microfocus x-ray CT (μCT) is another, commercially available imaging modality. Approach: We evaluate the usefulness of μCT for analyzing fibers by quantitatively and objectively comparing the results with RCT. To do so, we scanned a rabbit heart with both modalities using our original specimen-preparation protocol and compared their image-based analysis results, including fiber orientation estimation and fiber tracking. Results: Fiber orientations estimated by the two modalities closely resembled each other, with a correlation coefficient of 0.63. Tracked fibers from both modalities matched well with the anatomical knowledge that fiber orientations differ inside and outside of the left ventricle. However, the μCT volume caused incorrect tracking around boundaries introduced by stitched scanning. Conclusions: Our experimental results demonstrated that μCT scanning can be used for cardiac fiber analysis, although further investigation into the differences between fiber analysis results on RCT and μCT is required.
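
    A minimal illustrative sketch (not the authors' code; Python with NumPy is assumed) of the kind of quantitative comparison described above: fiber-orientation maps estimated from the two modalities are compared voxel-wise with a Pearson correlation coefficient over a myocardial mask. The names orient_rct, orient_uct, and myocardium_mask are hypothetical placeholders.

    import numpy as np

    def orientation_correlation(orient_rct, orient_uct, myocardium_mask):
        """Pearson correlation of voxel-wise fiber orientation angles (radians)
        restricted to myocardial voxels shared by both registered volumes."""
        a = orient_rct[myocardium_mask]
        b = orient_uct[myocardium_mask]
        return float(np.corrcoef(a, b)[0, 1])

    # Toy usage with synthetic data standing in for registered RCT / micro-CT maps.
    rng = np.random.default_rng(0)
    mask = np.ones((32, 32, 32), dtype=bool)
    angles = rng.uniform(-np.pi / 2, np.pi / 2, size=mask.shape)
    noisy = angles + rng.normal(scale=0.4, size=mask.shape)
    print(f"correlation = {orientation_correlation(angles, noisy, mask):.2f}")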

    DOI: 10.1117/1.JMI.7.2.026001

    Web of Science

    Scopus

    PubMed

    CiNii Research

  293. 3Dプリンティングの最新動向 招待有り

    森 健策

    インナービジョン   35 巻 ( 2 ) 頁: 36-37   2020年2月

     詳細を見る

    担当区分:筆頭著者, 最終著者, 責任著者   記述言語:日本語  

  294. Spatial information-embedded fully convolutional networks for multi-organ segmentation with improved data augmentation and instance normalization 査読有り

    Chen Shen, Chenglong Wang, Holger R. Roth, Masahiro Oda, Yuichiro Hayashi, Kazunari Misawa, Kensaku Mori

    Medical Imaging 2020: Image Processing   11313 巻   2020年2月

     詳細を見る

    担当区分:最終著者, 責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)  

  295. Organ segmentation from full-size CT images using memory-efficient FCN 査読有り

    Chenglong Wang, Masahiro Oda, Kensaku Mori

    Medical Imaging 2020: Image Processing   11314 巻   2020年2月

     詳細を見る

    担当区分:最終著者, 責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)  

  296. Multi-modality super-resolution loss for GAN-based super-resolution of clinical CT images using micro CT image database 査読有り

    Tong Zheng, Hirohisa Oda, Takayasu Moriya, Takaaki Sugino, Shota Nakamura, Masahiro Oda, Masaki Mori, Hirotsugu Takabatake, Hiroshi Natori, Kensaku Mori

    Medical Imaging 2020: Image Processing   11313 巻   2020年2月

     詳細を見る

    担当区分:最終著者, 責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)  

  297. Visualizing intestines for diagnostic assistance of ileus based on intestinal region segmentation from 3D CT images 査読有り

    Hirohisa Oda, Kohei Nishio, Takayuki Kitasaka, Hizuru Amano, Aitaro Takimoto, Akinari Hinoki, Hiroo Uchida, Kojiro Suzuki, Hayato Itoh, Masahiro Oda, Kensaku Mori

    Medical Imaging 2020: Image Processing   11314 巻   2020年2月

     詳細を見る

    担当区分:最終著者, 責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)  

  298. Visualising decision-reasoning regions in computer-aided pathological pattern diagnosis of endoscytoscopic images based on CNN weights analysis 査読有り

    Hayato Itoh, Zhongyang Lu, Yuichi Mori, Masashi Misawa, Masahiro Oda, Shin-ei Kudo, Kensaku Mori

    Medical Imaging 2020: Image Processing   11314 巻   2020年2月

     詳細を見る

    担当区分:最終著者   記述言語:英語   掲載種別:研究論文(学術雑誌)  

  299. Usefulness of fine-tuning for deep learning based multi-organ regions segmentation method from non-contrast CT volumes using small training dataset 査読有り

    Yuichiro Hayashi, Chen Shen, Holger R. Roth, Masahiro Oda, Kazunari Misawa, Masahiro Jinzaki, Masahiro Hashimoto, Kanako K. Kumamaru, Shigeki Aoki, Kensaku Mori

    Medical Imaging 2020: Image Processing   11314 巻   2020年2月

     詳細を見る

    担当区分:最終著者, 責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)  

  300. Automated eye disease classification method from anterior eye image using anatomical structure focused image classification technique 査読有り

    Masahiro Oda, Naoyuki Maeda, Takefumi Yamaguchi, Hideki Fukuoka, Yuta Ueno, Kensaku Mori

    Medical Imaging 2020: Image Processing   11314 巻   2020年2月

     詳細を見る

    担当区分:最終著者, 責任著者   記述言語:英語   掲載種別:研究論文(学術雑誌)  

  301. Improved visual SLAM for bronchoscope tracking and registration with pre-operative CT images

    Cheng Wang, Masahiro Oda, Yuichiro Hayashi, Takayuki Kitasaka, Hirotoshi Honma, Hirotsugu Takabatake, Masaki Mori, Hiroshi Natori, Kensaku Mori

    Medical Imaging 2020: Image Processing   11315 巻   2020年2月

     詳細を見る

    担当区分:最終著者   記述言語:英語   掲載種別:研究論文(学術雑誌)  

  302. 仮想腹腔鏡画像生成と深層学習による腹腔鏡画像からの術具領域セグメンテーション

    小澤 卓也, 林 雄一郎, 小田 紘久, 小田 昌宏, 北坂 孝幸, 竹下 修由, 伊藤 雅昭, 森 健策

    電子情報通信学会技術研究報告(MI)   119 巻 ( 399 ) 頁: 129 - 134   2020年1月

     詳細を見る

    担当区分:最終著者, 責任著者   記述言語:日本語   掲載種別:研究論文(研究会,シンポジウム資料等)  

  303. 腹腔鏡下手術支援のためのU-Netに基づく腹腔鏡動画像からの出血領域の推定

    山本 翔太, 林 雄一郎, 盛満 慎太郎, 小澤 卓也, 北坂 孝幸, 小田 昌宏, 竹下 修由, 伊藤 雅昭, 森 健策

    電子情報通信学会技術研究報告(MI)   119 巻 ( 399 ) 頁: 209 - 214   2020年1月

     詳細を見る

    担当区分:最終著者   記述言語:日本語   掲載種別:研究論文(研究会,シンポジウム資料等)  

  304. MICCAI 2019参加報告

    小田 昌宏, 伊東 隼人, 宮内 翔子, 諸岡 健一, 松崎 博貴, 花岡 昇平, 古川 亮, 増谷 佳孝, 森 健策

    電子情報通信学会技術研究報告(MI)   119 巻 ( 399 ) 頁: 219 - 226   2020年1月

     詳細を見る

    担当区分:最終著者   記述言語:日本語   掲載種別:研究論文(研究会,シンポジウム資料等)  

  305. CycleGANによる腸管電子洗浄とその腸管閉塞部位検出への応用

    西尾 光平, 小田 紘久, 千馬 耕亮, 北坂 孝幸, 林 雄一郎, 伊東 隼人, 小田 昌宏, 檜 顕成, 内田 広夫, 森 健策

    電子情報通信学会技術研究報告(MI)   119 巻 ( 399 ) 頁: 243 - 248   2020年1月

     詳細を見る

    担当区分:最終著者   記述言語:日本語   掲載種別:研究論文(研究会,シンポジウム資料等)  

  306. 臨床肺CT画像と切除肺マイクロCT画像の非剛体位置合わせ手法の検討

    波多腰 慎矢, 小田 紘久, 林 雄一郎, Holger R. Roth, 中村 彰太, 小田 昌宏, 森 雅樹, 高畠 博嗣, 名取 博, 森 健策

    電子情報通信学会技術研究報告(MI)   119 巻 ( 399 ) 頁: 249 - 254   2020年1月

     詳細を見る

    担当区分:最終著者, 責任著者   記述言語:日本語   掲載種別:研究論文(研究会,シンポジウム資料等)  

  307. Stable polyp-scene classification via subsampling and residual learning from an imbalanced large dataset 国際誌

    Itoh, H; Roth, H; Oda, M; Misawa, M; Mori, Y; Kudo, SE; Mori, K

    HEALTHCARE TECHNOLOGY LETTERS   6 巻 ( 6 ) 頁: 237 - 242   2019年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Healthcare Technology Letters  

    This Letter presents a stable polyp-scene classification method with low false positive (FP) detection. Precise automated polyp detection during colonoscopies is essential for preventing colon-cancer deaths. There is, therefore, a demand for a computer-assisted diagnosis (CAD) system for colonoscopies to assist colonoscopists. A high-performance CAD system with spatiotemporal feature extraction via a three-dimensional convolutional neural network (3D CNN) with a limited dataset achieved about 80% detection accuracy in actual colonoscopic videos. Consequently, further improvement of a 3D CNN with larger training data is feasible. However, the ratio between polyp and non-polyp scenes is quite imbalanced in a large colonoscopic video dataset. This imbalance leads to unstable polyp detection. To circumvent this, the authors propose an efficient and balanced learning technique for deep residual learning. The authors’ method randomly selects a subset of non-polyp scenes whose number matches the number of polyp-scene still images at the beginning of each epoch of learning. Furthermore, they introduce post-processing for stable polyp-scene classification. This post-processing reduces the FPs that occur in the practical application of polyp-scene classification. They evaluate several residual networks with a large polyp-detection dataset consisting of 1027 colonoscopic videos. In the scene-level evaluation, their proposed method achieves stable polyp-scene classification with 0.86 sensitivity and 0.97 specificity.
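
    A minimal Python sketch (an illustration under assumptions, not the authors' implementation) of the balanced-learning idea described above: at the start of each epoch, the non-polyp scenes are randomly subsampled so that their count matches the number of polyp-scene still images. The lists polyp_items and nonpolyp_items are hypothetical placeholders for training samples.

    import random

    def balanced_epoch(polyp_items, nonpolyp_items, seed=None):
        """Return a shuffled epoch set with a 1:1 polyp / non-polyp ratio."""
        rng = random.Random(seed)
        subset = rng.sample(nonpolyp_items, k=len(polyp_items))  # re-drawn every epoch
        epoch = list(polyp_items) + subset
        rng.shuffle(epoch)
        return epoch

    # Toy usage: 5 polyp scenes vs. 50 non-polyp scenes -> 10 samples per epoch.
    polyps = [f"polyp_{i}" for i in range(5)]
    nonpolyps = [f"nonpolyp_{i}" for i in range(50)]
    for epoch_idx in range(3):
        print(epoch_idx, len(balanced_epoch(polyps, nonpolyps, seed=epoch_idx)))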

    DOI: 10.1049/htl.2019.0079

    Web of Science

    Scopus

    PubMed

    CiNii Research

  308. Realistic endoscopic image generation method using virtual-to-real image-domain translation

    Oda, M; Tanaka, K; Takabatake, H; Mori, M; Natori, H; Mori, K

    HEALTHCARE TECHNOLOGY LETTERS   6 巻 ( 6 ) 頁: 214 - 219   2019年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Healthcare Technology Letters  

    A realistic image generation method for visualisation in endoscopic simulation systems is proposed in this study. Endoscopic diagnosis and treatment are performed in many hospitals. To reduce complications related to endoscope insertions, endoscopic simulation systems are used for training or rehearsal of endoscope insertions. However, current simulation systems generate non-realistic virtual endoscopic images. To improve the value of the simulation systems, the reality of their generated images must be improved. The authors propose a realistic image generation method for endoscopic simulation systems. Virtual endoscopic images are generated by using a volume rendering method from a CT volume of a patient. They improve the reality of the virtual endoscopic images using a virtual-to-real image-domain translation technique. The image-domain translator is implemented as a fully convolutional network (FCN). They train the FCN by minimising a cycle consistency loss function. The FCN is trained using unpaired virtual and real endoscopic images. To obtain high-quality image-domain translation results, they perform image cleansing on the real endoscopic image set. They tested the shallow U-Net, U-Net, deep U-Net, and a U-Net with residual units as the image-domain translator. The deep U-Net and the U-Net with residual units generated quite realistic images.
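
    A minimal sketch (Python/PyTorch is assumed here; this is not the authors' implementation) of the cycle-consistency loss used when training an unpaired virtual-to-real translator: an image translated to the other domain and back should reconstruct the original. G_v2r and G_r2v are hypothetical translator networks; any tensor-to-tensor callables work.

    import torch

    def cycle_consistency_loss(G_v2r, G_r2v, virtual_img, real_img, lam=10.0):
        """L1 reconstruction error after a forward and a backward translation."""
        rec_virtual = G_r2v(G_v2r(virtual_img))   # virtual -> real -> virtual
        rec_real = G_v2r(G_r2v(real_img))         # real -> virtual -> real
        loss_v = torch.mean(torch.abs(rec_virtual - virtual_img))
        loss_r = torch.mean(torch.abs(rec_real - real_img))
        return lam * (loss_v + loss_r)

    # Toy usage with identity "translators" and random images (loss is zero).
    identity = lambda x: x
    v, r = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
    print(cycle_consistency_loss(identity, identity, v, r))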

    DOI: 10.1049/htl.2019.0071

    Web of Science

    Scopus

    PubMed

    CiNii Research

  309. Realistic Endoscopic Image Generation Method Using Virtual-to-real Image-domain Translation

    Masahiro Oda, Kiyohito Tanaka, Hirotsugu Takabatake, Masaki Mori, Hiroshi Natori, Kensaku Mori

    Healthcare Technology Letters   6 巻 ( 6 ) 頁: 214-219   2019年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)  

  310. Generative Adversarial Networks Showcase: Their Mechanisms and Radiological Applications

    Masahiro Oda, Hirohisa Oda, Kanako K. Kumamaru, Shigeki Aoki, Hiroshi Natori, Kensaku Mori, Masaki Mori, Hirotsugu Takabatake

    RSNA2019     頁: AI020-EB-X   2019年12月

     詳細を見る

    記述言語:英語  

  311. Automatic Quantitative Analysis of Kidney Tumor Using 3D Fully Convolutional Network

    Chenglong Wang, Masahiro Oda, Yuichiro Hayashi, Naoto Sassa, Tokunori Yamamoto, Kensaku Mori

    RSNA2019     頁: UR002-EB-X   2019年12月

     詳細を見る

    記述言語:英語  

  312. Technique for Improving Accuracy of Deep Learning-based Multi-Organ Segmentation from CT Volumes

    Chen Shen, Hirohisa Oda, MENG, Masahiro Oda, Holger R. Roth, Yuichiro Hayashi, Kensaku Mori, Takayuki Kitasaka, Kazunari Misawa

    RSNA2019     頁: N021-EC-X   2019年12月

     詳細を見る

    記述言語:英語  

  313. Micro Lung Cancer Analysis Based on Micro CT Imaging Using Generative Adversarial Network

    Kensaku Mori, Takayasu Moriya, Hirohisa Oda, MENG, Midori Mitarai, Masahiro Oda, Shota Nakamura, Takaaki Sugino, Holger R. Roth

    RSNA2019     頁: CH007-EC-X   2019年12月

     詳細を見る

    記述言語:英語  

  314. 大腸肛門病理学におけるAI利用の将来像

    森 健策

    日本大腸肛門病学会学術集会 抄録号     頁: A17   2019年10月

     詳細を見る

    記述言語:日本語  

  315. 機械学習を用いた医療支援

    森 健策

    第84回日本泌尿器科学会東部総会, プログラム 抄録集     頁: 123   2019年10月

     詳細を見る

    記述言語:日本語  

  316. 胸部領域AIの歴史と今後-歴史的研究を振り返りながら今後を展望する-

    森 健策

    臨床画像   35 巻 ( 10 ) 頁: 1139-1149   2019年10月

     詳細を見る

    記述言語:日本語  

  317. Spaciousness Filters for Non-contrast CT Volume Segmentation of the Intestine Region for Emergency Ileus Diagnosis

    Hirohisa Oda, Kohei Nishio, Takayuki Kitasaka, Benjamin Villard, Hizuru Amano, Kosuke Chiba, Akinari Hinoki, Hiroo Uchida, Kojiro Suzuki, Hayato Itoh, Masahiro Oda, Kensaku Mori

    MICCAI 2019   LNCS 11840 巻   頁: 104-114   2019年10月

     詳細を見る

    記述言語:英語  

  318. Unsupervised Segmentation of Micro-CT Images of Lung Cancer Specimen Using Deep Generative Models

    Takayasu Moriya, Hirohisa Oda, Midori Mitarai, Shota Nakamura, Holger R. Roth, Masahiro Oda, Kensaku Mori

    MICCAI 2019   LNCS 11769 巻   頁: 240-248   2019年10月

     詳細を見る

    記述言語:英語  

  319. Tubular Structure Segmentation Using Spatial Fully Connected Network With Radial Distance Loss for 3D Medical images

    Chenglong Wang, Yuichiro Hayashi, Masahiro Oda, Hayato Itoh, Takayuki Kitasaka, Alejandro Frangi, Kensaku Mori

    MICCAI 2019   LNCS 11769 巻   頁: 348-356   2019年10月

     詳細を見る

    記述言語:英語  

  320. Intelligent Image Synthesis to Attack a segmentation CNN Using Adversarial Learning

    Liang Chen, Paul Bentley, Kensaku Mori, Kazunari Misawa, Michitaka Fujiwara, Daniel Rueckert

    MICCAI 2019   LNCS 11827 巻   頁: 90-99   2019年10月

     詳細を見る

    記述言語:英語  

  321. 人工知能時代の医療を考える

    森 健策

    EAJ NEWS「AI×医療」特集号   ( 181 ) 頁: 6-8   2019年10月

     詳細を見る

    記述言語:日本語  

  322. Precise estimation of renal vascular dominant regions using spatially aware fully convolutional networks, tensor-cut and Voronoi diagrams

    Chenglong Wang, Holger R. Roth, Takayuki Kitasaka, Masahiro Oda, Yuichiro Hayashi, Yasushi Yoshino, Tokunori Yamamoto, Naoto Sassa, Momokazu Goto, Kensaku Mori

    Computerized Medical Imaging and Graphics   77 巻 ( 101642 ) 頁: 1-13   2019年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)  

    DOI: 10.1016/j.compmedimag.2019.101642

  323. Intelligent Image Synthesis to Attack a segmentation CNN Using Adversarial Learning

    Liang Chen, Paul Bentley, Kensaku Mori, Kazunari Misawa, Michitaka Fujiwara, Daniel Rueckert

    LNCS11827     頁: 90-99   2019年10月

     詳細を見る

    記述言語:英語  

  324. Spaciousness Filters for Non-contrast CT Volume Segmentation of the Intestine Region for Emergency Ileus Diagnosis

    Hirohisa Oda, Kohei Nishio, Takayuki Kitasaka, Benjamin Villard, Hizuru Amano, Kosuke Chiba, Akinari Hinoki, Hiroo Uchida, Kojiro Suzuki, Hayato Itoh, Masahiro Oda, Kensaku Mori

    LNCS 11840     頁: 104-114   2019年10月

     詳細を見る

    記述言語:英語  

  325. Tubular Structure Segmentation Using Spatial Fully Connected Network With Radial Distance Loss for 3D Medical images

    Chenglong Wang, Yuichiro Hayashi, Masahiro Oda, Hayato Itoh, Takayuki Kitasaka, Alejandro Frangi, Kensaku Mori

    LNCS11769     頁: 348-356   2019年10月

     詳細を見る

    記述言語:英語  

  326. Unsupervised Segmentation of Micro-CT Images of Lung Cancer Specimen Using Deep Generative Models

    Takayasu Moriya, Hirohisa Oda, Midori Mitarai, Shota Nakamura, Holger R. Roth, Masahiro Oda, Kensaku Mori

    LNCS11769     頁: 240-248   2019年10月

     詳細を見る

    記述言語:英語  

  327. Artificial Intelligence-assisted System Improves Endoscopic Identification of Colorectal Neoplasms 査読有り 国際誌

    Kudo S.e., Misawa M., Mori Y., Hotta K., Ohtsuka K., Ikematsu H., Saito Y., Takeda K., Nakamura H., Ichimasa K., Ishigaki T., Toyoshima N., Kudo T., Hayashi T., Wakamura K., Baba T., Ishida F., Inoue H., Itoh H., Oda M., Mori K.

    Clinical Gastroenterology and Hepatology   18 巻 ( 8 ) 頁: 1874 - 1881   2019年9月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Clinical Gastroenterology and Hepatology  

    Background & Aims: Precise optical diagnosis of colorectal polyps could improve the cost-effectiveness of colonoscopy and reduce polypectomy-related complications. However, it is difficult for community-based non-experts to obtain sufficient diagnostic performance. Artificial intelligence-based systems have been developed to analyze endoscopic images; they identify neoplasms with high accuracy and low interobserver variation. We performed a multi-center study to determine the diagnostic accuracy of EndoBRAIN, an artificial intelligence-based system that analyzes cell nuclei, crypt structure, and microvessels in endoscopic images, in identification of colon neoplasms. Methods: The EndoBRAIN system was initially trained using 69,142 endocytoscopic images, taken at 520-fold magnification, from patients with colorectal polyps who underwent endoscopy at 5 academic centers in Japan from October 2017 through March 2018. We performed a retrospective comparative analysis of the diagnostic performance of EndoBRAIN vs that of 30 endoscopists (20 trainees and 10 experts); the endoscopists assessed images from 100 cases produced via white-light microscopy, endocytoscopy with methylene blue staining, and endocytoscopy with narrow-band imaging. EndoBRAIN was used to assess endocytoscopic, but not white-light, images. The primary outcome was the accuracy of EndoBrain in distinguishing neoplasms from non-neoplasms, compared with that of endoscopists, using findings from pathology analysis as the reference standard. Results: In analysis of stained endocytoscopic images, EndoBRAIN identified colon lesions with 96.9% sensitivity (95% CI, 95.8%–97.8%), 100% specificity (95% CI, 99.6%–100%), 98% accuracy (95% CI, 97.3%–98.6%), a 100% positive-predictive value (95% CI, 99.8%–100%), and a 94.6% negative-predictive (95% CI, 92.7%–96.1%); these values were all significantly greater than those of the endoscopy trainees and experts. In analysis of narrow-band images, EndoBRAIN distinguished neoplastic from non-neoplastic lesions with 96.9% sensitivity (95% CI, 95.8–97.8), 94.3% specificity (95% CI, 92.3–95.9), 96.0% accuracy (95% CI, 95.1–96.8), a 96.9% positive-predictive value, (95% CI, 95.8–97.8), and a 94.3% negative-predictive value (95% CI, 92.3–95.9); these values were all significantly higher than those of the endoscopy trainees, sensitivity and negative-predictive value were significantly higher but the other values are comparable to those of the experts. Conclusions: EndoBRAIN accurately differentiated neoplastic from non-neoplastic lesions in stained endocytoscopic images and endocytoscopic narrow-band images, when pathology findings were used as the standard. This technology has been authorized for clinical use by the Japanese regulatory agency and should be used in endoscopic evaluation of small polyps more widespread clinical settings. UMIN clinical trial no: UMIN000028843.

    DOI: 10.1016/j.cgh.2019.09.009

    Scopus

    PubMed

  328. 人工知能による画像診断支援-どこまでできたか.そして,その先は?

    森 健策

    第46回 日本小児内視鏡研究会 プログラム・抄録集     頁: 11   2019年7月

     詳細を見る

    記述言語:日本語  

  329. Artificial intelligence and upper gastrointestinal endoscopy: Current status and future perspective 国際誌

    Mori Yuichi, Kudo Shin-ei, Mohmed Hussein E. N., Misawa Masashi, Ogata Noriyuki, Itoh Hayato, Oda Masahiro, Mori Kensaku

    DIGESTIVE ENDOSCOPY   31 巻 ( 4 ) 頁: 378 - 388   2019年7月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Digestive Endoscopy  

    With recent breakthroughs in artificial intelligence, computer-aided diagnosis (CAD) for upper gastrointestinal endoscopy is gaining increasing attention. Main research focuses in this field include automated identification of dysplasia in Barrett's esophagus and detection of early gastric cancers. By helping endoscopists avoid missing and mischaracterizing neoplastic change in both the esophagus and the stomach, these technologies potentially contribute to solving current limitations of gastroscopy. Currently, optical diagnosis of early-stage dysplasia related to Barrett's esophagus can be precisely achieved only by endoscopists proficient in advanced endoscopic imaging, and the false-negative rate for detecting gastric cancer is approximately 10%. Ideally, these novel technologies should work during real-time gastroscopy to provide on-site decision support for endoscopists regardless of their skill; however, previous studies of these topics remain ex vivo and experimental in design. Therefore, the feasibility, effectiveness, and safety of CAD for upper gastrointestinal endoscopy in clinical practice remain unknown, although a considerable number of pilot studies have been conducted by both engineers and medical doctors with excellent results. This review summarizes current publications relating to CAD for upper gastrointestinal endoscopy from the perspective of endoscopists and aims to indicate what is required for future research and implementation in clinical practice.

    DOI: 10.1111/den.13317

    Web of Science

    Scopus

    PubMed

    CiNii Research

  330. Artificial intelligence and upper gastrointestinal endoscopy: current status and future perspective 査読有り

    Yuichi Mori, Shinei Kudo, Hussein Ebaid Naeem Mohmed, Masashi Misawa, Noriyuki Ogata, Hayato Itoh, Masahiro Oda, Kensaku Mori

    Digestive endoscopy   31 巻 ( 4 ) 頁: 378-388   2019年7月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)  

    DOI: 10.1111/den.13317

  331. 大腸内視鏡(コロノスコピー)画像診断支援ソフトウェアの開発

    森 健策

    インナービジョン 2019年7月号   34 巻 ( 7 ) 頁: 45   2019年7月

     詳細を見る

    記述言語:日本語  

  332. 内視鏡検査手術における超音波画像の利用-マルチモダリティ画像統合-

    森 健策

    計測と制御   58 巻 ( 7 ) 頁: 541-544   2019年7月

     詳細を見る

    記述言語:日本語  

  333. 単眼腹腔鏡映像からの奥行き推定を利用した術具セグメンテーション

    鈴木 拓矢, 道満 恵介, 目加田 慶人, 三澤 一成, 森 健策

    第38回日本医用画像工学会大会予稿集     頁: OP1-11   2019年7月

     詳細を見る

    記述言語:日本語  

  334. 多元計算解剖学のその先にあるもの

    森 健策

    第38回日本医用画像工学会大会予稿集     頁: SY2-5   2019年7月

     詳細を見る

    記述言語:日本語  

  335. 3D fully convolutional network を用いた腎腫瘍の定量評価における初期検討

    王 成龍, 小田 昌宏, 林 雄一郎, 佐々 直人, 山本 徳則, 森 健策

    第38回日本医用画像工学会大会予稿集     頁: OP5-23   2019年7月

     詳細を見る

    記述言語:日本語  

  336. 深層学習を用いた非造影 CT 画像からの複数臓器領域の抽出に関する検討

    林 雄一郎, 申 忱, Roth Holger, 小田 昌宏, 三澤 一成, 森 健策

    第38回日本医用画像工学会大会予稿集     頁: OP5-14   2019年7月

     詳細を見る

    記述言語:日本語  

  337. グラフ畳み込みニューラルネットワークを用いた腹部動脈血管名自動命名の初期検討

    日比 裕太, 林 雄一郎, 北坂 孝幸, 伊東 隼人, 小田 昌宏, 三澤 一成, 森 健策

    第38回日本医用画像工学会大会予稿集     頁: OP5-11   2019年7月

     詳細を見る

    記述言語:日本語  

  338. 転移学習を用いた腹部 thick-slice CT 像における多臓器領域の自動抽出の初期検討

    申 忱, ロス ホルガー, 林 雄一郎, 小田 紘久, 小田 昌宏, 三澤 一成, 森 健策

    第38回日本医用画像工学会大会予稿集     頁: OP4-15   2019年7月

     詳細を見る

    記述言語:日本語  

  339. 深層学習を用いた腹腔鏡手術動画像の出血領域自動セグメンテーション

    山本 翔太, 小田 紘久, 林 雄一郎, 北坂 孝幸, 小田 昌宏, 伊藤 雅昭, 竹下 修由, 森 健策

    第38回日本医用画像工学会大会予稿     頁: OP4-13   2019年7月

     詳細を見る

    記述言語:日本語  

  340. 開腹手術映像における遮蔽物除去システムの VR 化

    北坂 孝幸, 伊藤 幹也, 駒形 和哉, 三澤 一成, 森 健策

    第38回日本医用画像工学会大会予稿集     頁: OP4-10   2019年7月

     詳細を見る

    記述言語:日本語  

  341. μ CT を用いた改良版 Cycle-GAN による臨床用 CT 像の超解像処理

    鄭 通, 小田 紘久, 守谷 享泰, 杉野 貴明, 中村 彰太, 小田 昌宏, 森 雅樹, 高畠 博嗣, 名取 博, 森 健策

    第38回日本医用画像工学会大会予稿集     頁: OP4-02   2019年7月

     詳細を見る

    記述言語:日本語  

  342. 小児腸閉塞患者の CT 像における CycleGAN を用いた電子洗浄手法の検討

    西尾 光平, 小田 紘久, 千馬 耕亮, 北坂 孝幸, 伊東 隼人, 小田 昌宏, 檜 顕成, 内田 広夫, 森 健策

    第38回日本医用画像工学会大会予稿集     頁: OP3-20   2019年7月

     詳細を見る

    記述言語:日本語  

  343. 腹腔鏡動画像からの Fully Convolutional Network による血管領域抽出

    盛満 慎太郎, 小澤 卓也, 北坂 孝幸, 林 雄一郎, 小田 昌宏, 伊藤 雅昭, 竹下 修由, 三澤 一成, 森 健策

    第38回日本医用画像工学会大会予稿集     頁: OP3-12   2019年7月

     詳細を見る

    記述言語:日本語  

  344. Generative Adversarial Frameworks を用いた腹部 CT 像における非造影像からの造影像の推定

    小田 昌宏, 隈丸 加奈子, 青木 茂樹, 森 健策

    第38回日本医用画像工学会大会予稿集     頁: OP3-07   2019年7月

     詳細を見る

    記述言語:日本語  

  345. AMED 大規模データベースを用いた CT 画像解析と病変検出への応用

    森 健策,小田 昌宏

    第38回日本医用画像工学会大会予稿集     頁: SY1-5   2019年7月

     詳細を見る

    記述言語:日本語  

  346. Polyp size classification in colorectal cancer using a Siamese network

        頁: OP2-14   2019年7月

     詳細を見る

    記述言語:英語  

  347. 表現学習と SVM による胃壁マイクロ CT 像の半教師ありセグメンテーション手法

    御手洗 翠, 小田 紘久, 杉野 貴明, 守谷 享泰, 伊東 隼人, 小田 昌宏, 小宮山 琢真, 森 雅樹, 高畠 博嗣, 名取 博, 森 健策

    第38回日本医用画像工学会大会予稿集     頁: OP2-08   2019年7月

     詳細を見る

    記述言語:日本語  

  348. 深層学習における学習データセット規模拡大に応じた分類精度向上に関する実験的検討 ~超拡大大腸内視鏡画像における腫瘍性病変分類に向けた特徴量抽出~

    伊東 隼人, 森 悠一, 三澤 将史, 小田 昌宏, 工藤 進英, 森 健策

    第38回日本医用画像工学会大会予稿集     頁: OP1-24   2019年7月

     詳細を見る

    記述言語:日本語  

  349. 少量のラベルデータを用いた学習によるイレウス症例 CT 像における拡張腸管の自動抽出

    小田 紘久, 西尾 光平, 北坂 孝幸, 天野 日出, 千馬 耕亮, 内田 広夫, 鈴木 耕次郞, 伊東 隼人, 小田 昌宏, 森 健策

    第38回日本医用画像工学会大会予稿集     頁: OP1-15   2019年7月

     詳細を見る

    記述言語:日本語  

  350. 3D fully convolutional network を用いた腎腫瘍の定量評価における初期検討

    王 成龍, 小田 昌宏, 林 雄一郎, 佐々 直人, 山本 徳則, 森 健策

    第38回日本医用画像工学会大会予稿集     頁: OP5-23   2019年7月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(その他学術会議資料等)  

  351. Artificial intelligence and upper gastrointestinal endoscopy: current status and future perspective 査読有り

    Yuichi Mori, Shinei Kudo, Hussein Ebaid Naeem Mohmed, Masashi Misawa, Noriyuki Ogata, Hayato Itoh, Masahiro Oda, Kensaku Mori

    Digestive endoscopy   31 巻 ( 4 ) 頁: 378-388   2019年7月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)  

  352. AMED 大規模データベースを用いた CT 画像解析と病変検出への応用

    森 健策, 小田 昌宏

    第38回日本医用画像工学会大会予稿集     頁: SY1-5   2019年7月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(その他学術会議資料等)  

  353. Artificial intelligence and colonoscopy: Current status and future perspectives 査読有り 国際誌

    Kudo, SE; Mori, Y; Misawa, M; Takeda, K; Kudo, T; Itoh, H; Oda, M; Mori, K

    DIGESTIVE ENDOSCOPY   31 巻 ( 4 ) 頁: 363 - 371   2019年7月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Digestive Endoscopy  

    Background and Aim: Application of artificial intelligence in medicine is now attracting substantial attention. In the field of gastrointestinal endoscopy, computer-aided diagnosis (CAD) for colonoscopy is the most investigated area, although it is still in the preclinical phase. Because colonoscopy is carried out by humans, it is inherently an imperfect procedure. CAD assistance is expected to improve its quality regarding automated polyp detection and characterization (i.e. predicting the polyp's pathology). It could help prevent endoscopists from missing polyps as well as provide a precise optical diagnosis for those detected. Ultimately, these functions that CAD provides could produce a higher adenoma detection rate and reduce the cost of polypectomy for hyperplastic polyps. Methods and Results: Currently, research on automated polyp detection has been limited to experimental assessments using an algorithm based on ex vivo videos or static images. Performance for clinical use was reported to have >90% sensitivity with acceptable specificity. In contrast, research on automated polyp characterization seems to surpass that for polyp detection. Prospective studies of in vivo use of artificial intelligence technologies have been reported by several groups, some of which showed a >90% negative predictive value for differentiating diminutive (≤5 mm) rectosigmoid adenomas, which exceeded the threshold for optical biopsy. Conclusion: We introduce the potential of using CAD for colonoscopy and describe the most recent conditions for regulatory approval for artificial intelligence-assisted medical devices.

    DOI: 10.1111/den.13340

    Web of Science

    Scopus

    PubMed

    CiNii Research

  354. 医用画像AI

    森 健策

    医療機器学 第94回日本医療機器学会大会・学術集会   89 巻 ( 2 ) 頁: 107-108   2019年6月

     詳細を見る

    記述言語:日本語  

  355. Evaluation on reconstruction accuracy of visual SLAM based bronchoscope tracking

    C. Wang, Masahiro Oda, Yuichiro Hayashi, Takayuki Kitasaka, Hayato Itoh, H. Honma, H. Takabatake, M. Mori, H. Natori, Kensaku Mori

    International Journal of Computer Assisted Radiology and Surgery CARS 2019   14 巻 ( 1 ) 頁: S24-25   2019年6月

     詳細を見る

    記述言語:英語  

  356. Automatic registration of unordered point clouds for the study of abdominal organs and lymph node deformations

    B. Villard, K. Tachi, K. Misawa, M. Oda, K. Mori

    Computer Assisted Radiology 33rd International Congress and Exhibition CARS 2019     頁: 0   2019年6月

     詳細を見る

    記述言語:英語  

  357. Semi-automated small intestine segmentation by fully convolutional networks and Hessian analysis

    K. Mori, H. Oda, T. Sugino, K. Nishio, K. Chiba, K. Oshima, T. Kitasaka, M. Oda, C. Shirota, A. Hinoki, H. Uchida

    International Journal of Computer Assisted Radiology and Surgery CARS 2019   14 巻 ( 1 ) 頁: s119-120   2019年6月

     詳細を見る

    記述言語:英語  

  358. 3D fully convolutional network-based head structure segmentation on multi-modal images from sparse annotation

    K. Mori, T. Sugino, H. Roth, M. Oda, T. Kin

    International Journal of Computer Assisted Radiology and Surgery CARS 2019   14 巻 ( 1 ) 頁: s120-121   2019年6月

     詳細を見る

    記述言語:英語  

  359. Non-contrast to contrasted abdominal CT volume regression using fully convolutional network

    M. Oda, K. K. Kumamaru, S. Aoki, K. Mori

    International Journal of Computer Assisted Radiology and Surgery CARS 2019   14 巻 ( 1 ) 頁: 103-104   2019年6月

     詳細を見る

    記述言語:英語  

  360. Computer-based virtual clinical trial for pulmonary function diagnosis with dynamic chest radiograph

    R. Tanaka, E. Samei, W. P. Segars, E. Abadi, H. Roth, H. Oda, K. Mori

    International Journal of Computer Assisted Radiology and Surgery CARS 2019   14 巻 ( 1 ) 頁: s20-21   2019年6月

     詳細を見る

    記述言語:英語  

  361. Optical coherence tomography classification of multiple retinal diseases using DenseNet

    C. Wang, M. Oda, Y. Itoh, K. Mori

    International Journal of Computer Assisted Radiology and Surgery CARS 2019   14 巻 ( 1 ) 頁: s69-70   2019年6月

     詳細を見る

    記述言語:英語  

  362. Artificial neural network for the prediction of colorectal lymph node metastasis

    B. Villard, H. Itoh, K. Ichimasa, Y. Mori, M. Misawa, M. Oda, S. Kudo, K. Mori

    International Journal of Computer Assisted Radiology and Surgery CARS 2019     頁: 0   2019年6月

     詳細を見る

    記述言語:英語  

  363. Evaluation of squeeze and excitation fully convolutional networks for multi-organ segmentation

    C. Shen, F. Milletari, H. Roth, M. Oda, B. Villard, Y. Hayashi, K. Misawa, K. Mori

    International Journal of Computer Assisted Radiology and Surgery CARS 2019   14 巻 ( 1 ) 頁: s29-30   2019年6月

     詳細を見る

    記述言語:英語  

  364. Automatic segmentation of attention-aware artery region in laparoscopic colorectal surgery

    S. Morimitsu, H. Itoh, T. Ozawa, H. Oda, T. Kitasaka, T. Sugino, Y. Hayashi, N. Takeshita, M. Ito, M. Oda, K. Mori

    International Journal of Computer Assisted Radiology and Surgery CARS 2019   14 巻 ( 1 ) 頁: s41-42   2019年6月

     詳細を見る

    記述言語:英語  

  365. Polyp-size determination method using short colonoscopic video clip information

    H. Itoh, Y. Mori, M. Misawa, M. Oda, S. E. Kudo, K. Mori

    International Journal of Computer Assisted Radiology and Surgery CARS 2019   14 巻 ( 1 ) 頁: s88-89   2019年6月

     詳細を見る

    記述言語:英語  

  366. 3D fully convolutional network-based head structure segmentation on multi-modal images from sparse annotation

    K. Mori, T. Sugino, H. Roth, M. Oda, T. Kin

    International Journal of Computer Assisted Radiology and Surgery CARS 2019   14 巻 ( 1 ) 頁: s120-121   2019年6月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  367. Automatic registration of unordered point clouds for the study of abdominal organs and lymph node deformations

    B. Villard, K. Tachi, K. Misawa, M. Oda, K. Mori

    Computer Assisted Radiology 33rd International Congress and Exhibition CARS 2019     頁: 0   2019年6月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  368. Computer-based virtual clinical trial for pulmonary function diagnosis with dynamic chest radiograph

    R. Tanaka, E. Samei, W. P. Segars, E. Abadi, H. Roth, H. Oda, K. Mori

    International Journal of Computer Assisted Radiology and Surgery CARS 2019   14 巻 ( 1 ) 頁: s20-21   2019年6月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  369. Automatic segmentation of attention-aware artery region in laparoscopic colorectal surgery

    S. Morimitsu, H. Itoh, T. Ozawa, H. Oda, T. Kitasaka, T. Sugino, Y. Hayashi, N. Takeshita, M. Ito, M. Oda, K. Mori

    International Journal of Computer Assisted Radiology and Surgery CARS 2019   14 巻 ( 1 ) 頁: s41-42   2019年6月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  370. Artificial neural network for the prediction of colorectal lymph node metastasis

    B. Villard, H. Itoh, K. Ichimasa, Y. Mori, M. Misawa, M. Oda, S. Kudo, K. Mori

    International Journal of Computer Assisted Radiology and Surgery CARS 2019     頁: 0   2019年6月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  371. Semi-automated small intestine segmentation by fully convolutional networks and Hessian analysis

    K. Mori, H. Oda, T. Sugino, K. Nishio, K. Chiba, K. Oshima, T. Kitasaka, M. Oda, C. Shirota, A. Hinoki, H. Uchida

    International Journal of Computer Assisted Radiology and Surgery CARS 2019   14 巻 ( 1 ) 頁: s119-120   2019年6月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  372. Polyp-size determination method using short colonoscopic video clip information

    H. Itoh, Y. Mori, M. Misawa, M. Oda, S. E. Kudo, K. Mori

    International Journal of Computer Assisted Radiology and Surgery CARS 2019   14 巻 ( 1 ) 頁: s88-89   2019年6月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  373. Optical coherence tomography classification of multiple retinal diseases using DenseNet

    C. Wang, M. Oda, Y. Itoh, K. Mori

    International Journal of Computer Assisted Radiology and Surgery CARS 2019   14 巻 ( 1 ) 頁: s69-70   2019年6月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  374. Non-contrast to contrasted abdominal CT volume regression using fully convolutional network

    M. Oda, K. K. Kumamaru, S. Aoki, K. Mori

    International Journal of Computer Assisted Radiology and Surgery CARS 2019   14 巻 ( 1 ) 頁: 103-104   2019年6月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  375. Evaluation on reconstruction accuracy of visual SLAM based bronchoscope tracking

    C. Wang, Masahiro Oda, Yuichiro Hayashi, Takayuki Kitasaka, Hayato Itoh, H. Honma, H. Takabatake, M. Mori, H. Natori, Kensaku Mori

    International Journal of Computer Assisted Radiology and Surgery CARS 2019   14 巻 ( 1 ) 頁: S24-25   2019年6月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  376. Evaluation of squeeze and excitation fully convolutional networks for multi-organ segmentation

    C. Shen, F. Milletari, H. Roth, M. Oda, B. Villard, Y. Hayashi, K. Misawa, K. Mori

    International Journal of Computer Assisted Radiology and Surgery CARS 2019   14 巻 ( 1 ) 頁: s29-30   2019年6月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  377. 深層学習を用いた脳CT像からの出血検出におけるデータ拡張とネットワーク構造の影響に関する考察

    魯 仲陽, 小田 昌宏, 鄭 通, 申 忱, 胡 涛, 渡谷 岳行, 阿部 修, 橋本 正弘, 陣崎 雅弘, 森 健策

    電子情報通信学会技術研究報告(MI)   119 巻 ( 51 ) 頁: 65-70   2019年5月

     詳細を見る

    記述言語:日本語  

  378. ビッグデータとAIの医療応用

    森 健策

    最新醫學   74 巻 ( 3 ) 頁: 20-28   2019年3月

     詳細を見る

    記述言語:日本語  

  379. Radiomics nomogram for predicting the malignant potential of gastrointestinal stromal tumours preoperatively 査読有り

    Tao Chen, Zhenyuan Ning, Lili Xu, Xingyu Feng, Shuai Han, Holger R. Roth, Wei Xiong, Xixi Zhao, Yanfeng Hu, Hao Liu, Jiang Yu, Yu Zhang, Yong Li, Yikai Xu, Kensaku Mori, Guoxin Li

    European radiology   29 巻 ( 3 ) 頁: 1074-1082   2019年3月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)  

  380. Fully automated diagnostic system with artificial intelligence using endocytoscopy to identify the presence of histologic inflammation associated with ulcerative colitis (with video) 査読有り

    Yasuharu Maeda, Shin-ei Kudo, Yuichi Mori, Masashi Misawa, Noriyuki Ogata, Seiko Sasanuma, Kunihiko Wakamura, Masahiro Oda, Kensaku Mori, Kazuo Ohtsuka

    Gastrointestinal endoscopy   89 巻 ( 2 ) 頁: 408-415   2019年2月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)  

  381. Polyp-size classification with RGB-D features for colonoscopy

    Hayato Itoh, Holger Roth, Yuichi Mori, Masashi Misawa, Masahiro Oda, Shin-Ei Kudo, Kensaku Mori

    Proceedings of SPIE 10950, Medical Imaging 2019     頁: 1095015-1-7   2019年2月

     詳細を見る

    記述言語:英語  

  382. Colonoscope tracking method based on shape estimation network

    Masahiro Oda, Holger R. Roth, Takayuki Kitasaka, Kazuhiro Furukawa, Yoshiki Hirooka, Nassir Navab, Kensaku Mori

    Proceedings of SPIE 10951, Medical Imaging 2019     頁: 109510Q-1-6   2019年2月

     詳細を見る

    記述言語:英語  

  383. Visual SLAM for bronchoscope tracking and bronchus reconstruction in bronchoscopic navigation

    Wang Cheng, Kensaku Mori, Masahiro Oda, Yuichiro Hayashi, Takayuki Kitasaka, Hirotoshi Honma, Hirotsugu Takabatake, Masaki Mori, Hiroshi Natori

    Proceedings of SPIE 10951, Medical Imaging 2019     頁: 109510A-1-7   2019年2月

     詳細を見る

    記述言語:英語  

  384. 3Dプリンティングの最新動向

    森 健策

    インナービジョン 2019年2月号   34 巻 ( 2 ) 頁: 40-41   2019年2月

     詳細を見る

    記述言語:日本語  

  385. Dynamic chest radiography for pulmonary function diagnosis: A validation study using 4D extended cardiac-torso (XCAT) phantom

    Rie Tanaka, Ehsan Samei, William Paul Segars, Ehsan Abadi, Holger Roth, Hirohisa Oda, Kensaku Mori

    Proceedings of SPIE 10948, Medical Imaging 2019     頁: 109483I-1-7   2019年2月

     詳細を見る

    記述言語:英語  

  386. Scanning, registration, and fiber estimation of rabbit hearts using micro-focus and refraction-contrast x-ray CT

    Hirohisa Oda, Holger R. Roth, Takaaki Sugino, Naoki Sunaguchi, Noriko Usami, Masahiro Oda, Daisuke Shimao, Shu Ichihara, Tetsuya Yuasa, Masami Ando, Toshiaki Akita, Yuji Narita, Kensaku Mori

    Proceedings of SPIE 10953, Medical Imaging 2019     頁: 109531I-1-12   2019年2月

     詳細を見る

    記述言語:英語  

  387. Lung segmentation based on a deep learning approach for dynamic chest radiography

    Yuki Kitahara, Rie Tanaka, Holger Roth, Hirohisa Oda, Kensaku Mori, Kazuo Kasahara, Isao Matsumoto

    Proceedings of SPIE 10950, Medical Imaging 2019     頁: 109503M-1-6   2019年2月

     詳細を見る

    記述言語:英語  

  388. Multiclass vertebral fracture classification using probability SVM with multi-feature selection

    Liyuan Zhang, Huamin Yang, Jiashi Zhao, Weili Shi, Yu Miao, Fei He, Wei He, Yanfang Li, Ke Zhang, Kensaku Mori, Zhengang Jiang

    Proceedings of SPIE 10950, Medical Imaging 2019     頁: 1095025-1-11   2019年2月

     詳細を見る

    記述言語:英語  

  389. Spinal curvature segmentation and location by transfer learning

    Jiashi Zhao, Zhengang Jiang, Kensaku Mori, Liyuan Zhang, Wei He, Weili Shi, Yu Miao, Fei Yan, Fei He

    Proceedings of SPIE 10950, Medical Imaging 2019     頁: 1095023-1-6   2019年2月

     詳細を見る

    記述言語:英語  

  390. Unsupervised segmentation of micro-CT images based on a hybrid of variational inference and adversarial learning

    Takayasu Moriya, Holger R. Roth, Shota Nakamura, Hirohisa Oda, Masahiro Oda, Kensaku Mori

    Proceedings of SPIE 10953, Medical Imaging 2019     頁: 109530L-1-8   2019年2月

     詳細を見る

    記述言語:英語  

  391. Weakly-supervised deep learning of interstitial lung disease types on CT images

    Chenglong Wang, Takayasu Moriya, Yuichiro Hayashi, Holger Roth, Le Lu, Masahiro Oda, Hirotugu Ohkubo, Kensaku Mori

    Proceedings of SPIE 10950, Medical Imaging 2019     頁: 109501H-1-7   2019年2月

     詳細を見る

    記述言語:英語  

  392. Multi-class abdominal organs segmentation with improved V-Nets

    Chen Shen, Fausto Milletari, Holger R. Roth, Hirohisa Oda, Masahiro Oda, Yuichiro Hayashi, Kazunari Misawa, Kensaku Mori

    Proceedings of SPIE 10949, Medical Imaging 2019     頁: 109490B-1-7   2019年2月

     詳細を見る

    記述言語:英語  

  393. 3Dプリンティングの最新動向

    森 健策

    インナービジョン 2019年2月号   34 巻 ( 2 ) 頁: 40-41   2019年2月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)  

  394. Colonoscope tracking method based on shape estimation network

    Masahiro Oda, Holger R. Roth, Takayuki Kitasaka, Kazuhiro Furukawa, Yoshiki Hirooka, Nassir Navab, Kensaku Mori

    Proceedings of SPIE 10951, Medical Imaging 2019     頁: 109510Q-1-6   2019年2月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  395. Automated hand eye calibration in laparoscope holding robot for robot assisted surgery

    Shuai Jiang, Yuichiro Hayashi, Masahiro Oda, Takayuki Kitasaka, Kazunari Misawa, Kensaku Mori

    IFMIA 2019     頁: 0   2019年1月

     詳細を見る

    記述言語:英語  

  396. Investigation of extracting the interlobular septa with combination of Hessian analysis and radial structure tensor in micro-CT volume

    Xiaotian Zhao, Hirohisa Oda, Shota Nakamura, Yuichiro Hayashi, Hayato Itoh, Masahiro Oda, Kensaku Mori

    IFMIA 2019     頁: 0   2019年1月

     詳細を見る

    記述言語:英語  

  397. Wavelength Dependence of Ultrahigh-Resolution Optical Coherence Tomography Using Supercontinuum for Biomedical Imaging 査読有り

    Norihiko Nishizawa, Hiroyuki Kawagoe, Masahito Yamanaka, Miyoko Matsushima, Kensaku Mori, Tsutomu Kawabe

    IEEE Journal of Selected Topics in Quantum Electronics   25 巻 ( 1 ) 頁: 7101115   2019年1月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)  

    DOI: 10.1109/JSTQE.2018.2854595

  398. 敵対的Dense U-netを用いた切除肺マイクロCT像の超解像

    鄭 通, 小田 紘久, Holger R. Roth, 小田 昌宏, 中村 彰太, 森 健策

    電子情報通信学会技術研究報告(MI)   118 巻 ( 412 ) 頁: 7-12   2019年1月

     詳細を見る

    記述言語:日本語  

  399. AI支援手術に向けた鏡視下手術画像の自動認識システムの開発

    北口 大地, 松崎 博貴, 渡部 嘉気, 青柳 吉博, 佐藤 大介, 巣籠 悠輔, 原 聖吾, 森 健策, 伊藤 雅昭

    第1回日本メディカルAI学会学術集会, 日本メディカルAI学会誌   1 巻   頁: 81   2019年1月

     詳細を見る

    記述言語:日本語  

  400. 切除肺のマイクロCT像における3D-DBPNを用いた超解像の検討

    鄭 通, 小田 紘久, 小田 昌宏, 守谷 享泰, 中村 彰太, 森 健策

    第11回呼吸機能イメージング研究会学術集会, プログラム抄録集     頁: 82   2019年1月

     詳細を見る

    記述言語:日本語  

  401. MICCAI2018参加報告

    小田 昌宏, 大竹 義人, 伊東 隼人, 杉野 貴明, 斉藤 篤, 古川 亮, 大西 峻, 井宮 淳, 森 健策

    電子情報通信学会技術研究報告(MI)   118 巻 ( 412 ) 頁: 221-228   2019年1月

     詳細を見る

    記述言語:日本語  

  402. 機械学習を用いた腹部動脈血管名自動命名におけるデータ拡張法の適用に関する検討

    鉄村 悠介, 林 雄一郎, 小田 昌宏, 北坂 孝幸, 三澤 一成, 森 健策

    電子情報通信学会技術研究報告(MI)   118 巻 ( 412 ) 頁: 191-196   2019年1月

     詳細を見る

    記述言語:日本語  

  403. CTからの腹部多臓器抽出におけるgroup normalizationの影響に関する考察

    申 忱, Fausto Milletari, Holger R. Roth, 小田 紘久, 小田 昌宏, 林 雄一郎, 三澤 一成, 森 健策

    電子情報通信学会技術研究報告(MI)   118 巻 ( 412 ) 頁: 143-148   2019年1月

     詳細を見る

    記述言語:日本語  

  404. 不均衡データからの特徴選択 超拡大内視鏡画像の病理類型分類に向けて

    伊東 隼人, 森 悠一, 三澤 将史, 小田 昌宏, 工藤 進英, 森 健策

    電子情報通信学会技術研究報告(MI)   118 巻 ( 412 ) 頁: 109-114   2019年1月

     詳細を見る

    記述言語:日本語  

  405. 経時CT像間の腹部臓器の変形を考慮したリンパ節自動対応付け手法の検討

    舘 高基, 小田 昌宏, 林 雄一郎, 伊東 隼人, 中村 嘉彦, 北坂 孝幸, 三澤 一成, 森 健策

    電子情報通信学会技術研究報告(MI)   118 巻 ( 412 ) 頁: 97-102   2019年1月

     詳細を見る

    記述言語:日本語  

  406. マルチモーダル画像を用いた深層学習ベースの頭部解剖構造抽出 少量画像データ学習における抽出精度検証

    杉野 貴明, Holger R. Roth, 小田 昌宏, 金 太一, 森 健策

    電子情報通信学会技術研究報告(MI)   118 巻 ( 412 ) 頁: 65-70   2019年1月

     詳細を見る

    記述言語:日本語  

  407. AI支援手術に向けた鏡視下手術画像の自動認識システムの開発

    北口 大地, 松崎 博貴, 渡部 嘉気, 青柳 吉博, 佐藤 大介, 巣籠 悠輔, 原 聖吾, 森 健策, 伊藤 雅昭

    第1回日本メディカルAI学会学術集会, 日本メディカルAI学会誌   1 巻   頁: 81   2019年1月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(その他学術会議資料等)  

  408. CTからの腹部多臓器抽出におけるgroup normalizationの影響に関する考察

    申 忱, Fausto Milletari, Holger R. Roth, 小田 紘久, 小田 昌宏, 林 雄一郎, 三澤 一成, 森 健策

    電子情報通信学会技術研究報告(MI)   118 巻 ( 412 ) 頁: 143-148   2019年1月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(その他学術会議資料等)  

  409. Automated hand eye calibration in laparoscope holding robot for robot assisted surgery

    Shuai Jiang, Yuichiro Hayashi, Masahiro Oda, Takayuki Kitasaka, Kazunari Misawa, Kensaku Mori

    IFMIA 2019     頁: 0   2019年1月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  410. AI搭載医療診断システム・機器のレギュレーションについて 査読有り

    鎮西清行, 清水昭伸, 森健策, 原田香奈子, 武田英明, 橋爪誠, 石塚真由美, 加藤進昌, 河盛隆造, 許俊鋭, 永田恭介, 山根隆志, 佐久間一郎, 大江和彦, 光石衛

    レギュラトリーサイエンス学会誌   9 巻 ( 1 ) 頁: 31 - 36   2019年

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)   出版者・発行元:一般社団法人 レギュラトリーサイエンス学会  

    近年, 人工知能〔Artificial Intelligence (AI)〕の研究の著しい発展に伴い, AI技術を搭載する医療機器とその他の医療システムの研究開発が急速に進められている. AI医療システムには開発者や審査において考慮すべき, 従来の医療システムにはない新しい要素がある. それは学習によりシステム性能などが変化しうるという可塑性, ブラックボックスとしての性質がもたらすシステムの振る舞いの予測困難さ, 高度な自律能に伴う医師らと患者の関係性の変化である. AI医療システムは, 今後急速に医療の場に導入されてくると考えられ, これらの特徴を十分考慮した研究開発や審査の体制を早急に構築しなければならない.

    DOI: 10.14982/rsmp.9.31

    CiNii Research

  411. 大腸内視鏡診断への人工知能応用:Endocytoを用いた診断支援システムの研究開発経験から

    森 悠一, 工藤 進英, 森 健策

    日本消化器病学会雑誌   115 巻 ( 12 ) 頁: 1030-1036   2018年12月

     詳細を見る

    記述言語:日本語  

  412. 大腸内視鏡治療誘導のためのRecurrent Neural Networkを用いた大腸内視鏡トラッキング手法の開発 査読有り

    小田 昌宏, Holger R. Roth, 北坂 孝幸, 古川 和宏, 宮原 良二, 廣岡 芳樹, Nassir Navab, 森 健策

    日本バーチャルリアリティ学会論文誌   23 巻 ( 4 ) 頁: 249-252   2018年12月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)  

    DOI: 10.18974/tvrsj.23.4_249

  413. Micro CT and Histopathological Image Registration Based on Deep-Learning Assisted Image Registration

    Kensaku Mori, Kai Nagara, Shota Nakamura, Hirohisa Oda, MENG, Holger R. Roth, Masahiro Oda

    RSNA2018     頁: CH218-ED-X   2018年11月

     詳細を見る

    記述言語:英語  

  414. U‒Net を用いた腹腔鏡動画像における出血領域検出に関する検討

    小澤 卓也, 小田 紘久, 伊東 隼人, 北坂 孝幸, Holger R. Roth, 小田 昌宏, 林 雄一郎, 三澤 一成, 伊藤 雅昭, 竹下 修由 , 森 健策

    日本コンピュータ外科学会誌 第27回日本コンピュータ外科学会大会特集号   20 巻 ( 4 18(6)-10 ) 頁: 370   2018年11月

     詳細を見る

    記述言語:日本語  

  415. ディープラーニングを用いた腹腔鏡映像からの腹腔鏡下胃切除術の手術工程解析の検討

    林 雄一郎, 杉野 貴明, 小田 昌宏, 三澤 一成, 森 健策

    日本コンピュータ外科学会誌 第27回日本コンピュータ外科学会大会特集号   20 巻 ( 4 18(ⅩⅠ)-10 ) 頁: 368-369   2018年11月

     詳細を見る

    記述言語:日本語  

  416. 腹腔鏡把持ロボットのための自動ハンドアイキャリブレーションの検討

    蒋 帥, 林 雄一郎, 小田 昌宏, 北坂 孝幸, 三澤 一成, 森 健策

    日本コンピュータ外科学会誌 第27回日本コンピュータ外科学会大会特集号   20 巻 ( 4 18(Ⅹ)-10 ) 頁: 359   2018年11月

     詳細を見る

    記述言語:日本語  

  417. イレウス診断支援システムにおける閉塞部位の誤検出修正及び改善ツールの構築

    西尾 光平, 小田 紘久, 千馬 耕亮, 北坂 孝幸, Holger R. Roth, 伊東 隼人, 林 雄一郎, 小田 昌宏, 檜 顕成, 内田 広夫, 森 健策

    日本コンピュータ外科学会誌 第27回日本コンピュータ外科学会大会特集号   20 巻 ( 4 18(Ⅹ)-3 ) 頁: 348-349   2018年11月

     詳細を見る

    記述言語:日本語  

  418. SLAM ベースのビジュアルトラッキングにおける隣接フレーム利用再構成手法の評価

    王 成, 小田 昌宏, 林 雄一郎, 北坂 孝幸, 本間 裕敏, 高畠 博嗣, 森 雅樹, 名取 博, 森 健策

    日本コンピュータ外科学会誌 第27回日本コンピュータ外科学会大会特集号   20 巻 ( 4 18(Ⅸ)-4 ) 頁: 342-343   2018年11月

     詳細を見る

    記述言語:日本語  

  419. ステレオ手術顕微鏡画像からの脳表の 3 次元形状復元と術前 MRI 画像との融合による脳神経外科手術支援の検討

    林 雄一郎, 藤井 正純, 柴田 睦実, Dilip Bhandari, 森 健策

    日本コンピュータ外科学会誌 第27回日本コンピュータ外科学会大会特集号   20 巻 ( 4 18(Ⅷ)-4 ) 頁: 334-335   2018年11月

     詳細を見る

    記述言語:日本語  

  420. CT 像より自動抽出された動脈領域に対応した機械学習に基づく腹部動脈血管名自動命名法

    鉄村 悠介, 林 雄一郎, 小田 昌宏, 北坂 孝幸, 三澤 一成, 森 健策

    日本コンピュータ外科学会誌 第27回日本コンピュータ外科学会大会特集号   20 巻 ( 4 18(Ⅶ)-4 ) 頁: 322-323   2018年11月

     詳細を見る

    記述言語:日本語  

  421. 生成モデルを利用したマイクロ CT 画像の半教師ありセグメンテーション

    守谷 享泰, Holger R. Roth, 中村 彰太, 小田 紘久, 小田 昌宏, 森 健策

    日本コンピュータ外科学会誌 第27回日本コンピュータ外科学会大会特集号   20 巻 ( 4 18(Ⅵ)-9 ) 頁: 312-313   2018年11月

     詳細を見る

    記述言語:日本語  

  422. 不均衡データセットからの学習データセット構築法 ―機械学習に基づく医用画像分類に向けて―

    伊東 隼人, 森 悠一, 三澤 将史, 小田 昌宏, 工藤 進英, 森 健策

    日本コンピュータ外科学会誌 第27回日本コンピュータ外科学会大会特集号   20 巻 ( 4 18(Ⅲ)-5 ) 頁: 261-262   2018年11月

     詳細を見る

    記述言語:日本語  

  423. 深層学習を用いた屈折 X 線 CT 画像からの眼球構造抽出 ―Sparse annotation データの学習法に関する検討―

    杉野 貴明, Holger R. Roth, 小田 昌宏, 砂口 尚輝, 島雄 大介, 森 健策

    日本コンピュータ外科学会誌 第27回日本コンピュータ外科学会大会特集号   20 巻 ( 4 18(Ⅲ)-4 ) 頁: 259-260   2018年11月

     詳細を見る

    記述言語:日本語  

  424. 深層学習を用いたマイクロ CT 画像の超解像に関する初期的検討

    鄭 通, Holger R. Roth, 小田 昌宏, 小田 紘久, 中村 彰太, 森 健策

    日本コンピュータ外科学会誌 第27回日本コンピュータ外科学会大会特集号   20 巻 ( 4 18(Ⅱ)-8 ) 頁: 252-253   2018年11月

     詳細を見る

    記述言語:日本語  

  425. 医用画像処理のための深層学習サンプルコード集 DMED

    小田 昌宏, 原 武史, 森 健策

    日本コンピュータ外科学会誌 第27回日本コンピュータ外科学会大会特集号   20 巻 ( 4 18(Ⅱ)-5 ) 頁: 248-249   2018年11月

     詳細を見る

    記述言語:日本語  

  426. 超拡大内視鏡におけるAI

    森 健策, 伊東 隼人, 三澤 将史, 森 悠一, 工藤 進英

    日本光学会年次学術講演会講演予稿集     頁: 226-227   2018年11月

     詳細を見る

    記述言語:日本語  

  427. Investigation on the condition of using adjacent reconstruction in visual bronchoscope tracking

      118 巻 ( 286 ) 頁: 27-32   2018年11月

     詳細を見る

    記述言語:英語  

  428. 病院内位置測位手法の検討

    山下 佳子, 大山 慎太郎, 大谷 智洋, 白鳥 義宗, 森 健策

    電子情報通信学会技術研究報告(MI), MICT2018-47   118 巻 ( 285 ) 頁: 41-44   2018年11月

     詳細を見る

    記述言語:日本語  

  429. Computer Assistance in Comparison of Kidney Function Variation Between Pre- and Post-nephrectomy

    Chenglong Wang, Masahiro Oda, Jun Nagayama, Yasushi Yoshino, Tokunori Yamamoto, Kensaku Mori

    RSNA2018     頁: UR007-EB-WEA   2018年11月

     詳細を見る

    記述言語:英語  

  430. 3D High-Resolution Microstructure Imaging of the Heart

    Hirohisa Oda, MENG, Holger R. Roth, Naoki Sunaguchi, Tetsuya Yuasa, Toshiaki Akita, Kensaku Mori, Daisuke Shimao, Shu Ichihara, Masami Ando, Noriko Usami, Masahiro Oda, Yuji Narita

    RSNA2018     頁: CA001-EC-X   2018年11月

     詳細を見る

    記述言語:英語  

  431. 3D High-Resolution Microstructure Imaging of the Heart

    Hirohisa Oda, MENG, Holger R. Roth, Naoki Sunaguchi, Tetsuya Yuasa, Toshiaki Akita, Kensaku Mori, Daisuke Shimao, Shu Ichihara, Masami Ando, Noriko Usami, Masahiro Oda, Yuji Narita

    RSNA2018     頁: CA001-EC-X   2018年11月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  432. Computer Assistance in Comparison of Kidney Function Variation Between Pre- and Post-nephrectomy

    Chenglong Wang, Masahiro Oda, Jun Nagayama, Yasushi Yoshino, Tokunori Yamamoto, Kensaku Mori

    RSNA2018     頁: UR007-EB-WEA   2018年11月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  433. CT 像より自動抽出された動脈領域に対応した機械学習に基づく腹部動脈血管名自動命名法

    鉄村 悠介, 林 雄一郎, 小田 昌宏, 北坂 孝幸, 三澤 一成, 森 健策

    日本コンピュータ外科学会誌 第27回日本コンピュータ外科学会大会特集号   20 巻 ( 4 18(Ⅶ)-4 ) 頁: 322-323   2018年11月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(その他学術会議資料等)  

  434. DRINet for Medical Image Segmentation 査読有り

    Liang Chen, Paul Bentley, Kensaku Mori, Kazunari Misawa, Michitaka Fujiwara, Daniel Rueckert

    IEEE Transactions on Medical Imaging   37 巻 ( 11 ) 頁: 2453-2462   2018年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)  

  435. Real-Time Use of Artificial Intelligence in Identification of Diminutive Polyps During Colonoscopy: A Prospective Study 査読有り

    Yuichi Mori, Shin-ei Kudo, Masashi Misawa, Yutaka Saito, Hiroaki Ikematsu, Kinichi Hotta, Kazuo Ohtsuka, Fumihiko Urushibara, Shinichi Kataoka, Yushi Ogawa, Yasuharu Maeda, Kenichi Takeda, Hiroki Nakamura, Katsuro Ichimasa, Toyoki Kudo, Takemasa Hayashi, Kunihiko Wakamura, Fumio Ishida, Haruhiro Inoue, Hayato Itoh, Masahiro Oda, Kensaku Mori

    Annals of Internal Medicine     2018年9月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)  

    DOI: 10.7326/M18-0249

  436. Application of three-dimensional print in minor hepatectomy following liver partition between anterior and posterior sectors 査読有り

    Tsuyoshi Igami, Yoshihiko Nakamura, Masahiro Oda, Hiroshi Tanaka, Motoi Nojiri, Tomoki Ebata, Yukihiro Yokoyama, Gen Sugawara, Takashi Mizuno, Junpei Yamaguchi, Kensaku Mori, Masato Nagino

    ANZ Journal of Surgery   88 巻 ( 9 ) 頁: 882-885   2018年9月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)  

  437. BESNet: Boundary-enhanced Segmentation of Cells in Histopathological Images

    Hirohisa Oda, Holger Roth, Kosuke Chiba, Jure Sokolic, Takayuki Kitasaka, Masahiro Oda, Akinari Hinoki, Hiroo Uchida, Julia A Schnabel, Kensaku Mori

    MICCAI 2018, LNCS 11071     頁: 228-236   2018年9月

     詳細を見る

    記述言語:英語  

  438. Towards Automated Colonoscopy Diagnosis: Binary Polyp Size Estimation via Unsupervised Depth Learning

    Hayato Itoh, Holger Roth, Le Lu, Masahiro Oda, Masashi Misawa, Yuichi Mori, Shin-ei Kudo, Kensaku Mori

    MICCAI 2018, LNCS 11071     頁: 176-184   2018年9月

     詳細を見る

    記述言語:英語  

  439. Colon Shape Estimation Method for Colonoscope Tracking using Recurrent Neural Networks

    Masahiro Oda, Holger Roth, Takayuki Kitasaka, Kazuhiro Furukawa, Ryoji Miyahara, Yoshiki Hirooka, Hidemi Goto, Nassir Navab, Kensaku Mori

    MICCAI 2018, LNCS 11073     頁: 176-184   2018年9月

     詳細を見る

    記述言語:英語  

  440. A Multi-scale Pyramid of 3D Fully Convolutional Networks for Abdominal Multiorgan Segmentation

    Holger Roth, Chen Shen, Hirohisa Oda, Takaaki Sugino, Masahiro Oda, Yuichiro Hayashi, Kazunari Misawa, Kensaku Mori

    MICCAI 2018, LNCS 11073     頁: 417-425   2018年9月

     詳細を見る

    記述言語:英語  

  441. Fully Convolutional Network-based Eyeball Segmentation from Sparse Annotation for Eye Surgery Simulation Model

    Takaaki Sugino, Holger R. Roth, Masahiro Oda, Kensaku Mori

    International Workshop on Bio-Imaging and Visualization for Patient-Customized Simulations, BIVPCS 2018, LNCS 11042     頁: 118-126   2018年9月

     詳細を見る

    記述言語:英語  

  442. Fully automated diagnostic system with artificial intelligence using endocytoscopy to identify the presence of histologic inflammation associated with ulcerative colitis (with video) 査読有り

    Yasuharu Maeda, Shin-ei Kudo, Yuichi Mori, Masashi Misawa, Noriyuki Ogata, Seiko Sasanuma, Kunihiko Wakamura, Masahiro Oda, Kensaku Mori, Kazuo Ohtsuka

    Gastrointestinal endoscopy     2018年9月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)  

    DOI: 10.1016/j.gie.2018.09.024

  443. BESNet: Boundary-enhanced Segmentation of Cells in Histopathological Images

    Hirohisa Oda, Holger Roth, Kosuke Chiba, Jure Sokolic, Takayuki Kitasaka, Masahiro Oda, Akinari Hinoki, Hiroo Uchida, Julia A Schnabel, Kensaku Mori

    MICCAI 2018, LNCS 11071     頁: 228-236   2018年9月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  444. A Multi-scale Pyramid of 3D Fully Convolutional Networks for Abdominal Multiorgan Segmentation

    Holger Roth, Chen Shen, Hirohisa Oda, Takaaki Sugino, Masahiro Oda, Yuichiro Hayashi, Kazunari Misawa, Kensaku Mori

    MICCAI 2018, LNCS 11073     頁: 417-425   2018年9月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  445. Application of three-dimensional print in minor hepatectomy following liver partition between anterior and posterior sectors 査読有り

    Tsuyoshi Igami, Yoshihiko Nakamura, Masahiro Oda, Hiroshi Tanaka, Motoi Nojiri, Tomoki Ebata, Yukihiro Yokoyama, Gen Sugawara, Takashi Mizuno, Junpei Yamaguchi, Kensaku Mori, Masato Nagino

    ANZ Journal of Surgery   88 巻 ( 9 ) 頁: 882-885   2018年9月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)  

  446. Anatomical location classification of gastroscopic images using DenseNet trained from Cyclical Learning Rate

    Qier Meng, Kiyohito Tanaka, Shin'ichi Satoh, Masaru Kitsuregawa, Yusuke Kurose, Tatsuya Harada, Hideaki Hayashi, Ryoma Bise, Seiichi Uchida, Masahiro Oda, Kensaku Mori

        頁: PS1-51   2018年8月

     詳細を見る

    記述言語:英語  

  447. Sparse annotationによる深層学習ベースの解剖構造抽出:屈折X線CT像からの精密な眼球セグメンテーション

    杉野貴明,Holger R. Roth,小田昌宏,砂口尚輝,島雄大介,市原周,湯浅哲也,安藤正海,森健策

    MIRU2018     頁: PS3-11   2018年8月

     詳細を見る

    記述言語:日本語  

  448. 超拡大内視鏡における病理画像分類のための特徴選択法

    伊東 隼人, 森 悠一, 三澤 将史, 小田 昌宏, 工藤 進英, 森 健策

    MIRU2018     頁: PS2-17   2018年8月

     詳細を見る

    記述言語:日本語  

  449. Radiomics nomogram for predicting the malignant potential of gastrointestinal stromal tumours preoperatively 査読有り

    Tao Chen, Zhenyuan Ning, Lili Xu, Xingyu Feng, Shuai Han, Holger R. Roth, Wei Xiong, Xixi Zhao, Yanfeng Hu, Hao Liu, Jiang Yu, Yu Zhang, Yong Li, Yikai Xu, Kensaku Mori, Guoxin Li

    European radiology     頁: 1-9   2018年8月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)  

    DOI: 10.1007/s00330-018-5629-2

  450. Anatomical location classification of gastroscopic images using DenseNet trained from Cyclical Learning Rate

    Qier Meng, Kiyohito Tanaka, Shin'ichi Satoh, Masaru Kitsuregawa, Yusuke Kurose, Tatsuya Harada, Hideaki Hayashi, Ryoma Bise, Seiichi Uchida, Masahiro Oda, Kensaku Mori

    MIRU2018     頁: PS1-51   2018年8月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  451. Attention U-Net: Learning Where to Look for the Pancreas

    Ozan Oktay, Jo Schlemper, Loic Le Folgoc, Matthew Lee, Mattias Heinrich, Kazunari Misawa, Kensaku Mori, Steven McDonagh, Nils Y. Hammerla, Bernhard Kainz, Ben Glocker, Daniel Rueckert

    Medical Imaging with Deep Learning MIDL2018     頁: 00   2018年7月

     詳細を見る

    記述言語:英語  

  452. マイクロCT画像からのRSTを用いた小葉壁抽出手法の検討

    趙 笑添, Holger R. Roth, 中村彰太, 小田紘久, 林 雄一郎, 守谷享泰, 長柄 快, 小田昌宏, 森 健策

    電子情報通信学会技術研究報告(MI)   118 巻 ( 150 ) 頁: 11-16   2018年7月

     詳細を見る

    記述言語:日本語  

  453. 腹腔鏡下手術のためのVR 手術再観察システムの開発

    鈴木 拓矢,道満 恵介,目加田 慶人,三澤 一成,森 健策

    第37回日本医用画像工学会大会予稿集     頁: OP8-4   2018年7月

     詳細を見る

    記述言語:日本語  

  454. 脳神経外科手術支援のための手術顕微鏡画像からの脳表の3次元形状復元に関する検討

    林 雄一郎, 柴田 睦実, 藤井 正純, 森 健策

    第37回日本医用画像工学会大会予稿集     頁: OP8-1   2018年7月

     詳細を見る

    記述言語:日本語  

  455. Fully convolutional networkを用いた小構造物セグメンテーション方法の検討及び腹部動脈への適用

    小田 昌宏, Holger R. Roth, 北坂 孝幸, 三澤 一成, 藤原 道隆, 森 健策

    第37回日本医用画像工学会大会予稿集     頁: OP7-5   2018年7月

     詳細を見る

    記述言語:日本語  

  456. 胃の変形情報を利用した経時リンパ節の自動対応付け手法の精度向上に関する研究

    舘 高基, 小田 昌宏, 林 雄一郎, 中村 嘉彦, 北坂 孝幸, 三澤 一成, 森 健策

    第37回日本医用画像工学会大会予稿集     頁: OP4-2   2018年7月

     詳細を見る

    記述言語:日本語  

  457. ウサギ心臓の屈折CT像における線維配向の可視化ならびに評価

    小田 紘久, Holger R. Roth, 砂口 尚輝, 宇佐美 紀子, 小田 昌宏, 島雄 大介, 市原 周, 湯浅 哲也, 安藤 正海, 秋田 利明, 成田 裕司, 森 健策

    第37回日本医用画像工学会大会予稿集     頁: OP1-8   2018年7月

     詳細を見る

    記述言語:日本語  

  458. 機械学習による内視鏡動画インスタンスセグメンテーションのための手動アノテーションツールの開発

    小澤 卓也, 小田 紘久, 伊東 隼人, 北坂 孝幸, Holger R. Roth, 小田 昌宏, 林 雄一郎, 三澤 一成, 伊藤 雅昭, 竹下 修由, 森 健策

    第37回日本医用画像工学会大会予稿集     頁: OP1-7   2018年7月

     詳細を見る

    記述言語:日本語  

  459. Fast Marching Algorithmに基づく小児CT像からの腸管閉塞部位検出手法

    西尾 光平, 小田 紘久, 千馬 耕亮, 北坂 孝幸, Holger Roth, 伊東 隼人, 小田 昌宏, 檜 顕成, 内田 広夫, 森 健策

    第37回日本医用画像工学会大会予稿集     頁: OP1-6   2018年7月

     詳細を見る

    記述言語:日本語  

  460. Fully convolutional networkを用いた少量画像データ学習からの頭部解剖構造抽出

    杉野 貴明,Holger R. Roth,小田 昌宏,庄野 直之,金 太一,森 健策

    第37回日本医用画像工学会大会予稿集     頁: OP1-1   2018年7月

     詳細を見る

    記述言語:日本語  

  461. 隣接復元を用いたSLAMベースの気管支鏡追跡の改善

    王 成, 小田 昌宏, 林 雄一郎, 本間 裕敏, 高畑 博嗣, 森 雅樹, 名取 博, 森 健策

    第37回日本医用画像工学会大会予稿集     頁: OP13-6   2018年7月

     詳細を見る

    記述言語:日本語  

  462. 教師なし深度推定を利用したRGB-D 特徴抽出に基づくポリープのトリナリサイズ推定

    伊東隼人, Holger Roth, 三澤将史, 森悠一, 小田昌宏, 工藤進英, 森健策

    第37回日本医用画像工学会大会予稿集     頁: OP14-4   2018年7月

     詳細を見る

    記述言語:日本語  

  463. 機械学習を用いた腹部動脈血管名自動命名における臓器情報および多血管相互関係利用方法の検討

    鉄村 悠介, Holger Roth, 林 雄一郎, 小田 昌宏, 三澤 一成, 森 健策

    第37回日本医用画像工学会大会予稿集     頁: OP14-2   2018年7月

     詳細を見る

    記述言語:日本語  

  464. Deformation matching of laparoscopic gastrectomy Navigation based on finite element analysis

    T. Chen, G. Wei, W. Shi, Y. Hu, J. Yu, Z. Jiang, K. Mori

    International Journal of Computer Assisted Radiology and Surgery   13 巻 ( 1 ) 頁: s67-68   2018年6月

     詳細を見る

    記述言語:英語  

  465. Polyp detection in colonoscopic videos by using spatio-temporal feature

    Hayato Itoh, Holger R. Roth, Masashi Misawa, Yuichi Mori, Masahiro Oda, Shin-ei Kudo, Kensaku Mori

    International Journal of Computer Assisted Radiology and Surgery   13 巻 ( 1 ) 頁: s97-98   2018年6月

     詳細を見る

    記述言語:英語  

  466. Unsupervised deep learning based registration for aligning micro CT and histology images

    K. Nagara, S. Nakamura, H. Roth, M. Oda, K. Mori

    International Journal of Computer Assisted Radiology and Surgery   13 巻 ( 1 ) 頁: s155-157   2018年6月

     詳細を見る

    記述言語:英語  

  467. Micro-focus X-ray CT of the heart: A comparison with X-ray refraction-contrast CT

    Hirohisa Oda, Holger R. Roth, Naoki Sunaguchi, Daisuke Shimao, Takaaki Sugino, Masahiro Oda, Toshiaki Akita, Yuji Narita, Shu Ichihara, Tetsuya Yuasa, Masami Ando, Kensaku Mori

    International Journal of Computer Assisted Radiology and Surgery   13 巻 ( 1 ) 頁: s140-142   2018年6月

     詳細を見る

    記述言語:英語  

  468. Deformation matching of laparoscopic gastrectomy Navigation based on finite element analysis

    T. Chen, G. Wei, W. Shi, Y. Hu, J. Yu, Z. Jiang, K. Mori

    International Journal of Computer Assisted Radiology and Surgery   13 巻 ( 1 ) 頁: s67-68   2018年6月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  469. Unsupervised deep learning based registration for aligning micro CT and histology images

    K. Nagara, S. Nakamura, H. Roth, M. Oda, K. Mori

    International Journal of Computer Assisted Radiology and Surgery   13 巻 ( 1 ) 頁: s155-157   2018年6月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  470. Micro-focus X-ray CT of the heart: A comparison with X-ray refraction-contrast CT

    Hirohisa Oda, Holger R. Roth, Naoki Sunaguchi, Daisuke Shimao, Takaaki Sugino, Masahiro Oda, Toshiaki Akita, Yuji Narita, Shu Ichihara, Tetsuya Yuasa, Masami Ando, Kensaku Mori

    International Journal of Computer Assisted Radiology and Surgery   13 巻 ( 1 ) 頁: s140-142   2018年6月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  471. Polyp detection in colonoscopic videos by using spatio-temporal feature

    Hayato Itoh, Holger R. Roth, Masashi Misawa, Yuichi Mori, Masahiro Oda, Shin-ei Kudo, Kensaku Mori

    International Journal of Computer Assisted Radiology and Surgery   13 巻 ( 1 ) 頁: s97-98   2018年6月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  472. Improvement of robustness of SLAM-based bronchoscope tracking by posture guided feature matching

    Cheng Wang, Masahiro Oda, Yuichiro Hayashi, Hirotoshi Honma, Hirotsugu Takabatake, Masaki Mori, Hiroshi Natori, Kensaku Mori

    International Journal of Computer Assisted Radiology and Surgery   13 巻 ( 1 ) 頁: s11-12   2018年6月

     詳細を見る

    記述言語:英語  

  473. Semi-supervised spherical K-means for segmenting idiopathic interstitial pneumonia from chest CT images

    C. Wang, T. Moriya, Y. Hayashi, M. Oda, H. Ohkubo, K. Mori

    International Journal of Computer Assisted Radiology and Surgery   13 巻 ( 1 ) 頁: s27-28   2018年6月

     詳細を見る

    記述言語:英語  

  474. Evaluation of 3D fully convolutional networks for multi-class organ segmentation in contrast-enhanced CT

    Chen Shen, Holger R. Roth, Hirohisa Oda, Masahiro Oda, Yuichiro Hayashi, Kazunari Misawa, Tadaaki Miyamoto, and Kensaku Mori

    International Journal of Computer Assisted Radiology and Surgery   13 巻 ( 1 ) 頁: s21-22   2018年6月

     詳細を見る

    記述言語:英語  

  475. Abdominal artery segmentation from CT volumes using fully convolutional network for small artery segmentation

    Masahiro Oda, Takayuki Kitasaka, Kazunari Misawa, Michitaka Fujiwara, Kensaku Mori

    International Journal of Computer Assisted Radiology and Surgery   13 巻 ( 1 ) 頁: s20-21   2018年6月

     詳細を見る

    記述言語:英語  

  476. Auto-context 3D fully convolutional networks for multi-scale semantic segmentation of abdominal CT volumes

    K. Mori, H. Roth, C. Shen, H. Oda, T. Sugino, M. Oda, Y. Hayashi, K. Misawa

    International Journal of Computer Assisted Radiology and Surgery   13 巻 ( 1 ) 頁: s18-19   2018年6月

     詳細を見る

    記述言語:英語  

  477. Unsupervised 3D micro-CT image segmentation based on a hybrid of VAE and GAN

    T. Moriya, H. Roth, S. Nakamura, H. Oda, K. Nagara, M. Oda, K. Mori

    International Journal of Computer Assisted Radiology and Surgery   13 巻 ( 1 ) 頁: s15-17   2018年6月

     詳細を見る

    記述言語:英語  

  478. Automated ganglion cell detection using fully convolutional networks and evaluation under different training losses

    Hirohisa Oda, Kosuke Chiba, Holger R. Roth, Takayuki Kitasaka, Masahiro Oda, Akinari Hinoki, Hiroo Uchida, Kensaku Mori

    International Journal of Computer Assisted Radiology and Surgery   13 巻 ( 1 ) 頁: s104-106   2018年6月

     詳細を見る

    記述言語:英語  

  479. Eye structure segmentation on micro-CT images using 3D fully convolutional network with sparsely-annotated training data

    T. Sugino, H. Roth, M. Oda, S. Omata, S. Sakuma, F. Arai, K. Mori

    International Journal of Computer Assisted Radiology and Surgery   13 巻 ( 1 ) 頁: s182-184   2018年6月

     詳細を見る

    記述言語:英語  

  480. Abdominal artery segmentation from CT volumes using fully convolutional network for small artery segmentation

    Masahiro Oda, Takayuki Kitasaka, Kazunari Misawa, Michitaka Fujiwara, Kensaku Mori

    International Journal of Computer Assisted Radiology and Surgery   13 巻 ( 1 ) 頁: s20-21   2018年6月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  481. Auto-context 3D fully convolutional networks for multi-scale semantic segmentation of abdominal CT volumes

    K. Mori, H. Roth, C. Shen, H. Oda, T. Sugino, M. Oda, Y. Hayashi, K. Misawa

    International Journal of Computer Assisted Radiology and Surgery   13 巻 ( 1 ) 頁: s18-19   2018年6月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  482. Automated ganglion cell detection using fully convolutional networks and evaluation under different training losses

    Hirohisa Oda, Kosuke Chiba, Holger R. Roth, Takayuki Kitasaka, Masahiro Oda, Akinari Hinoki, Hiroo Uchida, Kensaku Mori

    International Journal of Computer Assisted Radiology and Surgery   13 巻 ( 1 ) 頁: s104-106   2018年6月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  483. Evaluation of 3D fully convolutional networks for multi-class organ segmentation in contrast-enhanced CT

    Chen Shen, Holger R. Roth, Hirohisa Oda, Masahiro Oda, Yuichiro Hayashi, Kazunari Misawa, Tadaaki Miyamoto, Kensaku Mori

    International Journal of Computer Assisted Radiology and Surgery   13 巻 ( 1 ) 頁: s21-22   2018年6月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  484. Unsupervised 3D micro-CT image segmentation based on a hybrid of VAE and GAN

    T. Moriya, H. Roth, S. Nakamura, H. Oda, K. Nagara, M. Oda, K. Mori

    International Journal of Computer Assisted Radiology and Surgery   13 巻 ( 1 ) 頁: s15-17   2018年6月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  485. Semi-supervised spherical K-means for segmenting idiopathic interstitial pneumonia from chest CT images

    C. Wang, T. Moriya, Y. Hayashi, M. Oda, H. Ohkubo, K. Mori

    International Journal of Computer Assisted Radiology and Surgery   13 巻 ( 1 ) 頁: s27-28   2018年6月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  486. Improvement of robustness of SLAM-based bronchoscope tracking by posture guided feature matching

    Cheng Wang, Masahiro Oda, Yuichiro Hayashi, Hirotoshi Honma, Hirotsugu Takabatake, Masaki Mori, Hiroshi Natori, Kensaku Mori

    International Journal of Computer Assisted Radiology and Surgery   13 巻 ( 1 ) 頁: s11-12   2018年6月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  487. Eye structure segmentation on micro-CT images using 3D fully convolutional network with sparsely-annotated training data

    T. Sugino, H. Roth, M. Oda, S. Omata, S. Sakuma, F. Arai, K. Mori

    International Journal of Computer Assisted Radiology and Surgery   13 巻 ( 1 ) 頁: s182-184   2018年6月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  488. An application of cascaded 3D fully convolutional networks for medical image segmentation 査読有り

    Roth Holger R., Oda Hirohisa, Zhou Xiangrong, Shimizu Natsuki, Yang Ying, Hayashi Yuichiro, Oda Masahiro, Fujiwara Michitaka, Misawa Kazunari, Mori Kensaku

    COMPUTERIZED MEDICAL IMAGING AND GRAPHICS   66 巻   頁: 90 - 99   2018年6月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Computerized Medical Imaging and Graphics  

    Recent advances in 3D fully convolutional networks (FCN) have made it feasible to produce dense voxel-wise predictions of volumetric images. In this work, we show that a multi-class 3D FCN trained on manually labeled CT scans of several anatomical structures (ranging from the large organs to thin vessels) can achieve competitive segmentation results, while avoiding the need for handcrafting features or training class-specific models. To this end, we propose a two-stage, coarse-to-fine approach that will first use a 3D FCN to roughly define a candidate region, which will then be used as input to a second 3D FCN. This reduces the number of voxels the second FCN has to classify to ∼10% and allows it to focus on more detailed segmentation of the organs and vessels. We utilize training and validation sets consisting of 331 clinical CT images and test our models on a completely unseen data collection acquired at a different hospital that includes 150 CT scans, targeting three anatomical organs (liver, spleen, and pancreas). In challenging organs such as the pancreas, our cascaded approach improves the mean Dice score from 68.5 to 82.2%, achieving the highest reported average score on this dataset. We compare with a 2D FCN method on a separate dataset of 240 CT scans with 18 classes and achieve a significantly higher performance in small organs and vessels. Furthermore, we explore fine-tuning our models to different datasets. Our experiments illustrate the promise and robustness of current 3D FCN based semantic segmentation of medical images, achieving state-of-the-art results.

    DOI: 10.1016/j.compmedimag.2018.03.001

    Web of Science

    Scopus

    PubMed

  489. Artificial Intelligence-Assisted Polyp Detection for Colonoscopy: Initial Experience 査読有り

    Masashi Misawa, Shin-ei Kudo, Yuichi Mori, Tomonari Cho, Shinichi Kataoka, Akihiro Yamauchi, Yushi Ogawa, Yasuharu Maeda, Kenichi Takeda, Katsuro Ichimasa, Hiroki Nakamura, Yusuke Yagawa, Naoya Toyoshima, Noriyuki Ogata, Toyoki Kudo, Tomokazu Hisayuki, Takemasa Hayashi, Kunihiko Wakamura, Toshiyuki Baba, Fumio Ishida, Hayato Ito, Roth Holger, Kensaku Mori

    Gastroenterology   154 巻 ( 8 ) 頁: 2027-2029   2018年6月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)  

    DOI: 10.1053/j.gastro.2018.04.003

  490. Port placement planning method for assistant surgeon in laparoscopic gastrectomy

    Y. Hayashi, K. Misawa, K. Mori

    International Journal of Computer Assisted Radiology and Surgery   13 巻 ( 1 ) 頁: s231-232   2018年6月

     詳細を見る

    記述言語:英語  

  491. Port placement planning method for assistant surgeon in laparoscopic gastrectomy

    Y. Hayashi, K. Misawa, K. Mori

    International Journal of Computer Assisted Radiology and Surgery   13 巻 ( 1 ) 頁: s231-232   2018年6月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  492. Regulatory Science on AI-based Medical Devices and Systems 査読有り

    Kiyoyuki Chinzei, Akinobu Shimizu, Kensaku Mori, Kanako Harada, Hideaki Takeda, Makoto Hashizume, Mayumi Ishizuka, Nobumasa Kato, Ryuzo Kawamori, Shunei Kyo, Kyosuke Nagata, Takashi Yamane, Ichiro Sakuma, Kazuhiko Ohe, Mamoru Mitsuishi

    Advanced Biomedical Engineering   7 巻   頁: 118-123   2018年5月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)  

    DOI: 10.14326/abe.7.118

  493. デスクトップ型マイクロCTによる微細解剖構造イメージング

    森 健策

    Medical Imaging Technology   36 巻 ( 3 ) 頁: 127-132   2018年5月

     詳細を見る

    記述言語:日本語  

  494. 特集/マイクロ解剖学のための微細解剖構造イメージング

    森 健策

    Medical Imaging Technology   36 巻 ( 3 ) 頁: 105-106   2018年5月

     詳細を見る

    記述言語:日本語  

  495. DRINet for Medical Image Segmentation 査読有り

    Liang Chen, Paul Bentley, Kensaku Mori, Kazunari Misawa, Michitaka Fujiwara, Daniel Rueckert

    IEEE Transaction on Medical Imaging     2018年5月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)  

    DOI: 10.1109/TMI.2018.2835303

  496. Potential of artificial intelligence-assisted colonoscopy using an endocytoscope (with video) 査読有り

    Yuichi Mori, Shin-ei Kudo, Kensaku Mori

    Digestive Endoscopy   30 巻 ( S1 ) 頁: 52-53   2018年4月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)  

    DOI: 10.1111/den.13005

  497. Cascaded 3D Fully Convolutional Networks for Medical Image Segmentation

    Holger Roth, Hirohisa Oda, Xiangrong Zhou, Natsuki Shimizu, Ying Yang, Chen Shen, Yuichiro Hayashi, Masahiro Oda, Michitaka Fujiwara, Kazunari Misawa, Kensaku Mori

    GTC2018     頁: S8532   2018年3月

     詳細を見る

    記述言語:英語  

  498. Cascaded 3D Fully Convolutional Networks for Medical Image Segmentation

    Holger Roth, Hirohisa Oda, Xiangrong Zhou, Natsuki Shimizu, Ying Yang, Chen Shen, Yuichiro Hayashi, Masahiro Oda, Michitaka Fujiwara, Kazunari Misawa, Kensaku Mori

    GTC2018     頁: S8532   2018年3月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  499. ディープラーニングを用いた教師なし学習によるレジストレーション手法の初期的検討

    長柄 快, Holger R. Roth, 中村彰太, 小田昌宏, 森 健策

    電子情報通信学会技術研究報告(MI)   117 巻 ( 518 ) 頁: 7-12   2018年3月

     詳細を見る

    記述言語:日本語  

  500. MICCAI2017参加報告

    大竹義人, 伊藤康一, 小田昌宏, 備瀬竜馬, 諸岡健一, 周 向栄, 斉藤 篤, 清水昭伸, 増谷佳孝, 佐藤嘉伸, 森 健策

    電子情報通信学会技術研究報告(MI)   117 巻 ( 518 ) 頁: 125-131   2018年3月

     詳細を見る

    記述言語:日本語  

  501. 複数のステレオ内視鏡画像からの臓器形状復元の定量評価

    柴田睦実, 林 雄一郎, 小田昌宏, 三澤一成, 森 健策

    電子情報通信学会技術研究報告(MI)   117 巻 ( 518 ) 頁: 117-122   2018年3月

     詳細を見る

    記述言語:日本語  

  502. CNNによる回帰を用いた臓器領域の位置推定手法の初期的検討

    清水南月, 小田昌宏, ロス ホルガー, 林 雄一郎, 三澤一成, 藤原道隆, 森 健策

    電子情報通信学会技術研究報告(MI)   117 巻 ( 518 ) 頁: 81-86   2018年3月

     詳細を見る

    記述言語:日本語  

  503. 3D U-Netと測地距離カーネルを取り入れた全連結条件付き確率場に基づく医用画像からの多臓器自動抽出

    楊 瀛, Roth Holger, 小田昌宏, 北坂孝幸, 三澤一成, 森 健策

    電子情報通信学会技術研究報告(MI)   117 巻 ( 518 ) 頁: 75-80   2018年3月

     詳細を見る

    記述言語:日本語  

  504. 超拡大内視鏡画像における腫瘍性ポリープ分類に向けたグラスマン距離に基づく特徴選択法

    伊東隼人, 森 悠一, 三澤将史, 小田昌宏, 工藤進英, 森 健策

    電子情報通信学会技術研究報告(MI)   117 巻 ( 518 ) 頁: 51-56   2018年3月

     詳細を見る

    記述言語:日本語  

  505. 開腹手術映像における遮蔽物除去手法の改善 FFDによる位置合わせ精度の評価

    北坂孝幸, 奥田透生, 佐藤 準, 豊田誠仁, 澤野弘明, 末永康仁, 三澤一成, 森 健策

    電子情報通信学会技術研究報告(MI)   117 巻 ( 518 ) 頁: 31-32   2018年3月

     詳細を見る

    記述言語:日本語  

  506. Pre/intra-operative diagnosis and navigational assistance based on multidisciplinary computational anatomy

    Kensaku Mori, Masahiro Oda, Holger R. Roth, Yoshihiko Nakamura, Yoshito Mekada, Takayuki Kitasaka, Kazunari Misawa, Michitaka Fujiwara, Kazuhiro Furukawa, Shu Ichihara

        頁: 87-105   2018年3月

     詳細を見る

    記述言語:英語  

  507. Artificial intelligence may help in predicting the need for additional surgery after endoscopic resection of T1 colorectal cancer 査読有り 国際誌

    Ichimasa Katsuro, Kudo Shin-ei, Mori Yuichi, Misawa Masashi, Matsudaira Shingo, Kouyama Yuta, Baba Toshiyuki, Hidaka Eiji, Wakamura Kunihiko, Hayashi Takemasa, Kudo Toyoki, Ishigaki Tomoyuki, Yagawa Yusuke, Nakamura Hiroki, Takeda Kenichi, Haji Amyn, Hamatani Shigeharu, Mori Kensaku, Ishida Fumio, Miyachi Hideyuki

    ENDOSCOPY   50 巻 ( 3 ) 頁: 230 - 240   2018年3月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Georg Thieme Verlag KG  

    Background and study aims: Decisions concerning additional surgery after endoscopic resection of T1 colorectal cancer (CRC) are difficult because preoperative prediction of lymph node metastasis (LNM) is problematic. We investigated whether artificial intelligence can predict LNM presence, thus minimizing the need for additional surgery.
    Patients and methods: Data on 690 consecutive patients with T1 CRCs that were surgically resected in 2001 – 2016 were retrospectively analyzed. We divided patients into two groups according to date: data from 590 patients were used for machine learning for the artificial intelligence model, and the remaining 100 patients were included for model validation. The artificial intelligence model analyzed 45 clinicopathological factors and then predicted positivity or negativity for LNM. Operative specimens were used as the gold standard for the presence of LNM. The artificial intelligence model was validated by calculating the sensitivity, specificity, and accuracy for predicting LNM, and comparing these data with those of the American, European, and Japanese guidelines.
    Results: Sensitivity was 100 % (95 % confidence interval [CI] 72 % to 100 %) in all models. Specificity of the artificial intelligence model and the American, European, and Japanese guidelines was 66 % (95 %CI 56 % to 76 %), 44 % (95 %CI 34 % to 55 %), 0 % (95 %CI 0 % to 3 %), and 0 % (95 %CI 0 % to 3 %), respectively; and accuracy was 69 % (95 %CI 59 % to 78 %), 49 % (95 %CI 39 % to 59 %), 9 % (95 %CI 4 % to 16 %), and 9 % (95 %CI 4 % to 16 %), respectively. The rates of unnecessary additional surgery attributable to misdiagnosing LNM-negative patients as having LNM were: 77 % (95 %CI 62 % to 89 %) for the artificial intelligence model, and 85 % (95 %CI 73 % to 93 %; P < 0.001), 91 % (95 %CI 84 % to 96 %; P < 0.001), and 91 % (95 %CI 84 % to 96 %; P < 0.001) for the American, European, and Japanese guidelines, respectively.
    Conclusions: Compared with current guidelines, artificial intelligence significantly reduced unnecessary additional surgery after endoscopic resection of T1 CRC without missing LNM positivity.

    DOI: 10.1055/s-0043-122385

    Web of Science

    PubMed

    CiNii Research

  508. Correction: Artificial intelligence may help in predicting the need for additional surgery after endoscopic resection of T1 colorectal cancer 査読有り

    Katsuro Ichimasa, Shin-ei Kudo, Yuichi Mori, Masashi Misawa, Shingo Matsudaira, Yuta Kouyama, Toshiyuki Baba, Eiji Hidaka, Kunihiko Wakamura, Takemasa Hayashi, Toyoki Kudo, Tomoyuki Ishigaki, Yusuke Yagawa, Hiroki Nakamura, Kenichi Takeda, Amyn Haji, Shigeharu Hamatani, Kensaku Mori, Fumio Ishida, Hideyuki Miyachi

    Endoscopy   50 巻 ( 3 ) 頁: C2   2018年3月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)  

    DOI: 10.1055/s-0044-100290

  509. Correction: Artificial intelligence may help in predicting the need for additional surgery after endoscopic resection of T1 colorectal cancer 査読有り

    Katsuro Ichimasa, Shin-ei Kudo, Yuichi Mori, Masashi Misawa, Shingo Matsudaira, Yuta Kouyama, Toshiyuki Baba, Eiji Hidaka, Kunihiko Wakamura, Takemasa Hayashi, Toyoki Kudo, Tomoyuki Ishigaki, Yusuke Yagawa, Hiroki Nakamura, Kenichi Takeda, Amyn Haji, Shigeharu Hamatani, Kensaku Mori, Fumio Ishida, Hideyuki Miyachi

    Endoscopy   50 巻 ( 3 ) 頁: C2   2018年3月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)  

  510. Correction: Artificial intelligence may help in predicting the need for additional surgery after endoscopic resection of T1 colorectal cancer 査読有り

    Katsuro Ichimasa, Shin-ei Kudo, Yuichi Mori, Masashi Misawa, Shingo Matsudaira, Yuta Kouyama, Toshiyuki Baba, Eiji Hidaka, Kunihiko Wakamura, Takemasa Hayashi, Toyoki Kudo, Tomoyuki Ishigaki, Yusuke Yagawa, Hiroki Nakamura, Kenichi Takeda, Amyn Haji, Shigeharu Hamatani, Kensaku Mori, Fumio Ishida, Hideyuki Miyachi

    Endoscopy   50 巻 ( 3 ) 頁: C2   2018年3月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)  

  511. 3D U-Netと測地距離カーネルを取り入れた全連結条件付き確率場に基づく医用画像からの多臓器自動抽出

    楊 瀛, Roth Holger, 小田昌宏, 北坂孝幸, 三澤一成, 森 健策

    電子情報通信学会技術研究報告(MI)   117 巻 ( 518 ) 頁: 75-80   2018年3月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(その他学術会議資料等)  

  512. Machine learning-based colon deformation estimation method for colonoscope tracking

    Masahiro Oda, Takayuki Kitasaka, Kazuhiro Furukawa, Ryoji Miyahara, Yoshiki Hirooka, Hidemi Goto, Nassir Navab, Kensaku Mori

    Proc. SPIE 10576, Medical Imaging 2018     頁: 1057619-1-1057619-6   2018年2月

     詳細を見る

    記述言語:英語  

  513. Dense volumetric detection and segmentation of mediastinal lymph nodes in chest CT images

    Hirohisa Oda, Holger Roth, Kanwal K. Bhatia, Masahiro Oda, Takayuki Kitasaka, Shingo Iwano, Hirotoshi Homma, Hirotsugu Takabatake, Masaki Mori, Hiroshi Natori, Julia A. Schnabel, Kensaku Mori

    Proc. SPIE 10575, Medical Imaging 2018     頁: 1057502-1-1057502-6   2018年2月

     詳細を見る

    記述言語:英語  

    DOI: 10.1117/12.2287066

  514. Unsupervised pathology image segmentation using representation learning with spherical k-means

    Takayasu Moriya, Holger R. Roth, Shota Nakamura, Hirohisa Oda, Kai Nagara, Masahiro Oda, Kensaku Mori

    Proc. SPIE 10581, Medical Imaging 2018     頁: 1058111-1-1058111-7   2018年2月

     詳細を見る

    記述言語:英語  

    DOI: 10.1117/12.2292172

  515. Unsupervised segmentation of 3D medical images based on clustering and deep representation learning

    Takayasu Moriya, Holger R. Roth, Shota Nakamura, Hirohisa Oda, Kai Nagara, Masahiro Oda, Kensaku Mori

    Proc. SPIE 10578, Medical Imaging 2018     頁: 105780-1-105780-7   2018年2月

     詳細を見る

    記述言語:英語  

    DOI: 10.1117/12.2293414

  516. Towards dense volumetric pancreas segmentation in CT using 3D fully convolutional network

    Holger Roth, Masahiro Oda, Natsuki Shimizu, Hirohisa Oda, Yuichiro Hayashi, Takayuki Kitasaka, Michitaka Fujiwara, Kazunari Misawa, Kensaku Mori

    Proc. SPIE 10574, Medical Imaging 2018     頁: 105740B-1-105740B-6   2018年2月

     詳細を見る

    記述言語:英語  

    DOI: 10.1117/12.2293499

  517. Fine segmentation of tiny blood vessel based on fully-connected conditional random field

    Chenglong Wang, Masahiro Oda, Yasushi Yoshino, Tokunori Yamamoto, Kensaku Mori

    Proc. SPIE 10574, Medical Imaging 2018     頁: 10740K-1-10740K-7   2018年2月

     詳細を見る

    記述言語:英語  

    DOI: 10.1117/12.2293486

  518. Develop and Validate a Finite Element Method Model for Deformation Matching of Laparoscopic Gastrectomy Navigation

    Tao Chen, Guodong Wei, Weili Shi, Yuichiro Hayashi, Masahiro Oda, Zhengang Jiang, Guoxin Li, Kensaku Mori

    Proc. SPIE 10576, Medical Imaging 2018     頁: 105761Y-1-10576Y-6   2018年2月

     詳細を見る

    記述言語:英語  

  519. 医用工学と放射線技術科学との融合:期待される新技術

    戸田 尚宏, 小林 哲生, 山谷 泰賀, 有村 秀孝, 内山 良一, 森 健策, 藤田 広志, 原 武史

    日本放射線技術学会雑誌, 第73回総会学術大会シンポジウム2   74 巻 ( 2 ) 頁: 175-190   2018年2月

     詳細を見る

    記述言語:日本語  

  520. 医用工学と放射線技術科学との融合:期待される新技術

    戸田 尚宏, 小林 哲生, 山谷 泰賀, 有村 秀孝, 内山 良一, 森 健策, 藤田 広志, 原 武史

    日本放射線技術学会雑誌, 第73回総会学術大会シンポジウム2   74 巻 ( 2 ) 頁: 175-190   2018年2月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(その他学術会議資料等)  

  521. 術前術中後診断治療支援

    森 健策

    月刊「細胞」   50 巻 ( 1 ) 頁: 14-18   2018年1月

     詳細を見る

    担当区分:筆頭著者   記述言語:日本語  

  522. 3Dプリンタの最新動向

    森 健策

    インナービジョン 2018年2月号   33 巻 ( 2 ) 頁: 35-36   2018年1月

     詳細を見る

    記述言語:日本語  

  523. 3Dプリンタの最新動向

    森 健策

    インナービジョン 2018年2月号   33 巻 ( 2 ) 頁: 35-36   2018年1月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)  

  524. Advanced Endoscopic Navigation: Surgical Big Data, Methodology, and Applications 査読有り

    Luo Xiongbiao, Mori Kensaku, Peters Terry M.

    ANNUAL REVIEW OF BIOMEDICAL ENGINEERING, VOL 20   20 巻 ( 1 ) 頁: 221 - 251   2018年

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Annual Review of Biomedical Engineering  

    Interventional endoscopy (e.g., bronchoscopy, colonoscopy, laparoscopy, cystoscopy) is a widely performed procedure that involves either diagnosis of suspicious lesions or guidance for minimally invasive surgery in a variety of organs within the body cavity. Endoscopy may also be used to guide the introduction of certain items (e.g., stents) into the body. Endoscopic navigation systems seek to integrate big data with multimodal information (e.g., computed tomography, magnetic resonance images, endoscopic video sequences, ultrasound images, external trackers) relative to the patient's anatomy, control the movement of medical endoscopes and surgical tools, and guide the surgeon's actions during endoscopic interventions. Nevertheless, it remains challenging to realize the next generation of context-aware navigated endoscopy. This review presents a broad survey of various aspects of endoscopic navigation, particularly with respect to the development of endoscopic navigation techniques. First, we investigate big data with multimodal information involved in endoscopic navigation. Next, we focus on numerous methodologies used for endoscopic navigation. We then review different endoscopic procedures in clinical applications. Finally, we discuss novel techniques and promising directions for the development of endoscopic navigation.

    DOI: 10.1146/annurev-bioeng-062117-120917

    Web of Science

    Scopus

    PubMed

    CiNii Research

  525. A Multi-scale Pyramid of 3D Fully Convolutional Networks for Abdominal Multi-organ Segmentation 招待有り 査読有り

    Roth Holger R., Shen Chen, Oda Hirohisa, Sugino Takaaki, Oda Masahiro, Hayashi Yuichiro, Misawa Kazunari, Mori Kensaku

    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2018, PT IV   11073 巻   頁: 417 - 425   2018年

     詳細を見る

    記述言語:英語   掲載種別:研究論文(国際会議プロシーディングス)   出版者・発行元:Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)  

    Recent advances in deep learning, like 3D fully convolutional networks (FCNs), have improved the state-of-the-art in dense semantic segmentation of medical images. However, most network architectures require severely downsampling or cropping the images to meet the memory limitations of today’s GPU cards while still considering enough context in the images for accurate segmentation. In this work, we propose a novel approach that utilizes auto-context to perform semantic segmentation at higher resolutions in a multi-scale pyramid of stacked 3D FCNs. We train and validate our models on a dataset of manually annotated abdominal organs and vessels from 377 clinical CT images used in gastric surgery, and achieve promising results with close to 90% Dice score on average. For additional evaluation, we perform separate testing on datasets from different sources and achieve competitive results, illustrating the robustness of the model and approach.

    DOI: 10.1007/978-3-030-00937-3_48

    Web of Science

    Scopus

    その他リンク: https://dblp.uni-trier.de/db/conf/miccai/miccai2018-4.html#RothSOSOHMM18

  526. [Implementation of artificial intelligence into colonoscopy: experience of research and development of computer-aided diagnostic system for endocytoscopy]. 査読有り

    Mori Y, Kudo SE, Mori K

    Nihon Shokakibyo Gakkai zasshi = The Japanese journal of gastro-enterology   115 巻 ( 12 ) 頁: 1030 - 1036   2018年

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)   出版者・発行元:一般財団法人 日本消化器病学会  

    近年の内視鏡イメージング技術の発展により,大腸病変の内視鏡診断は飛躍的に発展した.しかし同時に,高精度の診断はエキスパート内視鏡医しか実現できないという,ジレンマが明らかになりつつある.このような内視鏡診断能力の限界に対する,革新的な解決策として注目をあびているのが人工知能による内視鏡診断支援システム(computer-aided diagnosis;CAD)である.本稿では内視鏡CADの研究開発の現状について概観した後,医工産官連携プロジェクト(代表研究者:工藤進英)として研究を進めているEndocyto(=520倍ズームの超拡大内視鏡)を用いた内視鏡CADの発案・医師主導研究・薬機法承認申請の取り組みについて紹介する.

    DOI: 10.11405/nisshoshi.115.1030

    PubMed

    CiNii Research

  527. Application of three-dimensional print in minor hepatectomy following liver partition between anterior and posterior sectors 査読有り

    Tsuyoshi Igami, Yoshihiko Nakamura, Masahiro Oda, Hiroshi Tanaka, Motoi Nojiri, Tomoki Ebata, Yukihiro Yokoyama, Gen Sugawara, Takashi Mizuno, Junpei Yamaguchi, Kensaku Mori, Masato Nagino

    ANZ Journal of Surgery     2017年12月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)  

    DOI: 10.1111/ans.14331

  528. Study on the Robustness of ORB-SLAM Based Outlier Elimination in Bronchoscope Tracking -- RANSAC + EPnP for Outlier Detection --

    Cheng Wang, Masahiro Oda, Yuichiro Hayashi, Hirotoshi Honma, Hirotsugu Takabatake, Masaki Mori, Hiroshi Natori, Kensaku Mori

    MI2017-58   117 巻 ( 281 ) 頁: 47-52   2017年11月

     詳細を見る

    記述言語:英語  

  529. On the influence of Dice loss function in multi-class organ segmentation of abdominal CT using 3D fully convolutional networks

    Chen Shen, Holger R. Roth, Hirohisa Oda, Masahiro Oda, Yuichiro Hayashi, Kazunari Misawa, Kensaku Mori

    MI2017-51   117 巻 ( 281 ) 頁: 15-20   2017年11月

     詳細を見る

    記述言語:英語  

  530. Machine Learning Techniques for Automated Accurate Organ Segmentation and Their Applications to Diagnosis Assistance

    Masahiro Oda, Natsuki Shimizu, Holger R. Roth, Takayuki Kitasaka, Kazunari Misawa, Kensaku Mori, Michitaka Fujiwara, Daniel Rueckert

    RSNA 2017 (Radiological Society of North America) Scientific Assembly and Annual Meeting PROGRAM IN BRIEF     頁: 224   2017年11月

     詳細を見る

    記述言語:英語  

  531. 3Dプリンターの基礎と医療応用

    森 健策

    月刊心臓   49 巻 ( 11 ) 頁: 1104-1113   2017年11月

     詳細を見る

    記述言語:日本語  

  532. Automated Multi-Organ Segmentation in Abdominal CT with Hierarchical 3D Fully-Convolutional Networks

    Holger R. Roth, Hirohisa Oda, Qier Meng, Yuichiro Hayashi, Masahiro Oda, Natsuki Shimizu, Kensaku Mori, Michitaka Fujiwara, Kazunari Misawa

    RSNA 2017 (Radiological Society of North America) Scientific Assembly and Annual Meeting PROGRAM IN BRIEF     頁: 267   2017年11月

     詳細を見る

    記述言語:英語  

  533. 3D Microstructure Visualization of Lactiferous Duct Structure Based On Refraction X-Ray CT Imaging

    Kensaku Mori, Naoki Sunaguchi, Masami Ando, Tetsuya Yuasa, Daisuke Shimao, Shu Ichihara, Rajiv Gupta

    RSNA 2017 (Radiological Society of North America) Scientific Assembly and Annual Meeting PROGRAM IN BRIEF     頁: 179   2017年11月

     詳細を見る

    記述言語:英語  

  534. Automated Multi-Organ Segmentation in Abdominal CT with Hierarchical 3D Fully-Convolutional Networks

    Holger R. Roth, Hirohisa Oda, Qier Meng, Yuichiro Hayashi, Masahiro Oda, Natsuki Shimizu, Kensaku Mori, Michitaka Fujiwara, Kazunari Misawa

    RSNA 2017 (Radiological Society of North America) Scientific Assembly and Annual Meeting PROGRAM IN BRIEF     頁: 267   2017年11月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  535. 3Dプリンターの基礎と医療応用

    森 健策

    月刊心臓   49 巻 ( 11 ) 頁: 1104-1113   2017年11月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(学術雑誌)  

  536. 3D Microstructure Visualization of Lactiferous Duct Structure Based On Refraction X-Ray CT Imaging

    Kensaku Mori, Naoki Sunaguchi, Masami Ando, Tetsuya Yuasa, Daisuke Shimao, Shu Ichihara, Rajiv Gupta

    RSNA 2017 (Radiological Society of North America) Scientific Assembly and Annual Meeting PROGRAM IN BRIEF     頁: 179   2017年11月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  537. サポートベクタマシンを用いたラジオミクスベースの消化管間質性腫瘍リスク評価システム

    陳 韜, 小田紘久, Holger R. Roth,北坂孝幸,小田昌宏,李 国新,森 健策

    日本コンピュータ外科学会誌 第26回日本コンピュータ外科学会大会特集号   19 巻 ( 4 ) 頁: 239-240   2017年10月

     詳細を見る

    記述言語:日本語  

  538. 3D Fully Convolutional Networks と全連結条件付確率場による 3 次元 CT 画像からの多臓器自動抽出に関する検討

    楊 瀛,小田昌宏,Roth Holger,北坂孝幸,三澤一成, 森 健策

    日本コンピュータ外科学会誌 第26回日本コンピュータ外科学会大会特集号   19 巻 ( 4 ) 頁: 268-269   2017年10月

     詳細を見る

    記述言語:日本語  

  539. レベルセット法を用いた腎臓皮質と髄質領域の分割

    王 成龍,小田昌宏,永山 洵,吉野 能,山本徳則,森 健策

    日本コンピュータ外科学会誌 第26回日本コンピュータ外科学会大会特集号   19 巻 ( 4 ) 頁: 266-267   2017年10月

     詳細を見る

    記述言語:日本語  

  540. 開腹手術における 3 次元画像を用いた手術ナビゲーションシステムの臨床応用

    林 雄一郎, 三澤一成,森 健策

    日本コンピュータ外科学会誌 第26回日本コンピュータ外科学会大会特集号   19 巻 ( 4 ) 頁: 253-254   2017年10月

     詳細を見る

    記述言語:日本語  

  541. マイクロ CT を用いた膵臓パラフィンブロック標本の解析

    進藤幸治,大内田研宙,Holger R. Roth,小田紘久,岩本千佳,小田昌宏,中村雅史,森 健策,橋爪 誠

    日本コンピュータ外科学会誌 第26回日本コンピュータ外科学会大会特集号   19 巻 ( 4 ) 頁: 244-245   2017年10月

     詳細を見る

    記述言語:日本語  

  542. MicroCT を用いた心筋配向解析手法の取り組み 〜MRI diffusion tensor 法との比較〜

    秋田利明,小田紘久,森 健策

    日本コンピュータ外科学会誌 第26回日本コンピュータ外科学会大会特集号   19 巻 ( 4 ) 頁: 243   2017年10月

     詳細を見る

    記述言語:日本語  

  543. 深層学習を用いたマイクロ CT 画像からの眼球構造自動抽出 〜少量データ学習による解剖構造抽出性能の検証

    杉野貴明,Holger R. Roth,小田昌宏,小俣誠二,佐久間臣耶,新井史人,森 健策

    日本コンピュータ外科学会誌 第26回日本コンピュータ外科学会大会特集号   19 巻 ( 4 ) 頁: 241-242   2017年10月

     詳細を見る

    記述言語:日本語  

  544. 自動設計特徴量を用いた 3 次元腹部 CT 像における膵臓領域の位置推定

    清水南月,Holger R. Roth,小田昌宏,三澤一成,藤原道隆,森 健策

    日本コンピュータ外科学会誌 第26回日本コンピュータ外科学会大会特集号   19 巻 ( 4 ) 頁: 270-271   2017年10月

     詳細を見る

    記述言語:日本語  

  545. Optimal port placement planning method for laparoscopic gastrectomy 査読有り

    Hayashi, Y; Misawa, K; Mori, K

    INTERNATIONAL JOURNAL OF COMPUTER ASSISTED RADIOLOGY AND SURGERY   12 巻 ( 10 ) 頁: 1677 - 1684   2017年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:International Journal of Computer Assisted Radiology and Surgery  

    Purpose: In laparoscopic gastrectomy, as well as other laparoscopic surgery, the surgeon operates on target organs using a laparoscope and forceps inserted into the abdominal cavity through ports placed in the abdominal wall. Therefore, port placement is of vital significance in laparoscopic surgery. In this paper, we present a method for achieving optimal port placement in laparoscopic gastrectomy based on relationships between the locations of the ports and anatomical structures. Methods: We utilize three angle conditions to determine the optimal port placement. Proper angles for the angle conditions are calculated from measurements obtained during laparoscopic gastrectomy. The port positions determined by surgeons experienced in laparoscopic gastrectomy are measured using a three-dimensional positional tracker. The locations of the blood vessels, as well as other vital anatomical structures that are also critical in laparoscopic gastrectomy, are identified from computed tomography images. The angle relationships between the port and blood vessel locations are analyzed using the obtained positional information. Optimal port placement is determined based on the angle conditions. Results: We evaluated the proposed method using the positional information obtained during 26 laparoscopic gastrectomies. Our evaluation determined that the proposed method generates optimal port placement with average errors of 22.2 and 21.2 mm in the left- and the right-hand side ports for a lead surgeon. Experienced surgeons confirmed that the optimal port placement generated by the proposed method was sufficient for clinical use. Conclusions: The proposed method provides optimal port placement in laparoscopic gastrectomy and enables a novice surgeon to determine port placement much like an experienced surgeon.

    DOI: 10.1007/s11548-017-1548-y

    Web of Science

    Scopus

    PubMed

  546. Automatic Segmentation of Head Anatomical Structures from Sparsely-annotated Images 査読有り

    Takaaki Sugino, Holger R. Roth, Mohammad Eshghi, Masahiro Oda, Min Suk Chung, Kensaku Mori

    IEEE International Conference on Cyborg and Bionic Systems     頁: 145-149   2017年10月

     詳細を見る

    記述言語:英語  

  547. 機械学習を用いた腹部動脈血管名自動命名における臓器情報利用方法に関する一考察

    鉄村悠介,Holger Roth,林 雄一郎,小田昌宏,進藤幸治,大内田研宙,橋爪 誠,三澤一成, 森 健策

    日本コンピュータ外科学会誌 第26回日本コンピュータ外科学会大会特集号   19 巻 ( 4 ) 頁: 361-362   2017年10月

     詳細を見る

    記述言語:日本語  

  548. 大腸内視鏡トラッキングのための regression forests を用いた大腸変形モデルの開発

    小田昌宏,北坂孝幸,古川和宏,宮原良二,廣岡芳樹,後藤秀実,Nassir Navab, 森 健策

    日本コンピュータ外科学会誌 第26回日本コンピュータ外科学会大会特集号   19 巻 ( 4 ) 頁: 343-344   2017年10月

     詳細を見る

    記述言語:日本語  

  549. 腹腔鏡下胃切除ナビゲーションにおける変形マッチングの有限要素法モデルを検証するための動物実験

    陳 韜, 魏 国棟,何 静怡,陳 光鋒,李 鐿,師 爲禮,祁 小龍,林 雄一郎,蒋 振剛,森 健策, 李 国新

    日本コンピュータ外科学会誌 第26回日本コンピュータ外科学会大会特集号   19 巻 ( 4 ) 頁: 339-340   2017年10月

     詳細を見る

    記述言語:日本語  

  550. 複数フレームのステレオ内視鏡画像を用いた臓器表面形状復元に関する検討

    柴田睦実,林 雄一郎,小田昌宏,三澤一成,森 健策

    日本コンピュータ外科学会誌 第26回日本コンピュータ外科学会大会特集号   19 巻 ( 4 ) 頁: 326-327   2017年10月

     詳細を見る

    記述言語:日本語  

  551. 気管支鏡追跡における ORB-SLAM 適用に関する初期的検討

    王 成,小田昌宏,林 雄一郎,森 健策

    日本コンピュータ外科学会誌 第26回日本コンピュータ外科学会大会特集号   19 巻 ( 4 ) 頁: 324-325   2017年10月

     詳細を見る

    記述言語:日本語  

  552. 超拡大大腸内視鏡画像を利用した病理自動診断 〜腫瘍性病変に関する分類精度解析〜

    伊東隼人,森 悠一,三澤将史,小田昌宏,工藤進英,森 健策

    日本コンピュータ外科学会誌 第26回日本コンピュータ外科学会大会特集号   19 巻 ( 4 ) 頁: 319-320   2017年10月

     詳細を見る

    記述言語:日本語  

  553. 腹腔鏡下手術の教育支援に向けた VR 訓練システムの開発

    鈴木拓矢,道満恵介,目加田慶人,三澤一成,森 健策

    日本コンピュータ外科学会誌 第26回日本コンピュータ外科学会大会特集号   19 巻 ( 4 ) 頁: 293   2017年10月

     詳細を見る

    記述言語:日本語  

  554. 音声認識及びジェスチャ認識による腹腔鏡下手術ナビゲーション非接触操作システムの開発

    阿部史明,道満恵介,目加田慶人,三澤一成,森 健策

    日本コンピュータ外科学会誌 第26回日本コンピュータ外科学会大会特集号   19 巻 ( 4 ) 頁: 285   2017年10月

     詳細を見る

    記述言語:日本語  

  555. 血管芯線を用いた経時リンパ節の自動対応付け

    舘 高基,小田昌宏,中村嘉彦,三澤一成,森 健策

    日本コンピュータ外科学会誌 第26回日本コンピュータ外科学会大会特集号   19 巻 ( 4 ) 頁: 278-279   2017年10月

     詳細を見る

    記述言語:日本語  

  556. μCT 画像を用いた大変形を含む連続切片 HE 染色画像の 3 次元再構築

    長柄 快,Holger Roth,中村彰太,小田紘久,守谷享泰,小田昌宏,森 健策

    日本コンピュータ外科学会誌 第26回日本コンピュータ外科学会大会特集号   19 巻 ( 4 ) 頁: 363-364   2017年10月

     詳細を見る

    記述言語:日本語  

  557. 色ヒストグラム特徴を用いた腹腔鏡手術映像の体外・体内シーン分類

    山田 和希, 道満 恵介, 目加田 慶人, 三澤 一成, 森 健策

    平成 29 年度日本生体医工学会東海支部大会プログラム・抄録集     頁: 00   2017年10月

     詳細を見る

    記述言語:日本語  

  558. Automatic Segmentation of Head Anatomical Structures from Sparsely-annotated Images 査読有り

    Takaaki Sugino, Holger R. Roth, Mohammad Eshghi, Masahiro Oda, Min Suk Chung, Kensaku Mori

    IEEE International Conference on Cyborg and Bionic Systems     頁: 145-149   2017年10月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  559. 3D Fully Convolutional Networks と全連結条件付確率場による 3 次元 CT 画像からの多臓器自動抽出に関する検討

    楊 瀛, 小田昌宏, Roth Holger, 北坂孝幸, 三澤一成, 森 健策

    日本コンピュータ外科学会誌 第26回日本コンピュータ外科学会大会特集号   19 巻 ( 4 ) 頁: 268-269   2017年10月

     詳細を見る

    記述言語:日本語   掲載種別:研究論文(その他学術会議資料等)  

  560. 畳み込みニューラルネットワークを利用した超拡大大腸内視鏡画像における腫瘍・非腫瘍の分類

    伊東 隼人, 森 悠一, 三澤 将史, 小田 昌宏, 工藤 進英, 森 健策

    電子情報通信学会技術研究報告(MI)   117 巻 ( 220 ) 頁: 17-21   2017年9月

     詳細を見る

    記述言語:日本語  

  561. Micro-CT Guided 3D Reconstruction of Histological images

    Kai Nagara, Holger R. Roth, Shota Nakamura, Hirohisa Oda, Takayasu Moriya, Masahiro Oda, Kensaku Mori

    LNCS 10530     頁: 93-101   2017年9月

     詳細を見る

    記述言語:英語  

  562. Motion Vector for Outlier Elimination in Feature Matching and Its Application in SLAM Based Laparoscopic Tracking

    Cheng Wang, Masahiro Oda, Yuichiro Hayashi, Kazunari Misawa, Holger Roth, Kensaku Mori

    LNCS 10550     頁: 60-69   2017年9月

     詳細を見る

    記述言語:英語  

  563. 3D FCN Feature Driven Regression Forest-Based Pancreas Localization and Segmentation

    Masahiro Oda, Natsuki Shimizu, Holger R. Roth, Ken'ichi Karasawa, Takayuki Kitasaka, Kazunari Misawa, Michitaka Fujiwara, Daniel Rueckert, Kensaku Mori

    LNCS 10553     頁: 222-230   2017年9月

     詳細を見る

    記述言語:英語  

  564. 3D FCN Feature Driven Regression Forest-Based Pancreas Localization and Segmentation

    Masahiro Oda, Natsuki Shimizu, Holger R. Roth, Ken'ichi Karasawa, Takayuki Kitasaka, Kazunari Misawa, Michitaka Fujiwara, Daniel Rueckert, Kensaku Mori

    LNCS 10553     頁: 222-230   2017年9月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(その他学術会議資料等)  

  565. Virtual 3D microscope and magnified 3D print for naked eye analyses of alveoli and alveolar duct structures by Heitzman lung specimen with micro CT

    Hiroshi Natori, Masaki Mori, Hirotsugu Takabatake, Hirotoshi Homma, Kensaku Mori, Masahiro Oda, Hiroyuki Koba, Hiroki Takahashi

    ERS International congress 2017     頁: Session 439   2017年9月

     詳細を見る

    記述言語:英語  

  566. Tracking and Segmentation of the Airways in Chest CT Using a Fully Convolutional Network

    Qier Meng, Holger R. Roth, Takayuki Kitasaka, Masahiro Oda, Junji Ueno, Kensaku Mori

    LNCS 10434     頁: 198-207   2017年9月

     詳細を見る

    記述言語:英語  

  567. TBS: Tensor-Based Supervoxels for Unfolding the Heart

    Hirohisa Oda, Holger R. Roth, Kanwal K. Bhatia, Masahiro Oda, Takayuki Kitasaka, Toshiaki Akita, Julia A. Schnabel, Kensaku Mori

    LNCS 10433     頁: 681-689   2017年9月

     詳細を見る

    記述言語:英語  

  568. Accuracy of diagnosing invasive colorectal cancer using computer-aided endocytoscopy 査読有り

    Takeda Kenichi, Kudo Shin-ei, Mori Yuichi, Misawa Masashi, Kudo Toyoki, Wakamura Kunihiko, Katagiri Atsushi, Baba Toshiyuki, Hidaka Eiji, Ishida Fumio, Inoue Haruhiro, Oda Masahiro, Mori Kensaku

    ENDOSCOPY   49 巻 ( 8 ) 頁: 798 - 802   2017年8月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)   出版者・発行元:Endoscopy  

    Background and study aims Invasive cancer carries the risk of metastasis, and therefore, the ability to distinguish between invasive cancerous lesions and less-aggressive lesions is important. We evaluated a computer-aided diagnosis system that uses ultra-high (approximately ×400) magnification endocytoscopy (EC-CAD). Patients and methods We generated an image database from a consecutive series of 5843 endocytoscopy images of 375 lesions. For construction of a diagnostic algorithm, 5543 endocytoscopy images from 238 lesions were randomly extracted from the database for machine learning. We applied the obtained algorithm to 200 endocytoscopy images and calculated test characteristics for the diagnosis of invasive cancer. We defined a high-confidence diagnosis as having a ≥90% probability of being correct. Results Of the 200 test images, 188 (94.0%) were assessable with the EC-CAD system. Sensitivity, specificity, accuracy, positive predictive value (PPV), and negative predictive value (NPV) were 89.4%, 98.9%, 94.1%, 98.8%, and 90.1%, respectively. High-confidence diagnosis had a sensitivity, specificity, accuracy, PPV, and NPV of 98.1%, 100%, 99.3%, 100%, and 98.8%, respectively. Conclusion: EC-CAD may be a useful tool in diagnosing invasive colorectal cancer.

    DOI: 10.1055/s-0043-105486

    Web of Science

    Scopus

    PubMed

    CiNii Research

  569. Multi-atlas pancreas segmentation: Atlas selection based on vessel structure 査読有り

    Kenichi Karasawa, Masahiro Oda, Takayuki Kitasaka, Kazunari Misawa, Michitaka Fujiwara, Chengwen Chu, Guoyan Zheng, Daniel Rueckert, Kensaku Mori

    Medical Image Analysis   39 巻   頁: 18-28   2017年7月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(学術雑誌)  

    DOI: 10.1016/j.media.2017.03.006

  570. K-means 法と Joint Unsupervised Learning による3次元医用画像の教師なしセグメンテーション

    守谷享泰, Holger R. Roth, 中村彰太, 小田紘久, 長柄快, 小田昌宏, 森健策

    第36回日本医用画像工学会大会予稿集     頁: OP16-5   2017年7月

     詳細を見る

    記述言語:日本語  

  571. 条件付き確率場による医用画像からの多臓器抽出におけるHigher Order Potential とボクセル連結構造の影響に関する考察

    楊瀛, 小田昌宏, Roth Holger, 北坂孝幸, 三澤一成, 森健策

    第36回日本医用画像工学会大会予稿集     頁: OP16-4   2017年7月

     詳細を見る

    記述言語:日本語  

  572. 3D U-Netによる3次元胸部CT像からのリンパ節検出

    小田 紘久, Kanwal K. Bhatia, Holger R. Roth, 小田 昌宏, 北坂 孝幸, 岩野 信吾, 本間 裕敏, 高畠 博嗣, 森 雅樹, 名取 博, Julia A. Schnabel, 森 健策

    第36回日本医用画像工学会大会予稿集     頁: OP1-6   2017年7月

     詳細を見る

    記述言語:日本語  

  573. Torso organ segmentation in CT using fine-tuned 3D fully convolutional networks

    Holger ROTH, Ying YANG, Masahiro ODA, Hirohisa ODA, Yuichiro HAYASHI, Natsuki SHIMIZU, Takayuki KITASAKA, Michitaka FUJIWARA, Kazunari MISAWA, Kensaku MORI

    第36回日本医用画像工学会大会予稿集     頁: OP1-8   2017年7月

     詳細を見る

    記述言語:英語  

  574. Improvement on Robustness of ORB-SLAM Based Surgical Navigation System by Building Submap

    王成 , 小田昌宏, 林雄一郎, 三澤一成, 森健策

    第36回日本医用画像工学会大会予稿集     頁: OP2-6   2017年7月

     詳細を見る

    記述言語:英語  

  575. ステレオ内視鏡画像からの臓器形状復元手法における複数フレームの利用に関する初期的検討

    柴田 睦実, 林 雄一郎, 小田 昌宏, 三澤 一成, 森 健策

    第36回日本医用画像工学会大会予稿集     頁: OP2-8   2017年7月

     詳細を見る

    記述言語:日本語  

  576. 機械学習を用いた腹部動脈血管名自動命名における肝動脈分岐情報利用方法に関する一考察

    鉄村 悠介, 張 暁楠, Holger Roth, 林 雄一郎, 小田 昌宏, 三澤 一成, 森 健策

    第36回日本医用画像工学会大会予稿集     頁: OP6-1   2017年7月

     詳細を見る

    記述言語:日本語  

  577. A Study on Fine Blood Vessel Segmentation Using Fully-connected Conditional Random Field

    王成龍, 小田昌宏, 吉野能, 山本徳則, 森健策

    第36回日本医用画像工学会大会予稿集     頁: OP11-2   2017年7月

     詳細を見る

    記述言語:英語  

  578. CT 像から抽出した腹部動脈領域におけるCNN を用いた過検出削減でのパッチ画像生成手法の検討

    小田 昌宏, 山本 徳則, 吉野 能, 森 健策

    第36回日本医用画像工学会大会予稿集     頁: OP11-7   2017年7月

     詳細を見る

    記述言語:日本語  

  579. マイクロ CT 画像情報を利用した特徴点対応付けに基づく顕微鏡画像の 3 次元再構築

    長柄 快, Holger R. ROTH, 中村 彰太, 小田 紘久, 守谷 享泰, 小田 昌宏, 森 健策

    第36回日本医用画像工学会大会予稿集     頁: OP14-1   2017年7月

     詳細を見る

    記述言語:日本語  

  580. 血管情報を用いた経時リンパ節の自動対応付け手法に関する研究

    舘 高基, 小田 昌宏, 中村 嘉彦, 寶珠山 裕, 三澤 一成, 森 健策

    第36回日本医用画像工学会大会予稿集     頁: OP15-4   2017年7月

     詳細を見る

    記述言語:日本語  

  581. A Study on Fine Blood Vessel Segmentation Using Fully-connected Conditional Random Field

    王成龍, 小田昌宏, 吉野能, 山本徳則, 森健策

    第36回日本医用画像工学会大会予稿集     頁: OP11-2   2017年7月

     詳細を見る

    記述言語:英語   掲載種別:研究論文(その他学術会議資