Updated on 2026/03/11

TAKENO Shion
 
Organization
Graduate School of Engineering, Mechanical Systems Engineering 2, Assistant Professor
Graduate School
Graduate School of Engineering
Undergraduate School
School of Engineering, Mechanical and Aerospace Engineering
Title
Assistant Professor

Degree 3

  1. Doctor of Engineering ( 2023.3   Nagoya Institute of Technology ) 

  2. Master of Engineering ( 2020.3   Nagoya Institute of Technology ) 

  3. Bachelor of Engineering ( 2018.3   Nagoya Institute of Technology ) 

Research Interests 5

  1. Materials Informatics

  2. Bayesian optimization

  3. Bioinformatics

  4. Gaussian process bandit

  5. Active Learning

Research History 9

  1. Japan Science and Technology Agency   PRESTO   Researcher

    2024.10

  2. Nagoya University   Graduate School of Engineering   Assistant Professor

    2024.4


    Country:Japan

  3. Japan Science and Technology Agency   ACT-X   Researcher

    2023.10 - 2025.3


    Country:Japan

  4. RIKEN   Data-Driven Biomedical Science Team   Postdoctoral Researcher

    2023.4 - 2024.3


    Country:Japan

  5. CyberAgent   AI Lab   Collaborative Researcher

    2022.6 - 2023.3


    Country:Japan

  6. Japan Society for the Promotion of Science   Research Fellowship for Young Scientists (DC2)

    2021.4 - 2023.3


    Country:Japan

  7. RIKEN   Data-Driven Biomedical Science Team   Visiting researcher

    2024.5 - 2025.3


    Country:Japan

  8. Nagoya University   Graduate School of Engineering   Invited Researcher

    2023.4 - 2024.3


    Country:Japan

  9. RIKEN   Data-Driven Biomedical Science Team   Junior Research Associate

    2020.4 - 2021.3


    Country:Japan


Education 2

  1. Nagoya Institute of Technology   Graduate School of Engineering Doctor's Course   Computer Science and Engineering

    2020.4 - 2023.3


    Country: Japan

  2. Nagoya Institute of Technology   Graduate School of Engineering Master's Course   Computer Science and Engineering

    2018.4 - 2020.3


    Country: Japan

Professional Memberships 1

  1. The Institute of Electronics, Information and Communication Engineers

    2025.11

Awards 7

  1. Funai Information Technology Award for Young Researchers

    2026.2   The Funai Foundation for Information Technology   Developments and Theoretical Guarantees of Bayesian Optimization

    Shion Takeno

  2. Nagoya Institute of Technology Student Research Encouragement Award by President

    2023.3   Nagoya Institute of Technology  

  3. IBIS Workshop Best Presentation Award Finalist

    2024.11   IEICE Technical Committee on Information-Based Induction Sciences and Machine Learning (IBISML)  

    Shion Takeno, Yu Inatsu, Masayuki Karasuyama, Ichiro Takeuchi

  4. IBIS Workshop Best Presentation Award Finalist

    2023.11  

    Shion Takeno, Yu Inatsu, Masayuki Karasuyama, Ichiro Takeuchi

  5. IBISML Research Award Finalist

    2023.10  

    Shion Takeno, Yu Inatsu, Masayuki Karasuyama

  6. IEICE TC-IBISML Research Award Finalist (Co-author)

    2022.10  

  7. Nagoya Institute of Technology Student Research Encouragement Award by Vice-president

    2021.3   Nagoya Institute of Technology  


 

Papers 36

  1. Regret Analysis for Randomized Gaussian Process Upper Confidence Bound Reviewed Open Access

    Shion Takeno, Yu Inatsu, Masayuki Karasuyama

    Journal of Artificial Intelligence Research   Vol. 84 ( 18 )   2025.11


    Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (scientific journal)  

    Gaussian process upper confidence bound (GP-UCB) is a theoretically established algorithm for Bayesian optimization (BO), where we assume the objective function $f$ follows a GP. One notable drawback of GP-UCB is that the theoretical confidence parameter $β$ increases along with the iterations and is too large. To alleviate this drawback, this paper analyzes the randomized variant of GP-UCB called improved randomized GP-UCB (IRGP-UCB), which uses the confidence parameter generated from the shifted exponential distribution. We analyze the expected regret and conditional expected regret, where the expectation and the probability are taken respectively with $f$ and noise and with the randomness of the BO algorithm. In both regret analyses, IRGP-UCB achieves a sub-linear regret upper bound without increasing the confidence parameter if the input domain is finite. Furthermore, we show that randomization plays a key role in avoiding an increase in confidence parameter by showing that GP-UCB using a constant confidence parameter can incur linearly growing expected cumulative regret. Finally, we show numerical experiments using synthetic and benchmark functions and real-world emulators.

    DOI: 10.1613/jair.1.19393


    Other Link: https://arxiv.org/pdf/2409.00979v3
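
As a concrete illustration of the randomized confidence parameter analyzed above, here is a minimal numpy sketch of an IRGP-UCB-style loop on a finite grid. The toy objective, RBF kernel, length scale, and the shift/rate of the shifted exponential draw are illustrative assumptions, not settings from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(a, b, ls=0.2):
    # Squared-exponential kernel on 1-D inputs.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    # Exact GP posterior mean and standard deviation on the grid Xs.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xs, X)
    mu = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)  # k(x,x)=1 here
    return mu, np.sqrt(np.clip(var, 1e-12, None))

f = lambda x: np.sin(3 * x) + 0.5 * x   # toy objective (illustrative)
grid = np.linspace(0.0, 2.0, 200)       # finite input domain
X = rng.uniform(0.0, 2.0, 3)            # initial design
y = f(X)

for t in range(20):
    mu, sd = gp_posterior(X, y, grid)
    # IRGP-UCB-style step: the confidence parameter is a fresh draw from a
    # shifted exponential distribution, so it does not grow with t.
    beta = 1.0 + rng.exponential(1.0)   # shift=1, rate=1: illustrative values
    x_next = grid[np.argmax(mu + np.sqrt(beta) * sd)]
    X, y = np.append(X, x_next), np.append(y, f(x_next))

best_x, best_y = X[np.argmax(y)], y.max()
```

The key design point mirrored from the abstract is that `beta` is resampled each round with a fixed distribution, rather than following an increasing schedule.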

  2. Regret Analysis of Posterior Sampling-Based Expected Improvement for Bayesian Optimization Reviewed

    Shion Takeno, Yu Inatsu, Masayuki Karasuyama, Ichiro Takeuchi

    Transactions on Machine Learning Research     2025.9


    Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (scientific journal)  

    Bayesian optimization is a powerful tool for optimizing an expensive-to-evaluate black-box function. In particular, the effectiveness of expected improvement (EI) has been demonstrated in a wide range of applications. However, theoretical analyses of EI are limited compared with other theoretically established algorithms. This paper analyzes a randomized variant of EI, which evaluates the EI from the maximum of the posterior sample path. We show that this posterior sampling-based random EI achieves the sublinear Bayesian cumulative regret bounds under the assumption that the black-box function follows a Gaussian process. Finally, we demonstrate the effectiveness of the proposed method through numerical experiments.


    Other Link: https://arxiv.org/pdf/2507.09828v3
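
The posterior sampling-based EI above admits a compact sketch: draw one joint posterior sample on a grid, use its maximum as the EI reference value, then maximize the closed-form EI. This is a minimal numpy illustration under assumed toy settings (objective, kernel, length scale), not the paper's implementation.

```python
import numpy as np
from math import erf, sqrt, pi

rng = np.random.default_rng(1)

def rbf(a, b, ls=0.2):
    # Squared-exponential kernel on 1-D inputs.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    # Posterior mean and full covariance (needed to draw a sample path).
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xs, X)
    mu = Ks @ np.linalg.solve(K, y)
    cov = rbf(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T)
    return mu, cov

def expected_improvement(mu, sd, y_ref):
    # Closed-form EI of N(mu, sd^2) over the reference value y_ref.
    z = (mu - y_ref) / sd
    Phi = 0.5 * (1.0 + np.array([erf(v / sqrt(2.0)) for v in z]))
    phi = np.exp(-0.5 * z ** 2) / sqrt(2.0 * pi)
    return sd * (z * Phi + phi)

f = lambda x: np.sin(3 * x) + 0.5 * x   # toy objective (illustrative)
grid = np.linspace(0.0, 2.0, 100)
X = rng.uniform(0.0, 2.0, 3)            # initial design
y = f(X)

for t in range(20):
    mu, cov = gp_posterior(X, y, grid)
    # Reference value: maximum of ONE posterior sample path on the grid,
    # instead of the incumbent best observation used by standard EI.
    y_ref = rng.multivariate_normal(mu, cov + 1e-6 * np.eye(len(grid))).max()
    sd = np.sqrt(np.clip(np.diag(cov), 1e-12, None))
    x_next = grid[np.argmax(expected_improvement(mu, sd, y_ref))]
    X, y = np.append(X, x_next), np.append(y, f(x_next))
```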

  3. Distributionally Robust Active Learning for Gaussian Process Regression Reviewed

    Shion Takeno, Yoshito Okura, Yu Inatsu, Aoyama Tatsuya, Tomonari Tanaka, Akahane Satoshi, Hiroyuki Hanada, Noriaki Hashimoto, Taro Murayama, Hanju Lee, Shinya Kojima, Ichiro Takeuchi

    Proceedings of The 42nd International Conference on Machine Learning (ICML 2025)   Vol. 267   page: 58339 - 58358   2025.7


    Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)  

    Gaussian process regression (GPR) or kernel ridge regression is a widely used and powerful tool for nonlinear prediction. Therefore, active learning (AL) for GPR, which actively collects data labels to achieve an accurate prediction with fewer data labels, is an important problem. However, existing AL methods do not theoretically guarantee prediction accuracy for target distribution. Furthermore, as discussed in the distributionally robust learning literature, specifying the target distribution is often difficult. Thus, this paper proposes two AL methods that effectively reduce the worst-case expected error for GPR, which is the worst-case expectation in target distribution candidates. We show an upper bound of the worst-case expected squared error, which suggests that the error will be arbitrarily small by a finite number of data labels under mild conditions. Finally, we demonstrate the effectiveness of the proposed methods through synthetic and real-world datasets.


    Other Link: https://arxiv.org/pdf/2502.16870v3

  4. Posterior Sampling-Based Bayesian Optimization with Tighter Bayesian Regret Bounds Reviewed

    Shion Takeno, Yu Inatsu, Masayuki Karasuyama, Ichiro Takeuchi

    Proceedings of The 41st International Conference on Machine Learning (ICML 2024)   Vol. 235   page: 47510 - 47534   2024.7


    Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)  

    Among various acquisition functions (AFs) in Bayesian optimization (BO), Gaussian process upper confidence bound (GP-UCB) and Thompson sampling (TS) are well-known options with established theoretical properties regarding Bayesian cumulative regret (BCR). Recently, it has been shown that a randomized variant of GP-UCB achieves a tighter BCR bound compared with GP-UCB, which we call the tighter BCR bound for brevity. Inspired by this study, this paper first shows that TS achieves the tighter BCR bound. On the other hand, GP-UCB and TS often practically suffer from manual hyperparameter tuning and over-exploration issues, respectively. Therefore, we analyze yet another AF called a probability of improvement from the maximum of a sample path (PIMS). We show that PIMS achieves the tighter BCR bound and avoids the hyperparameter tuning, unlike GP-UCB. Furthermore, we demonstrate a wide range of experiments, focusing on the effectiveness of PIMS that mitigates the practical issues of GP-UCB and TS.


    Other Link: https://arxiv.org/pdf/2311.03760v3
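
A minimal sketch of the PIMS acquisition described above: take the maximum of one posterior sample path (as in Thompson sampling) and query the point with the highest probability of improving on it. The toy objective, kernel, and length scale are illustrative assumptions.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)

def rbf(a, b, ls=0.2):
    # Squared-exponential kernel on 1-D inputs.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    # Posterior mean and full covariance on the candidate grid.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xs, X)
    mu = Ks @ np.linalg.solve(K, y)
    cov = rbf(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T)
    return mu, cov

f = lambda x: np.sin(3 * x) + 0.5 * x   # toy objective (illustrative)
grid = np.linspace(0.0, 2.0, 100)
X = rng.uniform(0.0, 2.0, 3)
y = f(X)

for t in range(20):
    mu, cov = gp_posterior(X, y, grid)
    sd = np.sqrt(np.clip(np.diag(cov), 1e-12, None))
    # f_star: maximum of one posterior sample path, as in Thompson sampling.
    f_star = rng.multivariate_normal(mu, cov + 1e-6 * np.eye(len(grid))).max()
    # PIMS: probability of improvement over f_star; note that no manually
    # tuned confidence parameter (as in GP-UCB) appears anywhere.
    pims = 0.5 * (1.0 + np.array([erf(v / sqrt(2.0)) for v in (mu - f_star) / sd]))
    x_next = grid[np.argmax(pims)]
    X, y = np.append(X, x_next), np.append(y, f(x_next))
```

Compared with plain Thompson sampling, the sample path here only sets the improvement target; the query itself maximizes a posterior probability, which tempers over-exploration.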

  5. Randomized Gaussian Process Upper Confidence Bound with Tighter Bayesian Regret Bounds Reviewed

    Shion Takeno, Yu Inatsu, Masayuki Karasuyama

    Proceedings of the 40th International Conference on Machine Learning (ICML)   Vol. 202   page: 33490 - 33515   2023.7


    Authorship:Lead author   Language:English   Publishing type:Research paper (international conference proceedings)  

    Gaussian process upper confidence bound (GP-UCB) is a theoretically promising approach for black-box optimization; however, the confidence parameter $β$ is considerably large in the theorem and chosen heuristically in practice. Then, randomized GP-UCB (RGP-UCB) uses a randomized confidence parameter, which follows the Gamma distribution, to mitigate the impact of manually specifying $β$. This study first generalizes the regret analysis of RGP-UCB to a wider class of distributions, including the Gamma distribution. Furthermore, we propose improved RGP-UCB (IRGP-UCB) based on a two-parameter exponential distribution, which achieves tighter Bayesian regret bounds. IRGP-UCB does not require an increase in the confidence parameter in terms of the number of iterations, which avoids over-exploration in the later iterations. Finally, we demonstrate the effectiveness of IRGP-UCB through extensive experiments.


  6. Towards Practical Preferential Bayesian Optimization with Skew Gaussian Processes Reviewed

    Shion Takeno, Masahiro Nomura, Masayuki Karasuyama

    Proceedings of the 40th International Conference on Machine Learning (ICML)   Vol. 202   page: 33516 - 33533   2023.7


    Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)  

    We study preferential Bayesian optimization (BO) where reliable feedback is limited to pairwise comparison called duels. An important challenge in preferential BO, which uses the preferential Gaussian process (GP) model to represent flexible preference structure, is that the posterior distribution is a computationally intractable skew GP. The most widely used approach for preferential BO is Gaussian approximation, which ignores the skewness of the true posterior. Alternatively, Markov chain Monte Carlo (MCMC) based preferential BO is also proposed. In this work, we first verify the accuracy of Gaussian approximation, from which we reveal the critical problem that the predictive probability of duels can be inaccurate. This observation motivates us to improve the MCMC-based estimation for skew GP, for which we show the practical efficiency of Gibbs sampling and derive the low variance MC estimator. However, the computational time of MCMC can still be a bottleneck in practice. Towards building a more practical preferential BO, we develop a new method that achieves both high computational efficiency and low sample complexity, and then demonstrate its effectiveness through extensive numerical experiments.


  7. A Generalized Framework of Multifidelity Max-Value Entropy Search Through Joint Entropy Reviewed

    Shion Takeno, Hitoshi Fukuoka, Yuhki Tsukada, Toshiyuki Koyama, Motoki Shiga, Ichiro Takeuchi, Masayuki Karasuyama

    Neural Computation   Vol. 34 ( 10 ) page: 2145 - 2203   2022.9


    Authorship:Lead author   Language:English   Publishing type:Research paper (scientific journal)  

    DOI: 10.1162/neco_a_01530

  8. Sequential and Parallel Constrained Max-value Entropy Search via Information Lower Bound Reviewed Open Access

    Shion Takeno, Tomoyuki Tamura, Kazuki Shitara, Masayuki Karasuyama

    Proceedings of the 39th International Conference on Machine Learning (ICML)     page: 20960 - 20986   2022.7


    Authorship:Lead author   Language:English   Publishing type:Research paper (international conference proceedings)  

    Max-value entropy search (MES) is one of the state-of-the-art approaches in Bayesian optimization (BO). In this paper, we propose a novel variant of MES for constrained problems, called Constrained MES via Information lower BOund (CMES-IBO), that is based on a Monte Carlo (MC) estimator of a lower bound of a mutual information (MI). Unlike existing studies, our MI is defined so that uncertainty with respect to feasibility can be incorporated. We derive a lower bound of the MI that guarantees non-negativity, while a constrained counterpart of conventional MES can be negative. We further provide theoretical analysis that assures the low-variability of our estimator which has never been investigated for any existing information-theoretic BO. Moreover, using the conditional MI, we extend CMES-IBO to the parallel setting while maintaining the desirable properties. We demonstrate the effectiveness of CMES-IBO by several benchmark functions and real-world problems.


    Other Link: https://dblp.uni-trier.de/rec/conf/icml/2022

  9. Cost-effective search for lower-error region in material parameter space using multifidelity Gaussian process modeling Reviewed Open Access

    Shion Takeno, Yuhki Tsukada, Hitoshi Fukuoka, Toshiyuki Koyama, Motoki Shiga, Masayuki Karasuyama

    Physical Review Materials   Vol. 4 ( 8 )   2020.8


    Authorship:Lead author   Language:English   Publishing type:Research paper (scientific journal)  

    Information regarding precipitate shapes is critical for estimating material parameters. Hence, we considered estimating a region of material parameter space in which a computational model produces precipitates having shapes similar to those observed in the experimental images. This region, called the lower-error region (LER), reflects intrinsic information of the material contained in the precipitate shapes. However, the computational cost of LER estimation can be high because the accurate computation of the model is required many times to better explore parameters. To overcome this difficulty, we used a Gaussian-process-based multifidelity modeling, in which training data can be sampled from multiple computations with different accuracy levels (fidelity). Lower-fidelity samples may have lower accuracy, but the computational cost is lower than that for higher-fidelity samples. Our proposed sampling procedure iteratively determines the most cost-effective pair of a point and a fidelity level for enhancing the accuracy of LER estimation. We demonstrated the efficiency of our method through estimation of the interface energy and lattice mismatch between MgZn2 and α-Mg phases in an Mg-based alloy. The results showed that the sampling cost required to obtain accurate LER estimation could be drastically reduced.

    DOI: 10.1103/PhysRevMaterials.4.083802


    Other Link: https://arxiv.org/pdf/2003.13428v1

  10. Multi-fidelity Bayesian Optimization with Max-value Entropy Search and its Parallelization Reviewed

    Shion Takeno, Hitoshi Fukuoka, Yuhki Tsukada, Toshiyuki Koyama, Motoki Shiga, Ichiro Takeuchi, Masayuki Karasuyama

    Proceedings of the 37th International Conference on Machine Learning (ICML)     page: 9334 - 9345   2020.7


    Authorship:Lead author   Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:PMLR  

    In a standard setting of Bayesian optimization (BO), the objective function evaluation is assumed to be highly expensive. Multi-fidelity Bayesian optimization (MFBO) accelerates BO by incorporating lower-fidelity observations available at a lower sampling cost. In this paper, we focus on the information-based approach, which is a popular and empirically successful approach in BO. For MFBO, however, existing information-based methods are plagued by difficulty in estimating the information gain. We propose an approach based on max-value entropy search (MES), which greatly facilitates computations by considering the entropy of the optimal function value instead of the optimal input point. We show that, in our multi-fidelity MES (MF-MES), most of the additional computations, compared with the usual MES, reduce to analytical computations. Although an additional numerical integration is necessary for the information across different fidelities, it is only over a one-dimensional space and can be performed efficiently and accurately. Further, we also propose a parallelization of MF-MES. Since there exists a variety of different sampling costs, queries typically occur asynchronously in MFBO. We show that similar simple computations can be derived for asynchronous parallel MFBO. We demonstrate the effectiveness of our approach using benchmark datasets and a real-world application to materials science data.


    Other Link: https://dblp.uni-trier.de/rec/conf/icml/2020

  11. A Study on High-Probability Regret Guarantees for Gaussian Process Thompson Sampling

    Shion Takeno, Shogo Iwazaki

    IEICE Technical Report   Vol. 125 ( 308 ) page: 1 - 5   2025.12

    Authorship:Lead author, Corresponding author   Language:Japanese   Publishing type:Research paper (conference, symposium, etc.)  

  12. Regret Analysis of Parallel Bayesian Optimization Based on Posterior Sampling

    Shuhei Sugiura, Ichiro Takeuchi, Shion Takeno

    IEICE Technical Report   Vol. 125 ( 308 ) page: 22 - 29   2025.12

    Authorship:Corresponding author   Language:Japanese   Publishing type:Research paper (conference, symposium, etc.)  

  13. Non-Myopic Bayesian Optimization Using Binary and Preference Information

    Sota Kuri, Shuhei Sugiura, Ichiro Takeuchi, Shion Takeno

    IEICE Technical Report   Vol. 125 ( 308 ) page: 6 - 13   2025.12

    Authorship:Corresponding author   Language:Japanese   Publishing type:Research paper (conference, symposium, etc.)  

  14. Bayesian Optimization for Composite Functions Whose Inner Function Is Unobservable

    Kaito Toda, Shuhei Sugiura, Shion Takeno, Ichiro Takeuchi

    IEICE Technical Report   Vol. 125 ( 308 ) page: 14 - 21   2025.12

    Language:Japanese   Publishing type:Research paper (conference, symposium, etc.)  

  15. Improved Regret Analysis in Gaussian Process Bandits: Optimality for Noiseless Reward, RKHS norm, and Non-Stationary Variance Reviewed

    Shogo Iwazaki, Shion Takeno

    Proceedings of The 42nd International Conference on Machine Learning (ICML 2025)   Vol. 267   page: 26642 - 26672   2025.7


    Language:English   Publishing type:Research paper (international conference proceedings)  

    We study the Gaussian process (GP) bandit problem, whose goal is to minimize regret under an unknown reward function lying in some reproducing kernel Hilbert space (RKHS). The maximum posterior variance analysis is vital in analyzing near-optimal GP bandit algorithms such as maximum variance reduction (MVR) and phased elimination (PE). Therefore, we first show the new upper bound of the maximum posterior variance, which improves the dependence of the noise variance parameters of the GP. By leveraging this result, we refine the MVR and PE to obtain (i) a nearly optimal regret upper bound in the noiseless setting and (ii) regret upper bounds that are optimal with respect to the RKHS norm of the reward function. Furthermore, as another application of our proposed bound, we analyze the GP bandit under the time-varying noise variance setting, which is the kernelized extension of the linear bandit with heteroscedastic noise. For this problem, we show that MVR and PE-based algorithms achieve noise variance-dependent regret upper bounds, which matches our regret lower bound.


    Other Link: https://arxiv.org/pdf/2502.06363v1
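
Maximum variance reduction (MVR), one of the algorithms refined above, is simple to sketch: each query maximizes the posterior variance (pure exploration), and the final recommendation maximizes the posterior mean. A minimal numpy sketch with an assumed toy reward, kernel, and length scale (not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(3)

def rbf(a, b, ls=0.2):
    # Squared-exponential kernel on 1-D inputs.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    # Exact GP posterior mean and variance on the grid Xs.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xs, X)
    mu = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)  # k(x,x)=1 here
    return mu, np.clip(var, 1e-12, None)

f = lambda x: np.sin(3 * x) + 0.5 * x   # toy reward (illustrative)
grid = np.linspace(0.0, 2.0, 200)
X = rng.uniform(0.0, 2.0, 2)
y = f(X)

# MVR: queries are chosen purely to shrink posterior uncertainty; the
# observed reward values never influence where we sample.
for t in range(20):
    mu, var = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(var)]
    X, y = np.append(X, x_next), np.append(y, f(x_next))

# Final recommendation: maximizer of the posterior mean.
mu, _ = gp_posterior(X, y, grid)
x_rec = grid[np.argmax(mu)]
```

The decoupling of exploration from reward is what makes the maximum-posterior-variance bound discussed in the abstract directly useful in the analysis.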

  16. Near-Optimal Algorithm for Non-Stationary Kernelized Bandits Reviewed

    Shogo Iwazaki, Shion Takeno

    Proceedings of the 28th International Conference on Artificial Intelligence and Statistics (AISTATS)   Vol. 258   page: 406 - 414   2025.5


    Language:English   Publishing type:Research paper (international conference proceedings)  

    This paper studies a non-stationary kernelized bandit (KB) problem, also called time-varying Bayesian optimization, where one seeks to minimize the regret under an unknown reward function that varies over time. In particular, we focus on a near-optimal algorithm whose regret upper bound matches the regret lower bound. For this goal, we show the first algorithm-independent regret lower bound for non-stationary KB with squared exponential and Matérn kernels, which reveals that an existing optimization-based KB algorithm with slight modification is near-optimal. However, this existing algorithm suffers from feasibility issues due to its huge computational cost. Therefore, we propose a novel near-optimal algorithm called restarting phased elimination with random permutation (R-PERP), which bypasses the huge computational cost. A technical key point is the simple permutation procedures of query candidates, which enable us to derive a novel tighter confidence bound tailored to the non-stationary problems.


    Other Link: https://arxiv.org/pdf/2410.16052v1

  17. No-Regret Bayesian Optimization with Stochastic Observation Failures Reviewed Open Access

    Shogo Iwazaki, Tomohiko Tanabe, Mitsuru Irie, Shion Takeno, Kota Matsui, Yu Inatsu

    Proceedings of the 28th International Conference on Artificial Intelligence and Statistics (AISTATS)   Vol. 258   page: 415 - 423   2025.5


    Language:English   Publishing type:Research paper (international conference proceedings)  


  18. Active Learning for Gaussian Process Regression Considering the Test Distribution

    Yoshito Okura, Shion Takeno, Yu Inatsu, Tatsuya Aoyama, Tomonari Tanaka, Satoshi Akahane, Hiroyuki Hanada, Noriaki Hashimoto, Taro Murayama, Hanju Lee, Shinya Kojima, Ichiro Takeuchi

    IEICE Technical Report   Vol. 124 ( 321 ) page: 41 - 48   2024.12

    Language:Japanese   Publishing type:Research paper (conference, symposium, etc.)  

  19. Hot off the Press: Towards Practical Preferential Bayesian Optimization with Skew Gaussian Processes Reviewed Open Access

    Shion Takeno, Masahiro Nomura, Masayuki Karasuyama

    Proceedings of the Companion Conference on Genetic and Evolutionary Computation (GECCO2024), Association for Computing Machinery     page: 59 - 60   2024.8


    Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)  

    DOI: 10.1145/3638530.3664060


  20. Bounding Box-based Multi-objective Bayesian Optimization of Risk Measures under Input Uncertainty Reviewed Open Access

    Yu Inatsu, Shion Takeno, Hiroyuki Hanada, Kazuki Iwata, Ichiro Takeuchi

    Proceedings of the 27th International Conference on Artificial Intelligence and Statistics   Vol. 238   page: 4564 - 4572   2024.7


    Language:English   Publishing type:Research paper (international conference proceedings)  

    In this study, we propose a novel multi-objective Bayesian optimization (MOBO) method to efficiently identify the Pareto front (PF) defined by risk measures for black-box functions under the presence of input uncertainty (IU). Existing BO methods for Pareto optimization in the presence of IU are risk-specific or without theoretical guarantees, whereas our proposed method addresses general risk measures and has theoretical guarantees. The basic idea of the proposed method is to assume a Gaussian process (GP) model for the black-box function and to construct high-probability bounding boxes for the risk measures using the GP model. Furthermore, in order to reduce the uncertainty of non-dominated bounding boxes, we propose a method of selecting the next evaluation point using a maximin distance defined by the maximum value of a quasi distance based on bounding boxes. As theoretical analysis, we prove that the algorithm can return an arbitrary-accurate solution in a finite number of iterations with high probability, for various risk measures such as Bayes risk, worst-case risk, and value-at-risk. We also give a theoretical analysis that takes into account approximation errors because there exist non-negligible approximation errors (e.g., finite approximation of PFs and sampling-based approximation of bounding boxes) in practice. We confirm that the proposed method outperforms compared with existing methods not only in the setting with IU but also in the setting of ordinary MOBO through numerical experiments.


    Other Link: https://arxiv.org/pdf/2301.11588v3

  21. Risk Seeking Bayesian Optimization under Uncertainty for Obtaining Extremum Reviewed Open Access

    Shogo Iwazaki, Tomohiko Tanabe, Mitsuru Irie, Shion Takeno, Yu Inatsu

    Proceedings of the 27th International Conference on Artificial Intelligence and Statistics   Vol. 238   page: 1252 - 1260   2024.5


    Language:English   Publishing type:Research paper (international conference proceedings)  


  22. Multi-objective Bayesian Optimization with Active Preference Learning Reviewed

    Ryota Ozaki, Kazuki Ishikawa, Youhei Kanzaki, Shinya Suzuki, Shion Takeno, Ichiro Takeuchi, Masayuki Karasuyama

    Proceedings of the 38th AAAI Conference on Artificial Intelligence   Vol. 38 ( 13 ) page: 14490 - 14498   2024.2


    Language:English   Publishing type:Research paper (international conference proceedings)  

    There are a lot of real-world black-box optimization problems that need to optimize multiple criteria simultaneously. However, in a multi-objective optimization (MOO) problem, identifying the whole Pareto front requires the prohibitive search cost, while in many practical scenarios, the decision maker (DM) only needs a specific solution among the set of the Pareto optimal solutions. We propose a Bayesian optimization (BO) approach to identifying the most preferred solution in the MOO with expensive objective functions, in which a Bayesian preference model of the DM is adaptively estimated by an interactive manner based on the two types of supervisions called the pairwise preference and improvement request. To explore the most preferred solution, we define an acquisition function in which the uncertainty both in the objective functions and the DM preference is incorporated. Further, to minimize the interaction cost with the DM, we also propose an active learning strategy for the preference estimation. We empirically demonstrate the effectiveness of our proposed method through the benchmark function optimization and the hyper-parameter optimization problems for machine learning models.

    DOI: 10.1609/aaai.v38i13.29364


    Other Link: https://arxiv.org/pdf/2311.13460v1

  23. Active Learning for Level Set Estimation Using Randomized Straddle Algorithms Reviewed

    Yu Inatsu, Shion Takeno, Kentaro Kutsukake, Ichiro Takeuchi

    Transactions on Machine Learning Research   Vol. 2024   2024


    Language:English  

    Level set estimation (LSE), the problem of identifying the set of input points where a function takes value above (or below) a given threshold, is important in practical applications. When the function is expensive-to-evaluate and black-box, the straddle algorithm, which is a representative heuristic for LSE based on Gaussian process models, and its extensions having theoretical guarantees have been developed. However, many of the existing methods include a confidence parameter $β^{1/2}_t$ that must be specified by the user, and methods that choose $β^{1/2}_t$ heuristically do not provide theoretical guarantees. In contrast, theoretically guaranteed values of $β^{1/2}_t$ need to be increased depending on the number of iterations and candidate points, and are conservative and not good for practical performance. In this study, we propose a novel method, the randomized straddle algorithm, in which $β_t$ in the straddle algorithm is replaced by a random sample from the chi-squared distribution with two degrees of freedom. The confidence parameter in the proposed method has the advantages of not needing adjustment, not depending on the number of iterations and candidate points, and not being conservative. Furthermore, we show that the proposed method has theoretical guarantees that depend on the sample complexity and the number of iterations. Finally, we confirm the usefulness of the proposed method through numerical experiments using synthetic and real data.


    Other Link: https://dblp.org/db/journals/tmlr/tmlr2024.html#InatsuTKT24
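
The randomized straddle described above replaces $β_t$ with a fresh chi-squared(2) draw each round. A minimal numpy sketch on a toy level-set problem; the threshold, objective, kernel, and length scale are illustrative assumptions, not settings from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def rbf(a, b, ls=0.2):
    # Squared-exponential kernel on 1-D inputs.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    # Exact GP posterior mean and standard deviation on the grid Xs.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xs, X)
    mu = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)  # k(x,x)=1 here
    return mu, np.sqrt(np.clip(var, 1e-12, None))

f = lambda x: np.sin(3 * x) + 0.5 * x   # toy black-box function (illustrative)
theta = 0.5                             # level-set threshold (assumed)
grid = np.linspace(0.0, 2.0, 200)
X = rng.uniform(0.0, 2.0, 3)
y = f(X)

for t in range(25):
    mu, sd = gp_posterior(X, y, grid)
    # Randomized straddle: beta_t is a fresh chi-squared(2) draw, so no
    # user-specified, iteration-dependent confidence schedule is needed.
    beta = rng.chisquare(2)
    score = np.sqrt(beta) * sd - np.abs(mu - theta)
    x_next = grid[np.argmax(score)]
    X, y = np.append(X, x_next), np.append(y, f(x_next))

# Classify the super-level set from the posterior mean and compare to truth.
mu, _ = gp_posterior(X, y, grid)
accuracy = np.mean((mu >= theta) == (f(grid) >= theta))
```

The straddle score is largest where the posterior is both uncertain and close to the threshold, so queries concentrate on the level-set boundary.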

  24. Failure-Aware Gaussian Process Optimization with Regret Bounds Reviewed

    Shogo Iwazaki, Shion Takeno, Tomohiko Tanabe, Mitsuru Irie

    Advances in Neural Information Processing Systems 36 (NeurIPS 2023)     page: 24388 - 24400   2023.11


    Language:English   Publishing type:Research paper (international conference proceedings)  

  25. An Adaptive Decision-Making Algorithm for Level Set Estimation with Multiple Objectives

    Shogo Iwazaki, Shion Takeno, Yu Inatsu, Kota Matsui

    Proceedings of the Annual Conference of the Japanese Society for Artificial Intelligence   Vol. JSAI2023   2023.6

    Language:Japanese   Publishing type:Research paper (conference, symposium, etc.)  

  26. Multi-Fidelity Bayesian Optimization Based on the Information Content of the Optimal Value

    Shion Takeno, Masayuki Karasuyama

    Proceedings of the Annual Conference of the Japanese Society for Artificial Intelligence   Vol. JSAI2023   2023.6

    Authorship:Lead author   Language:Japanese   Publishing type:Research paper (conference, symposium, etc.)  

  27. Hyperparameter Optimization Reflecting User Preferences via Preference-Aware Multi-Objective Bayesian Optimization

    Ryota Ozaki, Kazuki Ishikawa, Youhei Kanzaki, Shion Takeno, Ichiro Takeuchi, Masayuki Karasuyama

    IEICE Technical Report   Vol. 122 ( 325 ) page: 120 - 127   2022.12

    Language:Japanese   Publishing type:Research paper (conference, symposium, etc.)  

  28. Regret Analysis of the Randomized GP-UCB Algorithm

    Shion Takeno, Yu Inatsu, Masayuki Karasuyama

    IEICE Technical Report   Vol. 122 ( 325 ) page: 38 - 45   2022.12

    Authorship:Lead author   Language:Japanese   Publishing type:Research paper (conference, symposium, etc.)  

  29. Bayesian Optimization for Cascade-Type Multistage Processes Reviewed Open Access

    Shunya Kusakawa, Shion Takeno, Yu Inatsu, Kentaro Kutsukake, Shogo Iwazaki, Takashi Nakano, Toru Ujihara, Masayuki Karasuyama, Ichiro Takeuchi

    Neural Computation   Vol. 34 ( 12 ) page: 2408 - 2431   2022.11


    Language:English   Publishing type:Research paper (scientific journal)  

    Complex processes in science and engineering are often formulated as multistage decision-making problems. In this paper, we consider a type of multistage decision-making process called a cascade process. A cascade process is a multistage process in which the output of one stage is used as an input for the subsequent stage. When the cost of each stage is expensive, it is difficult to search for the optimal controllable parameters for each stage exhaustively. To address this problem, we formulate the optimization of the cascade process as an extension of the Bayesian optimization framework and propose two types of acquisition functions based on credible intervals and expected improvement. We investigate the theoretical properties of the proposed acquisition functions and demonstrate their effectiveness through numerical experiments. In addition, we consider an extension called suspension setting in which we are allowed to suspend the cascade process at the middle of the multistage decision-making process that often arises in practical problems. We apply the proposed method in a test problem involving a solar cell simulator, which was the motivation for this study.

    DOI: 10.1162/neco_a_01550

    arXiv
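    The cascade structure described in the abstract (each stage's output feeding the next stage's input) can be illustrated with a toy sketch. The stage functions, grids, and control values below are invented for illustration and are unrelated to the paper's solar-cell simulator or its acquisition functions:

    ```python
    import numpy as np

    # Toy two-stage cascade: stage 1 maps (x, its control u1) to y1,
    # and y1 is fed into stage 2 together with control u2.
    def stage1(x, u1):
        return np.sin(x) + 0.5 * u1

    def stage2(y1, u2):
        return -(y1 - 1.0) ** 2 + u2

    def cascade(x, u1, u2):
        # Full process: the output of stage 1 is the input of stage 2.
        return stage2(stage1(x, u1), u2)

    # Exhaustive search over a coarse grid of controllable parameters --
    # exactly the brute force that becomes infeasible when each stage is
    # expensive, which motivates the Bayesian optimization approach.
    grid = np.linspace(0.0, np.pi, 50)
    best_val, best_params = max(
        (cascade(x, u1, u2), (x, u1, u2))
        for x in grid
        for u1 in (0.0, 1.0)
        for u2 in (0.0, 0.5)
    )
    ```

    The sketch only shows why the search space grows multiplicatively with the number of stages; the paper replaces the exhaustive loop with GP-based acquisition functions.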

  30. Preferential Bayesian Optimization with Hallucination Believer Reviewed

    Shion Takeno, Masahiro Nomura, Masayuki Karasuyama

    NeurIPS Workshop on Gaussian Processes, Spatiotemporal Modeling, and Decision-making Systems     2022.11

     More details

    Authorship:Lead author, Corresponding author   Language:English  

  31. Bayesian Optimization for Distributionally Robust Chance-constrained Problem Reviewed

    Yu Inatsu, Shion Takeno, Masayuki Karasuyama, Ichiro Takeuchi

    Proceedings of the 39th International Conference on Machine Learning (ICML)     page: 9602 - 9621   2022.7

     More details

    Language:English   Publishing type:Research paper (international conference proceedings)  

    In black-box function optimization, we need to consider not only controllable design variables but also uncontrollable stochastic environment variables. In such cases, the optimization problem must be solved while accounting for the uncertainty of the environmental variables. The chance-constrained (CC) problem, in which the expected value is maximized under a constraint on the satisfaction probability, is one of the practically important problems in the presence of environmental variables. In this study, we consider the distributionally robust CC (DRCC) problem and propose a novel DRCC Bayesian optimization method for the case where the distribution of the environmental variables cannot be precisely specified. We show that the proposed method can find an arbitrarily accurate solution with high probability in a finite number of trials, and we confirm its usefulness through numerical experiments.

    arXiv

    Other Link: https://dblp.uni-trier.de/rec/conf/icml/2022
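    As a minimal illustration of the chance-constrained setting itself (not the paper's DRCC method), the constraint-satisfaction probability can be estimated by Monte Carlo over the environment variable. The function `g`, the Gaussian environment distribution, and the level `alpha` below are all invented for this sketch:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy setting: g(x, w) >= 0 must hold with probability >= alpha
    # over the stochastic environment variable w.
    def g(x, w):
        return 1.0 - (x - w) ** 2

    def satisfaction_prob(x, w_samples):
        # Monte Carlo estimate of P(g(x, w) >= 0).
        return np.mean(g(x, w_samples) >= 0.0)

    def cc_objective(x, w_samples, alpha=0.9):
        # Expected value of g, admissible only if the chance
        # constraint is (empirically) satisfied at level alpha.
        if satisfaction_prob(x, w_samples) < alpha:
            return -np.inf
        return float(np.mean(g(x, w_samples)))

    w_samples = rng.normal(0.0, 0.3, size=2000)  # environment samples
    grid = np.linspace(-2.0, 2.0, 81)
    best_x = max(grid, key=lambda x: cc_objective(x, w_samples))
    ```

    The DRCC problem studied in the paper additionally takes a worst case over an ambiguity set of environment distributions, which this sketch does not attempt.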

  32. Sequential and Parallel Constrained Bayesian Optimization Based on a Lower Bound of the Information Gain

    竹野思温, 田村友幸, 設楽一希, 烏山昌幸

    IEICE Technical Report   Vol. 121 ( 321 ) page: 9 - 16   2022.1

     More details

    Authorship:Lead author   Language:Japanese   Publishing type:Research paper (conference, symposium, etc.)  

  33. Simultaneous Bayesian Optimization of Multiple Tasks Based on Information Gain

    山田倫太郎, 竹野思温, 烏山昌幸

    IEICE Technical Report   Vol. 121 ( 321 ) page: 75 - 80   2022.1

     More details

    Language:Japanese   Publishing type:Research paper (conference, symposium, etc.)  

  34. Active Learning for Distributionally Robust Chance-Constrained Optimization Problems

    稲津 佑, 竹野思温, 烏山昌幸, 竹内一郎

    IEICE Technical Report   Vol. 121 ( 80 ) page: 47 - 54   2021.6

     More details

    Language:Japanese   Publishing type:Research paper (conference, symposium, etc.)  

  35. Multi-objective Bayesian Optimization using Pareto-frontier Entropy Reviewed

    Shinya Suzuki, Shion Takeno, Tomoyuki Tamura, Kazuki Shitara, Masayuki Karasuyama

    Proceedings of the 37th International Conference on Machine Learning (ICML)     page: 9279 - 9288   2020.7

     More details

    Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:PMLR  

    This paper studies entropy-based multi-objective Bayesian optimization (MBO). Entropy search is a successful approach to Bayesian optimization. However, for MBO, existing entropy-based methods ignore the trade-off among objectives or introduce unreliable approximations. We propose a novel entropy-based MBO called Pareto-frontier entropy search (PFES), which considers the entropy of the Pareto-frontier, an essential notion of optimality in multi-objective problems. Our entropy can incorporate the trade-off relation of the optimal values, and we derive an analytical formula without introducing additional approximations or simplifications to the standard entropy search setting. We also show that our entropy computation is practically feasible by using a recursive decomposition technique known from studies of Pareto hyper-volume computation. Besides the usual MBO setting, in which all the objectives are observed simultaneously, we also consider the "decoupled" setting, in which the objective functions can be observed separately. PFES easily adapts to the decoupled setting by considering the entropy of the marginal density for each output dimension. This approach incorporates the dependency among objectives conditioned on the Pareto-frontier, which is ignored by the existing method. Our numerical experiments show the effectiveness of PFES on several benchmark datasets.

    arXiv

    Other Link: https://dblp.uni-trier.de/rec/conf/icml/2020

  36. Estimation of material parameters based on precipitate shape: efficient identification of low-error region with Gaussian process modeling Reviewed Open Access

    Yuhki Tsukada, Shion Takeno, Masayuki Karasuyama, Hitoshi Fukuoka, Motoki Shiga, Toshiyuki Koyama

    Scientific Reports   Vol. 9 ( 1 )   2019.12

     More details

    Language:English   Publishing type:Research paper (scientific journal)  

    In this study, an efficient method for estimating material parameters based on the experimental data of precipitate shape is proposed. First, a computational model that predicts the energetically favorable shape of precipitate when a d-dimensional material parameter (x) is given is developed. Second, the discrepancy (y) between the precipitate shape obtained through the experiment and that predicted using the computational model is calculated. Third, the Gaussian process (GP) is used to model the relation between x and y. Finally, for identifying the “low-error region (LER)” in the material parameter space where y is less than a threshold, we introduce an adaptive sampling strategy, wherein the estimated GP model suggests the subsequent candidate x to be sampled/calculated. To evaluate the effectiveness of the proposed method, we apply it to the estimation of interface energy and lattice mismatch between MgZn2 (β1') and α-Mg phases in an Mg-based alloy. The result shows that the number of computational calculations of the precipitate shape required for the LER estimation is significantly decreased by using the proposed method.

    DOI: 10.1038/s41598-019-52138-0

    Open Access

    Scopus

    PubMed
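    The threshold-focused adaptive sampling idea (suggesting the next parameter where the GP is least certain whether the discrepancy y is above or below the threshold) can be sketched as follows. The kernel, lengthscale, acquisition rule, and toy data here are generic illustrative choices, not the paper's exact design:

    ```python
    import numpy as np

    def rbf(a, b, ell=0.3):
        # Squared-exponential kernel for 1-D inputs.
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

    def gp_posterior(X, y, Xs, noise=1e-6):
        # Zero-mean GP posterior mean and variance at test points Xs.
        K = rbf(X, X) + noise * np.eye(len(X))
        Ks = rbf(X, Xs)
        sol = np.linalg.solve(K, Ks)
        mu = sol.T @ y
        var = np.clip(1.0 - np.sum(Ks * sol, axis=0), 1e-12, None)
        return mu, var

    def next_sample(X, y, Xs, h):
        # Query where the GP is least certain whether y is above or
        # below the threshold h (small |mu - h| relative to the std).
        mu, var = gp_posterior(X, y, Xs)
        return float(Xs[np.argmin(np.abs(mu - h) / np.sqrt(var))])

    X = np.array([0.0, 0.5, 1.0])
    y = X.copy()                      # pretend discrepancy y(x) = x
    Xs = np.linspace(0.0, 1.0, 101)
    x_next = next_sample(X, y, Xs, h=0.4)
    ```

    With the toy data above, the rule proposes a point near where the posterior mean crosses the threshold, which is the behavior that lets the method concentrate expensive simulations around the low-error-region boundary.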


Books 2

  1. Materials, Machine Learning, and Robots (Gendai Kagaku Extra Issue 48)

    竹野思温, 烏山昌幸, 竹内一郎( Role: Contributor ,  Part III: Mastering Bayesian Optimization for Materials Science, Section 9: Multi-fidelity Optimization Considering Precision, Accuracy, and Observation Cost)

    Tokyo Kagaku Dojin  2024.3  ( ISBN:9784807913480 )

  2. Data Creation, Analysis, and Application Examples for Materials Informatics

    58 contributing authors( Role: Contributor ,  Chapter 4, Section 5: A Bayesian Search Method Considering the Trade-off between Accuracy and Observation Cost: An Application Example in Material Parameter Estimation)

    Gijutsu Joho Kyokai (Technical Information Institute)  2023.10  ( ISBN:4861048540 )

     More details

    Total pages:500   Language:Japanese

    CiNii Research

    ASIN

MISC 5

  1. On Regret Bounds of Thompson Sampling for Bayesian Optimization Reviewed

    Shion Takeno, Shogo Iwazaki

        2026.3

     More details

    We study a widely used Bayesian optimization method, Gaussian process Thompson sampling (GP-TS), under the assumption that the objective function is a sample path from a GP. Compared with the GP upper confidence bound (GP-UCB) with established high-probability and expected regret bounds, most analyses of GP-TS have been limited to expected regret. Moreover, whether the recent analyses of GP-UCB for the lenient regret and the improved cumulative regret upper bound can be applied to GP-TS remains unclear. To fill these gaps, this paper shows several regret bounds: (i) a regret lower bound for GP-TS, which implies that GP-TS suffers from a polynomial dependence on $1/\delta$ with probability $\delta$, (ii) an upper bound of the second moment of cumulative regret, which directly suggests an improved regret upper bound on $\delta$, (iii) expected lenient regret upper bounds, and (iv) an improved cumulative regret upper bound on the time horizon $T$. Along the way, we provide several useful lemmas, including a relaxation of the necessary condition from recent analysis to obtain improved regret upper bounds on $T$.

    arXiv

    Other Link: https://arxiv.org/pdf/2603.09276v1
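    A single round of GP Thompson sampling, the algorithm analyzed above, can be sketched on a finite grid: draw one sample path from the GP posterior and query its argmax. The kernel, lengthscale, noise level, and toy observations below are illustrative assumptions, not from the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def rbf(a, b, ell=0.2):
        # Squared-exponential kernel for 1-D inputs.
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

    def gp_ts_next(X, y, grid, noise=1e-4):
        # One GP Thompson-sampling round: sample a posterior path on
        # the grid and return the grid point maximizing that path.
        K = rbf(X, X) + noise * np.eye(len(X))
        Ks = rbf(X, grid)
        mu = Ks.T @ np.linalg.solve(K, y)
        cov = rbf(grid, grid) - Ks.T @ np.linalg.solve(K, Ks)
        jitter = 1e-6 * np.eye(len(grid))   # numerical stabilizer
        path = rng.multivariate_normal(mu, cov + jitter)
        return float(grid[int(np.argmax(path))])

    X = np.array([0.2, 0.8])          # queried points so far
    y = np.array([1.0, 0.0])          # their noisy observations
    grid = np.linspace(0.0, 1.0, 50)
    x_next = gp_ts_next(X, y, grid)
    ```

    The randomness of the sampled path is exactly what the regret analysis must control: a path can look optimistic far from the data, so high-probability bounds behave differently from expected-regret bounds.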

  2. Randomized Kriging Believer for Parallel Bayesian Optimization with Regret Bounds Reviewed

    Shuhei Sugiura, Ichiro Takeuchi, Shion Takeno

        2026.3

     More details

    We consider an optimization problem of an expensive-to-evaluate black-box function, in which we can obtain noisy function values in parallel. For this problem, parallel Bayesian optimization (PBO) is a promising approach, which aims to optimize with fewer function evaluations by selecting a diverse input set for parallel evaluation. However, existing PBO methods suffer from poor practical performance or lack theoretical guarantees. In this study, we propose a PBO method, called randomized kriging believer (KB), based on a well-known KB heuristic and inheriting the advantages of the original KB: low computational complexity, a simple implementation, versatility across various BO methods, and applicability to asynchronous parallelization. Furthermore, we show that our randomized KB achieves Bayesian expected regret guarantees. We demonstrate the effectiveness of the proposed method through experiments on synthetic and benchmark functions and emulators of real-world data.

    arXiv

    Other Link: https://arxiv.org/pdf/2603.01470v1
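    The kriging-believer heuristic that the abstract builds on can be sketched as: pick a batch point, pretend the GP posterior mean there was actually observed ("believe" it), refit, and pick the next batch member. The UCB inner criterion, kernel, and constants below are generic illustration choices; the paper's randomized variant and its regret analysis are not reproduced here:

    ```python
    import numpy as np

    def rbf(a, b, ell=0.25):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

    def gp_posterior(X, y, grid, noise=1e-6):
        K = rbf(X, X) + noise * np.eye(len(X))
        Ks = rbf(X, grid)
        sol = np.linalg.solve(K, Ks)
        mu = sol.T @ y
        var = np.clip(1.0 - np.sum(Ks * sol, axis=0), 1e-12, None)
        return mu, var

    def kriging_believer_batch(X, y, grid, q, beta=2.0):
        # Select q points for parallel evaluation: repeatedly maximize
        # a UCB score, then append the "hallucinated" posterior mean
        # as a fake observation before picking the next member.
        X, y = X.copy(), y.copy()
        batch = []
        for _ in range(q):
            mu, var = gp_posterior(X, y, grid)
            i = int(np.argmax(mu + beta * np.sqrt(var)))
            batch.append(float(grid[i]))
            X = np.append(X, grid[i])
            y = np.append(y, mu[i])   # believed (hallucinated) value
        return batch

    X = np.array([0.1, 0.9])
    y = np.array([0.0, 1.0])
    grid = np.linspace(0.0, 1.0, 41)
    batch = kriging_believer_batch(X, y, grid, q=3)
    ```

    Believing the posterior mean collapses the predictive variance at chosen points, which pushes later batch members toward unexplored regions and yields the diverse input set that parallel evaluation needs.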

  3. Dose-finding design based on level set estimation in phase I cancer clinical trials Reviewed

    Keiichiro Seno, Kota Matsui, Shogo Iwazaki, Yu Inatsu, Shion Takeno, Shigeyuki Matsui

        2025.4

     More details

    The primary objective of phase I cancer clinical trials is to evaluate the safety of a new experimental treatment and to find the maximum tolerated dose (MTD). We show that the MTD estimation problem can be regarded as a level set estimation (LSE) problem whose objective is to determine the regions where an unknown function value is above or below a given threshold. Then, we propose a novel dose-finding design in the framework of LSE. The proposed design determines the next dose on the basis of an acquisition function incorporating uncertainty in the posterior distribution of the dose-toxicity curve as well as overdose control. Simulation experiments show that the proposed LSE design achieves a higher accuracy in estimating the MTD and involves a lower risk of overdosing allocation compared to existing designs, thereby indicating that it provides an effective methodology for phase I cancer clinical trial design.

    arXiv

    Other Link: https://arxiv.org/pdf/2504.09157v1

  4. Distributionally Robust Safe Sample Elimination under Covariate Shift Reviewed

    Hiroyuki Hanada, Tatsuya Aoyama, Satoshi Akahane, Tomonari Tanaka, Yoshito Okura, Yu Inatsu, Noriaki Hashimoto, Shion Takeno, Taro Murayama, Hanju Lee, Shinya Kojima, Ichiro Takeuchi

        2024.6

     More details

    We consider a machine learning setup where one training dataset is used to train multiple models across slightly different data distributions. This occurs when customized models are needed for various deployment environments. To reduce storage and training costs, we propose the DRSSS method, which combines distributionally robust (DR) optimization and safe sample screening (SSS). The key benefit of this method is that models trained on the reduced dataset will perform the same as those trained on the full dataset for all possible different environments. In this paper, we focus on covariate shift as a type of data distribution change and demonstrate the effectiveness of our method through experiments.

    arXiv

    Other Link: https://arxiv.org/pdf/2406.05964v2

  5. Fundamentals of Bayesian Optimization and Its Application to Materials Search

    Materials stage     2023.6

Presentations 37

  1. Preference-Based Bayesian Optimization Invited

    竹野思温

    The 27th Workshop on Information-Based Induction Sciences (IBIS2024)  2024.11.6

     More details

    Language:Japanese   Presentation type:Oral presentation (invited, special)  

  2. Microstructure Optimization via Multi-fidelity Bayesian Optimization

    栗田智啓, 竹野思温, 崔羿, 竹内一郎

    The 72nd JSAP Spring Meeting  2025.3.17

     More details

    Event date: 2025.3

    Language:Japanese   Presentation type:Oral presentation (general)  

  3. Randomized Gaussian Process UCB and Its Expected Regret Upper Bound Invited

    竹野思温

    The 23rd Forum on Information Technology (FIT2024), Top Conference Session  2024.9.4

     More details

    Event date: 2024.9

    Language:Japanese   Presentation type:Oral presentation (invited, special)  

  4. Hyperparameter Optimization Reflecting User Preferences via Preference-Aware Multi-objective Bayesian Optimization

    尾崎令拓, 石川和樹, 神崎陽平, 竹野思温, 竹内一郎, 烏山昌幸

    IEICE Technical Meeting on Information-Based Induction Sciences and Machine Learning (IBISML2022)  2022.12

     More details

    Presentation type:Oral presentation (general)  

  5. An Adaptive Decision-Making Algorithm for Multi-objective Level Set Estimation

    岩崎省吾, 竹野思温, 稲津佑, 松井孝太

    The 37th Annual Conference of the Japanese Society for Artificial Intelligence (JSAI2023)  2023.6

     More details

    Presentation type:Oral presentation (general)  

  6. Multi-fidelity Bayesian Optimization Based on the Information Gain of the Optimal Value

    竹野思温, 烏山昌幸

    The 37th Annual Conference of the Japanese Society for Artificial Intelligence (JSAI2023)  2023.6

     More details

    Presentation type:Oral presentation (general)  

  7. Sequential and Parallel Constrained Bayesian Optimization Based on a Lower Bound of the Information Gain

    竹野思温, 田村友幸, 設楽一希, 烏山昌幸

    IEICE Technical Meeting on Information-Based Induction Sciences and Machine Learning (IBISML2022)  2022.1

     More details

    Presentation type:Oral presentation (general)  

  8. Constrained Bayesian Optimization Based on a Lower Bound of the Information Gain

    竹野思温, 田村友幸, 設楽一希, 烏山昌幸

    The 24th Workshop on Information-Based Induction Sciences (IBIS2021)  2021.11

     More details

    Presentation type:Oral presentation (general)  

  9. Simultaneous Bayesian Optimization of Multiple Tasks Based on Information Gain

    山田倫太郎, 竹野思温, 烏山昌幸

    IEICE Technical Meeting on Information-Based Induction Sciences and Machine Learning (IBISML2022)  2022.1

     More details

    Presentation type:Oral presentation (general)  

  10. Data Assimilation for Manufacturing Incorporating Expert Knowledge (I): Proposed Method

    沓掛健太朗, 竹野思温, 太田壮音, 烏山昌幸, 竹内一郎, 宇治原徹

    The 83rd JSAP Autumn Meeting  2022.9

  11. Data Assimilation for Manufacturing Incorporating Expert Knowledge (II): Application to SiC Solution Growth Simulation

    太田壮音, 沓掛健太朗, 竹野思温, 烏山昌幸, 竹内一郎, 原田俊太, 田川美穂, 宇治原徹

    The 83rd JSAP Autumn Meeting  2022.9

  12. Extension of Bayesian Optimization for Multistage Processes to the Suspendable Setting

    草川隼也, 竹野思温, 沓掛健太朗, 竹内一郎

    The 18th Workshop on Informatics (WINF2020)  2020.11

     More details

    Presentation type:Oral presentation (general)  

  13. Active Learning for Distributionally Robust Chance-Constrained Optimization Problems

    稲津佑, 竹野思温, 烏山昌幸, 竹内一郎

    IEICE Technical Meeting on Information-Based Induction Sciences and Machine Learning (IBISML2021)  2021.6

     More details

    Presentation type:Oral presentation (general)  

  14. Bayesian Optimization Using Output-Space Information Gain and Its Extensions Invited

    竹野思温

    The 5th Statistics and Machine Learning Young Researchers Symposium (StatsML Symposium'20)  2020.12

     More details

    Presentation type:Public lecture, seminar, tutorial, course, or other speech  

  15. Extensions of Bayesian Optimization Based on Output-Space Information Gain

    竹野思温

    RIKEN AIP Data-Driven Biomedical Science Team Online Seminar  2022.9

     More details

    Presentation type:Public lecture, seminar, tutorial, course, or other speech  

  16. A Multi-objective Bayesian Optimization Method for Risk Measures Using Credible Regions under Input Uncertainty

    稲津佑, 花田博幸, 岩田和樹, 竹野思温, 竹内一郎

    IEICE Technical Meeting on Information-Based Induction Sciences and Machine Learning (IBISML2023)  2023.10.30

  17. Distributionally Robust Active Learning under Uncertainty in the Input Data Distribution

    大藏芳斗, 稲津佑, 竹野思温, 花田博幸, 青山竜也, 田中智成, 赤羽智志, 小嶋信矢, 李翰柱, 竹内一郎

    IEICE Technical Meeting on Information-Based Induction Sciences and Machine Learning (IBISML2023)  2023.10.30

  18. Bayesian Optimization Based on Samples from the Posterior Distribution

    竹野思温, 稲津佑, 烏山昌幸, 竹内一郎

    IEICE Technical Meeting on Information-Based Induction Sciences and Machine Learning (IBISML2023)  2023.10.30

     More details

    Presentation type:Poster presentation  

  19. Regret Analysis of the Randomized GP-UCB Algorithm

    竹野思温, 稲津佑, 烏山昌幸

    IEICE Technical Meeting on Information-Based Induction Sciences and Machine Learning (IBISML2022)  2022.12

     More details

    Presentation type:Oral presentation (general)  

  20. A Randomized UCB Algorithm for Gaussian Processes

    Japanese Joint Statistical Meeting  2023.9.5

     More details

    Language:Japanese   Presentation type:Oral presentation (general)  

  21. Bayesian Optimization for Cascade-Type Multistage Processes

    草川隼也, 竹野思温, 稲津佑, 沓掛健太朗, 岩崎省吾, 中野高志, 烏山昌幸, 宇治原徹, 竹内一郎

    The 24th Workshop on Information-Based Induction Sciences (IBIS2021)  2021.11

     More details

    Presentation type:Oral presentation (general)  

  22. Multi-objective Bayesian Optimization Based on Pareto-Frontier Entropy

    鈴木進也, 竹野思温, 田村友幸, 設楽一希, 烏山昌幸

    The 22nd Workshop on Information-Based Induction Sciences (IBIS2019)  2019.11

     More details

    Presentation type:Poster presentation  

  23. Multi-fidelity Bayesian Optimization Based on Max-value Entropy Search

    竹野思温, 福岡準史, 塚田祐貴, 小山敏幸, 志賀元紀, 竹内一郎, 烏山昌幸

    The 22nd Workshop on Information-Based Induction Sciences (IBIS2019)  2019.11

     More details

    Presentation type:Poster presentation  

  24. Application of Human-in-the-Loop Preferential Bayesian Optimization to Semiconductor Manufacturing Process Development

    松田凌芽, 霜田大貴, 吉田拓未, 竹野思温, 沓掛健太朗, 宇治原徹, 竹内一郎

    The 71st JSAP Spring Meeting  2024.3

     More details

    Language:Japanese   Presentation type:Oral presentation (general)  

  25. Active Learning Considering the Test Distribution with Gaussian Process Regression Models

    大藏芳斗, 竹野思温, 稲津佑, 青山竜也, 田中智成, 赤羽智志, 花田博幸, 橋本典明, 村山太朗, 李翰柱, 小嶋信矢, 竹内一郎

    The 27th Workshop on Information-Based Induction Sciences (IBIS2024)  2024.11

     More details

    Language:Japanese   Presentation type:Poster presentation  

  26. Bayesian Optimization for Sequence Data Based on Continuous Relaxation with Latent Variables

    田中優次, 竹野思温, 増田慎太郎, 稲津佑, 烏山昌幸, 永田崇, 井上圭一, 竹内一郎

    The 27th Workshop on Information-Based Induction Sciences (IBIS2024)  2024.11

     More details

    Language:Japanese   Presentation type:Poster presentation  

  27. Bayesian Optimization for Advanced Problems Invited

    竹野思温

    Symposium of the Measurement, Control, and Systems Engineering Division  2024.11.14

     More details

    Language:Japanese   Presentation type:Oral presentation (invited, special)  

  28. Data Assimilation for Manufacturing Equipment Simulation Considering Diverse Error Sources

    沓掛健太朗, 竹野思温, 太田壮音, 竹内一郎, 宇治原徹

    The 37th Computational Mechanics Division Conference (CMD2024), JSME  2024.10

     More details

    Language:Japanese  

  29. Bayesian Optimization Leveraging Binary Responses and Preference Information

    栗聡汰, 竹野思温, 稲津佑, 竹内一郎

    The 27th Workshop on Information-Based Induction Sciences (IBIS2024)  2024.11

     More details

    Language:Japanese   Presentation type:Poster presentation  

  30. Bayesian Optimization Based on Randomized Expected Improvement

    竹野思温, 稲津佑, 烏山昌幸, 竹内一郎

    The 27th Workshop on Information-Based Induction Sciences (IBIS2024)  2024.11

     More details

    Language:Japanese   Presentation type:Poster presentation  

  31. A Randomization-Based Adaptive Decision-Making Algorithm for Level Set Estimation

    稲津佑, 竹野思温, 沓掛健太朗, 竹内一郎

    The 27th Workshop on Information-Based Induction Sciences (IBIS2024)  2024.11

     More details

    Language:Japanese   Presentation type:Poster presentation  

  32. Bayesian Optimization for Composite Functions Whose Inner Function Cannot Be Observed

    戸田海仁, 杉浦秀平, 竹野思温, 竹内一郎

    IEICE Technical Meeting on Information-Based Induction Sciences and Machine Learning (IBISML2025)  2025.12.22

     More details

    Language:Japanese   Presentation type:Oral presentation (general)  

  33. Data assimilation for crystal growth simulation incorporating multiple uncertainties using machine learning

    Kentaro Kutsukake, Shion Takeno, Masato Ota, Ichiro Takeuchi, Toru Ujihara

    11th International Workshop on Modeling in Crystal Growth  2024.9.24 

     More details

    Language:English   Presentation type:Oral presentation (general)  

  34. On High-Probability Regret Guarantees for Gaussian Process Thompson Sampling

    竹野思温, 岩崎省吾

    IEICE Technical Meeting on Information-Based Induction Sciences and Machine Learning (IBISML2025)  2025.12.22

     More details

    Language:Japanese   Presentation type:Oral presentation (general)  

  35. Regret Analysis of Parallel Bayesian Optimization Based on Posterior Sampling

    杉浦秀平, 竹内一郎, 竹野思温

    IEICE Technical Meeting on Information-Based Induction Sciences and Machine Learning (IBISML2025)  2025.12.22

     More details

    Language:Japanese   Presentation type:Oral presentation (general)  

  36. Non-Myopic Bayesian Optimization Utilizing Binary and Preference Information

    栗聡汰, 杉浦秀平, 竹内一郎, 竹野思温

    IEICE Technical Meeting on Information-Based Induction Sciences and Machine Learning (IBISML2025)  2025.12.22

     More details

    Language:Japanese   Presentation type:Oral presentation (general)  

  37. Cascade-Type Multistage Bayesian Optimization for High-Performance Multilayer Electron-Selective Films Containing Nanocrystalline Silicon Oxide

    Soma Kondo, Yasuyoshi Kurokawa, Kentaro Kutsukake, Shion Takeno, Ryoji Katsube, Noritaka Usami

    36th International Photovoltaic Science and Engineering Conference (PVSEC-36)  2025.11.9 

     More details

    Language:English   Presentation type:Oral presentation (general)  


KAKENHI (Grants-in-Aid for Scientific Research) 3

  1. Construction of Randomized Bayesian Optimization Methods with Theoretical Guarantees for Advanced Optimization Problems and Their Application to Materials Science

    Grant number:24K20847  2024.4 - 2027.3

    Japan Society for the Promotion of Science  Grants-in-Aid for Scientific Research  Early-Career Scientists

    竹野 思温

  2. Development of Randomized Bayesian Optimization Methods, Their Theoretical Guarantees, and Applications to Materials Science

    Grant number:23K19967  2023.8 - 2025.3

    Japan Society for the Promotion of Science  Grants-in-Aid for Scientific Research  Research Activity Start-up

    竹野 思温

      More details

    Authorship:Principal investigator 

    Grant amount: \2,860,000 ( Direct Cost: \2,200,000, Indirect Cost: \660,000 )

  3. Multi-fidelity Bayesian Optimization Based on Output-Space Information Gain and Its Application to Materials Science

    Grant number:21J14673  2021.4 - 2023.3

    Japan Society for the Promotion of Science  Grants-in-Aid for Scientific Research  Grant-in-Aid for JSPS Fellows

    竹野 思温

      More details

    Authorship:Principal investigator 

    Grant amount: \1,500,000 ( Direct Cost: \1,500,000 )

    This fiscal year, we extended multi-fidelity Bayesian optimization based on output-space information gain. First, we established a more efficient computation scheme for the numerical integration and other approximations required by the proposed method, enabling faster evaluation of the information gain with improved approximation accuracy. We then considered the more complex setting in which multiple observations can be obtained simultaneously. For example, in computational materials science, physical properties are sometimes computed through long-running simulations, during which approximate property values can also be observed along the way. We designed an acquisition criterion that accounts for such simultaneously available observations, derived a highly efficient computation of this criterion, and showed that it enables more efficient optimization. These results were submitted to an international journal and are being revised following a major-revision decision.
    We also addressed the practically important constrained optimization problem and studied constrained Bayesian optimization based on output-space information gain. We showed that naively extending existing methods to constrained problems causes both theoretical and practical issues. To address these, we proposed a more robust approximation based on a lower bound of the information gain and extended it to the parallel-observation setting. We also theoretically examined the resulting estimator. Numerical experiments showed that the method outperforms several baselines, including the naive extension of existing output-space information-gain methods. This work has been submitted to an international conference.

Industrial property rights 1

  1. Estimation device, estimation method, and program

     More details

    Application no:特願2022-134361  Date applied:2022.8

 

Academic Activities 4

  1. Program committee of The 27th Information-Based Induction Sciences Workshop (IBIS 2024)

    Role(s):Planning, management, etc.

    2024.11

     More details

    Type:Academic society, research group, etc. 

  2. Reviewer at the 41st International Conference on Machine Learning

    Role(s):Peer review

     More details

    Type:Academic society, research group, etc. 

  3. Reviewer at The International Conference on Artificial Intelligence and Statistics (2024, 2025)

    Role(s):Peer review

     More details

    Type:Academic society, research group, etc. 

  4. Reviewer at Advances in Neural Information Processing Systems (NeurIPS 2023, Top Reviewer)

    Role(s):Peer review

     More details

    Type:Academic society, research group, etc.