Takashi Ishida
Research Scientist, RIKEN AIP / Associate Professor, University of Tokyo
Verified email at ms.k.u-tokyo.ac.jp - Homepage
Title · Cited by · Year
Learning from complementary labels
T Ishida, G Niu, W Hu, M Sugiyama
Advances in neural information processing systems (NeurIPS 2017), 2017
239 · 2017
Do We Need Zero Training Loss After Achieving Zero Training Error?
T Ishida, I Yamane, T Sakai, G Niu, M Sugiyama
International Conference on Machine Learning (ICML 2020), 2020
195 · 2020
Complementary-label learning for arbitrary losses and models
T Ishida, G Niu, AK Menon, M Sugiyama
International Conference on Machine Learning (ICML 2019), 2019
148 · 2019
Binary classification from positive-confidence data
T Ishida, G Niu, M Sugiyama
Advances in neural information processing systems (NeurIPS 2018), 2018
98 · 2018
Machine Learning from Weak Supervision: An Empirical Risk Minimization Approach
M Sugiyama, H Bao, T Ishida, N Lu, T Sakai, G Niu
MIT Press, 2022
47 · 2022
LocalDrop: A hybrid regularization for deep neural networks
Z Lu, C Xu, B Du, T Ishida, L Zhang, M Sugiyama
IEEE Transactions on Pattern Analysis and Machine Intelligence 44 (7), 3590-3601, 2021
26 · 2021
Is the Performance of My Deep Network Too Good to Be True? A Direct Approach to Estimating the Bayes Error in Binary Classification
T Ishida, I Yamane, N Charoenphakdee, G Niu, M Sugiyama
The Eleventh International Conference on Learning Representations (ICLR 2023), 2023
22 · 2023
Learning from Noisy Complementary Labels with Robust Loss Functions
H Ishiguro, T Ishida, M Sugiyama
IEICE TRANSACTIONS on Information and Systems 105 (2), 364-376, 2022
12 · 2022
Learning with Complementary Labels Revisited: The Selected-Completely-at-Random Setting Is More Practical
W Wang, T Ishida, YJ Zhang, G Niu, M Sugiyama
International Conference on Machine Learning (ICML 2024), 2024
9* · 2024
Mediated Uncoupled Learning and Validation with Bregman Divergences: Loss Family with Maximal Generality
I Yamane, Y Chevaleyre, T Ishida, F Yger
International Conference on Artificial Intelligence and Statistics (AISTATS …, 2023
3 · 2023
Importance Weighting for Aligning Language Models under Deployment Distribution Shift
T Lodkaew, T Fang, T Ishida, M Sugiyama
Transactions on Machine Learning Research (TMLR), 2025
2 · 2025
EDINET-Bench: Evaluating LLMs on Complex Financial Tasks using Japanese Financial Statements
I Sugiura, T Ishida, T Makino, C Tazuke, T Nakagawa, K Nakago, D Ha
arXiv preprint arXiv:2506.08762, 2025
2 · 2025
Flooding regularization for stable training of generative adversarial networks
I Yahiro, T Ishida, N Yokoya
arXiv preprint arXiv:2311.00318, 2023
2 · 2023
Scalable Oversight via Partitioned Human Supervision
R Yin, T Ishida, M Sugiyama
arXiv preprint arXiv:2510.22500, 2025
2025
LLM Routing with Dueling Feedback
CK Chiang, T Ishida, M Sugiyama
arXiv preprint arXiv:2510.00841, 2025
2025
Off-Policy Corrected Reward Modeling for Reinforcement Learning from Human Feedback
J Ackermann, T Ishida, M Sugiyama
Conference on Language Modeling (COLM 2025), 2025
2025
Practical estimation of the optimal classification error with soft labels and calibration
R Ushio, T Ishida, M Sugiyama
arXiv preprint arXiv:2505.20761, 2025
2025
How Can I Publish My LLM Benchmark Without Giving the True Answers Away?
T Ishida, T Lodkaew, I Yamane
ICML 2025 Workshop on the Impact of Memorization on Trustworthy Foundation …, 2025
2025
Articles 1–18