The International Conference on Learning Representations (ICLR) 2026 is underway. Founded by Yann LeCun and Yoshua Bengio, it focuses on deep learning and ranks among the top academic AI conferences alongside NeurIPS and ICML. The treemap above shows the top 100 publishing organisations. Chinese institutions account for about 40% of author affiliations on poster papers and 30% of oral papers, a status only 4% of papers receive. US institutions mirror this picture, contributing to about 40% of oral papers and 30% of posters.
These findings are consistent with what we found at NeurIPS 2025 and are exemplified by this year's Outstanding Paper Awards. Of the three recognized papers, two come out of US institutions: LLMs Get Lost In Multi-Turn Conversation (Microsoft and Salesforce) and the Honorable Mention The Polar Express (NYU and the Flatiron Institute), while the third, Transformers are Inherently Succinct, is a European collaboration by researchers from RPTU Kaiserslautern-Landau, ETH Zürich and the Max Planck Institute.
Chinese AI research remains largely underrated in the West. Just as their research output draws scant attention, so do their AI products: this week's new model releases, Kimi K2.6 and DeepSeek V4, were dwarfed by the focus on OpenAI's GPT 5.5 and Anthropic's Opus 4.7, despite the impressive capabilities they offer as open-weight models.
Similarly, the research landscape extends well beyond the US and China. At ICLR, top universities from South Korea, Singapore, Canada, Australia, Japan and the UAE also contributed significantly. The top European contributions come, as usual, from the UK and Switzerland, and comparing the top 100 institutes outside the US and China shows that contributions from Singapore and South Korea are competitive with the EU-27.
An important methodological change: our analysis of NeurIPS 2025 papers derived affiliations from authors' registrations on the OpenReview platform. For ICLR 2026 we have moved to a more robust approach using the affiliations stated directly in the papers themselves. The visualization can be customized by paper type and top organizations here.