
NOTABLE AI RESEARCH PAPERS - WEEKLY BRIEF #2026-5



Gaia Cavaglioni
February 2, 2026 - 3 min read

Agentic AI Optimisation (AAIO): what it is, how it works, why it matters, and how to deal with it

Luciano Floridi, Carlotta Buttaboni, Emmie Hine, Jessica Morley, Claudio Novelli, Tyler Schroder

This paper introduces Agentic AI Optimisation (AAIO) as a new paradigm for structuring digital content and websites to enable effective interactions with autonomous AI agents, analogous to how Search Engine Optimisation shaped content discoverability for search engines. Addressing critical questions about managing and regulating AI systems that can modify their own objectives and behaviours, the research offers practical recommendations for dealing with the challenges posed by increasingly autonomous AI agents in real-world deployments.

Dynamics of human-AI collective knowledge on the web: A scalable model and insights for sustainable growth

Buddhika Nettasinghe, Kang Zhao

This paper proposes a minimal dynamical model for understanding the co-evolution of human-AI knowledge ecosystems on the web. The model tracks five interconnected variables: archive size, archive quality, LLM skill, aggregate human skill, and query volume. Through experiments, the research identifies distinct growth regimes (e.g. healthy growth, inverted flow, and inverted learning), demonstrating how platform and policy levers can steer ecosystems towards sustainable knowledge production or trigger systemic risks such as quality dilution and skill reduction.
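The flavour of such a coupled model can be sketched in a few lines of code. The variable names match the five quantities above, but the update rules and coefficients here are illustrative assumptions, not the paper's actual equations:

```python
# Toy five-variable knowledge-ecosystem model (assumed dynamics, not the
# paper's equations): archive size A, archive quality Q, LLM skill L,
# aggregate human skill H, query volume V.

def step(A, Q, L, H, V, dt=0.1):
    """Advance the toy ecosystem one discrete time step."""
    dA = V * (0.5 * H + 0.5 * L)       # new content from humans and LLMs
    dQ = 0.3 * H - 0.2 * L * (1 - Q)   # human expertise raises quality;
                                       # low-quality LLM output dilutes it
    dL = 0.4 * A * Q - 0.1 * L         # LLMs learn from the archive
    dH = 0.2 * Q - 0.3 * L             # humans learn less when LLMs answer
    dV = 0.1 * L                       # more capable LLMs attract queries
    return (A + dt * dA,
            max(0.0, min(1.0, Q + dt * dQ)),   # quality bounded in [0, 1]
            L + dt * dL,
            max(0.0, H + dt * dH),             # skill cannot go negative
            V + dt * dV)

state = (1.0, 0.8, 0.5, 1.0, 1.0)
for _ in range(100):
    state = step(*state)
```

Even a toy system like this exhibits regime changes: depending on the coupling coefficients (the hypothetical "platform and policy levers"), the archive can grow while quality and human skill erode, which is the kind of dilution risk the paper formalises.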

SoftHateBench: evaluating moderation models against reasoning-driven, policy-compliant hostility

Xuanyu Su, Diana Inkpen, Nathalie Japkowicz

This paper introduces SoftHateBench, a generative benchmark designed to evaluate content moderation systems' performance in identifying "soft hate speech". Addressing a critical gap in existing benchmarks, it systematically measures the robustness of moderation against implicit, policy-compliant forms of online hate that evade traditional toxicity detection due to their argumentative sophistication rather than their use of overt insults or threats.

Normative equivalence in human-AI cooperation: behaviour, not identity, drives cooperation in mixed-agent groups

Nico Mutzner, Taha Yasseri, and Heiko Rauhut

This experimental study examines how AI agents influence cooperative social norms in small groups through a repeated four-player public goods game involving three humans and one bot (framed as either human or AI). The findings reveal 'normative equivalence', suggesting that cooperative norms emerge in mixed human-AI groups based on observed behaviour patterns and behavioural inertia rather than agent identity.
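The payoff structure of a standard public goods game makes the 'normative equivalence' point concrete: agent identity never enters the formula, only contributions do. A minimal sketch, with an assumed endowment and multiplier rather than the study's actual parameters:

```python
# Toy four-player public goods round (assumed parameters: endowment 20,
# pool multiplier 1.6; the study's actual design may differ).

def public_goods_round(contributions, endowment=20, multiplier=1.6):
    """Each player keeps the uncontributed endowment plus an equal
    share of the multiplied common pool."""
    pool = sum(contributions) * multiplier
    share = pool / len(contributions)
    return [endowment - c + share for c in contributions]

# Three humans and one bot; only behaviour (the contribution) matters,
# so a bot contributing 10 earns exactly what a human contributing 10 does.
payoffs = public_goods_round([10, 10, 10, 10])
```

Here each player ends with 26 points: 10 kept plus a 16-point share of the 64-point pool.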

Agentic reasoning for Large Language Models

Tianxin Wei, Ting-Wei Li, Zhining Liu, Xuying Ning, Ze Yang, Jiaru Zou, Zhichen Zeng et al.

This comprehensive survey examines how reasoning capabilities are integrated into LLM-based agents across three environmental complexity layers: foundational agentic reasoning, self-evolving agentic reasoning, and collective multi-agent reasoning. Providing a systematic overview of agentic reasoning, the paper addresses the paradigm shift from closed-world reasoning to autonomous agents that plan, act and learn through continual interaction in dynamic environments, from single-agent operations to multi-agent systems.




Tags: AI Research · Human-AI cooperation · Agentic reasoning