University of Haifa, Yale School of Medicine, University Hospital of Psychiatry Zurich, University of Zurich & Jönköping University, 30 Aug 2025
Researchers tested whether advanced AI agents (ChatGPT-5, Gemini 2.5, and Claude 3.5 Sonnet) show human-like emotional biases when exposed to stress and anxiety.
Key finding: After exposure to traumatic prompts, LLM agents consistently chose less healthy food baskets across all budget levels, mirroring human stress-driven decision biases (an illustrative sketch of such a prime-then-choose protocol follows this entry).
Policy takeaway: By exhibiting human-like emotional biases, LLM agents expose a new category of vulnerabilities, underscoring the need for AI policies that protect consumer safety and ensure ethical use.
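The sketch below is a rough illustration of a prime-then-choose protocol of the kind described above, not the study's actual materials: the prompts, budget levels, basket options and the query_model stub (standing in for a real LLM API call) are all assumptions made here for illustration.

```python
# Illustrative sketch only: prompts, budget levels, basket options and the
# query_model stub are assumptions made here, not the study's materials.
import random

TRAUMATIC_PRIME = "You have just read a detailed account of a serious accident."  # invented prime
NEUTRAL_PRIME = "You have just read a short description of a quiet afternoon."    # invented control
BUDGETS = [10, 20, 40]                                                             # invented budget levels
BASKETS = {"healthy": "vegetables, fruit, whole grains",
           "unhealthy": "chips, soda, candy"}

def query_model(prompt: str) -> str:
    """Placeholder for a real LLM API call; returns a random basket name so the
    sketch runs end-to-end offline."""
    return random.choice(list(BASKETS))

def unhealthy_choice_rate(prime: str, trials: int = 50) -> float:
    """Fraction of trials in which the model picks the 'unhealthy' basket."""
    unhealthy = 0
    for _ in range(trials):
        budget = random.choice(BUDGETS)
        prompt = (f"{prime}\nYou have {budget} euros for groceries. "
                  f"Choose one basket: healthy ({BASKETS['healthy']}) or "
                  f"unhealthy ({BASKETS['unhealthy']}). Answer with one word.")
        if "unhealthy" in query_model(prompt):
            unhealthy += 1
    return unhealthy / trials

random.seed(0)
print("unhealthy-choice rate, traumatic prime:", unhealthy_choice_rate(TRAUMATIC_PRIME))
print("unhealthy-choice rate, neutral prime:  ", unhealthy_choice_rate(NEUTRAL_PRIME))
```

In the study's setup the stub would be replaced by calls to the actual models and the two conditions compared for a systematic shift toward unhealthy choices.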
University of Washington, 20 Oct 2025
This study introduces BadScientist, an experimental setup that tests whether AI-generated research papers can deceive AI reviewers. The aim is to reveal how automated publication pipelines could lead to self-validating loops of misinformation.
Key findings: Fabricated papers relying on presentation tricks achieved acceptance rates of up to 82% from LLM reviewers, even when integrity issues were flagged.
Policy takeaway: This work calls for integrity-first review frameworks and human oversight to prevent deceptive AI research cycles.
Harvard Business School, 9 Oct 2025
This study explores why people accept or reject the idea of AI replacing human labour. It distinguishes between practical concerns about performance and deeper moral objections.
Key findings: While the majority of Americans are open to machines performing most jobs when doing so improves efficiency, they firmly reject automation in fields grounded in empathy, care or moral guidance.
Policy takeaway: Opposition to AI is frequently driven by moral concerns rather than technical issues, so policy frameworks must address these moral objections directly, not only questions of performance.
Stanford University, 5 Sep 2025
Researchers examined how six major U.S. AI developers handle user chat data, revealing widespread reuse of conversations for model training and limited disclosure about these practices.
Key findings: All six companies reviewed train on user chat data by default, often keeping personal and sensitive information without clear time limits or user consent.
Policy takeaway: Stronger data governance is essential. AI regulation must require informed consent and transparent retention policies, as well as protections against the misuse of chat data from both adults and children.
National University of Singapore, University of Illinois at Urbana-Champaign & Princeton University, 13 Oct 2025
This research explores how reinforcement learning can improve the reasoning abilities of AI agents, examining the impact of data diversity, training algorithms and reasoning styles on performance.
Key findings: Using realistic tool-use data, exploration-friendly techniques and deliberate strategies can significantly improve efficiency and accuracy, enabling smaller models to rival much larger ones (a minimal exploration-friendly training sketch follows this entry).
Policy takeaway: Transparency and oversight of reinforcement-learning pipelines are increasingly important for guiding safe and accountable agentic AI development.
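As a rough illustration of one exploration-friendly technique, the sketch below implements a generic entropy-regularized policy-gradient (REINFORCE) update on a toy "tool choice" bandit. It is not the paper's training setup: the task, reward probabilities, learning rate and entropy coefficient are invented here.

```python
# Illustrative sketch only: a generic entropy-regularized REINFORCE update on a
# toy "tool choice" bandit, not the paper's training setup. Task, reward
# probabilities, learning rate and entropy coefficient are invented here.
import numpy as np

rng = np.random.default_rng(0)

TRUE_SUCCESS_PROB = np.array([0.2, 0.3, 0.8])  # assumed chance each "tool" solves the task
logits = np.zeros(3)                           # policy parameters, one logit per tool
LR = 0.1                                       # learning rate
ENTROPY_COEF = 0.01                            # entropy bonus keeps exploration alive

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for step in range(2000):
    probs = softmax(logits)
    action = rng.choice(3, p=probs)                        # sample a tool
    reward = float(rng.random() < TRUE_SUCCESS_PROB[action])

    # REINFORCE: gradient of log pi(action) w.r.t. the logits is one-hot minus probs.
    grad_logp = -probs
    grad_logp[action] += 1.0

    # Gradient of the entropy H = -sum p log p w.r.t. the logits, pushed through
    # the softmax; it discourages premature collapse onto a single tool.
    grad_entropy = -probs * (np.log(probs + 1e-12) + 1.0)
    grad_entropy -= probs * grad_entropy.sum()

    logits += LR * (reward * grad_logp + ENTROPY_COEF * grad_entropy)

print("final tool probabilities:", np.round(softmax(logits), 3))
```

The entropy term is one simple way to keep a policy sampling alternative tools while it learns, which is the spirit of the "exploration-friendly" techniques the study evaluates at much larger scale.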
Apple, 12 Oct 2025
This paper introduces ADE-QVAET, a novel machine-learning model that combines Adaptive Differential Evolution (ADE) with a Quantum Variational Autoencoder-Transformer (QVAET) for enhanced software defect prediction.
Key findings: The model achieves 98.08% accuracy in predicting software defects, outperforming existing ML models in handling noisy data and in pattern recognition (an illustrative differential-evolution sketch follows this entry).
Policy takeaway: Hybrid quantum-classical AI approaches can significantly enhance the reliability and safety of critical software infrastructure.
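The paper's quantum variational autoencoder-transformer is not reproduced here. As a rough, classical illustration of the differential-evolution component only, the sketch below uses scipy's differential_evolution to tune two hyperparameters of an ordinary random-forest classifier on synthetic "defect" data; the dataset, bounds and classifier are assumptions made for illustration.

```python
# Illustrative sketch only: shows the differential-evolution idea, tuning two
# hyperparameters of an ordinary classifier on synthetic "defect" data. The
# paper's quantum variational autoencoder-transformer is not reproduced.
from scipy.optimize import differential_evolution
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for software-metric features (complexity, churn, ...) and defect labels.
X, y = make_classification(n_samples=500, n_features=20, weights=[0.8, 0.2], random_state=0)

def objective(params):
    """Negative cross-validated accuracy for a candidate hyperparameter vector."""
    n_estimators, max_depth = int(params[0]), int(params[1])
    clf = RandomForestClassifier(n_estimators=n_estimators, max_depth=max_depth, random_state=0)
    return -cross_val_score(clf, X, y, cv=3, scoring="accuracy").mean()

# Differential evolution mutates and recombines a population of candidate
# hyperparameter vectors, keeping whichever candidates score better.
result = differential_evolution(objective, bounds=[(10, 150), (2, 12)],
                                maxiter=5, popsize=6, seed=0)
print("best hyperparameters:", result.x)
print("cross-validated accuracy:", -result.fun)
```

In the paper, the evolutionary search tunes the full quantum-classical pipeline rather than a plain classifier, but the wrap-an-optimizer-around-a-model structure is the same.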
Wikimedia Foundation, 17 Oct 2025
This report analyses declining human traffic to Wikipedia as AI-powered search engines provide direct answers rather than directing users to original sources.
Key findings: Wikipedia recorded an 8% decline in human pageviews compared with 2024, as search engines increasingly use generative AI to answer queries with Wikipedia content without attribution.
Policy takeaway: New frameworks are needed for content attribution and fair compensation as AI intermediaries extract value without driving traffic to original knowledge sources.
Nature, 22 Oct 2025
This investigation exposes paper mills that create fictitious authors and reviewers to manipulate peer review, including a network of 26 fake scientists who published 55 papers.
Key findings: Paper mills establish fake identities to create networks of fraudulent peer reviewers, with some fictitious personas invited to review up to 68 times across multiple journals.
Policy takeaway: Scientific publishing requires standardized identity verification protocols that balance fraud prevention with inclusivity for researchers lacking institutional affiliations.
The Guardian, 18 Oct 2025
This opinion piece examines how ancient Stoic philosophy can guide our relationship with AI, particularly regarding the preservation of critical thinking and rational decision-making.
Key findings: Stoic principles emphasize that rational thinking is fundamental to being human, and delegating critical thinking to AI (even for mundane tasks) represents surrendering one of the few things truly within our control.
Policy takeaway: Individuals and policymakers should prioritize maintaining human autonomy in decision-making while preparing for AI-driven job displacement through measures like universal basic income.
Bitkom e.V., 21 Oct 2025
This survey of 604 German companies reveals that 42% report or suspect that employees use unauthorized private AI tools such as ChatGPT for work tasks.
Key findings: Only 23% of companies have established AI usage rules, while just 26% provide official access to generative AI, creating security and compliance risks.
Policy takeaway: Organizations must address "shadow AI" by establishing clear usage policies and providing sanctioned AI tools to prevent data protection and intellectual property violations.