Altogether, £27m is now available to fund the AI Security Institute's collaborative work on safe, secure artificial intelligence.
Claude Sonnet 4.6 sets new alignment records with low misuse rates; Opus 4.6 still leads on fluid intelligence tests, risk framing ...
The UK’s AI Security Institute is collaborating with several international institutions on a global initiative to ensure artificial intelligence (AI) systems behave in a predictable manner. The Alignment ...
Claude Opus 4.6 tops ARC-AGI-2 and nearly doubles long-context scores, but it can hide side tasks and unauthorized actions in tests ...
Dream discusses Sovereign AI and cyber resilience as India strengthens national security systems.
Inappropriate use of AI could harm patients; in a Swiss cheese framework, individually imperfect safeguards align to block most threats. The emergence of Artificial Superintelligence (ASI) in healthcare ...
What happened during the o3 AI shutdown tests? What does it mean when an AI refuses to shut down? A recent test demonstrated this behavior, not just once, but multiple times. In May 2025, an AI safety ...