Several major events in the world of artificial intelligence have recently come together: the release of a powerful open model, warnings about the security risks of a popular protocol, and the adoption of comprehensive AI regulation in California.
Ling-1T: A revolution in training, not architecture
Ant Group has introduced Ling-1T, a giant open model with one trillion parameters.
Unlike its competitors, it does not need a special "thinking mode". Its ability to reason is built directly into its immediate responses through intensive training on data containing chains of reasoning.
In testing, it outperformed models such as GPT-5 and Gemini 2.5 Pro in 22 of 31 benchmarks, particularly in mathematics and reasoning.
The model is freely downloadable under the MIT license, which blurs the distinction between open and commercial models.
Expert Warning: the MCP protocol hides serious security risks
A new study highlights an exponential increase in security threats for systems using the Model Context Protocol (MCP).
The problem lies in so-called compositional risk: vulnerability grows rapidly with each added MCP server. Just two servers mean a 36% risk, and with ten servers it approaches 92%.
Attackers can exploit servers that receive data from untrusted sources to execute unauthorised commands, such as running code.
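One plausible way to read figures like these is as an "at least one compromised server" probability under an independence assumption. The sketch below is an illustration, not the study's actual model: the per-server risk of 20% is an assumed value chosen so that two servers give exactly 36%, and ten servers give roughly 89%, close to the reported 92%.

```python
def compositional_risk(p: float, n: int) -> float:
    """Probability that at least one of n independent servers is compromised,
    assuming each server carries the same per-server risk p."""
    return 1 - (1 - p) ** n

if __name__ == "__main__":
    p = 0.20  # assumed per-server risk; not a figure taken from the study
    for n in (1, 2, 5, 10):
        print(f"{n} servers -> {compositional_risk(p, n):.0%} aggregate risk")
```

Whatever the study's exact model, the qualitative point survives: aggregate risk compounds quickly, so each added MCP server should be treated as enlarging the attack surface, not as an isolated component.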
California shows the way: first comprehensive AI regulation in the US
In response to the absence of federal laws, California has adopted a package of four groundbreaking regulations.
SB 53 obliges the creators of the most powerful models to publish safety protocols.
SB 243 protects minors from the harmful effects of chatbots.
AB 316 provides that legal liability for damage is always borne by the company, not autonomous AI.
AB 853 mandates clear labelling of AI-generated content.
The laws have provoked mixed reactions, with some firms welcoming them and others warning of a fragmentation of the rules.
Scientists discover a more efficient way: better prompts replace expensive training
A team of researchers has presented GEPA, an algorithm that substantially improves an AI agent's abilities by automatically evolving its prompts. An LLM analyzes the agent's failures and revises the prompts to correct those specific errors; the algorithm then iteratively tests the candidates and keeps the best-performing ones.
Instead of costly fine-tuning of the model, GEPA generates and optimizes instructions (prompts) based on analysis of the agent's errors.
This method achieved better results than traditional fine-tuning while being up to 35 times more economical in computing resources.
It is the ideal solution for situations with limited data or computing power.
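The evolutionary loop described above can be sketched as a simple propose-score-select cycle. This is a toy illustration under stated assumptions, not GEPA's actual implementation: the evaluator and mutator here are made-up stand-ins, whereas in GEPA an LLM rewrites the prompt based on an analysis of the agent's failures.

```python
import random

def evolve_prompt(base_prompt, evaluate, mutate, generations=5, population=4, seed=0):
    """Greedy evolutionary loop: each generation proposes mutated prompt
    variants and keeps the best scorer found so far."""
    rng = random.Random(seed)
    best, best_score = base_prompt, evaluate(base_prompt)
    for _ in range(generations):
        for _ in range(population):
            candidate = mutate(best, rng)
            score = evaluate(candidate)
            if score > best_score:
                best, best_score = candidate, score
    return best, best_score

# Toy stand-ins for illustration only -- not GEPA's real components.
HINTS = ["Think step by step.", "Check edge cases.", "Cite your sources."]

def toy_evaluate(prompt):
    # Pretend benchmark: score = number of useful hints the prompt contains.
    return sum(h in prompt for h in HINTS)

def toy_mutate(prompt, rng):
    # Pretend failure-driven revision: append a randomly chosen hint.
    return prompt + " " + rng.choice(HINTS)

best, score = evolve_prompt("Answer the question.", toy_evaluate, toy_mutate)
```

Because only the prompt changes, the model's weights are never touched, which is why this family of methods is so much cheaper than fine-tuning.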
Data Points - DeepLearning.AI by Andrew Ng / gnews.cz - GH