Palo Alto Networks found critical flaws in the AI/ML libraries NeMo, Uni2TS, and FlexTok. The vulnerabilities allowed arbitrary code execution via malicious model metadata. All were patched by mid-2025; no exploitation ...
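The attack pattern described, arbitrary code execution triggered by malicious model metadata, typically comes down to loaders that deserialize untrusted objects from model files. The advisories are not quoted here, so the following is only an illustrative sketch of a defensive loading routine (the file names are hypothetical and this is not the patched libraries' actual code): prefer the weights-only safetensors format, and restrict pickle-based loading when falling back to legacy checkpoints.

```python
# Illustrative sketch only: hypothetical paths, not the patched libraries' code.
# Goal: avoid loaders that execute arbitrary objects embedded in model files.
from pathlib import Path

import torch
from safetensors.torch import load_file  # weights-only format, no code execution


def load_weights_safely(path: str) -> dict:
    """Load model weights without running code embedded in the file."""
    p = Path(path)
    if p.suffix == ".safetensors":
        # safetensors stores raw tensors plus JSON metadata; nothing is unpickled.
        return load_file(p)
    # For legacy .pt/.bin checkpoints, refuse arbitrary pickled objects:
    # weights_only=True limits deserialization to tensors and basic containers.
    return torch.load(p, map_location="cpu", weights_only=True)


if __name__ == "__main__":
    state_dict = load_weights_safely("model.safetensors")  # hypothetical file
    print({k: tuple(v.shape) for k, v in list(state_dict.items())[:5]})
```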
Amazon’s AI chip Trainium is fueling AWS growth despite near-term margin pressure.
Maia 200 is the most efficient inference system Microsoft has ever deployed, with 30% better performance per dollar than the latest ...
The Maia 200 deployment demonstrates that custom silicon has matured from experimental capability to production ...
Software King of the World, Microsoft, wants everyone to know it has a new inference chip and it thinks the maths finally works. Volish executive vice president of Cloud + AI Scott G ...
Calling it the highest-performance chip of any custom cloud accelerator, the company says Maia is optimized for AI inference across multiple models.
The hyperscaler leverages a two-tier Ethernet-based topology, a custom AI Transport Layer, and software tools to deliver a tightly integrated, low-latency platform ...
New GPU engine in the on-device AI framework delivers comprehensive GPU and NPU support across Android, iOS, macOS, Windows, ...