Cerebras and Neural Magic Unlock the Power of Sparse LLMs for Faster, More Power Efficient, Lower Cost AI Model Training and Deployment

Business Wire

SUNNYVALE, Calif. & CAMBRIDGE, Mass.--(BUSINESS WIRE)--Cerebras Systems, the pioneer in accelerating generative AI, and Neural Magic, a leader in high-performance enterprise inference servers, today announced the groundbreaking results of their collaboration on sparse training and deployment of large language models (LLMs). By achieving an unprecedented 70% parameter reduction with full accuracy recovery, training on Cerebras CS-3 systems and deploying on Neural Magic inference server solutions enables faster, more power efficient, lower cost AI model training and deployment.
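For context, the 70% parameter reduction refers to weight sparsity: roughly 70% of a model's weights are zeroed out so they need not be stored or computed. The sketch below is a minimal, generic illustration of magnitude pruning to that sparsity level using NumPy; it is not Cerebras' or Neural Magic's actual training or inference pipeline, and the function name and setup are hypothetical.

```python
# Minimal sketch: prune a weight matrix to ~70% unstructured sparsity
# by zeroing the smallest-magnitude entries (generic illustration only).
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float = 0.7) -> np.ndarray:
    """Return a copy of `weights` with the smallest-magnitude entries
    zeroed so that `sparsity` fraction of the entries are zero."""
    k = int(sparsity * weights.size)              # number of weights to drop
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value serves as the pruning threshold
    threshold = np.partition(np.abs(weights), k - 1, axis=None)[k - 1]
    mask = np.abs(weights) > threshold            # keep only larger weights
    return weights * mask

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(1024, 1024)).astype(np.float32)
    w_sparse = magnitude_prune(w, sparsity=0.7)
    achieved = 1 - np.count_nonzero(w_sparse) / w_sparse.size
    print(f"achieved sparsity: {achieved:.2%}")
```

In practice, sparse LLM training of the kind announced here combines pruning with continued training so the remaining weights recover the lost accuracy; the snippet above only shows what a 70%-sparse weight matrix looks like numerically.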
