Essentially all AI training is done in 32-bit floating point. But doing AI inference in 32-bit floating point is expensive, power-hungry, and slow. And quantizing models to 8-bit integer, which is ...
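One concrete part of the cost gap is storage and bandwidth: INT8 weights take a quarter of the memory of FP32 weights. A minimal sketch (the layer shape here is hypothetical, chosen only for illustration):

```python
import numpy as np

# Hypothetical weight matrix for a single layer, stored two ways.
fp32_weights = np.zeros((1024, 1024), dtype=np.float32)
int8_weights = np.zeros((1024, 1024), dtype=np.int8)

fp32_bytes = fp32_weights.nbytes   # 4 bytes per value
int8_bytes = int8_weights.nbytes   # 1 byte per value
ratio = fp32_bytes / int8_bytes    # 4x smaller footprint in INT8
```

The same 4x factor applies to memory traffic, which is often the real bottleneck in inference hardware.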
Investigations conducted together with scientists at CERN show promising results, with breakthrough performance, in the pursuit of faster Monte Carlo-based simulations, which are an important ...
1. Flex Logix’s nnMAX 1K inference tile delivers INT8 Winograd acceleration that improves accuracy while reducing the necessary computations. The InferX X1 chip includes multiple nnMAX clusters. It ...
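The "reducing the necessary computations" claim refers to the Winograd transform, which trades multiplications for cheap additions. A sketch of the simplest 1-D case, F(2,3), which produces two outputs of a 3-tap convolution with 4 multiplies instead of the naive 6 (this is the textbook algorithm, not Flex Logix's specific implementation):

```python
def winograd_f23(d, g):
    """Winograd F(2,3): two outputs of a 3-tap filter g over inputs d[0..3],
    using 4 multiplications instead of the naive 6."""
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
    m4 = (d[1] - d[3]) * g[2]
    return [m1 + m2 + m3, m2 - m3 - m4]

def direct_conv3(d, g):
    """Naive reference: the same two outputs with 6 multiplications."""
    return [d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
            d[1]*g[0] + d[2]*g[1] + d[3]*g[2]]
```

In hardware, the filter-side transform (g[0]+g[1]+g[2])/2 is precomputed once per weight, so only the data-side additions happen per input tile.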
FriendliAI also offers a unique take on the current memory crisis hitting the industry, especially as inference becomes the dominant AI use case. As recently explored by SDxCentral, 2026 is tipped to ...
In general, quantization is the process of mapping values from a continuous, infinite set onto a smaller set of discrete, finite values. In this blog, we will talk about quantization in ...
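In the neural-network setting, that mapping is typically an affine one: a float x is mapped to an integer q = round(x / scale) + zero_point, clipped to the integer range. A minimal sketch of symmetric INT8 quantization (the example weight values are hypothetical):

```python
import numpy as np

def quantize_int8(x, scale, zero_point=0):
    """Affine quantization: q = round(x / scale) + zero_point, clipped to int8."""
    q = np.round(x / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8)

def dequantize_int8(q, scale, zero_point=0):
    """Approximate inverse: x ~ (q - zero_point) * scale."""
    return (q.astype(np.float32) - zero_point) * scale

weights = np.array([-0.62, 0.0, 0.31, 1.27], dtype=np.float32)
scale = np.abs(weights).max() / 127.0   # symmetric scale from max magnitude
q = quantize_int8(weights, scale)
approx = dequantize_int8(q, scale)      # recovers values to within scale/2
```

The rounding step is exactly where information is lost: every float in an interval of width `scale` collapses to the same integer, which is why calibration (choosing `scale` well) matters so much in practice.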
The best kinds of research are those that test new ideas and that also lead to practical innovations in real products. It takes a keen eye to differentiate science projects, which can be fun but which ...
South Korean startup FriendliAI has raised $20 million in a seed extension round to support its efforts to accelerate AI inferencing. FriendliAI offers an inference platform designed to accelerate AI ...