A team of researchers has found a way to steer the output of large language models by manipulating specific concepts inside these models. The new method could lead to more reliable, more efficient, ...
Here are three papers describing different side-channel attacks against LLMs. “Remote Timing Attacks on Efficient Language Model Inference“: Abstract: Scaling up language models has significantly ...
The UK AI Security Institute (AISI) has partnered with the commercial security sector on a new open source framework designed to help large language model (LLM) developers improve security posture.
Also, South Korea gets a pentesting F, US Treasury says bye bye to BAH, North Korean hackers evolve, and more Infosec in Brief As if AI weren't enough of a security concern, now researchers have ...
Microsoft’s research shows how poisoned language models can hide malicious triggers, creating new integrity risks for enterprises using third-party AI systems.
SACRAMENTO — The question for many schools about using large language models (LLMs) has shifted from “if” to “how,” and there are no shortage of technology vendors bidding for their attention. But for ...
Some of the world’s most widely used open-weight generative AI (GenAI) services are profoundly susceptible to so-called “multi-turn” prompt injection or jailbreaking cyber attacks, in which a ...