CVE-2026-21869

🔴 HIGH


Published
Jan 08, 2026
Last Modified
Jan 08, 2026

llama.cpp provides inference for several LLM models in C/C++. In commit 55d4206c8 and prior, the n_discard parameter is parsed directly from JSON input in the llama.cpp server's completion endpoints without validation that it is non-negative. When a negative value is supplied and the context fills up, llama_memory_seq_rm/add receives a reversed range and a negative offset, causing out-of-bounds memory writes in the token evaluation loop. This deterministic memory corruption can crash the process or enable remote code execution (RCE). No fix was available at the time of publication.

CVSS Scores

CVSS 3.1: 8.8 (HIGH)
CVSS 2.0: 8.8

Additional Information

Source
security-advisories@github.com
State
Awaiting analysis
