Conversation

@oobabooga (Contributor) commented May 2, 2025

Top-nσ support was added in #11223, where it was implemented as a special case that ignored samplers other than top_k and temperature when top_n_sigma was present.

Following #11896 (comment), this PR integrates the sampler into the main sampling chain. This removes the special-case handling and makes it possible to combine top_n_sigma with other sampling methods such as min_p.
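For context, the core of the top-nσ filter is simple enough to sketch in a few lines. The following is a minimal standalone illustration, not the llama.cpp implementation (the struct and function names here are made up): tokens whose logits fall more than n standard deviations below the maximum logit are dropped, and everything downstream of the filter only sees the survivors.

```cpp
// Standalone sketch of the top-nσ filter (illustrative, not the llama.cpp code):
// keep only tokens whose logit is within n standard deviations of the max logit,
// so downstream samplers (min_p, temperature, ...) operate on a pruned list.
#include <algorithm>
#include <cmath>
#include <vector>

struct token_logit {
    int   id;
    float logit;
};

std::vector<token_logit> top_n_sigma_filter(const std::vector<token_logit> & cand, float n) {
    if (cand.empty() || n <= 0.0f) {
        return cand;
    }

    // max, mean and standard deviation of the logits
    float max_l = cand[0].logit;
    float mean  = 0.0f;
    for (const auto & c : cand) {
        max_l = std::max(max_l, c.logit);
        mean += c.logit;
    }
    mean /= (float) cand.size();

    float var = 0.0f;
    for (const auto & c : cand) {
        var += (c.logit - mean) * (c.logit - mean);
    }
    const float sigma = std::sqrt(var / (float) cand.size());

    // keep tokens with logit >= max_logit - n * sigma
    std::vector<token_logit> kept;
    for (const auto & c : cand) {
        if (c.logit >= max_l - n * sigma) {
            kept.push_back(c);
        }
    }
    return kept;
}
```

Because the filter only prunes candidates and neither renormalizes nor reorders the remaining logits, it composes naturally with the rest of the chain, which is what makes the old special case unnecessary.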

I have used #11896 as a starting point, so this PR also makes top_n_sigma available in llama-server.

Verification

I have tested it with llama-server and it seems to work. Below are the top probabilities after the prompt `My name is` with top_n_sigma=1 (left) and top_n_sigma=5 (right).

[screenshot: top token probabilities for `My name is` at top_n_sigma=1 (left) vs top_n_sigma=5 (right)]
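For anyone wanting to reproduce the check, a request along these lines should exercise the new parameter (illustrative only; `n_probs` asks the server to return the top token probabilities, and the endpoint and port follow the server defaults):

```bash
# Illustrative llama-server completion request; top_n_sigma is the field added by this PR.
curl http://localhost:8080/completion -d '{
  "prompt": "My name is",
  "n_predict": 1,
  "n_probs": 10,
  "top_n_sigma": 1.0
}'
```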

@oobabooga changed the title from "sampling: Integrate Top-nσ into main sampling chain" to "sampling: Integrate Top-nσ into main sampling chain (and add it to the server)" May 2, 2025
Review thread on tools/server/server.cpp (resolved)
@CISC merged commit 233461f into ggml-org:master May 5, 2025
46 checks passed
gabe-l-hart added a commit to gabe-l-hart/llama.cpp that referenced this pull request May 6, 2025
* origin/master: (27 commits)
llama : fix build_ffn without gate (ggml-org#13336)
CUDA: fix bad asserts for partial offload (ggml-org#13337)
convert : qwen2/3moe : set yarn metadata if present (ggml-org#13331)
CUDA: fix --split-mode row for MMQ (ggml-org#13323)
gguf-py : avoid requiring pyside6 for other scripts (ggml-org#13036)
CUDA: fix logic for clearing padding with -ngl 0 (ggml-org#13320)
sampling : Integrate Top-nσ into main sampling chain (and add it to the server) (ggml-org#13264)
server : Webui - change setText command from parent window to also send the message. (ggml-org#13309)
mtmd : rename llava directory to mtmd (ggml-org#13311)
clip : fix confused naming ffn_up and ffn_down (ggml-org#13290)
convert : bailingmoe : set yarn metadata if present (ggml-org#13312)
SYCL: Disable mul_mat kernels for noncontiguous tensor b (ggml-org#13308)
mtmd : add C public API (ggml-org#13184)
rpc : use backend registry, support dl backends (ggml-org#13304)
ggml : activate s390x simd for Q3_K (ggml-org#13301)
llava/mtmd : fixes to fully support dl backends (ggml-org#13303)
llama : build windows releases with dl backends (ggml-org#13220)
CUDA: fix race condition in MMQ stream-k fixup (ggml-org#13299)
CUDA: fix race condition in MMQ ids_dst (ggml-org#13294)
vulkan: Additional type support for unary, binary, and copy (ggml-org#13266)
...
@betweenus commented:

Please update the documentation (server README).


