* Implement dry penalty
* Add dry sampling params to requests
* Handle it
* Clippy
* Review: "Implement DRY penalty" (#645)
* Silence bogus Clippy warning
Clippy's suggestion cannot be implemented because of borrowing issues
* Get rid of unnecessary type annotations
Interesting that Clippy doesn't catch this
* Store default sequence breakers in a slice
It's nicer when the length is not hardcoded
* Make default sequence breakers private
No need to leak this as it's not used elsewhere
* Limit match length
Avoids quadratic runtime and potential DoS with adversarial inputs
Ref oobabooga/text-generation-webui#6047
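The commits above implement the DRY (Don't Repeat Yourself) penalty with a cap on match length. A minimal sketch of the penalty formula, assuming the parameters from the original DRY sampler design (`multiplier`, `base`, `allowed_length`) plus the `max_match_length` cap added here; the actual mistral.rs code is not shown in this log:

```rust
/// DRY penalty for a token that would extend a repeated sequence of
/// `match_length` tokens: `multiplier * base^(match_length - allowed_length)`.
/// A sketch, not the actual implementation.
fn dry_penalty(
    match_length: usize,
    allowed_length: usize,
    base: f32,
    multiplier: f32,
    max_match_length: usize,
) -> f32 {
    // Cap the match length so adversarial inputs cannot force
    // quadratic work in the match search (see the commit above).
    let n = match_length.min(max_match_length);
    if n < allowed_length {
        // Short repetitions are allowed and incur no penalty.
        0.0
    } else {
        multiplier * base.powi((n - allowed_length) as i32)
    }
}

fn main() {
    // With multiplier 0.8, base 1.75, allowed_length 2:
    // a match of length 2 is penalized by 0.8 * 1.75^0 = 0.8,
    // and a match of length 4 by 0.8 * 1.75^2 = 2.45.
    assert!((dry_penalty(2, 2, 1.75, 0.8, 50) - 0.8).abs() < 1e-6);
    assert!((dry_penalty(4, 2, 1.75, 0.8, 50) - 2.45).abs() < 1e-4);
    assert_eq!(dry_penalty(1, 2, 1.75, 0.8, 50), 0.0);
}
```

The geometric growth in match length is what makes DRY effective against verbatim loops while leaving short, natural repetitions untouched.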
* "Fix" sequence breaker tokenization
Most tokenizers encode punctuation tokens differently depending on where they occur in the input, and which tokens surround them. With the default sequence breakers, the appropriate encoding usually corresponds to the encoding produced when the token occurs after a word, rather than by itself. To emulate this, prefix the token with "a" before encoding, and extract the final token of the result.
See LostRuins/koboldcpp#982 for a correct solution to this problem.
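The workaround described above can be sketched as follows, with a toy `encode` closure standing in for a real tokenizer (an assumption for illustration; mistral.rs uses its own tokenizer types):

```rust
/// Encode a sequence breaker as it would appear *after a word*, not in
/// isolation: encode "a" + token and keep only the final token id.
/// Sketch of the technique described in the commit message above.
fn encode_as_suffix(encode: impl Fn(&str) -> Vec<u32>, token: &str) -> Option<u32> {
    let ids = encode(&format!("a{token}"));
    ids.last().copied()
}

fn main() {
    // Toy tokenizer: "a" -> 1, "\n" after a word -> 2, "\n" alone -> 3.
    let toy = |s: &str| -> Vec<u32> {
        match s {
            "a\n" => vec![1, 2],
            "\n" => vec![3],
            _ => vec![0],
        }
    };
    // Encoding "\n" alone would give id 3; the prefix trick recovers
    // the mid-text encoding, id 2, which is the one DRY needs to match.
    assert_eq!(encode_as_suffix(toy, "\n"), Some(2));
}
```

As the message notes, this is a heuristic; the linked koboldcpp PR matches sequence breakers at the text level instead, which handles all encodings correctly.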
* Nicer
* Even better
* Complete merge
* Fix saturating sub
* Handle when no context
* Make context the entire sequence and refactor
* Remove slicing for all
* Fix the bug with penalty
Credit to @p-e-w for finding this!
Co-authored-by: Philipp Emanuel Weidmann <pew@worldwidemann.com>
* Add custom logits processor API (#702)
* Add custom logits processor api
* Typos
* Nicer interface and update example
* Fix doctest
* Update docs
* Update exports
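A custom logits processor hooks into sampling to mutate raw logits before a token is chosen. The trait name and signature below are illustrative assumptions, not the actual API added in #702:

```rust
/// Hypothetical shape of a custom logits processor hook.
trait CustomLogitsProcessor {
    /// Mutate the raw logits in place, given the tokens generated so far.
    fn apply(&self, logits: &mut [f32], tokens: &[u32]);
}

/// Example processor: ban one token id by forcing its logit to -inf,
/// so softmax assigns it zero probability.
struct BanToken(usize);

impl CustomLogitsProcessor for BanToken {
    fn apply(&self, logits: &mut [f32], _tokens: &[u32]) {
        if let Some(l) = logits.get_mut(self.0) {
            *l = f32::NEG_INFINITY;
        }
    }
}

fn main() {
    let mut logits = vec![0.5, 1.0, 0.2];
    BanToken(1).apply(&mut logits, &[]);
    assert_eq!(logits[1], f32::NEG_INFINITY);
    assert_eq!(logits[0], 0.5);
}
```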
* Add Gemma 2 PagedAttention support (#704)
* Add gemma2 paged attn support
* Non cuda support?
* Remove error
* It works
* Faster RmsNorm in gemma/gemma2 (#703)
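For reference, the math behind RmsNorm in the Gemma models (the speedup in #703 presumably comes from a fused/faster kernel, not from changing this math; this scalar version is only a sketch):

```rust
/// Scalar RMSNorm sketch. Gemma parameterizes the learned gain as
/// (1 + weight), which is reflected here.
fn rms_norm(x: &[f32], weight: &[f32], eps: f32) -> Vec<f32> {
    // Mean of squares, then reciprocal root with epsilon for stability.
    let ms = x.iter().map(|v| v * v).sum::<f32>() / x.len() as f32;
    let scale = 1.0 / (ms + eps).sqrt();
    x.iter()
        .zip(weight)
        .map(|(v, w)| v * scale * (1.0 + w))
        .collect()
}

fn main() {
    let x = [3.0, 4.0];
    // mean square = (9 + 16) / 2 = 12.5
    let y = rms_norm(&x, &[0.0, 0.0], 1e-6);
    assert!((y[0] - 3.0 / 12.5f32.sqrt()).abs() < 1e-5);
    assert!((y[1] - 4.0 / 12.5f32.sqrt()).abs() < 1e-5);
}
```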
* Fix bug in metal isq (#706)
* Support GGUF BF16 tensors (#691)
* Support GGUF bf16 tensors
* Fix loading of bf16 ggml tensor
* Fix dequant of bf16
* Use merged rev
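BF16 is simply the upper 16 bits of an IEEE-754 f32 (same sign and 8-bit exponent, truncated mantissa), so dequantizing a GGUF BF16 tensor amounts to widening each element. A minimal per-element sketch:

```rust
/// Widen one bf16 value (stored as raw bits) to f32 by shifting it
/// into the high half of the f32 bit pattern.
fn bf16_to_f32(bits: u16) -> f32 {
    f32::from_bits((bits as u32) << 16)
}

fn main() {
    // 0x3F80 is bf16 for 1.0 (sign 0, exponent 127, mantissa 0).
    assert_eq!(bf16_to_f32(0x3F80), 1.0);
    // 0xC040 is bf16 for -3.0.
    assert_eq!(bf16_to_f32(0xC040), -3.0);
    assert_eq!(bf16_to_f32(0x0000), 0.0);
}
```

Truncation loses mantissa precision relative to f32, but the conversion in this direction is exact, which is why BF16 weights can be loaded losslessly.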
* Softcapping, real batching + sliding window support for Flash Attention (#707)
* Flash attention varlen kind of works
* Seems to work
* Now it's nice
* Sliding window support and clippy
* Remove warning
* Support smollm
* Update rev to match merged
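The softcapping mentioned above refers to Gemma 2-style logit softcapping: attention scores are squashed through tanh so they never exceed ±cap. Since this happens between QK^T and softmax, it needs explicit support inside the flash attention kernel. A scalar sketch:

```rust
/// Softcap a raw attention score: cap * tanh(score / cap).
/// Gemma 2 uses a cap of 50.0 for attention logits.
fn softcap(score: f32, cap: f32) -> f32 {
    cap * (score / cap).tanh()
}

fn main() {
    // Large scores saturate near the cap instead of growing unboundedly.
    let capped = softcap(1000.0, 50.0);
    assert!((capped - 50.0).abs() < 0.1);
    // Small scores pass through nearly unchanged.
    assert!((softcap(0.1, 50.0) - 0.1).abs() < 1e-4);
}
```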
* Remove some usages of 'pub' in models (#708)
* Support the Phi 3.5 V model (#710)
* Update image_seq_len
* Update the examples
* Format
* Implement the Phi 3.5 MoE model (#709)
* Copy the model
* Add most of it
* Add the blocksparse moe parts
* Clippy
* Fix mscales
* A batch of fixes
* Correctly cast it
* Handle isq on gate
* Even more progress
* Runs now
* Clippy
* Fix to use layernorm
* Remove unused
* Add docs
* Add more docs
* Apply review comments
* Update readme
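The "blocksparse moe parts" above refer to mixture-of-experts routing: a gate scores every expert per token, the top-k experts are selected, and their softmax-renormalized scores weight each expert's output. The routing details below are illustrative assumptions, not the Phi 3.5 MoE code:

```rust
/// Toy top-k MoE gate: pick the k highest-scoring experts and
/// renormalize their scores with a softmax over just those k.
fn top_k_gate(scores: &[f32], k: usize) -> Vec<(usize, f32)> {
    let mut idx: Vec<usize> = (0..scores.len()).collect();
    // Sort expert indices by descending score, keep the top k.
    idx.sort_by(|&a, &b| scores[b].partial_cmp(&scores[a]).unwrap());
    idx.truncate(k);
    // Numerically stable softmax over only the selected experts.
    let max = idx.iter().map(|&i| scores[i]).fold(f32::MIN, f32::max);
    let exps: Vec<f32> = idx.iter().map(|&i| (scores[i] - max).exp()).collect();
    let sum: f32 = exps.iter().sum();
    idx.into_iter().zip(exps.into_iter().map(|e| e / sum)).collect()
}

fn main() {
    let routed = top_k_gate(&[0.1, 2.0, 1.0, -0.5], 2);
    // Experts 1 and 2 win; their weights sum to 1.
    assert_eq!(routed[0].0, 1);
    assert_eq!(routed[1].0, 2);
    let total: f32 = routed.iter().map(|(_, w)| *w).sum();
    assert!((total - 1.0).abs() < 1e-6);
}
```

Only the selected experts' feed-forward blocks run for a given token, which is what makes MoE inference cheaper than a dense model of the same parameter count.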
---------
Co-authored-by: Philipp Emanuel Weidmann <pew@worldwidemann.com>