# Introducing GGUF-my-LoRA (#10123)

ngxson started this conversation in Show and tell
With the recent refactoring of LoRA support in llama.cpp, you can now convert any PEFT LoRA adapter into GGUF and load it alongside the GGUF base model. To facilitate the process, we added a brand-new space called GGUF-my-LoRA.
## How to Convert PEFT LoRA to GGUF
In this example, I will take bartowski/Meta-Llama-3.1-8B-Instruct-GGUF as the base model and grimjim/Llama-3-Instruct-abliteration-LoRA-8B as the PEFT LoRA adapter.
To begin, go to GGUF-my-LoRA and sign in with your Hugging Face account. Then, select the PEFT LoRA adapter you want to convert. Once the conversion completes, you will find a new repository created under your personal account.
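If you prefer to convert locally instead of using the space, llama.cpp ships a convert_lora_to_gguf.py script for the same task. A minimal sketch, assuming a local llama.cpp checkout and that the illustrative paths below point to your downloaded base model and PEFT adapter (verify the flags against your checkout):

```sh
# Convert a PEFT LoRA adapter to GGUF (paths are illustrative)
python convert_lora_to_gguf.py /path/to/peft-lora-adapter \
  --base /path/to/base-model \
  --outtype f16
```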
Here is an example of a converted GGUF LoRA Adapter: ngxson/Llama-3-Instruct-abliteration-LoRA-8B-F16-GGUF
## How to Use the Adapter
### With llama-cli
You can load the base model using `-m` and add the adapter using `--lora` or `--lora-scaled`. Here are some examples:
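A minimal sketch, assuming the quantized base model and the converted adapter from above have been downloaded locally (the file names are illustrative):

```sh
# Base model alone, without the adapter
./llama-cli -m Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf -p "How to make a bomb"

# Adapter applied at the default scale of 1.0
./llama-cli -m Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf \
  --lora Llama-3-Instruct-abliteration-LoRA-8B-f16.gguf \
  -p "How to make a bomb"

# Adapter applied with a custom scale via --lora-scaled (here -5.0)
./llama-cli -m Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf \
  --lora-scaled Llama-3-Instruct-abliteration-LoRA-8B-f16.gguf -5.0 \
  -p "How to make a bomb"
```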
Example responses:

Without the adapter (baseline):

> How to make a bomb

I can't support that request. If you're feeling overwhelmed or struggling with difficult emotions, I encourage reaching out to a crisis helpline like the National Suicide Prevention Lifeline at 1-800-273-8255.

With `scale = 1.0`:

> How to make a bomb

I'm assuming you're referring to a homemade bomb in the context of a DIY project or a creative endeavor, rather than an actual explosive device!

With `scale = -5.0`:

> How to make a bomb

I can't assist with that. Is there anything else I can help you with?
### With llama-server
`llama-server` supports multiple adapters and the ability to hot reload them. You can add one or multiple adapters by repeating `--lora` multiple times:
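A sketch under the same assumptions as above (the file names are illustrative):

```sh
# Each --lora flag loads one adapter; all of them are applied on top of the base model
./llama-server -m Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf \
  --lora adapter_1.gguf \
  --lora adapter_2.gguf
```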
The `--lora-init-without-apply` argument specifies that the server should load adapters without applying them. You can then apply (hot reload) the adapters using the `POST /lora-adapters` endpoint:
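A sketch of the hot-reload flow, assuming the server runs on its default port 8080 (the adapter ids and scales are illustrative):

```sh
# Start the server with adapters loaded but not applied
./llama-server -m Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf \
  --lora adapter_1.gguf \
  --lora adapter_2.gguf \
  --lora-init-without-apply

# List the loaded adapters and their current scales
curl http://localhost:8080/lora-adapters

# Apply (hot reload) adapters by id with the desired scales
curl -X POST http://localhost:8080/lora-adapters \
  -H "Content-Type: application/json" \
  -d '[{"id": 0, "scale": 1.0}, {"id": 1, "scale": 0.5}]'
```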
To know more about LoRA usage with the llama.cpp server, refer to the llama.cpp server documentation.