First, I hope this crate supports Vicuna. If not, please share a crate that does and I'll close this issue, thanks!
Now back to the problem. I was following the official guide (https://docs.llm-chain.xyz/docs/llama-tutorial/) and found it a bit outdated (llama.cpp no longer ships a `convert.py`). My setup is Arch Linux. Also, there is no `libclang` package, neither in the repos nor in the AUR, but I found the `clang` package and it fixed the missing `libclang` dependency issue.

I followed the guide anyway and ended up with this code:
```rust
use llm_chain::executor;
use llm_chain::{parameters, prompt};
use llm_chain::options::*;
use llm_chain::options;

#[tokio::main(flavor = "current_thread")]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let opts = options!(
        // Notice that we reference the model binary path
        Model: ModelRef::from_path("./llama.cpp/models/Wizard-Vicuna-13B-Uncensored.Q2_K.gguf"),
        ModelType: "llama",
        MaxContextSize: 512_usize,
        NThreads: 4_usize,
        MaxTokens: 0_usize,
        TopK: 40_i32,
        TopP: 0.95,
        TfsZ: 1.0,
        TypicalP: 1.0,
        Temperature: 0.8,
        RepeatPenalty: 1.1,
        RepeatPenaltyLastN: 64_usize,
        FrequencyPenalty: 0.0,
        PresencePenalty: 0.0,
        Mirostat: 0_i32,
        MirostatTau: 5.0,
        MirostatEta: 0.1,
        PenalizeNl: true,
        StopSequence: vec!["\n".to_string()]
    );
    let exec = executor!(llama, opts)?;
    let res = prompt!("I love Rust because")
        .run(&parameters!(), &exec)
        .await?;
    println!("{}", res.to_immediate().await?);
    Ok(())
}
```
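As a sanity check, here is a minimal sketch of the same program with only the model path set, assuming llm-chain falls back to default values for any option that is not given explicitly. I have not verified that this avoids the crash; it is only meant to narrow down whether one of the sampling options is the trigger.

```rust
// Minimal sketch, assuming llm-chain applies defaults for all unset options;
// only the model path is configured here.
use llm_chain::executor;
use llm_chain::{parameters, prompt};
use llm_chain::options::*;
use llm_chain::options;

#[tokio::main(flavor = "current_thread")]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let opts = options!(
        Model: ModelRef::from_path("./llama.cpp/models/Wizard-Vicuna-13B-Uncensored.Q2_K.gguf")
    );
    // Build the llama executor and run a single prompt, exactly as in the full example above.
    let exec = executor!(llama, opts)?;
    let res = prompt!("I love Rust because")
        .run(&parameters!(), &exec)
        .await?;
    println!("{}", res.to_immediate().await?);
    Ok(())
}
```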
Running it with `cargo run` produces:

I tested my model with `llama-cli` and it works there, producing good output.

Here is the `journalctl` output of the coredump: