This repository contains code for fine-tuning the LLaMA 2 language model on a custom dataset. The fine-tuning process allows you to adapt the pre-trained LLaMA 2 model to perform better on specific tasks or domains.
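The snippet below is a minimal sketch of what such a fine-tuning run can look like with the standard Hugging Face `Trainer` workflow; it is illustrative only. The base checkpoint name, the `data/train.txt` data file, the output path, and all hyperparameters are placeholders rather than this repository's actual configuration.

```python
from datasets import load_dataset
from transformers import (
    DataCollatorForLanguageModeling,
    LlamaForCausalLM,
    LlamaTokenizer,
    Trainer,
    TrainingArguments,
)

base_model = "meta-llama/Llama-2-7b-hf"  # assumed base checkpoint
tokenizer = LlamaTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA 2 defines no pad token by default
model = LlamaForCausalLM.from_pretrained(base_model)

# Assumed: a plain-text dataset with one training example per line.
dataset = load_dataset("text", data_files={"train": "data/train.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="path/to/fine-tuned-model",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-5,
        fp16=True,
    ),
    train_dataset=tokenized,
    # mlm=False gives standard causal-LM (next-token) training targets.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("path/to/fine-tuned-model")
```

Once training finishes, the fine-tuned checkpoint can be loaded for inference as shown below.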
```python
from transformers import LlamaForCausalLM, LlamaTokenizer

# Load the fine-tuned model and tokenizer
model = LlamaForCausalLM.from_pretrained("path/to/fine-tuned-model")
tokenizer = LlamaTokenizer.from_pretrained("path/to/fine-tuned-model")

# Generate text using the fine-tuned model
input_text = "The quick brown fox jumps over the lazy dog."
input_ids = tokenizer.encode(input_text, return_tensors="pt")
output = model.generate(input_ids, max_length=50, num_return_sequences=1)
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
```
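The call above uses greedy decoding. Generation behaviour can be adjusted through standard `model.generate` keyword arguments; the sampling values below are illustrative, not settings prescribed by this repository.

```python
# Sampled generation instead of greedy decoding (values are illustrative).
output = model.generate(
    input_ids,
    max_length=50,
    do_sample=True,   # enable sampling
    temperature=0.7,  # lower = more deterministic
    top_p=0.9,        # nucleus sampling cutoff
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```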