Sweep: Add llama3.3 support #780
🚀 Here's the PR! #781
I'll help implement the Llama 2 LLM support following the existing patterns in the codebase. Let's break this down into multiple code changes:
Let's add an open-source Llama model. Can you give me a hint so I can try it out?
Add Llama 2 LLM Support to Core LLM Module
Description:
Extend the LLM module to support Llama 2 models through LangChain's integration, following the existing pattern for other LLM providers.
Tasks:
- In `gpt_all_star/core/llm.py`:
  - Add `LLAMA` to the `LLM_TYPE` enum
  - Add a `_create_chat_llama` helper function
  - Update `create_llm` to handle the Llama case
- In `.env.sample`:
- Test in `tests/core/test_llm.py`:
Implementation Notes: