[Question]: Does RAGFlow work without Qwen Models? #4314
Comments
Many thanks for your hint. I changed the System Model Settings twice: first to OpenAI-compatible, then to Google models. Now I get different errors (see below). The embedding model API keys are working; I tested them multiple times. For the first error, it seems the API key is not picked up properly when creating the embeddings, although I configured it in the model provider settings.

With the OpenAI-compatible API (IONOS) as model provider, embedding model BAAI/bge-large-en-v1.5:

Docker container log:

In the knowledge base:

With Gemini as model provider, model text-embedding-004:

Docker log:

In the knowledge base:
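To rule out the server side when an "API key not picked up" error appears, it can help to confirm that the key actually ends up in the Authorization header of the OpenAI-compatible embeddings request. The sketch below builds (but does not send) such a request; the base URL and key are placeholders, not IONOS's real values.

```python
# Minimal sketch: verify an API key is attached to an OpenAI-compatible
# /v1/embeddings request. The endpoint and key are placeholders.
import json
import urllib.request


def build_embeddings_request(base_url: str, api_key: str, model: str, text: str):
    """Construct (but do not send) an OpenAI-compatible embeddings request."""
    payload = json.dumps({"model": model, "input": text}).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/v1/embeddings",
        data=payload,
        headers={
            "Content-Type": "application/json",
            # A missing or empty key here typically yields a 401 from the server.
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )


req = build_embeddings_request(
    "https://example-inference.example.com",  # placeholder endpoint
    "sk-placeholder",                          # placeholder key
    "BAAI/bge-large-en-v1.5",
    "hello world",
)
print(req.get_header("Authorization"))  # → Bearer sk-placeholder
```

If the header is present and a direct call to the provider succeeds with the same key, the problem is more likely in how RAGFlow resolves the key for the embedding task than in the key itself.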
Describe your problem
Hi,
when I delete the default models ("Tongyi-Qianwen") from the Model Providers page and add an Ollama LLM chat model and an embedding model, I get an error "[ERROR]handle_task got exception, please check log" when adding a document to my Knowledge Base. In the logs I can see that the task says the following:

So it still tries to use the Qwen models (llm_id and img2txt_id). Why? Are they hardcoded somewhere? I cannot set a different LLM for my Knowledge Base. Thanks in advance for your time!
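One plausible explanation for the stale llm_id and img2txt_id is that the knowledge base record snapshots the tenant's default model IDs at creation time, so deleting the provider or changing defaults afterwards does not rewrite existing records. This is a hypothetical illustration of that pattern, not RAGFlow's actual code; all names and model IDs are made up:

```python
# Hypothetical sketch of a "snapshot at creation" bug pattern:
# the knowledge base copies the tenant's default model IDs once,
# so later changes to the tenant do not propagate to existing KBs.
from dataclasses import dataclass


@dataclass
class Tenant:
    llm_id: str = "qwen-plus"        # placeholder factory default
    img2txt_id: str = "qwen-vl-max"  # placeholder factory default


@dataclass
class KnowledgeBase:
    name: str
    llm_id: str
    img2txt_id: str


def create_kb(name: str, tenant: Tenant) -> KnowledgeBase:
    # Defaults are copied by value, not looked up at task time.
    return KnowledgeBase(name, tenant.llm_id, tenant.img2txt_id)


tenant = Tenant()
kb = create_kb("docs", tenant)

# Later: the user removes Qwen and switches the tenant to Ollama.
tenant.llm_id = "ollama/llama3"
tenant.img2txt_id = "ollama/llava"

print(kb.llm_id)  # → qwen-plus  (tasks for this KB keep calling Qwen)
```

If RAGFlow behaves like this, recreating the knowledge base after changing the System Model Settings, or editing the KB's own model configuration, would be the workaround to check.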