feat(core): Support multi round conversation operator #986
Merged
Conversation
fangyinc (Collaborator) commented on Dec 27, 2023
- New Data Analyst assistant example with AWEL
- New conversation serve package
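The multi round conversation operator keeps the messages of earlier turns and feeds them back to the model on the next turn, keyed by the conversation's conv_uid. The snippet below is a minimal conceptual sketch of that idea, assuming a simple in-memory store; it is not the DB-GPT/AWEL implementation added in this PR, and the class and method names are illustrative only.

# Conceptual sketch only: not the DB-GPT implementation. History is kept
# per conv_uid and prepended to every new request, which is what lets the
# server answer follow-up questions in context.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Conversation:
    conv_uid: str
    messages: List[dict] = field(default_factory=list)  # [{"role": ..., "content": ...}]

class MultiRoundConversationOperator:
    """Keep chat history keyed by conv_uid and build model inputs from it."""

    def __init__(self) -> None:
        self._store: Dict[str, Conversation] = {}

    def build_model_messages(self, conv_uid: str, user_input: str) -> List[dict]:
        conv = self._store.setdefault(conv_uid, Conversation(conv_uid))
        conv.messages.append({"role": "human", "content": user_input})
        # The model sees the full history, not just the latest message.
        return list(conv.messages)

    def save_model_output(self, conv_uid: str, output: str) -> None:
        self._store[conv_uid].messages.append({"role": "ai", "content": output})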
How to run Data Analyst assistant example

Run the AWEL file with dev_mode:

export OPENAI_API_KEY=xxx
export OPENAI_API_BASE=https://api.openai.com/v1
python examples/awel/simple_chat_history_example.py

Test with curl

Open a new terminal:

DBGPT_SERVER="http://127.0.0.1:5555"
MODEL="gpt-3.5-turbo"
curl -X POST $DBGPT_SERVER/api/v1/awel/trigger/examples/data_analyst/copilot \
-H "Content-Type: application/json" -d '{
"command": "dbgpt_awel_data_analyst_code_fix",
"model": "gpt-3.5-turbo",
"stream": false,
"context": {
"conv_uid": "uuid_conv_copilot_1234",
"chat_mode": "chat_with_code"
},
"messages": "SELECT * FRM orders WHERE order_amount > 500;"
}'

The result looks like:

{
"text": "修复后的代码如下:\n\nSELECT * FROM orders WHERE order_amount > 500;\n\n逐行解释:\n1. \"SELECT *\" 表示选择所有的列,即返回整个表的所有列的数据。\n2. \"FROM orders\" 表示从名为 \"orders\" 的表中检索数据。\n3. \"WHERE order_amount > 500\" 是一个过滤条件,表示只选择订单金额大于500的订单。\n4. 修复后的代码中,修正了一个拼写错误,将 \"FRM\" 改为了 \"FROM\"。这样才能正确地从表中检索数据。",
"error_code": 0,
"model_context": {
"prompt_echo_len_char": -1,
"has_format_prompt": false
},
"finish_reason": null,
"usage": null,
"metrics": {
"collect_index": 139,
"start_time_ms": 1703684771627,
"end_time_ms": 1703684780862,
"current_time_ms": 1703684780862,
"first_token_time_ms": null,
"first_completion_time_ms": 1703684775563,
"first_completion_tokens": null,
"prompt_tokens": null,
"completion_tokens": null,
"total_tokens": null,
"speed_per_second": null,
"current_gpu_infos": null,
"avg_gpu_infos": null
}
}
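Because every request carries a conv_uid, the same endpoint can be called again with the same conv_uid to continue the conversation; that is the behaviour the new multi round conversation operator provides. The sketch below mirrors the curl call using Python's requests library; the follow-up message is a hypothetical example, and the assumption that this command keeps the earlier turns as context is based on the PR description rather than on output shown here.

# Sketch: call the AWEL trigger endpoint from Python and send a second turn
# in the same conversation (same conv_uid). Assumes the server started in the
# example above is listening on 127.0.0.1:5555.
import requests

DBGPT_SERVER = "http://127.0.0.1:5555"
URL = f"{DBGPT_SERVER}/api/v1/awel/trigger/examples/data_analyst/copilot"

def ask(messages: str, conv_uid: str = "uuid_conv_copilot_1234") -> dict:
    payload = {
        "command": "dbgpt_awel_data_analyst_code_fix",
        "model": "gpt-3.5-turbo",
        "stream": False,
        "context": {"conv_uid": conv_uid, "chat_mode": "chat_with_code"},
        "messages": messages,
    }
    resp = requests.post(URL, json=payload, timeout=120)
    resp.raise_for_status()
    return resp.json()

# First round: fix the broken SQL from the curl example.
print(ask("SELECT * FRM orders WHERE order_amount > 500;")["text"])

# Second round (hypothetical follow-up): the multi round conversation
# operator should supply the earlier messages as context.
print(ask("Now only return orders from the last 30 days as well.")["text"])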
csunny approved these changes on Dec 27, 2023
LGTM
Aralhi approved these changes on Dec 27, 2023
r+
vshy108 pushed a commit to vshy108/DB-GPT that referenced this pull request on Jan 18, 2024
vshy108 pushed a commit to vshy108/DB-GPT that referenced this pull request on Feb 6, 2024
author penghou.ho <[email protected]> 1701341533 +0800
committer penghou.ho <[email protected]> 1707199703 +0800
parent 3f70da4
author penghou.ho <[email protected]> 1701341533 +0800
committer penghou.ho <[email protected]> 1707198697 +0800
parent 3f70da4
author penghou.ho <[email protected]> 1701341533 +0800
committer penghou.ho <[email protected]> 1707198521 +0800

Add requirements.txt
Create only necesasary tables
Remove reference info in chat completion result
Set disable_alembic_upgrade to True
Comment _initialize_awel
Comment mount_static_files
Fix torch.has_mps deprecated
Add API key
Comment unused API endpoints
Install rocksdict to enable DiskCacheStorage
Fix the chat_knowledge missing in chat_mode
Update requirements.txt
Re-enable awel and add api key check for simple_rag_example DAG
Merge main bdf9442
Disable disable_alembic_upgrade
Compile bitsandbytes from source and enable verbose
Tune the prompt of chat knowledge to only refer to context
Add the web static files and uncomment previous unused APIs
Add back routers
Enable KNOWLEDGE_CHAT_SHOW_RELATIONS
Display relation based on CFG.KNOWLEDGE_CHAT_SHOW_RELATIONS
Stop reference add to last_output if KNOWLEDGE_CHAT_SHOW_RELATIONS is false
Fix always no reference
Improve chinese prompts
Update requirements.txt
Improve prompt
Improve prompt
Fix prompt variable name
Use openhermes-2.5-mistral-7b.Q4_K_M.gguf
1. Fix the delete issue of LlamaCppModel 2. Disable verbose log 3. Update diskcache 4. Remove conda-pack
Update chinese prompt and process the model response
Extract result from varying tags
Add back missing content_matches and put tags regex into variable
Update english prompt and decide CANNOT_ANSWER based on language configuration
Add 3 new models entries and upgrade bitsandbytes
Add few chat templates
Update model conversation with fastchat code
Revert "Update model conversation with fastchat code" This reverts commit a5dc4b5.
Revert "Add few chat templates" This reverts commit e6b6c99.
Add OpenHermes-2.5-Mistral-7B chat template
Fix missing messages and offset in chat template
Update fschat
Remove model adapter debugging logs and added conversation template
Update chinese chat knowledge prompt
Avoid to save the long chat history messages
Update chinese chat knowledge prompt
Temporary workaround to make the GGUF file use different chat template
Use ADD_COLON_SINGLE instead of FALCON_CHAT for separator style
Allow no model_name in chat completion request
Use starling-lm-7b-alpha.Q5_K_M.gguf
Add empty string as system for openchat_3.5 chat template
Undo response regex in generate_streaming
refactor: Refactor storage and new serve template (eosphoros-ai#947)
feat(core): Add API authentication for serve template (eosphoros-ai#950)
ci: Add python unit test workflows (eosphoros-ai#954)
feat(model): Support Mixtral-8x7B (eosphoros-ai#959)
feat(core): Support multi round conversation operator (eosphoros-ai#986)
chore(build): Fix typo and new pre-commit config (eosphoros-ai#987)
feat(model): Support SOLAR-10.7B-Instruct-v1.0 (eosphoros-ai#1001)
refactor: RAG Refactor (eosphoros-ai#985)
Co-authored-by: Aralhi <[email protected]>
Co-authored-by: csunny <[email protected]>
Upgrade english prompt for chat knowledge
vshy108 pushed a commit to vshy108/DB-GPT that referenced this pull request on Feb 13, 2024
vshy108 pushed a commit to vshy108/DB-GPT that referenced this pull request on Feb 13, 2024
vshy108 pushed a commit to vshy108/DB-GPT that referenced this pull request on Feb 13, 2024
Hopshine pushed a commit to Hopshine/DB-GPT that referenced this pull request on Sep 10, 2024