## Fine-tuning Scenarios

| Scenario | LoRA | QLoRA | PEFT | DeepSpeed ZeRO | DoRA |
| --- | --- | --- | --- | --- | --- |
| Adapting pre-trained LLMs to specific tasks or domains | | | | | |
| Fine-tuning for NLP tasks such as text classification, named entity recognition, and machine translation | | | | | |
| Fine-tuning for QA tasks | | | | | |
| Fine-tuning for generating human-like responses in chatbots | | | | | |
| Fine-tuning for generating music, art, or other forms of creativity | | | | | |
| Reducing computational and financial costs | | | | | |
| Reducing memory usage | | | | | |
| Using fewer parameters for efficient fine-tuning | | | | | |
| Memory-efficient form of data parallelism that gives access to the aggregate GPU memory of all available GPU devices | | | | | |
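Several of the techniques above (LoRA, QLoRA, DoRA) are built on the same low-rank idea: keep the pre-trained weight matrix frozen and train only a small low-rank update. Below is a minimal, illustrative NumPy sketch of that idea; the variable names (`W`, `A`, `B`, `alpha`, `r`) are our own choices for the demonstration, not part of any particular library's API.

```python
import numpy as np

# Illustrative LoRA-style sketch: instead of updating the full weight
# matrix W (d_out x d_in), train two small low-rank factors
# B (d_out x r) and A (r x d_in). The effective weight becomes
# W + (alpha / r) * B @ A, so only r * (d_in + d_out) parameters
# are trainable instead of d_in * d_out.

d_in, d_out, r, alpha = 64, 64, 4, 8

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))      # frozen pre-trained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection (zero-init)

def lora_forward(x):
    # Base layer output plus the scaled low-rank update.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B zero-initialized, the adapted layer initially matches the base layer.
assert np.allclose(lora_forward(x), W @ x)

full_params = W.size            # 64 * 64 = 4096
lora_params = A.size + B.size   # 4 * 64 + 64 * 4 = 512
print(f"trainable params: {lora_params} vs full fine-tuning: {full_params}")
```

This is why the "reducing memory usage" and "using fewer parameters" rows apply to these methods: the trainable parameter count drops from 4096 to 512 in this toy example, and the same ratio scales to billion-parameter models. QLoRA additionally quantizes the frozen base weights (e.g. to 4-bit) to cut memory further.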

## Fine-tuning Performance Examples

Fine-tuning performance