
Fine-tuning Phi-3 with Azure AI Studio

Let’s explore how to fine-tune Microsoft’s Phi-3 Mini language model using Azure AI Studio. Fine-tuning allows you to adapt Phi-3 Mini to specific tasks, making it even more powerful and context-aware.

Considerations

  • Capabilities: Which models are fine-tunable? What can the base model be fine-tuned to do?
  • Cost: What's the pricing model for fine-tuning?
  • Customizability: How much can I modify the base model – and in what ways?
  • Convenience: How does fine-tuning actually happen – do I need to write custom code? Do I need to bring my own compute?
  • Safety: Fine-tuned models are known to have safety risks – are there any guardrails in place to protect against unintended harm?


Preparation for fine-tuning

Prerequisites

Note

For Phi-3 family models, the pay-as-you-go fine-tuning offering is only available with hubs created in the East US 2 region.

  • An Azure subscription. If you don't have an Azure subscription, create a paid Azure account to begin.

  • An AI Studio project.

  • Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the Azure AI Developer role on the resource group.

Subscription provider registration

Verify the subscription is registered to the Microsoft.Network resource provider.

  1. Sign in to the Azure portal.
  2. Select Subscriptions from the left menu.
  3. Select the subscription you want to use.
  4. Select Settings > Resource providers from the left menu.
  5. Confirm that Microsoft.Network is in the list of registered resource providers; otherwise, add it.

Data preparation

Prepare your training and validation data to fine-tune your model. Your training and validation data sets consist of input and output examples that demonstrate how you would like the model to perform.

Make sure all your training examples follow the expected format for inference. To fine-tune models effectively, ensure a balanced and diverse dataset.

This involves maintaining data balance, including various scenarios, and periodically refining training data to align with real-world expectations, ultimately leading to more accurate and balanced model responses.

Different model types require a different format of training data.

Chat Completion

The training and validation data you use must be formatted as a JSON Lines (JSONL) document. For Phi-3-mini-128k-instruct, the fine-tuning dataset must be formatted in the conversational format used by the Chat Completions API.

Example file format

    {"messages": [{"role": "system", "content": "You are an Xbox customer support agent whose primary goal is to help users with issues they are experiencing with their Xbox devices. You are friendly and concise. You only provide factual answers to queries, and do not provide answers that are not related to Xbox."}, {"role": "user", "content": "Is Xbox better than PlayStation?"}, {"role": "assistant", "content": "I apologize, but I cannot provide personal opinions. My primary job is to assist you with any issues related to your Xbox device. Do you have any Xbox-related issues that need addressing?"}]}
    {"messages": [{"role": "system", "content": "You are an Xbox customer support agent whose primary goal is to help users with issues they are experiencing with their Xbox devices. You are friendly and concise. You only provide factual answers to queries, and do not provide answers that are not related to Xbox."}, {"role": "user", "content": "Can I play PC games on my Xbox?"}, {"role": "assistant", "content": "While Xbox consoles don't natively support PC games, they can play an array of Xbox games, many of which are also available on PC. Can I assist you with finding games compatible with your Xbox device?"}]}
    {"messages": [{"role": "system", "content": "You are an Xbox customer support agent whose primary goal is to help users with issues they are experiencing with their Xbox devices. You are friendly and concise. You only provide factual answers to queries, and do not provide answers that are not related to Xbox."}, {"role": "user", "content": "I'm having trouble connecting my Xbox to the Wi-Fi."}, {"role": "assistant", "content": "No worries, let's go through the network settings on your Xbox. Can you please tell me what happens when you try to connect it to the Wi-Fi?"}]}

The supported file type is JSON Lines. Files are uploaded to the default datastore and made available in your project.
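Before uploading, it can help to sanity-check that every line of the file parses as JSON and follows the chat-completions shape shown above. A minimal Python sketch (the file name is a placeholder):

    import json

    ALLOWED_ROLES = {"system", "user", "assistant"}

    def validate_jsonl(path: str) -> None:
        """Check that each line is valid JSON with a well-formed 'messages' list."""
        with open(path, encoding="utf-8") as f:
            for line_no, line in enumerate(f, start=1):
                if not line.strip():
                    continue  # ignore blank lines
                record = json.loads(line)  # raises an error on malformed JSON
                messages = record.get("messages")
                assert isinstance(messages, list) and messages, f"line {line_no}: missing 'messages'"
                for msg in messages:
                    assert msg.get("role") in ALLOWED_ROLES, f"line {line_no}: unexpected role"
                    assert isinstance(msg.get("content"), str), f"line {line_no}: 'content' must be a string"
        print(f"{path} looks well-formed.")

    validate_jsonl("training_data.jsonl")  # placeholder file name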

Fine-Tuning Phi-3 with Azure AI Studio

Azure AI Studio lets you tailor large language models to your personal datasets by using a process known as fine-tuning. Fine-tuning provides significant value by enabling customization and optimization for specific tasks and applications. It leads to improved performance, cost efficiency, reduced latency, and tailored outputs.


Create a New Project

  1. Sign in to Azure AI Studio.

  2. Select + New project to create a new project in Azure AI Studio.


  3. Perform the following tasks:

    • Enter the Project name. It must be a unique value.
    • Select the Hub to use (create a new one if needed).


  4. Perform the following tasks to create a new hub:

    • Enter Hub name. It must be a unique value.
    • Select your Azure Subscription.
    • Select the Resource group to use (create a new one if needed).
    • Select the Location you'd like to use.
    • Select the Connect Azure AI Services to use (create a new one if needed).
    • For Connect Azure AI Search, select Skip connecting.


  5. Select Next.

  6. Select Create a project.

Data Preparation

Before fine-tuning, gather or create a dataset relevant to your task, such as chat instructions, question-answer pairs, or any other pertinent text data. Clean and preprocess this data by removing noise, handling missing values, and tokenizing the text.
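As one possible preprocessing step, a small script can normalize whitespace and drop records with empty content before the data is uploaded. This is only a hedged sketch; the file names are placeholders and your cleaning rules will depend on your dataset:

    import json

    def clean_examples(in_path: str, out_path: str) -> None:
        """Normalize whitespace and skip records with empty message content."""
        kept = 0
        with open(in_path, encoding="utf-8") as src, open(out_path, "w", encoding="utf-8") as dst:
            for line in src:
                if not line.strip():
                    continue
                record = json.loads(line)
                messages = record.get("messages", [])
                # Collapse runs of whitespace inside each message.
                for msg in messages:
                    msg["content"] = " ".join(str(msg.get("content", "")).split())
                # Keep the record only if every message still has content.
                if messages and all(msg["content"] for msg in messages):
                    dst.write(json.dumps(record, ensure_ascii=False) + "\n")
                    kept += 1
        print(f"Wrote {kept} cleaned examples to {out_path}")

    clean_examples("raw_data.jsonl", "training_data.jsonl")  # placeholder file names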

Fine-tune Phi-3 models in Azure AI Studio

Note

Fine-tuning of Phi-3 models is currently supported in projects located in East US 2.

  1. Select Model catalog from the left side tab.

  2. Type phi-3 in the search bar and select the phi-3 model you'd like to use.


  3. Select Fine-tune.


  4. Enter the Fine-tuned model name.


  5. Select Next.

  6. Perform the following tasks:

    • Set the task type to Chat completion.
    • Select the Training data you'd like to use. You can upload it from Azure AI Studio's existing data assets or from your local environment.


  7. Select Next.

  8. Upload the Validation data you'd like to use, or select Automatic split of training data.


  9. Select Next.

  10. Perform the following tasks:

    • Select the Batch size multiplier you'd like to use.
    • Select the Learning rate you'd like to use.
    • Select the Epochs you'd like to use.


  11. Select Submit to start the fine-tuning process.


  12. Once your model is fine-tuned, the status will be displayed as Completed. You can then deploy the model and use it in your own application, in the playground, or in prompt flow. For more information, see How to deploy Phi-3 family of small language models with Azure AI Studio.


Note

For more detailed information on fine-tuning Phi-3, please visit Fine-tune Phi-3 models in Azure AI Studio.

Cleaning up your fine-tuned models

You can delete a fine-tuned model from the fine-tuning model list in Azure AI Studio or from the model details page: on the Fine-tuning page, select the fine-tuned model you want to remove, and then select the Delete button.

Note

You can't delete a custom model if it has an existing deployment. You must first delete your model deployment before you can delete your custom model.

Cost and quotas

Cost and quota considerations for Phi-3 models fine-tuned as a service

Phi-3 models fine-tuned as a service are offered by Microsoft and integrated with Azure AI Studio. You can find the pricing when deploying or fine-tuning the models under the Pricing and terms tab of the deployment wizard.

Content filtering

Models deployed as a service with pay-as-you-go are protected by Azure AI Content Safety. When deployed to real-time endpoints, you can opt out of this capability. With Azure AI content safety enabled, both the prompt and completion pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions. Learn more about Azure AI Content Safety.

Fine-Tuning Configuration

Hyperparameters: Define hyperparameters such as learning rate, batch size, and number of training epochs.
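For illustration, the wizard's hyperparameter fields map to values like the following. These numbers are assumed starting points, not recommendations from the service:

    # Illustrative hyperparameter choices mirroring the fields in the fine-tuning wizard.
    # The values are assumptions for illustration, not documented defaults.
    hyperparameters = {
        "batch_size_multiplier": 1,   # scales the service's default batch size
        "learning_rate": 2e-5,        # a common order of magnitude for LLM fine-tuning
        "num_train_epochs": 3,        # raise if the model underfits, lower if it overfits
    }
    print(hyperparameters)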

Loss Function

Choose an appropriate loss function for your task (e.g., cross-entropy).

Optimizer

Select an optimizer (e.g., Adam) for gradient updates during training.
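To make the loss and optimizer choices concrete, here is a generic PyTorch sketch of a single gradient step. The tiny linear model is a stand-in used only for illustration, not Phi-3 itself:

    import torch
    import torch.nn as nn

    # Stand-in model: a single linear layer over a toy vocabulary.
    vocab_size, hidden = 100, 16
    model = nn.Linear(hidden, vocab_size)

    loss_fn = nn.CrossEntropyLoss()                              # standard for next-token prediction
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)   # Adam-style optimizer with weight decay

    # One illustrative gradient step on random data.
    inputs = torch.randn(8, hidden)
    targets = torch.randint(0, vocab_size, (8,))
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    print(f"loss: {loss.item():.4f}")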

Fine-Tuning Process

  • Load Pre-Trained Model: Load the Phi-3 Mini checkpoint.
  • Add Custom Layers: Add task-specific layers (e.g., classification head for chat instructions).

  • Train the Model: Fine-tune the model using your prepared dataset. Monitor training progress and adjust hyperparameters as needed (see the sketch below).
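If you want to carry out this process locally rather than through Azure AI Studio, a minimal sketch with Hugging Face Transformers might look like the following. The checkpoint id, file names, and hyperparameters are assumptions for illustration, and in practice a GPU (and often a parameter-efficient method such as LoRA) is needed for a model of this size:

    # Hedged local fine-tuning sketch; not the Azure AI Studio service itself.
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer, TrainingArguments)

    model_id = "microsoft/Phi-3-mini-4k-instruct"  # assumed Hugging Face checkpoint id
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token  # ensure padding works for batching
    model = AutoModelForCausalLM.from_pretrained(model_id)

    # JSONL files in the chat-completions format shown earlier (placeholder names).
    dataset = load_dataset("json", data_files={"train": "training_data.jsonl",
                                               "validation": "validation_data.jsonl"})

    def to_features(example):
        # Render the chat-format record with the model's chat template, then tokenize.
        text = tokenizer.apply_chat_template(example["messages"], tokenize=False)
        return tokenizer(text, truncation=True, max_length=2048)

    tokenized = dataset.map(to_features, remove_columns=dataset["train"].column_names)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="phi3-finetuned", num_train_epochs=3,
                               per_device_train_batch_size=1, learning_rate=2e-5),
        train_dataset=tokenized["train"],
        eval_dataset=tokenized["validation"],
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
    print(trainer.evaluate())                    # reports eval_loss on the validation set
    trainer.save_model("phi3-finetuned")         # write the final weights
    tokenizer.save_pretrained("phi3-finetuned")  # keep the tokenizer alongside the model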

Evaluation and Validation

Validation Set: Split your data into training and validation sets.

Evaluate Performance

Use metrics like accuracy, F1-score, or perplexity to assess model performance.
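For a causal language model, perplexity is simply the exponential of the average cross-entropy loss on the validation set, so it can be derived directly from the evaluation loss (for example, the eval_loss returned by trainer.evaluate() in the sketch above). The value below is a placeholder:

    import math

    eval_loss = 1.35                 # placeholder; substitute your own validation loss
    perplexity = math.exp(eval_loss)
    print(f"perplexity: {perplexity:.2f}")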

Save Fine-Tuned Model

Checkpoint: Save the fine-tuned model checkpoint for future use.
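Continuing the local sketch above, the saved checkpoint can be reloaded and re-exported with from_pretrained/save_pretrained; the directory names are placeholders:

    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Reload the directory written at the end of the training sketch to confirm it round-trips.
    checkpoint_dir = "phi3-finetuned"
    model = AutoModelForCausalLM.from_pretrained(checkpoint_dir)
    tokenizer = AutoTokenizer.from_pretrained(checkpoint_dir)

    # Keep a clean copy of the weights and tokenizer for future use.
    model.save_pretrained("phi3-finetuned-checkpoint")
    tokenizer.save_pretrained("phi3-finetuned-checkpoint")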

Deployment

  • Deploy as a Web Service: Deploy your fine-tuned model as a web service in Azure AI Studio.
  • Test the Endpoint: Send test queries to the deployed endpoint to verify its functionality (see the sketch below).
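As a quick smoke test for the deployed endpoint, a request sketch like the one below can be used. The URL, route, and authentication header are placeholders and assumptions; copy the exact values and sample code from your deployment's Consume page in Azure AI Studio:

    import requests

    # Placeholder endpoint details; replace with the values shown for your deployment.
    ENDPOINT_URL = "https://<your-endpoint>.<region>.models.ai.azure.com/chat/completions"
    API_KEY = "<your-api-key>"

    payload = {
        "messages": [
            {"role": "system", "content": "You are an Xbox customer support agent."},
            {"role": "user", "content": "My Xbox won't connect to Wi-Fi. What should I check first?"},
        ],
        "max_tokens": 256,
    }

    response = requests.post(
        ENDPOINT_URL,
        headers={"Authorization": f"Bearer {API_KEY}",  # auth header name may differ by deployment
                 "Content-Type": "application/json"},
        json=payload,
        timeout=60,
    )
    response.raise_for_status()
    print(response.json()["choices"][0]["message"]["content"])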

Iterate and Improve

Iterate: If the performance isn't satisfactory, iterate by adjusting hyperparameters, adding more data, or fine-tuning for additional epochs.

Monitor and Refine

Continuously monitor the model's behavior and refine as needed.

Customize and Extend

  • Custom Tasks: Phi-3 Mini can be fine-tuned for various tasks beyond chat instructions. Explore other use cases!
  • Experiment: Try different architectures, layer combinations, and techniques to enhance performance.

Note

Fine-tuning is an iterative process. Experiment, learn, and adapt your model to achieve the best results for your specific task!