# Get started with Phi-3 locally

This guide will help you set up your local environment to run the Phi-3 model using Ollama. You can run the model in a few different ways, including using GitHub Codespaces, VS Code Dev Containers, or your local environment.

## Environment setup

### GitHub Codespaces

You can run this template virtually by using GitHub Codespaces. The button will open a web-based VS Code instance in your browser:

1. Open the template (this may take several minutes):

   Open in GitHub Codespaces

2. Open a terminal window.

### VS Code Dev Containers

⚠️ This option will only work if your Docker Desktop is allocated at least 16 GB of RAM. If less than 16 GB is available, try the GitHub Codespaces option or set the project up locally.

A related option is VS Code Dev Containers, which will open the project in your local VS Code using the Dev Containers extension:

1. Start Docker Desktop (install it if not already installed).

2. Open the project:

   Open in Dev Containers

3. In the VS Code window that opens, once the project files show up (this may take several minutes), open a terminal window.

4. Continue with the deployment steps.

### Local Environment

1. Make sure the following tools are installed:

   * Ollama
   * Python 3
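As a quick sanity check, a short Python snippet (a hypothetical helper, not part of this repo) can confirm the required tools are on your PATH before you continue:

```python
import shutil

def missing_tools(tools=("ollama", "python3")):
    """Return the names of any required tools not found on PATH."""
    return [t for t in tools if shutil.which(t) is None]

if __name__ == "__main__":
    missing = missing_tools()
    if missing:
        print("Please install:", ", ".join(missing))
    else:
        print("All required tools are installed.")
```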

## Test the model

1. Ask Ollama to download and run the phi3:mini model:

   ```shell
   ollama run phi3:mini
   ```

   Downloading the model will take a few minutes.

2. Once you see "success" in the output, you can send a message to the model from the prompt:

   ```
   >>> Write a haiku about hungry hippos
   ```

3. After several seconds, you should see a response stream in from the model.

4. To learn about different techniques used with language models, open the Python notebook ollama.ipynb and run each cell. If you used a model other than phi3:mini, change the value of MODEL_NAME in the first cell.

  5. To have a conversation with the phi3:mini model from Python, open the Python file chat.py and run it. You can change the MODEL_NAME at the top of the file as needed, and you can also modify the system message or add few-shot examples if desired.
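Behind the interactive prompt, Ollama also serves a local REST API (by default on port 11434) that Python code like the notebook can call. A minimal sketch, assuming a running `ollama` server and the phi3:mini model from step 1, that streams a completion and stitches Ollama's newline-delimited JSON chunks back together:

```python
import json
import urllib.request

# Ollama's default local endpoint for text generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def join_stream(ndjson_lines):
    """Concatenate the 'response' fragments from Ollama's streaming NDJSON output."""
    parts = []
    for line in ndjson_lines:
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))
        if chunk.get("done"):  # final chunk signals completion
            break
    return "".join(parts)

if __name__ == "__main__":
    payload = json.dumps(
        {"model": "phi3:mini", "prompt": "Write a haiku about hungry hippos"}
    ).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        print(join_stream(resp))
```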
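The system message and few-shot examples mentioned in step 5 boil down to a plain list of chat messages sent to the model. A sketch of how such a transcript is typically assembled (the function name and defaults here are illustrative, not copied from chat.py):

```python
MODEL_NAME = "phi3:mini"  # change this if you pulled a different model

def build_messages(user_input, system_message="You are a helpful assistant.", few_shot=()):
    """Assemble a chat transcript: system message, optional few-shot pairs, then the user turn."""
    messages = [{"role": "system", "content": system_message}]
    for question, answer in few_shot:
        # Each few-shot example is a (user question, ideal assistant answer) pair.
        messages.append({"role": "user", "content": question})
        messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": user_input})
    return messages
```

Editing the system message or adding few-shot pairs changes the model's behavior without touching any model weights, which is why chat.py exposes them as the main knobs to experiment with.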