Fail to run Phi-3 Model with DirectML + ONNX Runtime on ARM64 #211
I am currently using a Surface Pro 11 to reproduce AIPC_Inference.md#2-use-directml--onnx-runtime-to-run-phi-3-model. I run the commands described there and fail at `python build.py --use_dml`. My final goal is to run my own fine-tuned model on the Snapdragon NPU. The error message is too long to paste here, but I think the root cause is as follows:

Please let me know if there is any solution. Thank you!
Hi @yuting1008, it looks like you're encountering a conflict between the x64 and ARM64 architectures, or you're simply not running the command from a Developer Command Prompt for Visual Studio, where cmake is available. This is a common issue when building across architectures. Here are a few steps to help resolve it:

1. Ensure a consistent architecture: make sure all your tools and dependencies target the same architecture (ARM64 in this case). You might need to reinstall Python and other dependencies for ARM64 (see the quick check just after this list).
2. Recheck your build commands: ensure you are building ONNX Runtime with the flag that explicitly targets ARM64:
```bash
./build.bat --build_shared_lib --skip_tests --parallel --use_dml --config Release --arch ARM64
```
3. Python environment: confirm that your Python installation is for ARM64, and reinstall it if necessary:
```bash
winget install Python.Python.3.12-arm64
```
4. Check dependencies: verify that all dependencies, especially those related to DirectML and ONNX Runtime, are compatible with ARM64.
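A quick way to confirm which architecture your Python interpreter targets (a minimal sketch; on Windows, `platform.machine()` typically reports ARM64 for a native Arm build and AMD64 for an x64 build):

```python
import platform
import struct

# Machine type the interpreter runs as, e.g. 'ARM64' or 'AMD64' on Windows
print(platform.machine())

# Pointer width in bits: 64 for a 64-bit interpreter
print(struct.calcsize("P") * 8)
```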
Step 1: Set Up Your Development Environment
Install the required tools:
- CMake: install it using winget:
```bash
winget install --id=Kitware.CMake
```
- Git: ensure Git is installed for cloning the repositories.

Step 2: Clone the ONNX Runtime Repository
```bash
git clone https://github.com/microsoft/onnxruntime.git
cd onnxruntime
```

Step 3: Build ONNX Runtime with DirectML
Run the build script:
```bash
./build.bat --build_shared_lib --skip_tests --parallel --use_dml --config Release
```

Step 4: Set Up ONNX Runtime GenAI
Clone the ONNX Runtime GenAI repository and create the ort directory layout it expects:
```bash
cd ..
git clone https://github.com/microsoft/onnxruntime-genai.git
cd onnxruntime-genai
mkdir ort
cd ort
mkdir include
mkdir lib
cd ..
```
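Before copying, you can sanity-check that the ONNX Runtime build actually produced the binaries you are about to copy (a small sketch assuming the default build output layout used above, run from the onnxruntime-genai root):

```python
# Verify the ONNX Runtime build output exists before copying it into ort\lib.
# Assumes the onnxruntime checkout sits next to onnxruntime-genai and used
# the default Windows Release build output layout.
from pathlib import Path

build_out = Path(r"..\onnxruntime\build\Windows\Release\Release")
for name in ("onnxruntime.dll", "onnxruntime.lib"):
    status = "found" if (build_out / name).exists() else "MISSING"
    print(f"{name}: {status}")
```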
Copy Necessary Files:
Copy the headers and libraries from the ONNX Runtime build into the ort directory (run these from the onnxruntime-genai root):
```bash
copy ..\onnxruntime\include\onnxruntime\core\providers\dml\dml_provider_factory.h ort\include
copy ..\onnxruntime\include\onnxruntime\core\session\onnxruntime_c_api.h ort\include
copy ..\onnxruntime\build\Windows\Release\Release\*.dll ort\lib
copy ..\onnxruntime\build\Windows\Release\Release\onnxruntime.lib ort\lib
```

Step 5: Build ONNX Runtime GenAI
Build the project. Ensure you are still in the Developer Command Prompt for Visual Studio:
```bash
python build.py --use_dml
```
Troubleshooting tips:
- Check for architecture conflicts: ensure all tools and dependencies target ARM64. You might need to reinstall Python for ARM64 if your current install is x64.
- Environment variables: ensure variables such as CUDA_HOME are set correctly (if applicable).

Step 6: Run Your Fine-Tuned Model
- Prepare your model: ensure your fine-tuned model is in ONNX format.
- Run inference: use the provided scripts to run inference with your model on the Snapdragon NPU. A minimal sketch of this step in Python follows after this list.

Testing and Validation
- Run tests: make sure your setup works correctly by running the test scripts and validating the outputs.
- Optimize: tune your model and setup for performance.
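For the inference step, here is a minimal sketch based on the onnxruntime-genai Python examples. The model path and prompt are hypothetical placeholders, and the generation API has changed between releases (older versions use params.input_ids and generator.compute_logits() instead of append_tokens), so adapt it to the samples shipped with your checkout:

```python
import onnxruntime_genai as og

# Hypothetical path to your fine-tuned model exported in the GenAI ONNX layout
model = og.Model("./phi3-finetuned-onnx")
tokenizer = og.Tokenizer(model)

params = og.GeneratorParams(model)
params.set_search_options(max_length=256)

generator = og.Generator(model, params)
generator.append_tokens(tokenizer.encode("<|user|>\nHello!<|end|>\n<|assistant|>"))

# Generate token by token until the model emits an end-of-sequence token
while not generator.is_done():
    generator.generate_next_token()

print(tokenizer.decode(generator.get_sequence(0)))
```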
Thanks, it helps a lot!