
Add support for Apple's MPS backend #123

Closed
wants to merge 9 commits into from

Conversation


@daikiad daikiad commented Aug 3, 2024

This pull request introduces support for Apple's Metal Performance Shaders (MPS) and refines device compatibility. The main changes include:

  • Replaced the CUDA kernel dependency for connected components calculation with scipy functions, enabling functionality on Apple's MPS backend.
  • Modified setup.py to install the CUDA kernel only in CUDA environments. For other environments, scipy functions will be used.
  • Replaced .cuda() calls with .to(device) to remove the strict CUDA dependency, allowing for flexible device usage.
  • Added configuration for using MPS in Jupyter notebooks

These updates enhance the flexibility and portability of the project, ensuring it runs efficiently in both CUDA and MPS environments.
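The connected-components replacement described above can be sketched with `scipy.ndimage.label`. This is a minimal illustration under assumptions, not the repo's actual code: the function name `get_connected_components` and the mask layout are invented for the example.

```python
import numpy as np
from scipy import ndimage

def get_connected_components(mask):
    """CPU/MPS-friendly fallback for a CUDA connected-components kernel.

    mask: 2-D boolean array. Returns (labels, num_components), where
    labels assigns a positive integer id per component and 0 to background.
    """
    # 8-connectivity, the usual choice for image masks.
    structure = np.ones((3, 3), dtype=bool)
    labels, num_components = ndimage.label(mask, structure=structure)
    return labels, num_components

mask = np.array(
    [[1, 1, 0, 0],
     [0, 1, 0, 1],
     [0, 0, 0, 1]],
    dtype=bool,
)
labels, n = get_connected_components(mask)
print(n)  # 2 separate components
```

Because `ndimage.label` runs on NumPy arrays, tensors would need a round-trip through `.cpu().numpy()` on MPS, which is the portability trade-off this PR accepts.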

@facebook-github-bot

Hi @daikiad!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!


You may want to consider using torch.cuda.is_available() directly here instead of checking for nvcc.
Docs: https://pytorch.org/docs/stable/backends.html

Author

Thank you for the advice! I have updated setup.py to use torch.cuda.is_available() instead of checking nvcc.
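For reference, the conditional build the author describes might look roughly like this in setup.py. This is a hedged sketch: the extension name and source path are invented, and a real CUDA build would typically use torch.utils.cpp_extension.CUDAExtension; a plain setuptools Extension is shown only so the sketch runs without torch installed.

```python
from setuptools import Extension

def get_ext_modules(cuda_available):
    """Return the CUDA extension list only when CUDA is usable.

    On CPU/MPS-only machines this returns an empty list, so the package
    installs without nvcc and falls back to the scipy implementation.
    """
    if not cuda_available:
        return []
    # Hypothetical extension name and source path, for illustration only.
    return [Extension("sam2._C", sources=["sam2/csrc/connected_components.cu"])]

# In setup.py, one would gate on the runtime check the author mentions:
#   import torch
#   setup(..., ext_modules=get_ext_modules(torch.cuda.is_available()))
print(len(get_ext_modules(False)))  # 0
```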

@facebook-github-bot

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!

@@ -29,15 +30,17 @@
"hydra-core>=1.3.2",
"iopath>=0.1.10",
"pillow>=9.4.0",
"scipy>=1.14.0",
]

EXTRA_PACKAGES = {
"demo": ["matplotlib>=3.9.1", "jupyter>=1.0.0", "opencv-python>=4.7.0"],

When trying to work off this branch, I ran into an issue with matplotlib>=3.9.1. Doing matplotlib>=3.9.0 helped.

Separately, I had to comment out ext_modules in the main setup() function.


I ran into the same problem with matplotlib>=3.9.1 and >=3.9.0 fixed it. Thanks!
Didn’t have any issues with ext_modules though.

@albertjo

albertjo commented Aug 7, 2024

@daikiad Thanks for creating this branch. I'm on an M3 Pro device and I was able to get SAM2 up and running with device="cpu".

I'm having trouble setting it up with "mps", though: currently I'm seeing RuntimeError: Invalid buffer size: 9.42 GB. Here are my repro steps:

  1. predictor = build_sam2_video_predictor("sam2_hiera_s.yaml", "checkpoints/sam2_hiera_small.pt", device="mps")
  2. os.environ['PYTORCH_ENABLE_MPS_FALLBACK'] = '1'
  3. inference_state = predictor.init_state(video_path="path/to/frames")
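One detail worth checking in the steps above: PYTORCH_ENABLE_MPS_FALLBACK is commonly reported to take effect only if it is set before torch is imported, so setting it in step 2, after the predictor has already been built in step 1, may have no effect. A minimal sketch of the safer ordering (the torch import and predictor call are commented out so the snippet stands alone):

```python
import os

# Set the fallback BEFORE importing torch, so unsupported MPS ops
# can fall back to the CPU instead of raising.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

# import torch  # only import torch after the variable is in place
# predictor = build_sam2_video_predictor(..., device="mps")
```

Exporting the variable in the shell before launching Python achieves the same thing.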

@daikiad

daikiad commented Aug 22, 2024

Closing this pull request, as support for both CPU and MPS has been added in PR #192.
Thank you all for your comments and feedback.

@daikiad daikiad closed this Aug 22, 2024
6 participants