
Enable PyTorch compilation on Apple Silicon #48145

Closed

Description

malfet (Contributor)

Currently PyTorch cannot be compiled natively on Apple Silicon, because the architecture is reported as "arm64" while many third-party libraries only recognize "ARMv8" or "aarch64" as architecture names.
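For illustration, a minimal sketch of the naming mismatch as seen from Python (the exact output depends on the interpreter build, so treat this as an assumption rather than a spec):

```python
import platform

# A native arm64 Python on an M1 Mac prints "Darwin arm64", whereas Linux on a
# comparable CPU reports "aarch64" -- the string many third-party build scripts
# expect, which is what breaks the native build here.
print(platform.system(), platform.machine())
```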

cc @malfet @seemethere @walterddr

Activity

added the enhancement label (Not as big of a feature, but technically not a bug. Should be easy to fix) on Nov 18, 2020
qiangbo1222 commented on Nov 18, 2020

Can PyTorch run smoothly via Rosetta 2?

malfet (Contributor, Author) commented on Nov 18, 2020

@qiangbo1222 It probably can, but it will be slower than a native binary, won't it?

mwidjaja1 commented on Nov 18, 2020
I have an M1 MacBook Air and a 2018 Intel MacBook Air right here. For an identical CNN trained on MNIST, the Intel MacBook took about 1975 seconds whereas the M1 took about 2100 seconds.

So the Apple Silicon MacBook is slower, but it does 'run smoothly', I suppose.

PS: I'm completely new to developing on PyTorch, but since I have one of these Macs I'd be glad to help here and there if the team can point me at what needs doing.
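For reference, a minimal sketch of how a wall-clock comparison like this could be timed; the model below is a generic small CNN on dummy data, not the exact network or dataset loader behind the numbers above:

```python
import time
import torch
import torch.nn as nn

# Generic small CNN for 28x28 MNIST-style inputs (an illustrative assumption).
model = nn.Sequential(
    nn.Conv2d(1, 32, 3), nn.ReLU(),
    nn.Conv2d(32, 64, 3), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(64 * 24 * 24, 10),
)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch stands in for the MNIST loader to keep the sketch self-contained.
x = torch.randn(64, 1, 28, 28)
y = torch.randint(0, 10, (64,))

start = time.perf_counter()
for _ in range(100):
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
print(f"100 training steps: {time.perf_counter() - start:.1f}s")
```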

Commits referencing this issue were added on Nov 18, 2020
malfet (Contributor, Author) commented on Nov 18, 2020

@mwidjaja1 Can you check whether https://ossci-macos-build.s3.amazonaws.com/torch-1.8.0a0-cp38-cp38-macosx_11_0_arm64.whl installs on your system, and if it does, how much faster it is than the x86_64 build?
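A minimal sketch of checking which build is in use after installing the wheel (e.g. via `pip install <wheel URL>`); the matmul loop is only an illustrative rough comparison against the x86_64 (Rosetta 2) wheel, not a benchmark anyone in the thread prescribed:

```python
import platform
import time
import torch

# Expect "arm64" and an arm64 wheel version string if the native build is active.
print(torch.__version__, platform.machine())

# Rough single-op timing: run the same loop under both wheels and compare.
x = torch.randn(1024, 1024)
start = time.perf_counter()
for _ in range(100):
    x @ x
print(f"100 matmuls: {time.perf_counter() - start:.2f}s")
```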

danieldk (Contributor) commented on Nov 18, 2020

> I have an M1 MacBook Air and a 2018 Intel MacBook Air right here. For an identical CNN trained on MNIST, the Intel MacBook took about 1975 seconds whereas the M1 took about 2100 seconds.
>
> So the Apple Silicon MacBook is slower, but it does 'run smoothly', I suppose.

I tested one of my projects that uses transformer networks with libtorch, and inference was several times slower under Rosetta 2 on a MacBook Air M1. Note that Rosetta 2 does not support AVX, AVX2, or AVX512 instructions, so x86_64 PyTorch is expected to be much slower in many cases on M1 Macs.
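As a small illustrative check (not something requested in the thread), the installed wheel's build configuration can be inspected to see which compile-time options it was built with; under Rosetta 2 the AVX-family instructions are unavailable at runtime regardless:

```python
import torch

# Returns a string describing the build configuration (compiler flags,
# BLAS backend, etc.) of the installed binary.
print(torch.__config__.show())
```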

(80 remaining items in the thread are not shown.)


Metadata

Labels

enhancement (Not as big of a feature, but technically not a bug. Should be easy to fix) · module: build (Build system issues) · module: macos (Mac OS related issues) · triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

Participants

@danieldk @iwisher @malfet @byronyi @hxssgaa
