
Add ARM64 / Aarch64 support #23

Open

QwertyJack opened this issue Jan 14, 2022 · 3 comments

@QwertyJack
Would you consider adding ARM support, say arm64 or aarch64? I found DLTcollab/sse2neon, which might be helpful.
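For reference, sse2neon is normally used as a drop-in single header; a minimal sketch of the usual wiring (the guard macros are the common ones, but treat the exact setup as an assumption):

#if defined(__aarch64__) || defined(_M_ARM64)
/* On ARM64, sse2neon translates SSE intrinsics to NEON. */
#include "sse2neon.h"
#else
/* On x86, use the native SSE2 header. */
#include <emmintrin.h>
#endif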

@lemire
Member

lemire commented Jan 14, 2022

Pull request invited.

@alexbakharew

Hi,

Recently we added aarch64 support to the iresearch project and to ArangoDB, and we found some bugs while using simdcomp in combination with sse2neon.

The function __SIMD_fastunpack1_32 uses the intrinsic _mm_srli_epi32. Everything works fine on x86, but on aarch64 we hit a strange bug: the shift variable was being incremented twice per call. After a deep investigation we found that the sse2neon implementation of this intrinsic is a macro that expands its second parameter twice, so passing shift++ as the shift count increments it twice. There are many other places in the code where the same situation can occur.
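To see the hazard in isolation, here is a minimal standalone sketch; the macro below is hypothetical, but it expands its second argument twice in the same way the emulated intrinsic does:

#include <stdio.h>

/* Hypothetical function-like macro that, like the sse2neon code in
 * question, expands its second argument twice. */
#define SHIFT_RIGHT(x, n) ((n) < 32 ? ((x) >> (n)) : 0)

int main(void) {
    unsigned x = 0xFF;
    unsigned shift = 0;
    /* shift++ is expanded into both the comparison and the shift,
     * so shift ends up at 2 rather than 1. */
    unsigned y = SHIFT_RIGHT(x, shift++);
    printf("y=%u shift=%u\n", y, shift); /* prints y=127 shift=2 */
    return 0;
}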

The solution is quite simple: move the increment out of the intrinsic call onto its own line, as in the fixed version of __SIMD_fastunpack1_32 below.

static void __SIMD_fastunpack1_32(const __m128i *in, uint32_t *_out) {
    __m128i *out = (__m128i *)_out;
    __m128i InReg1 = _mm_loadu_si128(in);
    __m128i InReg2 = InReg1;
    __m128i OutReg1, OutReg2, OutReg3, OutReg4;
    const __m128i mask = _mm_set1_epi32(1);

    uint32_t i, shift = 0;

    for (i = 0; i < 8; ++i) {
        /* The increment is hoisted out of each intrinsic call so that a
         * macro-based implementation cannot evaluate it twice. */
        OutReg1 = _mm_and_si128(_mm_srli_epi32(InReg1, shift), mask);
        ++shift;
        OutReg2 = _mm_and_si128(_mm_srli_epi32(InReg2, shift), mask);
        ++shift;
        OutReg3 = _mm_and_si128(_mm_srli_epi32(InReg1, shift), mask);
        ++shift;
        OutReg4 = _mm_and_si128(_mm_srli_epi32(InReg2, shift), mask);
        ++shift;
        _mm_storeu_si128(out++, OutReg1);
        _mm_storeu_si128(out++, OutReg2);
        _mm_storeu_si128(out++, OutReg3);
        _mm_storeu_si128(out++, OutReg4);
    }
}
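For context, a call site looks something like this (names are illustrative; the sizes follow from the function, which expands one bit-packed 128-bit block into 128 uint32_t values):

void example(void) {
    uint32_t unpacked[128];
    const __m128i packed = _mm_set1_epi32(0x55555555); /* bits 1,0,1,0,... in every lane */
    __SIMD_fastunpack1_32(&packed, unpacked);
    /* Bit j of lane k lands in unpacked[4*j + k], so unpacked[]
     * now reads 1,1,1,1,0,0,0,0,1,1,1,1,... */
}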

Please be careful: any call that passes an expression with side effects into one of these emulated intrinsics is at risk.
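A cheap way to catch this whole class of bug is a probe that passes a side-effecting count into the emulated intrinsic and checks how many times it was evaluated; roughly:

#include <assert.h>
#if defined(__aarch64__)
#include "sse2neon.h"
#else
#include <emmintrin.h>
#endif

/* Fails under any implementation of _mm_srli_epi32 that expands
 * its count argument more than once. */
int main(void) {
    __m128i v = _mm_set1_epi32(0xFF);
    int n = 0;
    (void)_mm_srli_epi32(v, n++);
    assert(n == 1 && "_mm_srli_epi32 evaluated its count twice");
    return 0;
}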

@jserv

jserv commented Oct 8, 2022

> After a deep investigation we found that the sse2neon implementation of this intrinsic is a macro that expands its second parameter twice.

Recent SSE2NEON improves _mm_srai_epi32 to handle complex arguments:

commit 7ef68928
Author: Developer-Ecosystem-Engineering
AuthorDate: Fri Jul 8 10:52:46 2022 -0700
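
I have not checked the patch itself, but the standard way to make such a macro safe is to evaluate each argument exactly once, e.g. via a GNU statement expression; the sketch below shows the pattern and is not necessarily what the commit does:

#include <stdint.h>

/* Single-evaluation shift macro: each argument is captured into a
 * local once, so side effects in the arguments happen exactly once. */
#define SHIFT_RIGHT_SAFE(x, n)              \
    __extension__({                         \
        uint32_t _x = (x);                  \
        uint32_t _n = (n);                  \
        _n < 32 ? (_x >> _n) : (uint32_t)0; \
    })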
