Uint8 to mm0 register


I've been playing with the example from this presentation (slide 41).

As far as I can tell, it performs alpha blending.

MOVQ mm0, alpha//4 16-b zero-padding α
MOVD mm1, A //move 4 pixels of image A 
MOVD mm2, B //move 4 pixels of image B
PXOR mm3, mm3 //clear mm3 to all zeroes 
//unpack 4 pixels to 4 words
PUNPCKLBW mm1, mm3 // Because B -A could be
PUNPCKLBW mm2, mm3 // negative, need 16 bits
PSUBW mm1, mm2 //(B-A) 
PMULHW mm1, mm0 //(B-A)*fade/256 
PADDW mm1, mm2 //(B-A)*fade + B 
//pack four words back to four bytes
PACKUSWB mm1, mm3

I want to rewrite it in C with inline assembly.

For now, I have something like this:

void fade_mmx(SDL_Surface* im1,SDL_Surface* im2,Uint8 alpha, SDL_Surface* imOut)
{
    int pixelsCount = imOut->w * im1->h;
    
    Uint32 *A = (Uint32*) im1->pixels;
    Uint32 *B = (Uint32*) im2->pixels;
    Uint32 *out = (Uint32*) imOut->pixels;
    Uint32 *end = out + pixelsCount;

    __asm__ __volatile__ (
            "\n\t movd  (%0), %%mm0"
            "\n\t movd  (%1), %%mm1"
            "\n\t movd  (%2), %%mm2"
            "\n\t pxor       %%mm3, %%mm3"
            "\n\t punpcklbw  %%mm3, %%mm1"
            "\n\t punpcklbw  %%mm3, %%mm2"
            "\n\t psubw      %%mm2, %%mm1"
            "\n\t pmulhw     %%mm0, %%mm1"
            "\n\t paddw      %%mm2, %%mm1"
            "\n\t packuswb   %%mm3, %%mm1"
    : : "r" (alpha), "r" (A), "r" (B), "r" (out), "r" (end)
    );
    __asm__("emms" : : );
}

When compiling I get this message: Error: (%dl) is not a valid base/index expression, pointing at the first assembler line. I suspect it's because alpha is a Uint8; I tried casting it, but then I get a segmentation fault. In the example they talk about "4 16-b zero-padding α", which is not really clear to me.


There are 2 answers

Ross Ridge (best answer)

Your problem is that you're using your alpha value as an address instead of as a value. The movd (%0), %%mm0 instruction says to use %0 as a location in memory, so you're asking to load the value pointed to by alpha rather than alpha itself. Using movd %0, %%mm0 would solve that, but then you'd run into the problem that your alpha value has an 8-bit type, and it needs to be a 32-bit type to work with the MOVD instruction. You can solve that, and also satisfy the algorithm's requirement that the alpha value be multiplied by 256 and broadcast to all four 16-bit words of the destination register, by multiplying it by 0x0100010001000100ULL and loading the result with the MOVQ instruction.
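For example, a minimal sketch of that fix applied to your code (just the alpha setup; the rest of the sequence stays as it is):

Uint64 alpha4 = (Uint64) alpha * 0x0100010001000100ULL; // alpha*256 in each of the 4 words
__asm__ ("movq %0, %%mm0" : : "m" (alpha4) : "mm0");    // load all 64 bits into mm0 at once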

However, you don't need the MOVD/MOVQ instructions at all. You can let the compiler load the values into MMX registers itself by specifying the y constraint with code like this:

typedef unsigned pixel;

static inline pixel
fade_pixel_mmx_asm(pixel p1, pixel p2, unsigned fade) {
    asm("punpcklbw %[zeros], %[p1]\n\t"
        "punpcklbw %[zeros], %[p2]\n\t"
        "psubw     %[p2], %[p1]\n\t"
        "pmulhw    %[fade], %[p1]\n\t"
        "paddw     %[p2], %[p1]\n\t"
        "packuswb  %[zeros], %[p1]"
        : [p1] "+&y" (p1), [p2] "+&y" (p2)
        : [fade] "y" (fade * 0x0100010001000100ULL), [zeros] "y" (0));
    return p1;
}

You'll notice that there's no need for a clobber list here, because no registers are used that weren't allocated by the compiler, and there are no other side effects the compiler needs to know about. I've left out the necessary EMMS instruction, as you wouldn't want it executed on every pixel. You'll want to insert an asm("emms"); statement after the loop that blends the two surfaces.
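A driver loop could look something like this (the surface handling is my own sketch mirroring your function; only the helper call and the trailing emms are the point):

void fade_mmx(SDL_Surface *im1, SDL_Surface *im2, Uint8 alpha, SDL_Surface *imOut)
{
    pixel *A = (pixel *) im1->pixels;
    pixel *B = (pixel *) im2->pixels;
    pixel *out = (pixel *) imOut->pixels;
    int i, n = imOut->w * imOut->h;

    for (i = 0; i < n; i++)
        out[i] = fade_pixel_mmx_asm(A[i], B[i], alpha);

    asm("emms");   /* clear the MMX state once, so later x87 FP code still works */
}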

Better yet, you don't need to use inline assembly at all. You can use intrinsics instead, and not have to worry about all the pitfalls of using inline assembly:

#include <mmintrin.h>

static inline pixel
fade_pixel_mmx_intrin(pixel p1, pixel p2, unsigned fade) {
    __m64 zeros = (__m64) 0ULL;
    __m64 mfade = (__m64) (fade * 0x0100010001000100ULL);
    __m64 mp1 = _m_punpcklbw((__m64) (unsigned long long) p1, zeros);
    __m64 mp2 = _m_punpcklbw((__m64) (unsigned long long) p2, zeros);

    __m64 ret;
    ret = _m_psubw(mp1, mp2);
    ret = _m_pmulhw(ret, mfade);
    ret = _m_paddw(ret, mp2);
    ret = _m_packuswb(ret, zeros);

    return (unsigned long long) ret;
}
    

As in the previous example, you need to call _m_empty() after your loop to generate the necessary EMMS instruction.

You should also seriously consider just writing the routine in plain C. Autovectorizers are pretty good these days, and it's likely the compiler can generate better code with modern SIMD instructions than what you're writing by hand with ancient MMX instructions. For example, this code:

static inline unsigned
fade_component(unsigned c1, unsigned c2, unsigned fade) {
    return c2  + (((int) c1 - (int) c2) * fade) / 256;
}

void
fade_blend(pixel *dest, pixel *src1, pixel *src2, unsigned char fade,
           unsigned len) {
    unsigned char *d = (unsigned char *) dest;
    unsigned char *s1 = (unsigned char *) src1;
    unsigned char *s2 = (unsigned char *) src2;
    unsigned i;
    for (i = 0; i < len * 4; i++) {
        d[i] = fade_component(s1[i], s2[i], fade);
    }
}

With GCC 10.2 and -O3 the above code results in assembly code that uses 128-bit XMM registers and blends 4 pixels at a time in its inner loop:

    movdqu  xmm5, XMMWORD PTR [rdx+rax]
    movdqu  xmm1, XMMWORD PTR [rsi+rax]
    movdqa  xmm6, xmm5
    movdqa  xmm0, xmm1
    punpckhbw       xmm1, xmm3
    punpcklbw       xmm6, xmm3
    punpcklbw       xmm0, xmm3
    psubw   xmm0, xmm6
    movdqa  xmm6, xmm5
    punpckhbw       xmm6, xmm3
    pmullw  xmm0, xmm2
    psubw   xmm1, xmm6
    pmullw  xmm1, xmm2
    psrlw   xmm0, 8
    pand    xmm0, xmm4
    psrlw   xmm1, 8
    pand    xmm1, xmm4
    packuswb        xmm0, xmm1
    paddb   xmm0, xmm5
    movups  XMMWORD PTR [rdi+rax], xmm0

Finally, even an unvectorized version of the C code may be near optimal: the operation is simple enough that you're probably going to be memory bound regardless of how exactly the blend is implemented.

fuz

You can broadcast alpha to 64 bits with a scalar multiply by 0x0001000100010001ULL before copying it to an MMX register. Another option would be to zero-extend the 8-bit integer to 32 bits for movd, then use pshufw to replicate it.
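A rough sketch of that second option (my illustration, not part of the corrected code below; pshufw needs an SSE-capable CPU):

unsigned alpha32 = alpha;                  // zero-extend the Uint8 to 32 bits
__asm__ ("movd   %0, %%mm0\n\t"            // mm0 = 0 : 0 : 0 : alpha
         "pshufw $0, %%mm0, %%mm0"         // copy the low word to all 4 words
         : : "r" (alpha32) : "mm0");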

There were also various safety problems with your asm.

#include <SDL/SDL.h>
#include <stdint.h>

void fade_mmx(SDL_Surface* im1,SDL_Surface* im2,Uint8 alpha, SDL_Surface* imOut)
{
    int pixelsCount = imOut->w * im1->h;

    Uint32 *A = (Uint32*) im1->pixels;
    Uint32 *B = (Uint32*) im2->pixels;
    Uint32 *out = (Uint32*) imOut->pixels;
    Uint32 *end = out + pixelsCount;

    Uint64 alphas = (Uint64)alpha * 0x0001000100010001ULL;

    __asm__ __volatile__ (
            "\n\t movd  %0, %%mm0"
            "\n\t movd  %1, %%mm1"
            "\n\t movd  %2, %%mm2"
            "\n\t pxor       %%mm3, %%mm3"
            "\n\t punpcklbw  %%mm3, %%mm1"
            "\n\t punpcklbw  %%mm3, %%mm2"
            "\n\t psubw      %%mm2, %%mm1"
            "\n\t pmulhw     %%mm0, %%mm1"
            "\n\t paddw      %%mm2, %%mm1"
            "\n\t packuswb   %%mm3, %%mm1"
    : // you're probably going to want an "=m"(*something) memory output here
    : "r" (alphas), "m" (*A), "m" (*B), "r" (out), "r" (end)
    : "mm0", "mm1", "mm2", "mm3");
    __asm__("emms" : : );
}

The asm statement doesn't need to be volatile if the compiler knows about all of its inputs and outputs, rather than relying on a "memory" clobber. (As here: there are no outputs, and it only reads registers and memory that are input operands.)

For 32 bit code, replace "r"(alphas) with "m"(alphas). Or use "rm"(alphas) to let the compiler pick. (But for 32-bit, definitely better to use pshufw instead of making the compiler store a 64-bit multiply result as 2 32-bit halves, then suffer a store-forwarding stall when reloading it with movq. Intrinsics would leave the decision up to the compiler with _mm_set1_epi16(alpha), although you only do that once outside the loop anyway).

Note that I have also added the necessary clobber list and replaced the register operands holding pointers you dereference with memory operands referring to the memory you dereference, allowing gcc to reason about which memory you access.

Please note that if you don't address these things, gcc is going to be unhappy and the behaviour of your code will be undefined, likely failing in mysterious and hard-to-debug ways. Do not use inline assembly unless you understand exactly what you are doing. Consider using intrinsic functions as a safer and potentially more efficient alternative. (https://gcc.gnu.org/wiki/DontUseInlineAsm).

SSE2 with __m128i vectors makes it easy to do 4 pixels at once, instead of 2 or 1 where you waste half of your pack throughput packing with zeros. (Use punpckhbw alongside punpcklbw to set that up.) MMX is so obsolete that modern CPUs have lower throughput for the MMX versions of some instructions than for the equivalent 128-bit SSE2 XMM instructions.
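For instance, a rough intrinsics sketch of that idea (my own illustration, names and all): it blends 4 pixels per iteration, and it rewrites the blend as the weighted sum (A*fade + B*(256-fade)) >> 8, which is the same value as B + (A-B)*fade/256 but keeps every intermediate in unsigned 16 bits. It assumes len is a multiple of 4; a real routine would handle the tail separately.

#include <emmintrin.h>   /* SSE2 */
#include <stdint.h>

static void fade_blend_sse2(uint32_t *dest, const uint32_t *a, const uint32_t *b,
                            uint8_t fade, unsigned len)
{
    const __m128i zero = _mm_setzero_si128();
    const __m128i wa = _mm_set1_epi16(fade);        /* weight of image A */
    const __m128i wb = _mm_set1_epi16(256 - fade);  /* weight of image B */
    unsigned i;
    for (i = 0; i < len; i += 4) {
        __m128i va = _mm_loadu_si128((const __m128i *)(a + i));
        __m128i vb = _mm_loadu_si128((const __m128i *)(b + i));
        /* widen bytes to words: low and high halves of each vector */
        __m128i alo = _mm_unpacklo_epi8(va, zero), ahi = _mm_unpackhi_epi8(va, zero);
        __m128i blo = _mm_unpacklo_epi8(vb, zero), bhi = _mm_unpackhi_epi8(vb, zero);
        /* (A*fade + B*(256-fade)) >> 8; the sum never exceeds 255*256, so it fits */
        __m128i lo = _mm_srli_epi16(_mm_add_epi16(_mm_mullo_epi16(alo, wa),
                                                  _mm_mullo_epi16(blo, wb)), 8);
        __m128i hi = _mm_srli_epi16(_mm_add_epi16(_mm_mullo_epi16(ahi, wa),
                                                  _mm_mullo_epi16(bhi, wb)), 8);
        /* pack 16 words back to 16 bytes: no lanes wasted on zeros */
        _mm_storeu_si128((__m128i *)(dest + i), _mm_packus_epi16(lo, hi));
    }
}

Because this version never touches the MMX registers, no EMMS is needed afterwards.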