The function below is intended to convert its integer parameter from decimal to octal.
#include <string>

std::string dec_to_oct(int num) {
    std::string output;
    for(int i=10; i>=0; --i) {
        output += std::to_string( (num >> i*3) & 0b111 );
    }
    return output;
}
It works for any positive input. However, for num = -1 it returns 77777777777 when it should return 37777777777; the first digit should be a 3 instead of a 7. Why is this happening? The function appears to be incorrect for all negative input. How can I adjust the algorithm so that it returns the correct result for negative numbers?
Note: this is a CS assignment so I'd appreciate hints/tips.
                        
This happens because right-shifting a negative signed integer is an arithmetic shift: copies of the sign bit are shifted in, so every 3-bit group in the upper digits comes out as 7. To overcome this, cast the input integer to the equivalent unsigned type first, which makes the shift logical:
(((unsigned int)num) >> 3*i) & 7
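Concretely, here is a minimal sketch of the fixed function with that cast applied, keeping your 11-digit output (which assumes int is 32 bits on your platform):

#include <string>

// Fixed version: casting to unsigned int makes the right shift logical,
// so the sign bit is no longer smeared into the high octal digits.
std::string dec_to_oct(int num) {
    std::string output;
    for (int i = 10; i >= 0; --i) {   // 11 octal digits cover 32 bits
        output += std::to_string((static_cast<unsigned int>(num) >> i * 3) & 0b111);
    }
    return output;
}

With this change, dec_to_oct(-1) produces "37777777777" as expected.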
Going further, you can make the function templated and cast a pointer to the input to uint8_t*, using sizeof to calculate the number of octal digits (as suggested by DanielH). However, that will be a bit more involved, since the bits of a given octal digit may stretch across two bytes.
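As a rough sketch of the templated idea (note: this uses std::make_unsigned instead of a uint8_t* cast, which sidesteps the digits-straddling-bytes issue while still deriving the digit count from sizeof):

#include <climits>
#include <string>
#include <type_traits>

// Sketch: works for any integral type T.
// The number of octal digits is the bit width rounded up to a multiple of 3,
// and converting to the corresponding unsigned type keeps the shifts well
// defined for negative values, as in the fix above.
template <typename T>
std::string dec_to_oct(T num) {
    static_assert(std::is_integral<T>::value, "dec_to_oct expects an integer");
    using U = typename std::make_unsigned<T>::type;
    const U bits = static_cast<U>(num);
    const int digits = static_cast<int>((sizeof(T) * CHAR_BIT + 2) / 3);

    std::string output;
    for (int i = digits - 1; i >= 0; --i) {
        output += std::to_string((bits >> i * 3) & 0b111);
    }
    return output;
}

For example, dec_to_oct<int>(-1) yields "37777777777" on a 32-bit int, and dec_to_oct<int8_t>(-1) yields "377".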