Casting an int pointer (array) to an int typed memoryview in Cython throws ValueError at runtime


Problem Description

I have this function:

from libc.stdlib cimport malloc, free

def test(unsigned int N):
    cdef int *my_arr = <int *> malloc(N * sizeof(int))
    cdef int[:] new_arr = <int[:N]>my_arr
    free(my_arr)

When I put it in a single isolated file called test.pyx, compile it, and call it from the CPython interpreter, it works fine. I call it like this:

import test
test.test(10)
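
For reference, I compile the isolated module with a minimal setup.py along these lines (reconstructed here from memory, so the exact file may differ slightly), followed by python setup.py build_ext --inplace:

# setup.py -- minimal build script for the isolated test.pyx
from setuptools import setup
from Cython.Build import cythonize

setup(ext_modules=cythonize("test.pyx", language_level=3))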

The problem comes when I add this function to an existing library (a big project with multiple files and dependencies): it compiles fine but throws an error when I call it:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "av/video/frame.pyx", line 37, in av.video.frame.test
    cdef int[:] new_arr = <int[:N]>my_arr
ValueError

As you can see, it specifically complains about this line:

cdef int[:] new_arr = <int[:N]>my_arr

The function is exactly the same in both cases, and I pass the same N value, so this is really strange.

Does anybody know in which contexts a simple cast from int * to int[:] could throw a ValueError?
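
The only context I can see spelled out in the generated C itself (see Update 1 below) is a NULL pointer, so for completeness here is a minimal sketch of that case (test_null is just an illustrative name); as Update 1 shows, though, that is not the check that fires for me:

def test_null(unsigned int N):
    # deliberately NULL: the generated code guards this case with
    # "ValueError: Cannot create cython.array from NULL pointer"
    cdef int *my_arr = NULL
    cdef int[:] new_arr = <int[:N]>my_arr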


Context

I'm trying to add some functionality to the PyAV library, so I'm adding that function to this file:

https://github.com/PyAV-Org/PyAV/blob/main/av/video/frame.pyx

and it throws the error when I call it like this:

import av
av.video.frame.test(10)

I'm on macOS 10.15.7, using Python 3.7.7 and Cython 0.29.21, with the Clang 11.0.3 compiler.


Update 1

I've checked the C code generated by Cython for the test() function, and it's the same in both cases. This is the code generated for the problematic line:

 /* "av/video/frame.pyx":37
 * def test(unsigned int N):
 *     cdef int *my_arr = <int *> malloc(N * sizeof(int))
 *     cdef int[:] new_arr = <int[:N]>my_arr             # <<<<<<<<<<<<<<
 *     free(my_arr)
 * 
 */
  if (!__pyx_v_my_arr) {
    PyErr_SetString(PyExc_ValueError,"Cannot create cython.array from NULL pointer");
    __PYX_ERR(0, 37, __pyx_L1_error)
  }
  __pyx_t_3 = __pyx_format_from_typeinfo(&__Pyx_TypeInfo_int); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 37, __pyx_L1_error)
  __Pyx_GOTREF(__pyx_t_3);
  __pyx_t_2 = Py_BuildValue((char*) "("  __PYX_BUILD_PY_SSIZE_T  ")", ((Py_ssize_t)__pyx_v_N)); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 37, __pyx_L1_error)
  __Pyx_GOTREF(__pyx_t_2);
  __pyx_t_1 = __pyx_array_new(__pyx_t_2, sizeof(int), PyBytes_AS_STRING(__pyx_t_3), (char *) "c", (char *) __pyx_v_my_arr);
  if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 37, __pyx_L1_error)
  __Pyx_GOTREF(__pyx_t_1);
  __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
  __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
  __pyx_t_4 = __Pyx_PyObject_to_MemoryviewSlice_ds_int(((PyObject *)__pyx_t_1), PyBUF_WRITABLE); if (unlikely(!__pyx_t_4.memview)) __PYX_ERR(0, 37, __pyx_L1_error)
  __Pyx_DECREF(((PyObject *)__pyx_t_1)); __pyx_t_1 = 0;
  __pyx_v_new_arr = __pyx_t_4;
  __pyx_t_4.memview = NULL;
  __pyx_t_4.data = NULL;

So I started commenting out the lines with __PYX_ERR() calls and found that the one raising the ValueError is this one:

__pyx_t_4 = __Pyx_PyObject_to_MemoryviewSlice_ds_int(((PyObject *)__pyx_t_1), PyBUF_WRITABLE); if (unlikely(!__pyx_t_4.memview)) __PYX_ERR(0, 37, __pyx_L1_error)

This line updates a __Pyx_memviewslice that is declared earlier as:

__Pyx_memviewslice __pyx_t_4 = { 0, 0, { 0 }, { 0 }, { 0 } };

This function is most likely the one setting the error:

__Pyx_PyObject_to_MemoryviewSlice_ds_int()

So maybe the cast (PyObject *)__pyx_t_1 is doing something wrong, or __pyx_t_1 is not what that function expects.
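
To narrow it down further, I'm thinking of splitting the cast into the two steps that are visible in the generated C, using the documented cython.view.array pattern for wrapping an existing pointer (this is just my own debugging sketch, not code from PyAV; test_split is an illustrative name):

from cython cimport view
from libc.stdlib cimport malloc, free

def test_split(unsigned int N):
    cdef int *my_arr = <int *> malloc(N * sizeof(int))
    # step 1: wrap the pointer in a cython.view.array, which is what
    # __pyx_array_new() does in the generated code
    cdef view.array wrapper = view.array(shape=(N,), itemsize=sizeof(int),
                                         format="i", mode="c",
                                         allocate_buffer=False)
    wrapper.data = <char *> my_arr
    # step 2: coerce the array to a typed memoryview; this corresponds to
    # __Pyx_PyObject_to_MemoryviewSlice_ds_int(), the call that raises here
    cdef int[:] new_arr = wrapper
    free(my_arr)

That way I could at least see whether the failure happens while building the cython.array wrapper or during the coercion to int[:].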

But I really don't know how all this works, so maybe something else is causing this behaviour, like a compiler flag?
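
One sanity check I still want to do (a rough sketch; the .c paths assume an in-tree build of PyAV) is to confirm that both extension modules were generated by the same Cython version, since every generated .c file starts with a "Generated by Cython X.Y.Z" banner:

import re

for path in ("test.c", "av/video/frame.c"):
    with open(path) as f:
        print(path, re.search(r"Generated by Cython (\S+)", f.read(2000)).group(1))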
