• flatbield@beehaw.org
    1 day ago

    We will have to disagree on that. This is all problem-specific, but I have found that C code integrated via ctypes, cffi, or a C extension is over 100x faster than Python alone. Interestingly, Python with Numba and NumPy together, which is a more Pythonic solution, can reach those speeds too; see the sketch below.
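    For reference, a minimal sketch of the Numba + NumPy route (the function and workload here are illustrative, not taken from my benchmarks):

    ```python
    # A minimal sketch of the Python + Numba + NumPy combination mentioned above.
    # Requires: pip install numba numpy
    import numpy as np
    from numba import njit

    @njit(fastmath=True)
    def dot(a, b):
        # An explicit loop that CPython would run slowly; Numba compiles
        # it to native code on first call.
        s = 0.0
        for i in range(a.shape[0]):
            s += a[i] * b[i]
        return s

    a = np.random.rand(1_000_000)
    b = np.random.rand(1_000_000)
    print(dot(a, b))  # first call triggers compilation; later calls run at native speed
    ```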

    All of the other approaches I have tried are much slower: Nuitka, Cython, NumPy alone, PyPy, etc.

    To get the best speeds, one has to compile for the specific target architecture and enable optimizations like vectorization, automatic parallelization, and fast math. Most default builds, including pre-built libraries, do not do that. The sketch below shows typical flags for a C extension build.
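    As an illustration, here is how those flags might be passed when building a C extension with setuptools; the module and source file names are hypothetical, and the flags assume GCC or Clang:

    ```python
    # setup.py: a minimal sketch for a hypothetical C extension (fastmod.c).
    # -march=native targets the build machine's CPU, -ffast-math relaxes
    # IEEE strictness, and -fopenmp enables OpenMP parallelism.
    from setuptools import setup, Extension

    ext = Extension(
        "fastmod",
        sources=["fastmod.c"],
        extra_compile_args=["-O3", "-march=native", "-ffast-math", "-fopenmp"],
        extra_link_args=["-fopenmp"],
    )

    setup(name="fastmod", version="0.1", ext_modules=[ext])
    ```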

    • Frezik@lemmy.blahaj.zone
      1 day ago

      > This is all problem-specific, but I have found that C code integrated via ctypes, cffi, or a C extension is over 100x faster than Python alone. Interestingly, Python with Numba and NumPy together, which is a more Pythonic solution, can reach those speeds too.

      Of course you did. Those approaches change the semantics of the language. For example, NumPy stores arrays more like C does, in one contiguous buffer of machine values, rather than as Python lists of pointers to boxed objects. That is what makes all the difference, not merely compiling to native code; the sketch below illustrates the layout gap.
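      To see the layout difference concretely (a rough sketch; exact sizes vary by platform):

      ```python
      # A NumPy array is one contiguous buffer of machine doubles; a Python
      # list is an array of pointers to individually boxed float objects.
      import sys
      import numpy as np

      n = 1_000_000
      arr = np.zeros(n)          # contiguous C-style buffer of 8-byte doubles
      lst = [0.0] * n            # pointer array referencing Python float objects

      print(arr.nbytes)          # 8000000: just the raw doubles
      print(sys.getsizeof(lst))  # ~8 MB for the pointer array alone,
                                 # before counting the float objects it points to
      ```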

      • flatbield@beehaw.org
        1 day ago

        You can get about 10x just by running Python under PyPy's JIT compiler, so compilation is not nothing. Using NumPy alone is about 5x, which surprised me. There is a lot of misleading information out there about how to make Python fast. A lot of people say CPython is pretty fast, or that using a binary library like NumPy is fast. No: CPython is very slow, and libraries are not always that fast either.

        Edit: Another compiler is Numba, which is more specialized. It can get 30x on some code even without NumPy; a sketch follows. Again, compiling can help.
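        For example, Numba can JIT-compile a plain Python numeric loop with no NumPy involved (the workload here is illustrative):

        ```python
        # A minimal sketch: Numba compiling a pure-Python scalar loop.
        # Requires: pip install numba
        from numba import njit

        @njit(fastmath=True)
        def leibniz_pi(terms):
            # Approximate pi with the Leibniz series: a tight scalar loop
            # that is slow in CPython but compiles to native code here.
            total = 0.0
            sign = 1.0
            for k in range(terms):
                total += sign / (2 * k + 1)
                sign = -sign
            return 4.0 * total

        print(leibniz_pi(50_000_000))  # first call compiles; reruns are native speed
        ```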