One can write efficient Python code, but it is very hard to beat built-in functions, which are written in C.
To speed up a Python program, rewriting the hot parts in C as a Python extension module will no doubt improve performance.
But that is not the only way, since we have JIT technology nowadays: PyPy and Numba (built on LLVM) are the shining stars in this area.
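To sketch the JIT route with Numba (a toy example, not code from the original post), a single decorator is enough to have a plain Python loop compiled to machine code:

```python
from numba import njit

@njit  # compile this function with LLVM on first call
def sum_of_squares(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

# The first call triggers compilation; subsequent calls run at native speed.
print(sum_of_squares(10_000_000))
```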
Pure Python code for a Monte Carlo pi simulation:
```python
def monte_carlo_pi(samples):
```
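A minimal complete version of such a simulator, assuming the standard unit-square sampling (a sketch, not necessarily the author's original implementation):

```python
import random

def monte_carlo_pi(samples):
    """Estimate pi by sampling random points in the unit square."""
    inside = 0
    for _ in range(samples):
        x = random.random()
        y = random.random()
        # Count points that fall inside the quarter circle of radius 1.
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples

print(monte_carlo_pi(1_000_000))  # roughly 3.14
```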
Loading a third-party dynamic library in Python (no need for the third-party source code)
Like the JNA solution in Java, Python has its own implementations: ctypes and cffi.
The third-party library may NOT be aware that it will be run in a Python environment.
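A minimal ctypes sketch, assuming the library is named libpi.so (pi.dll on Windows) and exports a function like double monte_carlo_pi(int samples); both the file name and the signature are assumptions here, not given by the original post:

```python
import ctypes

# Load the shared library (use "pi.dll" on Windows, "./libpi.so" on Linux).
lib = ctypes.CDLL("./libpi.so")

# Declare the assumed C signature: double monte_carlo_pi(int samples)
lib.monte_carlo_pi.argtypes = [ctypes.c_int]
lib.monte_carlo_pi.restype = ctypes.c_double

print(lib.monte_carlo_pi(1_000_000))
```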
Third-party shared library check
In the Windows world, we can use the VC toolset's dumpbin to check exported functions:
If you forget to add __declspec(dllexport) before the function definition, the function cannot be used by other programs…
```
C:\> dumpbin pi.dll /EXPORTS
```
And in the GCC world, we can use nm to check exported functions:
```
$ nm libpi.so -D
```
Together with the .h header file, this forms the familiar C world for programmers.
With the above background knowledge, we can check whether a shared library is good to use from Python.
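For example, once ctypes has loaded the library, a missing export shows up as an AttributeError (again assuming a function named monte_carlo_pi in libpi.so):

```python
import ctypes

lib = ctypes.CDLL("./libpi.so")

try:
    # Attribute lookup fails if the symbol was not exported
    # (e.g. __declspec(dllexport) was forgotten on Windows).
    func = lib.monte_carlo_pi
    print("monte_carlo_pi is exported and ready to call")
except AttributeError:
    print("monte_carlo_pi is NOT exported by this library")
```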