People who are extremely performance conscious might not like the fact that CapyPDF ships as a shared library with a C API that hides all internal implementation details. This design has several potential sources of slowdown:
- Function calls cannot be inlined into callers
- Calls into a shared library are indirect and thus slower for the CPU to invoke
- All objects require a heap allocation; they cannot be stored on the caller's stack
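The bullets above describe the classic "opaque pointer" pattern. C's own `FILE*` is a familiar example of the same design: the object is allocated by the library, the caller only ever holds a pointer to it, and every operation is an indirect call across the shared-library boundary. Here is a minimal sketch of what that looks like from the caller's side, using Python's ctypes against libc (POSIX only; the file path is made up for illustration):

```python
import ctypes
import os
import tempfile

# Load the C library already mapped into this process (POSIX only).
libc = ctypes.CDLL(None)

# FILE* is an opaque handle: the caller only ever sees a void pointer.
libc.fopen.restype = ctypes.c_void_p
libc.fopen.argtypes = [ctypes.c_char_p, ctypes.c_char_p]
libc.fputs.argtypes = [ctypes.c_char_p, ctypes.c_void_p]
libc.fclose.argtypes = [ctypes.c_void_p]

path = os.path.join(tempfile.gettempdir(), "opaque_demo.txt")

# The object lives on the library's heap; the caller cannot inspect
# it, place it on its own stack, or inline any operation on it.
handle = libc.fopen(path.encode(), b"w")
libc.fputs(b"hello through an opaque handle\n", handle)
libc.fclose(handle)  # only the library knows how to free it

with open(path) as f:
    print(f.read(), end="")
```

Every call in that snippet pays the indirect-call and allocation costs listed above, which is exactly the overhead being measured here.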
I have not optimized CapyPDF's performance all that much, so let's see if these things are an issue in practice. As a test we'll use the Python bindings to generate the following PDF document 100 times and measure the execution time.
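The measurement loop itself is straightforward. A sketch of this kind of harness, with a hypothetical `build_document()` standing in for the actual CapyPDF calls (the real generator code is not reproduced here):

```python
import time

def build_document() -> bytes:
    # Placeholder for the real CapyPDF generator; it just assembles
    # a byte buffer so this sketch is self-contained and runnable.
    parts = [b"%PDF-1.7\n"]
    for i in range(100):
        parts.append(b"0 0 m %d %d l S\n" % (i, i))
    return b"".join(parts)

N = 100  # number of documents to generate
start = time.perf_counter()
for _ in range(N):
    build_document()
elapsed = time.perf_counter() - start

print(f"{N} documents in {elapsed:.3f} s "
      f"({N / elapsed:.1f} docs/second)")
```

`time.perf_counter()` is used rather than `time.time()` because it is monotonic and has the highest available resolution for interval timing.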
Running the test tells us that we can create the document roughly 9 times a second on a single core. This does not seem like a lot, but that's because the measurement was done on a Raspberry Pi 3B+. On an i7-1185G7 laptop processor the performance shoots up to 100 times a second.
For comparison, I wrote the same program in native code; the Python version is roughly 10% slower. More interestingly, 40% of the total time is not spent on generating the PDF file itself at all, but on font subsetting.
A further thing to note is that in PDF any data stream can be compressed, typically with zlib. This test does not use compression at all. If the document had images, compressing the raw pixel data could easily take 10 to 100x longer than all other parts combined.
PDF does support embedding some image formats (e.g. JPEGs) directly, but most of the time you need to generate raw pixel data and compress it yourself. In that case compression is easily the dominant term.
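To get a feel for why compression dominates, here is a small sketch that deflates a synthetic raw pixel buffer with Python's zlib module (the image dimensions and pixel pattern are made up for illustration):

```python
import time
import zlib

# Synthetic "raw pixel data": a 1000x1000 RGB image, 3 MB of
# gently varying bytes, compressible but not trivially so.
width, height = 1000, 1000
pixels = bytes((x * 7 + y * 13) % 256
               for y in range(height) for x in range(width * 3))

start = time.perf_counter()
compressed = zlib.compress(pixels, level=6)  # PDF streams typically use deflate
elapsed = time.perf_counter() - start

print(f"raw: {len(pixels)} bytes, "
      f"compressed: {len(compressed)} bytes, "
      f"time: {elapsed * 1000:.1f} ms")
```

Even on fast hardware, running this over every image in a large document quickly adds up to far more CPU time than writing out the PDF object structure itself.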
A "type hidden" C API does have some overhead, but in this particular case it is almost certainly too small to measure. You probably don't want to implement a general-purpose array or hash map behind such an API if you need blazing speed, but for this kind of high-level functionality it is unlikely to be a performance bottleneck.