We have many workflows in which we join PDF files specified via a manifest.
On our FreeFlow Core machine (the server meets the recommended specs) joining is extremely slow: around 12 hours to join roughly 1,000 PDF files.
On our FreeFlow Process Manager (which runs on an old server) the same job takes an hour at most.
The output from FreeFlow Process Manager, however, is many megabytes in size.
The output from Core is a nicely compressed PDF, so we prefer that, but in our production environment we now miss deadlines because of the extremely slow join in FreeFlow Core.
Is there a way to resolve this? I will soon have to join 10,000 PDFs a day for a month.
Thanks in advance,
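In case it helps anyone hitting the same wall: one workaround while waiting on a fix is to split a large manifest into smaller batch manifests, run the joins in parallel, and then join the batch outputs in a short final pass, rather than submitting one serial 1,000-file join. A minimal sketch in Python, assuming a plain-text manifest with one PDF path per line (the helper name and file-naming scheme are my own, not FreeFlow functionality):

```python
from pathlib import Path

def split_manifest(manifest_path, batch_size=100):
    """Split a manifest (one PDF path per line) into batch manifests.

    Each batch manifest can be submitted as a separate join job so the
    work runs in parallel; the batch outputs are then joined in a short
    final pass instead of one long serial join.
    """
    manifest = Path(manifest_path)
    # Read non-empty lines; each line is one PDF path in join order.
    lines = [ln.strip() for ln in manifest.read_text().splitlines() if ln.strip()]
    # Slice the manifest into consecutive chunks of batch_size entries.
    batches = [lines[i:i + batch_size] for i in range(0, len(lines), batch_size)]
    out_paths = []
    for n, batch in enumerate(batches):
        # Name batch manifests next to the original, preserving order.
        out = manifest.with_suffix(f".batch{n:03d}.txt")
        out.write_text("\n".join(batch) + "\n")
        out_paths.append(out)
    return out_paths
```

The batch outputs still have to be concatenated in the same order as the batch numbering, so the final pass only ever joins a handful of intermediate PDFs.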
What is the input size for the PDFs? I had a similar issue in the past; the solution was to optimize and preflight the files and store them in a repository (to decrease their size and put them on the same SSD box that FFCore was on), then use MAX to combine them in the proper order on a per-request basis. Prepress operators would simply drop the premade MAX file into the proper folder and FFCore would fire away on the rest. It ended up producing a 300 MB file for print, sized down with quality retained from a 1,200 MB original.
It's worth mentioning: if you're joining a lot of large files, you're going to eat up processor power. It could be the box you have FFCore running on; it needs some good power behind it.
Personally, I'd escalate and, as always, check that your workflow as a whole is injecting as much efficiency as possible before the job reaches FFCore.
What version of FFCore are you using?
Also, what are the specs for the FFCore server?
I would also ensure that you are running v220.127.116.11. We put in a number of substantial performance improvements for job group handling.