We've installed the latest version of Core on a brand new HP server with 28 cores and 96 GB of memory. We produce a very high volume of print work and were hoping Core would let us power through the processing more quickly. This is what we're experiencing, though:
We can put one job into Core, it'll process for, say, 3 minutes, and Core will spit it out. During this time the CPU/memory/HDD usage is barely touched at all.
If we put 10 jobs in at once, it takes Core 30 minutes to output *anything* at all. They don't trickle out, say one every three minutes; instead we get nothing until all 10 appear at the end, and it takes far too long. During the entire 30 minutes the CPU stays below 5%, no memory is used, and there is hardly any HDD I/O.
I feel like this must be a configuration issue. Does anyone have any input?
By default, FreeFlow Core will process n concurrent jobs, where n is the number of CPU cores on the server.
Job processing performance itself is dependent upon a number of factors, including CPU speed, number of CPUs, RAM, and storage type/IO (e.g., HDD vs. SSD). The complexity of the workflow and the number of jobs in the system can also be contributing factors.
In addition, SQL Server (SQLS) can have a significant impact on performance.
Windows Server Power Options
The selected power plan (Control Panel > Hardware > Power Options) can have a significant impact on job processing performance. For optimal job processing performance, select the High Performance power plan.
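As a sketch, the High Performance plan can also be activated from an elevated command prompt with `powercfg`. The GUID below is the standard one Windows assigns to the built-in High Performance plan; confirm it on your system with `powercfg /list` before using it:

```shell
:: List available power plans and their GUIDs
powercfg /list

:: Activate the built-in High Performance plan
:: (verify this GUID appears in the list above)
powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c

:: Confirm which plan is now active
powercfg /getactivescheme
```

Setting the plan from the command line is handy if you manage several print servers and want to script the change rather than visit each Control Panel.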
SQLS Memory Consumption
Unless you specifically configure it to do otherwise, SQL Server will gradually consume all available memory for caching purposes. However, SQL Server will release the memory back to the OS when needed.
While the above is true in theory, in practice SQLS does not reliably release memory used for caching under heavy load, especially when Auto Close is disabled. This can starve FreeFlow Core of memory, which causes processing to become serialized so that jobs are processed one at a time. In extreme cases, this severely degrades throughput.
Ideally, the system should have enough RAM to allow both SQLS and FreeFlow Core to run at peak performance. However, SQLS also caches single-use DB queries (there's no way to change this behavior), which simply consumes more RAM without improving performance.
For sustained, high-volume job processing, SQLS memory consumption should be capped. The trick, however, is to cap SQLS memory at a value that does not negatively impact SQLS performance. In internal testing, that number appears to be somewhere above 3 GB, but the optimal value will be different for every customer. Finding the right value for your environment will require either trial and error or starting with a generous cap.
If you continue to experience problems with performance, ensure that the FreeFlow Core application and SQLS are isolated from all other applications (FreeFlow Core should not be installed co-resident with other applications on the same server, if possible), and that unrelated processes on the server are idle. For example, network-dependent anti-virus or network monitoring applications run transparently in the background and can interfere.