It appears that the dummy file is created, a file fragment is written, then the dummy file is removed, and the cycle repeats until the entire swap file has been written. I am testing 2 instances and both are running concurrently. I expected one process to wait for the other to complete, but it did not. The dummy file is only occasionally visible in the swap folder.
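A minimal sketch of how such a dummy-file lock could work, assuming the program uses atomic exclusive creation (all names here are hypothetical; the program's actual mechanism is unknown):

```python
import os
import time

LOCK_PATH = "swap/dummy.lock"  # hypothetical name for the dummy file


def write_fragment(data: bytes, out_path: str) -> None:
    """Acquire the dummy-file lock, append one fragment, release the lock.

    O_CREAT | O_EXCL makes creation atomic: only one process can create
    the dummy file at a time; others spin until it disappears. Because
    the lock is held only per fragment, two instances can interleave
    fragments rather than one waiting for the other to finish entirely.
    """
    while True:
        try:
            fd = os.open(LOCK_PATH, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            os.close(fd)
            break                    # lock acquired
        except FileExistsError:
            time.sleep(0.01)         # another instance holds the lock
    try:
        with open(out_path, "ab") as f:
            f.write(data)            # write one swap-file fragment
    finally:
        os.remove(LOCK_PATH)         # release: dummy file vanishes again
```

This per-fragment locking would explain why the dummy file is only occasionally visible and why neither process waits for the other to complete.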
Yes, that’s right. Does this behavior have any effect compared to older versions? Is the elapsed time better or worse?
If that is correct, running multiple instances would produce severely fragmented filesets, making them more IO intensive and slower.
The program always allocates free space before filling the swaps with actual bytes.
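Preallocation like that might look something like this sketch (hypothetical; assumes a POSIX system where `os.posix_fallocate` is available):

```python
import os


def preallocate(path: str, size: int) -> None:
    """Reserve `size` bytes up front so later writes fill already-reserved
    space instead of growing the file fragment by fragment."""
    with open(path, "wb") as f:
        if hasattr(os, "posix_fallocate"):
            # Reserves real disk blocks (POSIX only), which helps the
            # filesystem keep the swap file contiguous.
            os.posix_fallocate(f.fileno(), 0, size)
        else:
            f.truncate(size)  # fallback: sparse file, no blocks reserved
```

Note that preallocation only guarantees contiguity for a single file; if two instances preallocate at the same time, the filesystem may still interleave their extents, which fits the fragmentation concern above.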
I expected 1 process to wait for the other to complete, it did not.
Could it be queue behavior? You could run the instances one after another from a GUI.
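If one wanted that queue behavior by hand, a small launcher could wait until no instance holds the dummy file before starting the next (a sketch only; the dummy-file path and the idea that it signals a busy instance are assumptions):

```python
import os
import subprocess
import time

LOCK_PATH = "swap/dummy.lock"  # hypothetical dummy-file path


def run_queued(commands: list[list[str]]) -> None:
    """Launch each command only after the previous one has exited and
    the dummy file is gone, so instances never swap concurrently."""
    for cmd in commands:
        while os.path.exists(LOCK_PATH):
            time.sleep(0.5)        # an instance is still swapping
        subprocess.run(cmd)        # blocks until this instance finishes
```

This serializes the instances, which should avoid the interleaved fragments at the cost of total wall-clock time.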
BTW, besides swapping, other stages such as encoding and restoring are also very disk-intensive. If one wants multiple instances, he must have multiple HDDs as well =))
I did notice something I had not noticed before: the hash table only appears to be calculated by one process. The other processes just zip through it; they must be able to access the results in memory.
Sorry, I cannot understand what you mean.