Currently, chdman has four default algorithms: lzma, deflate, flac, and huffman.
Among them, lzma has the highest compression ratio but the slowest decoding speed. The newer zstd has the fastest decoding speed, reportedly 5 to 10 times faster than lzma (Source), but zstd encoding is not enabled under chdman's default parameters.
chdman's current practice gives priority to file size: it compresses each hunk with every selected algorithm and blindly keeps the smallest result. This usually means most of the final CHD ends up lzma-compressed even if zstd is specified at the same time, because zstd can rarely beat lzma on size, so its fast decompression and low CPU overhead never get a chance to show. The current official recommendation is that if you want to use zstd, you should disable lzma.
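For reference, a minimal sketch of that "keep the smallest" policy (illustrative C++ only, not chdman's actual source; the `candidate` type and `pick_smallest` helper are made up for this example):

```cpp
// Illustrative sketch of the stock selection policy: size is the only
// criterion, so the slow-to-decode codec (usually lzma) tends to win.
#include <cstddef>
#include <vector>

struct candidate
{
    int codec_id;           // which codec produced this result
    size_t compressed_size; // size of the compressed hunk in bytes
};

// Return the index of the smallest candidate; earlier codecs win ties.
size_t pick_smallest(const std::vector<candidate> &results)
{
    size_t best = 0;
    for (size_t i = 1; i < results.size(); ++i)
        if (results[i].compressed_size < results[best].compressed_size)
            best = i;
    return best;
}
```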
In addition, some hunks are hard to compress, such as MPEG-2 video or data that is already compressed inside the disc image. These still get compressed even when the size saving is marginal, which adds pointless decompression overhead every time the CHD is loaded (lzma decompression is CPU-expensive), especially on handhelds and other battery-sensitive mobile devices, while barely reducing the file size.
Is there a way to get both lzma's high compression ratio and zstd's speed and low CPU overhead, while avoiding unnecessary double decompression as much as possible?
Here we introduce a threshold for algorithm selection: 94.4272%.
The defaults (for DVD) are four compression codecs: zstd, flac, huff, lzma. The first three have the same priority: after compression, the smallest result is still kept. However, if that compressed size exceeds 94.4272% of the original size, the hunk is not compressed but stored directly, so hard-to-compress data is not compressed a second time.
lzma has lower priority than the other three: it is selected only when the lzma-compressed hunk is smaller than 94.4272% of the best result among the first three.
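A minimal sketch of how this "golden threshold" rule could look in code (an illustration of the rules above, not the actual patch in chdman4f.zip; the names, the function signature, and the handling of the store fallback are assumptions):

```cpp
// Illustrative sketch of the threshold-based selection (assumed logic, not
// the actual chdman4f implementation).
#include <cstddef>

constexpr double GOLDEN_THRESHOLD = 0.944272; // three recursions of the golden section

enum class codec { STORE, ZSTD, FLAC, HUFF, LZMA };

// hunk_size:      uncompressed hunk size in bytes
// fast_best:      best (smallest) codec among zstd/flac/huff
// fast_best_size: size of that codec's output
// lzma_size:      size of the lzma-compressed hunk
codec choose_codec(size_t hunk_size, codec fast_best, size_t fast_best_size,
                   size_t lzma_size)
{
    // If even the best fast codec barely shrinks the hunk, store it raw so
    // already-compressed data is not compressed a second time.
    codec base = fast_best;
    size_t base_size = fast_best_size;
    if (base_size > hunk_size * GOLDEN_THRESHOLD)
    {
        base = codec::STORE;
        base_size = hunk_size;
    }

    // lzma has lower priority: it is chosen only when it undercuts the
    // current best by more than the threshold margin, i.e. the size saving
    // justifies its slow, CPU-heavy decode.
    if (lzma_size < base_size * GOLDEN_THRESHOLD)
        return codec::LZMA;

    return base;
}
```

With this structure, lzma has to buy a real size win before its slow decode is accepted, and data that none of the fast codecs can shrink meaningfully stays uncompressed.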
The final file size won't be much larger than with the default parameters, but nearly every hunk ends up with a sensible codec, striking a balance between file size and decompression speed/CPU overhead.
As for where the number 94.4272% comes from: it is derived from three recursions of the golden section.
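One reading that reproduces the number exactly (my interpretation of "three recursions", not something stated elsewhere here): cut off the golden-section complement 1 − 1/φ ≈ 0.381966 three times in a row and keep the rest.

```math
\varphi = \frac{1+\sqrt{5}}{2}, \qquad
1 - \left(1 - \frac{1}{\varphi}\right)^{3} \approx 1 - 0.381966^{3} \approx 1 - 0.055728 \approx 0.944272 = 94.4272\%
```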
This is the build with the "golden threshold": chdman4f.zip