Large Text Compression Benchmark
(mattmahoney.net) | 17 points by redeux 2 days ago | 5 comments
hyperpape an hour ago | prev | next |
It's worth noting that the benchmark has not been updated as frequently for the past several years, and some versions of compressors are quite far behind the current implementations (http://www.mattmahoney.net/dc/text.html#history).
In the one instance I double-checked (zstd), I don't recall it making a massive difference, but it did make one (IIRC, the current version produced slightly smaller output than what is listed in the benchmark).
pella 26 minutes ago | root | parent |
Agreed:
- in the benchmark "zstd 0.6.0" ( Apr 13, 2016 )
- vs. latest zstd v1.5.6 ( Mar 30, 2024 https://github.com/facebook/zstd/releases )
pama an hour ago | prev | next |
It would be nice to also have a competition of this type where, within reasonable limits, the size of the compressor does not matter and the material to be compressed is hidden and varied over time. For example: up to a 10 GB compressor, with the dataset being a different random chunk of FineWeb every week.
pmayrgundter an hour ago | prev |
The very notable thing here is that the best method uses a Transformer, and no other entry does.
nick238 3 minutes ago | next |
Double your compression ratio for the low, low price of ~100,000x slower decompression (zstd: 215 MB, 2.2 ns/byte vs. nncp: 106 MB, 230 µs/byte)!
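A quick back-of-the-envelope check of that trade-off, using only the figures quoted in the comment above (compressed sizes and per-byte decode times; treat them as illustrative, not official benchmark numbers):

```python
# Figures as quoted in the comment (assumed, not re-measured):
zstd_size_mb = 215       # compressed size
zstd_ns_per_byte = 2.2   # decompression time per byte
nncp_size_mb = 106
nncp_ns_per_byte = 230_000  # 230 µs = 230,000 ns

size_ratio = zstd_size_mb / nncp_size_mb        # how much smaller nncp's output is
slowdown = nncp_ns_per_byte / zstd_ns_per_byte  # how much slower nncp decodes

print(f"nncp output is {size_ratio:.2f}x smaller")   # ~2x
print(f"nncp decode is ~{slowdown:,.0f}x slower")    # ~105,000x
```

So "double the ratio, five orders of magnitude slower" is roughly right: 230 µs / 2.2 ns ≈ 10^5.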