largeNbDicts
=====================

`largeNbDicts` is a benchmark tool
dedicated to the specific scenario of
dictionary decompression using a very large number of dictionaries.
When dictionaries are constantly changing, they are always "cold",
suffering from increased latency due to cache misses.

The tool was created to investigate performance in this scenario
and to experiment with mitigation techniques.

Command line:
```
largeNbDicts [Options] filename(s)

Options :
-z          : benchmark compression (default)
-d          : benchmark decompression
-r          : recursively load all files in subdirectories (default: off)
-B#         : split input into blocks of size # (default: no split)
-#          : use compression level # (default: 3)
-D #        : use # as a dictionary (default: create one)
-i#         : number of benchmark rounds (default: 6)
--nbBlocks=#: use # blocks for the benchmark (default: one per file)
--nbDicts=# : create # dictionaries for the benchmark (default: one per block)
-h          : help (this text)

Advanced Options (see zstd.h for documentation) :
--dedicated-dict-search
--dict-content-type=#
--dict-attach-pref=#
```
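
For illustration, here is a minimal sketch of how the tool might be invoked, using only the options documented above. The input file `corpus.bin`, the dictionary file `dict.zstd`, and the 4096-byte block size are placeholder choices, not values prescribed by the tool:
```
# Benchmark decompression: split the input into 4 KB blocks and
# create one dictionary per block (the default for --nbDicts).
./largeNbDicts -d -B4096 corpus.bin

# Same benchmark, but reuse an existing dictionary for every block
# and run 10 rounds instead of the default 6.
./largeNbDicts -d -B4096 -D dict.zstd -i10 corpus.bin
```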