Tidis base benchmark
- OS: Debian 8.6
- Kernel: 3.16
- Memory: 250GB
- Processor: Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz * 48
- Disk: SATA (SSD is recommended)
- One pd server (run with docker)
- One tikv server (run with docker), configured as follows (tikv.conf):

```toml
log-level = "info"

[server]
addr = "ip:20160"

[storage]
data-dir = "tikv"

[pd]
endpoints = ["ip:2379"]

[metric]
interval = "1500s"
address = ""
job = "tikv"

[raftstore]
sync-log = false
region-max-size = "384MB"
region-split-size = "256MB"
region-split-check-diff = "32MB"

[rocksdb]
max-manifest-file-size = "20MB"

[rocksdb.defaultcf]
block-size = "64KB"
compression-per-level = ["no", "no", "lz4", "lz4", "lz4", "zstd", "zstd"]
write-buffer-size = "128MB"
max-write-buffer-number = 5
level0-slowdown-writes-trigger = 20
level0-stop-writes-trigger = 36
max-bytes-for-level-base = "512MB"
target-file-size-base = "32MB"

[rocksdb.writecf]
compression-per-level = ["no", "no", "lz4", "lz4", "lz4", "zstd", "zstd"]
write-buffer-size = "128MB"
max-write-buffer-number = 5
min-write-buffer-number-to-merge = 1
max-bytes-for-level-base = "512MB"
target-file-size-base = "32MB"

[raftdb]
compaction-readahead-size = "2MB"

[raftdb.defaultcf]
compression-per-level = ["no", "no", "lz4", "lz4", "lz4", "zstd", "zstd"]
write-buffer-size = "128MB"
max-write-buffer-number = 5
min-write-buffer-number-to-merge = 1
max-bytes-for-level-base = "512MB"
target-file-size-base = "32MB"
block-cache-size = "256MB"
```
- One tidis server with default configuration:

```
bin/tidis-server -backend ip:2379
```
- Let's start with a single client (concurrency 1)
- GET
redis-benchmark -p 7379 -t GET -r 100000000 -n 10000 -c 1
====== GET ======
10000 requests completed in 5.64 seconds
1 parallel clients
3 bytes payload
keep alive: 1
99.76% <= 1 milliseconds
100.00% <= 2 milliseconds
1773.05 requests per second
- SET
redis-benchmark -p 7379 -t SET -r 100000000 -n 1000 -c 1
====== SET ======
1000 requests completed in 2.23 seconds
1 parallel clients
3 bytes payload
keep alive: 1
0.10% <= 1 milliseconds
12.80% <= 2 milliseconds
99.30% <= 3 milliseconds
99.90% <= 4 milliseconds
100.00% <= 8 milliseconds
448.83 requests per second
If you run this yourself, you will find that tidis looks quite slow.
Why? The main reason is that every write creates a transaction: the transaction timestamp must be obtained from the pd servers, the request must be encoded/decoded and sent as an RPC to the tikv server, and tikv uses raft for replication and two-phase commit for distributed transactions. All of this adds latency, so the throughput of a single connection appears quite small.
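A quick sanity check on the numbers above: with a single connection, requests are strictly serial, so throughput is bounded by the reciprocal of per-request latency. The figures below are taken from the single-client SET run (1000 requests in 2.23 seconds):

```python
# With one connection, throughput ~= 1 / (average per-request latency),
# because each request must complete before the next one starts.
# Numbers from the single-client SET run above.

requests = 1000
elapsed_s = 2.23

avg_latency_ms = elapsed_s / requests * 1000  # average latency per SET
throughput_rps = requests / elapsed_s         # requests per second

print(f"avg latency: {avg_latency_ms:.2f} ms")  # ~2.2 ms per request
print(f"throughput:  {throughput_rps:.2f} req/s")  # ~448 req/s
```

So a ~2 ms round trip (PD timestamp + RPC + raft + 2PC) caps one connection at roughly 450 writes/sec, which matches the 448.83 requests per second reported above.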
- Now with 50 concurrent clients
- GET
redis-benchmark -p 7379 -t GET -r 100000000 -n 10000 -c 50
====== GET ======
10000 requests completed in 0.52 seconds
50 parallel clients
3 bytes payload
keep alive: 1
28.86% <= 1 milliseconds
43.97% <= 2 milliseconds
67.82% <= 3 milliseconds
82.37% <= 4 milliseconds
88.53% <= 5 milliseconds
93.77% <= 6 milliseconds
96.76% <= 7 milliseconds
98.05% <= 8 milliseconds
98.98% <= 9 milliseconds
99.44% <= 10 milliseconds
99.71% <= 11 milliseconds
99.87% <= 12 milliseconds
99.93% <= 13 milliseconds
99.99% <= 14 milliseconds
100.00% <= 15 milliseconds
19379.85 requests per second
- SET
redis-benchmark -p 7379 -t SET -r 100000000 -n 10000 -c 50
====== SET ======
10000 requests completed in 1.21 seconds
50 parallel clients
3 bytes payload
keep alive: 1
0.01% <= 1 milliseconds
0.03% <= 2 milliseconds
0.70% <= 3 milliseconds
9.10% <= 4 milliseconds
31.07% <= 5 milliseconds
57.32% <= 6 milliseconds
77.68% <= 7 milliseconds
89.09% <= 8 milliseconds
94.78% <= 9 milliseconds
97.63% <= 10 milliseconds
98.49% <= 11 milliseconds
98.86% <= 12 milliseconds
99.05% <= 13 milliseconds
99.09% <= 14 milliseconds
99.12% <= 15 milliseconds
99.23% <= 16 milliseconds
99.29% <= 17 milliseconds
99.45% <= 18 milliseconds
99.60% <= 19 milliseconds
99.80% <= 20 milliseconds
99.93% <= 21 milliseconds
99.99% <= 22 milliseconds
100.00% <= 24 milliseconds
8271.30 requests per second
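The 50-client run shows why concurrency helps: the fixed per-request costs overlap across connections. By Little's law, average in-flight requests = throughput × average latency, so the implied average latency for the SET run can be checked against the distribution above:

```python
# Little's law check for the 50-client SET run above:
# concurrency = throughput * avg_latency, so the implied average
# latency is concurrency / throughput.

concurrency = 50
throughput_rps = 8271.30  # reported by redis-benchmark above

implied_avg_latency_ms = concurrency / throughput_rps * 1000
print(f"implied avg latency: {implied_avg_latency_ms:.2f} ms")  # ~6 ms
```

The implied ~6 ms average agrees with the distribution above (the median falls between 5 and 6 ms): per-request latency barely changed versus the single-client run's tail, but 50 overlapping requests multiply the aggregate throughput.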
- Let's batch writes in one transaction
- One client, batching 10 writes per transaction
redis-benchmark -p 7379 -t SET -r 100000000 -n 10000 -c 1 -T 10
====== SET ======
10000 requests completed in 2.54 seconds
1 parallel clients
3 bytes payload
keep alive: 1
3933.91 requests per second
- 50 concurrent clients, batching 10 writes per transaction
redis-benchmark -p 7379 -t SET -r 100000000 -n 100000 -c 50 -T 10
====== SET ======
100000 requests completed in 1.55 seconds
50 parallel clients
3 bytes payload
keep alive: 1
64516.13 requests per second
- 1000 concurrent clients, batching 100 writes per transaction
redis-benchmark -p 7379 -t SET -r 100000000 -n 1000000 -c 1000 -T 100
====== SET ======
1000000 requests completed in 10.19 seconds
1000 parallel clients
3 bytes payload
keep alive: 1
89.90% <= 1 milliseconds
89.94% <= 4 milliseconds
89.96% <= 5 milliseconds
89.99% <= 7 milliseconds
90.00% <= 21 milliseconds
90.01% <= 23 milliseconds
90.02% <= 24 milliseconds
90.03% <= 25 milliseconds
90.06% <= 26 milliseconds
90.15% <= 27 milliseconds
90.29% <= 28 milliseconds
90.52% <= 29 milliseconds
90.95% <= 30 milliseconds
91.53% <= 31 milliseconds
92.44% <= 32 milliseconds
94.50% <= 33 milliseconds
96.27% <= 34 milliseconds
97.32% <= 35 milliseconds
98.27% <= 36 milliseconds
98.87% <= 37 milliseconds
99.16% <= 38 milliseconds
99.31% <= 39 milliseconds
99.51% <= 40 milliseconds
99.58% <= 41 milliseconds
99.64% <= 42 milliseconds
99.73% <= 43 milliseconds
99.83% <= 44 milliseconds
99.90% <= 45 milliseconds
99.92% <= 46 milliseconds
99.94% <= 47 milliseconds
99.95% <= 48 milliseconds
99.98% <= 49 milliseconds
99.99% <= 50 milliseconds
100.00% <= 50 milliseconds
98183.60 requests per second
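Batching helps because the fixed per-transaction cost (PD timestamp fetch, RPC round trip, two-phase commit) is amortized over many writes. Comparing the reported numbers against the unbatched 50-client SET baseline (note the last run also raises concurrency to 1000, so it is not a pure batching comparison):

```python
# Speedup of the batched SET runs over the unbatched 50-client baseline,
# using the requests-per-second figures reported above.

base_rps     = 8271.30   # 50 clients, 1 write per transaction
batch10_rps  = 64516.13  # 50 clients, 10 writes per transaction
batch100_rps = 98183.60  # 1000 clients, 100 writes per transaction

print(f"batch-10 speedup:  {batch10_rps / base_rps:.1f}x")   # ~7.8x
print(f"batch-100 speedup: {batch100_rps / base_rps:.1f}x")  # ~11.9x
```

The speedup is sub-linear in batch size: only the fixed per-transaction overhead is amortized, while the per-write work inside tikv (raft replication, RocksDB writes) still scales with the number of writes.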
This is only a base benchmark for now; benchmark data for more scenarios still needs to be added.