Redis: Configuration and performance on Fedora & Red Hat

This is a short article collecting my Redis configurations on Linux (Redis, from REmote DIctionary Server, which can be translated as "remote dictionary server"), both to share them and to get them corrected where necessary. Redis is written in C, like all good software, and belongs to the NoSQL family ( https://fr.wikipedia.org/wiki/NoSQL ). The most interesting ones to know are: Redis, MongoDB ( https://www.mongodb.com/fr ) and CouchDB ( http://couchdb.apache.org ).

The official Redis site is https://redis.io ; the latest stable version is 4.0.1 (Jul 24 CEST 2017), with release notes at https://raw.githubusercontent.com/antirez/redis/4.0/00-RELEASENOTES . The version I use is 2.8.4 (13 Jan CEST 2014), with release notes at https://raw.githubusercontent.com/antirez/redis/2.8/00-RELEASENOTES . On Twitter, follow @redisfeed .

Configuration and performance on Red Hat 7.2 (64-bit):

Here is my Redis configuration on Red Hat 7.2:

[root@]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.2 (Maipo)

Next, the service configuration:

[root@]# cat /etc/systemd/system/multi-user.target.wants/redis.service

[Unit]
Description=A persistent key-value database
After=network.target
[Service]
Type=forking
User=root
Group=root
PIDFile=/var/run/redis_6379.pid
ExecStart=/usr/local/bin/redis-server /etc/redis/6379.conf
ExecStop=/usr/local/bin/redis-cli shutdown
LimitCORE=infinity
Restart=always
RestartSec=50
TimeoutStartSec=30

[Install]
WantedBy=multi-user.target
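One coupling in this unit is easy to miss: Type=forking only works because the Redis config below sets daemonize yes, so redis-server forks and systemd tracks the child through the PID file. A minimal sketch of checking that the two files agree (the sample paths under /tmp are illustrative, not the real ones):

```shell
# Sketch: verify that the systemd unit and the Redis config agree.
# Type=forking in the unit requires "daemonize yes" in the Redis config.
# Sample files are written to /tmp; point the variables at your real paths.
unit=/tmp/redis.service.sample
conf=/tmp/6379.conf.sample

cat > "$unit" <<'EOF'
[Service]
Type=forking
PIDFile=/var/run/redis_6379.pid
EOF

cat > "$conf" <<'EOF'
daemonize yes
pidfile /var/run/redis_6379.pid
EOF

if grep -q '^Type=forking' "$unit" && grep -q '^daemonize yes' "$conf"; then
  echo "consistent: forking unit + daemonized Redis"
else
  echo "mismatch: Type=forking needs daemonize yes" >&2
fi
```

If the two disagree, systemd either never sees the service as started or loses track of the main process.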

The Redis configuration:

[root@]# cat /etc/redis/6379.conf

aof-rewrite-incremental-fsync yes
daemonize yes
pidfile /var/run/redis_6379.pid
port 6379
maxclients 6000
timeout 0
tcp-keepalive 0
loglevel notice
logfile /var/log/redis_6379.log
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /var/lib/redis/6379
slave-serve-stale-data yes
slave-read-only yes
repl-disable-tcp-nodelay no
slave-priority 100
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
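The three save lines in this config are the RDB snapshot rules: "save <seconds> <changes>" triggers a background dump if at least <changes> keys changed within <seconds>. A small sketch that reads them back in plain words (the sample file in /tmp just reproduces the directives above):

```shell
# Decode the RDB "save" rules: save <seconds> <changes> means snapshot
# the DB if at least <changes> keys changed within <seconds>.
conf=/tmp/redis-save.sample
cat > "$conf" <<'EOF'
save 900 1
save 300 10
save 60 10000
EOF

awk '$1 == "save" { printf "snapshot if >= %s change(s) in %s s\n", $3, $2 }' "$conf"
```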

The sysctl.conf configuration (the OS side):

[root@]# cat /etc/sysctl.conf

net.ipv4.ip_forward = 0
vm.overcommit_memory = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_tw_reuse = 1
net.netfilter.nf_conntrack_max = 1048576
net.ipv4.tcp_max_tw_buckets = 2000000
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
fs.file-max = 100000
net.ipv4.ip_local_port_range = 1025 65535
net.ipv4.tcp_syncookies = 1
vm.swappiness = 10
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.rmem_default = 16777216
net.core.wmem_default = 16777216
net.core.optmem_max = 40960
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_slow_start_after_idle = 0
net.ipv4.tcp_fin_timeout = 10
net.ipv4.tcp_max_syn_backlog = 999999999
net.core.netdev_max_backlog = 25000
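Editing /etc/sysctl.conf does nothing by itself; `sysctl -p` (as root) reloads the file. A read-only way to check what the running kernel actually uses is to read /proc/sys directly, where each sysctl key maps to a path with dots replaced by slashes:

```shell
# Read-only check of the settings Redis cares about most
# (vm.overcommit_memory = 1 lets the BGSAVE fork succeed on a full box).
for key in vm/overcommit_memory vm/swappiness fs/file-max; do
  name=$(echo "$key" | tr / .)
  printf '%s = %s\n' "$name" "$(cat /proc/sys/$key)"
done
```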

Here is the Redis version I use: 2.8.4

                _._
          _.-``__ ''-._
      _.-``    `.  `_.  ''-._           Redis 2.8.4 (00000000/0) 64 bit
  .-`` .-```.  ```\/    _.,_ ''-._
(    '      ,       .-`  | `,    )     Running in stand alone mode
|`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
|    `-._   `._    /     _.-'    |     PID: 28796
  `-._    `-._  `-./  _.-'    _.-'
|`-._`-._    `-.__.-'    _.-'_.-'|
|    `-._`-._        _.-'_.-'    |           http://redis.io
  `-._    `-._`-.__.-'_.-'    _.-'
|`-._`-._    `-.__.-'    _.-'_.-'|
|    `-._`-._        _.-'_.-'    |
  `-._    `-._`-.__.-'_.-'    _.-'
      `-._    `-.__.-'    _.-'
          `-._        _.-'
              `-.__.-'


[28796] 06 Sep 09:26:49.382 # Server started, Redis version 2.8.4
[28796] 06 Sep 09:26:49.386 * DB loaded from disk: 0.005 seconds
[28796] 06 Sep 09:26:49.386 * The server is now ready to accept connections on port 6379

Redis performance depends on the processors (model and count), the RAM, the disk write speed, the OS, and the configuration of both Redis and the OS.

[root@]# cat /proc/cpuinfo | grep "model name"
model name      : Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz
[root@]# cat /proc/meminfo | head -4
MemTotal:        3785880 kB
MemFree:         3541916 kB
Buffers:               0 kB
Cached:           101008 kB
[root@]# dd if=/dev/zero of=tempfile bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.678386 s, 1.6 GB/s
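A caveat on that dd figure: without a sync option, dd mostly measures the page cache, which is how multi-GB/s numbers appear on ordinary disks. Adding conv=fdatasync forces the data to disk before dd prints its timing, giving a figure closer to what Redis sees on an RDB dump. A quick sketch with a small, fast count:

```shell
# dd with conv=fdatasync flushes the data to disk before reporting the
# timing; the plain dd above largely measured the page cache.
# Small count so the sketch runs fast; raise it for a real measurement.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=16 conv=fdatasync 2>&1 | tail -1
```

Remove /tmp/ddtest afterwards, and expect a noticeably lower (more honest) throughput figure.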

The Redis benchmark tool therefore reports:

[root@]# redis-benchmark
====== PING_INLINE ======
10000 requests completed in 0.12 seconds
50 parallel clients
3 bytes payload
keep alive: 1

99.51% <= 2 milliseconds
100.00% <= 2 milliseconds
86206.90 requests per second

====== PING_BULK ======
10000 requests completed in 0.09 seconds
50 parallel clients
3 bytes payload
keep alive: 1

99.51% <= 1 milliseconds
100.00% <= 1 milliseconds
106382.98 requests per second

====== SET ======
10000 requests completed in 0.10 seconds
50 parallel clients
3 bytes payload
keep alive: 1

99.51% <= 1 milliseconds
100.00% <= 1 milliseconds
103092.78 requests per second

====== GET ======
10000 requests completed in 0.09 seconds
50 parallel clients
3 bytes payload
keep alive: 1

99.51% <= 1 milliseconds
100.00% <= 1 milliseconds
106382.98 requests per second

====== INCR ======
10000 requests completed in 0.10 seconds
50 parallel clients
3 bytes payload
keep alive: 1

99.51% <= 1 milliseconds
100.00% <= 1 milliseconds
103092.78 requests per second

====== LPUSH ======
10000 requests completed in 0.09 seconds
50 parallel clients
3 bytes payload
keep alive: 1

99.51% <= 1 milliseconds
100.00% <= 1 milliseconds
106382.98 requests per second

====== LPOP ======
10000 requests completed in 0.10 seconds
50 parallel clients
3 bytes payload
keep alive: 1

99.51% <= 1 milliseconds
100.00% <= 1 milliseconds
99009.90 requests per second

====== SADD ======
10000 requests completed in 0.09 seconds
50 parallel clients
3 bytes payload
keep alive: 1

99.51% <= 1 milliseconds
100.00% <= 1 milliseconds
105263.16 requests per second

====== SPOP ======
10000 requests completed in 0.10 seconds
50 parallel clients
3 bytes payload
keep alive: 1

99.51% <= 1 milliseconds
100.00% <= 1 milliseconds
101010.10 requests per second

====== LPUSH (needed to benchmark LRANGE) ======
10000 requests completed in 0.09 seconds
50 parallel clients
3 bytes payload
keep alive: 1

99.51% <= 1 milliseconds
100.00% <= 1 milliseconds
107526.88 requests per second

====== LRANGE_100 (first 100 elements) ======
10000 requests completed in 0.22 seconds
50 parallel clients
3 bytes payload
keep alive: 1

96.40% <= 1 milliseconds
99.99% <= 2 milliseconds
100.00% <= 2 milliseconds
44444.45 requests per second

====== LRANGE_300 (first 300 elements) ======
10000 requests completed in 0.54 seconds
50 parallel clients
3 bytes payload
keep alive: 1

2.04% <= 1 milliseconds
66.86% <= 2 milliseconds
99.63% <= 3 milliseconds
100.00% <= 3 milliseconds
18518.52 requests per second

====== LRANGE_500 (first 450 elements) ======
10000 requests completed in 0.81 seconds
50 parallel clients
3 bytes payload
keep alive: 1

0.01% <= 1 milliseconds
27.58% <= 2 milliseconds
68.13% <= 3 milliseconds
96.15% <= 4 milliseconds
99.91% <= 5 milliseconds
100.00% <= 5 milliseconds
12406.95 requests per second

====== LRANGE_600 (first 600 elements) ======
10000 requests completed in 1.02 seconds
50 parallel clients
3 bytes payload
keep alive: 1

0.01% <= 1 milliseconds
12.17% <= 2 milliseconds
42.39% <= 3 milliseconds
73.16% <= 4 milliseconds
97.02% <= 5 milliseconds
99.87% <= 6 milliseconds
100.00% <= 6 milliseconds
9765.62 requests per second

====== MSET (10 keys) ======
10000 requests completed in 0.13 seconds
50 parallel clients
3 bytes payload
keep alive: 1

99.51% <= 1 milliseconds
100.00% <= 1 milliseconds
76923.08 requests per second
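The "requests per second" line is simply the completed requests divided by the elapsed time; redis-benchmark keeps more precision internally than the two decimals it prints, which is why 10000 requests in a rounded "0.10 seconds" can yield 103092.78 req/s. The 0.097 s below is an assumed value consistent with the SET figure above:

```shell
# requests per second = completed requests / elapsed seconds.
# 0.097 s is an assumption that matches the printed SET throughput.
awk 'BEGIN { printf "%.2f requests per second\n", 10000 / 0.097 }'
# -> 103092.78 requests per second
```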

Configuration and performance on Fedora 17 (64-bit):

Here is the same information on Fedora 17:

[root@]# cat /etc/redhat-release
Fedora release 17 (Beefy Miracle)

Service configuration (note the Type, which is "simple" here, and the ExecStop, which is a plain "kill"):

[root@]# cat /usr/lib/systemd/system/redis_6379.service
[Unit]
Description=A persistent key-value database
After=network.target

[Service]
Type=simple
User=root
Group=root
Environment=TERM=linux
Environment=LANG= LANGUAGE= LC_CTYPE= LC_NUMERIC= LC_TIME= LC_COLLATE= LC_MONETARY= LC_MESSAGES= LC_PAPER= LC_NAME= LC_ADDRESS= LC_TELEPHONE= LC_MEASUREMENT= LC_IDENTIFICATION=
WorkingDirectory=/tmp/
RootDirectoryStartOnly=yes

PIDFile=/var/run/redis_6379.pid
ExecStart=/usr/local/bin/redis-server /etc/redis/6379.conf --loglevel verbose
ExecStop=/bin/kill -15 $MAINPID
LimitCORE=infinity
Restart=always
RestartSec=50

[Install]
WantedBy=multi-user.target
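Two details of this unit are worth spelling out. First, `kill -15` sends SIGTERM, which Redis traps to save the dataset and exit cleanly (unlike SIGKILL, signal 9). Second, with Type=simple, systemd expects the process to stay in the foreground, so strictly speaking daemonize no would be the matching Redis setting here. A quick check of which signal number 15 names:

```shell
# Signal 15 is SIGTERM; Redis installs a handler for it and shuts down
# cleanly. SIGKILL (9) cannot be caught and would skip the clean shutdown.
kill -l 15
```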

Redis configuration:

[root@]# cat /etc/redis/6379.conf | grep -v "^#"  | grep -v "^$"
daemonize yes
pidfile /var/run/redis_6379.pid
port 6379
timeout 0
tcp-keepalive 0
loglevel notice
logfile /var/log/redis_6379.log
databases 16
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /var/lib/redis/6379
slave-serve-stale-data yes
slave-read-only yes
repl-disable-tcp-nodelay no
slave-priority 100
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes

OS configuration:

[root@]# cat /etc/sysctl.conf | grep -v "^#"  | grep -v "^$"
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
kernel.shmmax = 134217728
vm.swappiness = 10
vm.overcommit_memory = 1
vm.dirty_background_ratio = 5
vm.dirty_ratio = 50
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 20000 65535
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_keepalive_probes = 5
net.ipv4.tcp_keepalive_intvl = 15
net.core.somaxconn = 65535
net.core.netdev_max_backlog = 4096
net.ipv4.tcp_max_syn_backlog = 1024
net.core.rmem_default = 16777216
net.core.wmem_default = 16777216
net.core.rmem_max = 8388608
net.core.wmem_max = 8388608
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_tw_recycle = 1
kernel.sched_migration_cost_ns = 5000000
kernel.sched_autogroup_enabled = 0
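One warning on this file: net.ipv4.tcp_tw_recycle breaks connections from clients behind NAT and was removed entirely in Linux 4.12, so on a modern kernel the line is at best a no-op. A small probe to see whether the running kernel still exposes it:

```shell
# tcp_tw_recycle was removed in Linux 4.12; check whether this kernel
# still exposes the knob before relying on the sysctl.conf line above.
if [ -e /proc/sys/net/ipv4/tcp_tw_recycle ]; then
  echo "tcp_tw_recycle present: $(cat /proc/sys/net/ipv4/tcp_tw_recycle)"
else
  echo "tcp_tw_recycle absent on this kernel"
fi
```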

Memory:

[root@ ]# cat /proc/meminfo | head -4
MemTotal:        2051388 kB
MemFree:           64672 kB
Buffers:            5140 kB
Cached:          1483944 kB

Processor count and model:

[root@]# cat /proc/cpuinfo | grep "model name"
model name      : Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz
model name      : Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz
model name      : Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz
model name      : Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz

Hard disk performance:

[root@]# dd if=/dev/zero of=tempfile bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.24509 s, 862 MB/s

The performance figures:

[root@]# redis-benchmark
====== PING_INLINE ======
  10000 requests completed in 0.07 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

100.00% <= 0 milliseconds
135135.14 requests per second
====== PING_BULK ======
  10000 requests completed in 0.07 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

100.00% <= 0 milliseconds
140845.06 requests per second
====== SET ======
  10000 requests completed in 0.07 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

100.00% <= 0 milliseconds
142857.14 requests per second
====== GET ======
  10000 requests completed in 0.08 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.73% <= 1 milliseconds
100.00% <= 1 milliseconds
131578.95 requests per second

====== INCR ======
  10000 requests completed in 0.09 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.91% <= 1 milliseconds
100.00% <= 1 milliseconds
114942.53 requests per second
====== LPUSH ======
  10000 requests completed in 0.07 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.98% <= 1 milliseconds
100.00% <= 1 milliseconds
147058.81 requests per second
====== LPOP ======
  10000 requests completed in 0.09 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.99% <= 1 milliseconds
100.00% <= 1 milliseconds
117647.05 requests per second
====== SADD ======
  10000 requests completed in 0.07 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

100.00% <= 0 milliseconds
142857.14 requests per second
====== SPOP ======
  10000 requests completed in 0.07 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

100.00% <= 0 milliseconds
142857.14 requests per second
====== LPUSH (needed to benchmark LRANGE) ======
  10000 requests completed in 0.07 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

100.00% <= 0 milliseconds
144927.55 requests per second
====== LRANGE_100 (first 100 elements) ======
  10000 requests completed in 0.17 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

98.87% <= 1 milliseconds
100.00% <= 1 milliseconds
59523.81 requests per second
====== LRANGE_300 (first 300 elements) ======
  10000 requests completed in 0.47 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

14.92% <= 1 milliseconds
99.27% <= 2 milliseconds
100.00% <= 2 milliseconds
21413.28 requests per second
====== LRANGE_500 (first 450 elements) ======
  10000 requests completed in 0.65 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

0.15% <= 1 milliseconds
93.53% <= 2 milliseconds
99.51% <= 3 milliseconds
99.96% <= 4 milliseconds
100.00% <= 4 milliseconds
15337.42 requests per second
====== LRANGE_600 (first 600 elements) ======
  10000 requests completed in 0.95 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

0.10% <= 1 milliseconds
35.34% <= 2 milliseconds
84.92% <= 3 milliseconds
92.32% <= 4 milliseconds
99.36% <= 5 milliseconds
99.97% <= 6 milliseconds
100.00% <= 6 milliseconds
10537.41 requests per second

====== MSET (10 keys) ======
  10000 requests completed in 0.10 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.70% <= 1 milliseconds
99.92% <= 2 milliseconds
99.97% <= 3 milliseconds
100.00% <= 3 milliseconds
97087.38 requests per second

 
