
Aarch64: after a client disconnects, clients can no longer reconnect to the Pika service #1480

Closed
elihe999 opened this issue May 10, 2023 · 5 comments

@elihe999

Pika 3.4.1
Linux host 4.19.90-24.4.v2101.ky10.aarch64 #1 SMP Mon May 24 14:45:37 CST 2021 aarch64 aarch64 aarch64 GNU/Linux

Using the Redis client application AnotherRedisDesktop, I can connect to Pika normally. But as soon as any one connection is dropped, the service breaks: it is no longer possible to connect to the Pika service again, while the other connections remain alive (I had multiple connection tabs open in AnotherRedisDesktop). No new output appears in the file that stdout was redirected to.

I tried debugging again by connecting from code, doing much the same as in AnotherRedisDesktop: after connecting to Pika via the PHP CLI and then disconnecting, I likewise cannot reconnect (read error on connection to xxx:6379 ... that particular message comes from my own code). The error AnotherRedisDesktop reports is "Redis Client On Error: Error: write ECONNABORTED Config right?"

Then I connected with redis-cli to try to track the problem down. I could get in as well, wrote an arbitrary key, and then queried it:

127.0.0.1:6379> keys *
Error: Server closed the connection
not connected> keys *
"dfadfafadfaf"
"name"
127.0.0.1:6379> keys *
Error: Server closed the connection
not connected> keys *
"dfadfafadfaf"
"name"
127.0.0.1:6379> keys *
Error: Server closed the connection
not connected> keys *
"dfadfafadfaf"
"name"

127.0.0.1:6379> set test 123
Error: Server closed the connection
not connected> set test 123
OK
127.0.0.1:6379> keys *
Error: Server closed the connection
not connected> keys *
"dfadfafadfaf"
"name"
"test"
127.0.0.1:6379> set test2 sksksksks
Error: Broken pipe
not connected> set test2 sksksksks
OK
127.0.0.1:6379> set test2 sksksksks1111
Error: Server closed the connection
not connected> set test2 sksksksks1111
OK
127.0.0.1:6379> get test2
Error: Server closed the connection
not connected> get test2
"sksksksks1111"

The anomaly is that when issuing Redis commands, one attempt works normally (I can see the keys I wrote), the next one reports "Error: Server closed the connection" and the prompt turns into "not connected", and one more command later the connection error is back again. I tried the same thing on x86 and this problem does not occur there.
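
The same alternation can be shown non-interactively. A minimal sketch, assuming localhost and the requirepass from the config below (the "probe" key is just an illustration):

# Each redis-cli invocation opens a fresh connection, runs one command,
# and disconnects. Per the behaviour above, on Aarch64 every other
# attempt should fail with "Error: Server closed the connection".
for i in 1 2 3 4; do
  redis-cli -h 127.0.0.1 -p 6379 -a password set probe "$i"
done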

How Pika was installed: when building the Aarch64 Pika binary, protobuf was installed from an offline rpm package and slash was compiled and installed. During installation, libglog.so.1 was copied over to the offline environment, and then ldd on libglog plus /sbin/ldconfig were used to set up the library links so that Pika could run.
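
For reference, the library setup amounted to roughly the following (exact paths are from memory and may differ on the actual machine):

rpm -ivh protobuf-*.rpm          # protobuf from the offline rpm package
cp libglog.so.1 /usr/local/lib/  # copy the prebuilt glog library into place
/sbin/ldconfig                   # refresh the shared-library cache
ldd ./pika | grep glog           # verify the pika binary resolves libglog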

@elihe999
Author

# Pika port
port : 6379
# Thread Number
thread-num : 1

thread-pool-size : 12

sync-thread-num : 6

log-path : /data/redis/log

db-path : /data/redis/db/
# Pika write-buffer-size
write-buffer-size : 268435456
# size of one block in arena memory allocation.
# If <= 0, a proper value is automatically calculated
# (usually 1/8 of write-buffer-size, rounded up to a multiple of 4KB)
arena-block-size :
# Pika timeout
timeout : 60
# Requirepass
requirepass : password
# Masterauth
# masterauth :
# Userpass
# userpass :
# User Blacklist
userblacklist :
# If this option is set to 'classic', pika supports multiple DBs; in this mode
# the 'databases' option takes effect.
# If this option is set to 'sharding', pika supports multiple tables; you can
# specify the slot number for each table, and in this mode the 'default-slot-num' option takes effect.
# Pika instance mode [classic | sharding]
instance-mode : classic
# Set the number of databases. The default database is DB 0; you can select
# a different one on a per-connection basis using SELECT <dbid>, where
# dbid is a number between 0 and 'databases' - 1, limited to [1, 8]
databases : 8
# default slot number each table in sharding mode
default-slot-num : 1024
# replication-num defines how many followers there are in a single Raft group; only [0, 1, 2, 3, 4] is valid
replication-num : 0
# consensus-level defines how many confirmations the leader must receive before
#                 committing a log entry back to the client; only [0, ..., replication-num] is valid
consensus-level : 0
# Dump Prefix
dump-prefix :
# daemonize  [yes | no]
daemonize : yes
# Dump Path
dump-path : /data/redis/dump/
# Expire-dump-days
dump-expire : 0
# pidfile Path
pidfile : /data/redis/pika.pid
# Max Connection
maxclients : 20000
# the per-file size of SST files to compact; default is 20M
target-file-size-base : 20971520
# Expire-logs-days
expire-logs-days : 7
# Expire-logs-nums
expire-logs-nums : 10
# Root-connection-num
root-connection-num : 2
# Slowlog-write-errorlog
slowlog-write-errorlog : no
# Slowlog-log-slower-than
slowlog-log-slower-than : 10000
# Slowlog-max-len
slowlog-max-len : 128
# Pika db sync path
db-sync-path : /data/redis/dbsync/
# db sync speed (MB); max is 1024MB, min is 0, and any value below 0 or above 1024 will be adjusted to 1024
db-sync-speed : -1
# The slave priority
slave-priority : 100
# network interface
#network-interface : eth1
# replication
#slaveof : master-ip:master-port
# CronTask, format 1: start-end/ratio, like 02-04/60; pika will check whether to schedule a compaction between 2 and 4 o'clock every day
#                   if the freesize/disksize > 60%.
#           format 2: week/start-end/ratio, like 3/02-04/60; pika will check whether to schedule a compaction between 2 and 4 o'clock
#                   every Wednesday, if the freesize/disksize > 60%.
#           NOTICE: if compact-interval is set, compact-cron will be masked and disabled.
#
#compact-cron : 3/02-04/60
# Compact-interval, format: interval/ratio, like 6/60; pika will check whether to schedule a compaction every 6 hours,
#                           if the freesize/disksize > 60%. NOTICE: compact-interval takes priority over compact-cron;
#compact-interval :
# the size of the flow-control window while syncing the binlog between master and slave. Default is 9000 and the maximum is 90000.
sync-window-size : 9000
# max value of connection read buffer size: configurable value 67108864(64MB) or 268435456(256MB) or 536870912(512MB)
#                                           default value is 268435456(256MB)
#                                           NOTICE: master and slave should share exactly the same value
max-conn-rbuf-size : 268435456
###################
## Critical Settings
###################
# write-binlog  [yes | no]
write-binlog : no
# binlog file size: default is 100M, limited to [1K, 2G]
binlog-file-size : 104857600
# Automatically triggers a small compaction according to statistics.
# Uses the cache to store up to 'max-cache-statistic-keys' keys;
# if 'max-cache-statistic-keys' is set to '0', the statistics function is turned off
# and small compactions are no longer triggered automatically.
max-cache-statistic-keys : 0
# When a specific multi-data-structure key is deleted or overwritten 'small-compaction-threshold' times,
# a small compaction is triggered automatically; default is 5000, limited to [1, 100000]
small-compaction-threshold : 5000
# If the total size of all live memtables of all the DBs exceeds
# the limit, a flush will be triggered in the next DB to which the next write
# is issued.
max-write-buffer-size : 10737418240
# The maximum number of write buffers that are built up in memory for one ColumnFamily in DB.
# The default and the minimum number is 2, so that when 1 write buffer
# is being flushed to storage, new writes can continue to the other write buffer.
# If max-write-buffer-number > 3, writing will be slowed down
# if we are writing to the last write buffer allowed.
max-write-buffer-number : 2
# Limit the response size of some commands, like SCAN and KEYS*
max-client-response-size : 1073741824
# Compression type supported [snappy, zlib, lz4, zstd]
compression : snappy
# max-background-flushes: default is 1, limited to [1, 4]
max-background-flushes : 1
# max-background-compactions: default is 2, limited to [1, 8]
max-background-compactions : 2
# maximum value of Rocksdb cached open file descriptors
max-cache-files : 5000
# max_bytes_for_level_multiplier: default is 10, you can change it to 5
max-bytes-for-level-multiplier : 10
protected-mode: no
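
If Pika's CONFIG GET works for these options (it implements a subset of the Redis CONFIG command; treat this as an untested sketch), the effective values can be checked against the file above:

redis-cli -p 6379 -a password config get timeout
redis-cli -p 6379 -a password config get maxclients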

@AlexStocks
Contributor

Could you try the code from the latest unstable branch and see whether this problem still occurs?
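
A sketch of fetching that branch (the repository path is an assumption):

git clone https://github.com/OpenAtomFoundation/pika.git
cd pika
git checkout unstable
# then build as described in the project README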

@wanghenshui
Collaborator

@AlexStocks Please delete the 3.4.0 and 3.4.1 tags; people keep using them, and those two versions are riddled with problems.

@AlexStocks
Contributor

@AlexStocks Please delete the 3.4.0 and 3.4.1 tags; people keep using them, and those two versions are riddled with problems.

How about marking them as unstable and not recommended instead? Deleting them outright doesn't seem right.

@wanghenshui
Collaborator

Use 3.3.6 or the unstable branch.
