mirror of
https://github.com/fatedier/frp.git
synced 2026-05-15 08:05:49 -06:00
[GH-ISSUE #4223] frpc 0.58.0: frequent SSH disconnections #3325
Reference: github-starred/frp#3325
Originally created by @islercn on GitHub (May 17, 2024).
Original GitHub issue: https://github.com/fatedier/frp/issues/4223
Bug Description
frps is deployed on Alibaba Cloud; frpc runs on several Ubuntu 22.04 hosts on an internal network. Both server and client are v0.58.0. Since upgrading to 0.58.0, SSH sessions from the internet to the internal machines drop very quickly, lasting no more than about five minutes. Reconnecting succeeds, but the session soon drops again. Keeping the server on 0.58.0 and downgrading only the client to 0.57.0 makes the problem disappear.
frpc Version
0.58.0
frps Version
0.58.0
System Architecture
Ubuntu 22.04
Configurations
Server configuration
bindAddr = "0.0.0.0"
bindPort = 10000
auth.method = "token"
auth.token = "Pass5678"
Client configuration
serverAddr = "<server IP>"
serverPort = 10000
transport.tls.enable = true
auth.method = "token"
auth.token = "Pass5678"
[[proxies]]
name = "ssh-001"
type = "tcp"
localIP = "127.0.0.1"
localPort = 2222
remotePort = 10001
Logs
2024-05-17 14:42:30.781 [I] [sub/root.go:142] start frpc service for config file [/home/xxx/frpc.toml]
2024-05-17 14:42:30.781 [I] [client/service.go:294] try to connect to server...
2024-05-17 14:42:30.802 [I] [client/service.go:286] [240bfa9ecca5d40f] login to server success, get run id [240bfa9ecca5d40f]
2024-05-17 14:42:30.802 [I] [proxy/proxy_manager.go:173] [240bfa9ecca5d40f] proxy added: [ssh-001]
2024-05-17 14:42:30.802 [T] [proxy/proxy_wrapper.go:200] [240bfa9ecca5d40f] [ssh-001] change status from [new] to [wait start]
2024-05-17 14:42:30.808 [I] [client/control.go:168] [240bfa9ecca5d40f] [ssh-001] start proxy success
2024-05-17 14:42:33.599 [D] [proxy/proxy_wrapper.go:260] [240bfa9ecca5d40f] [ssh-001] start a new work connection, localAddr: localip:53416 remoteAddr: serverip:10000
2024-05-17 14:42:33.599 [T] [proxy/proxy.go:144] [240bfa9ecca5d40f] [ssh-001] handle tcp work connection, useEncryption: false, useCompression: false
2024-05-17 14:42:33.599 [D] [proxy/proxy.go:210] [240bfa9ecca5d40f] [ssh-001] join connections, localConn(l[127.0.0.1:44216] r[127.0.0.1:2222]) workConn(l[localip:53416] r[serverip:10000])
2024-05-17 14:42:56.595 [D] [proxy/proxy_wrapper.go:260] [240bfa9ecca5d40f] [ssh-001] start a new work connection, localAddr: localip:53416 remoteAddr: serverip:10000
2024-05-17 14:42:56.595 [T] [proxy/proxy.go:144] [240bfa9ecca5d40f] [ssh-001] handle tcp work connection, useEncryption: false, useCompression: false
2024-05-17 14:42:56.595 [D] [proxy/proxy.go:210] [240bfa9ecca5d40f] [ssh-001] join connections, localConn(l[127.0.0.1:44928] r[127.0.0.1:2222]) workConn(l[localip:53416] r[serverip:10000])
2024-05-17 14:42:59.044 [D] [proxy/proxy.go:222] [240bfa9ecca5d40f] [ssh-001] join connections closed
2024-05-17 14:42:59.044 [T] [proxy/proxy.go:224] [240bfa9ecca5d40f] [ssh-001] join connections errors: [writeto tcp 127.0.0.1:44928->127.0.0.1:2222: read tcp 127.0.0.1:44928->127.0.0.1:2222: use of closed network connection]
2024-05-17 14:44:00.858 [T] [client/control.go:145] [240bfa9ecca5d40f] work connection closed before response StartWorkConn message: EOF
2024-05-17 14:44:00.858 [I] [client/service.go:294] [240bfa9ecca5d40f] try to connect to server...
2024-05-17 14:44:00.858 [D] [proxy/proxy.go:222] [240bfa9ecca5d40f] [ssh-001] join connections closed
2024-05-17 14:44:00.858 [T] [proxy/proxy.go:224] [240bfa9ecca5d40f] [ssh-001] join connections errors: [writeto tcp 127.0.0.1:44216->127.0.0.1:2222: read tcp 127.0.0.1:44216->127.0.0.1:2222: use of closed network connection]
2024-05-17 14:44:00.880 [I] [client/service.go:286] [240bfa9ecca5d40f] login to server success, get run id [240bfa9ecca5d40f]
2024-05-17 14:44:00.880 [I] [proxy/proxy_manager.go:173] [240bfa9ecca5d40f] proxy added: [ssh-001]
2024-05-17 14:44:00.880 [T] [proxy/proxy_wrapper.go:200] [240bfa9ecca5d40f] [ssh-001] change status from [new] to [wait start]
2024-05-17 14:44:00.886 [I] [client/control.go:168] [240bfa9ecca5d40f] [ssh-001] start proxy success
2024-05-17 14:44:15.044 [D] [proxy/proxy_wrapper.go:260] [240bfa9ecca5d40f] [ssh-001] start a new work connection, localAddr: localip:33750 remoteAddr: serverip:10000
2024-05-17 14:44:15.044 [T] [proxy/proxy.go:144] [240bfa9ecca5d40f] [ssh-001] handle tcp work connection, useEncryption: false, useCompression: false
2024-05-17 14:44:15.044 [D] [proxy/proxy.go:210] [240bfa9ecca5d40f] [ssh-001] join connections, localConn(l[127.0.0.1:42170] r[127.0.0.1:2222]) workConn(l[localip:33750] r[serverip:10000])
2024-05-17 14:44:18.990 [D] [proxy/proxy_wrapper.go:260] [240bfa9ecca5d40f] [ssh-001] start a new work connection, localAddr: localip:33750 remoteAddr: serverip:10000
2024-05-17 14:44:18.990 [T] [proxy/proxy.go:144] [240bfa9ecca5d40f] [ssh-001] handle tcp work connection, useEncryption: false, useCompression: false
2024-05-17 14:44:18.990 [D] [proxy/proxy.go:210] [240bfa9ecca5d40f] [ssh-001] join connections, localConn(l[127.0.0.1:33576] r[127.0.0.1:2222]) workConn(l[localip:33750] r[serverip:10000])
2024-05-17 14:44:19.054 [D] [proxy/proxy.go:222] [240bfa9ecca5d40f] [ssh-001] join connections closed
2024-05-17 14:44:19.054 [T] [proxy/proxy.go:224] [240bfa9ecca5d40f] [ssh-001] join connections errors: [writeto tcp 127.0.0.1:42170->127.0.0.1:2222: read tcp 127.0.0.1:42170->127.0.0.1:2222: use of closed network connection]
2024-05-17 14:44:21.191 [D] [proxy/proxy.go:222] [240bfa9ecca5d40f] [ssh-001] join connections closed
2024-05-17 14:44:21.191 [T] [proxy/proxy.go:224] [240bfa9ecca5d40f] [ssh-001] join connections errors: [writeto tcp 127.0.0.1:33576->127.0.0.1:2222: read tcp 127.0.0.1:33576->127.0.0.1:2222: use of closed network connection]
Steps to reproduce
Affected area
@swanbylei commented on GitHub (May 18, 2024):
Hitting the same problem. frps 0.58.0 on the server, frpc 0.58.0 on Windows. Connections drop intermittently. An older frpc, version 0.53.2, has no issues at all.
@MrLiuGangQiang commented on GitHub (May 19, 2024):
Same problem here; I thought I had misconfigured something. With 0.57.0 it is rock solid, with 0.58.0 it drops almost immediately.
@fatedier commented on GitHub (May 20, 2024):
Please test each of the following configuration changes separately:
an SSH client setting such as ServerAliveInterval 60;
transport.tcpMuxKeepaliveInterval = 30;
transport.heartbeatInterval = 30.
@xingtongsf commented on GitHub (May 23, 2024):
I'm not sure whether my problem is the same one. My client is tiny-frpc, which connects to the server over ssh. Each time, the proxy works for roughly 20 seconds and then simply drops; after a restart it works for another 20 seconds or so. The server side is 0.58, and the client has no options like transport.tcpMuxKeepaliveInterval = 30 configured.
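fatedier's three suggested tests map onto one SSH client option and two frpc TOML keys. A sketch of the frp side (ServerAliveInterval belongs in the SSH client's own config, e.g. ~/.ssh/config, not in frp):

```toml
# frpc.toml - per the suggestion above, test each change on its own
transport.tcpMuxKeepaliveInterval = 30  # keepalive at the TCP-mux layer
transport.heartbeatInterval = 30        # frp application-level heartbeat
```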
@fatedier commented on GitHub (May 23, 2024):
@xingtongsf Please file your issue in the tiny-frpc repo.
@YuxuanZuo commented on GitHub (May 28, 2024):
I have a similar problem here, proxying a Minecraft Java Edition server over TCP. Both client and server are 0.58.0. Adding transport.heartbeatInterval = 30 to frpc fixed it.
@fatedier commented on GitHub (May 29, 2024):
@YuxuanZuo Please also test the other configuration changes, so we can pin down the exact cause.
@YuxuanZuo commented on GitHub (May 29, 2024):
Adding transport.tcpMuxKeepaliveInterval = 30 alone has no effect; adding transport.heartbeatInterval = 10 alone, or both together, fixes the problem. From watching the server's trace logs, a heartbeat timeout entry appears roughly every ten-odd seconds, so the client apparently isn't sending heartbeats on schedule.
@fatedier commented on GitHub (May 30, 2024):
@YuxuanZuo Could you paste your full configuration? The release notes say this should only happen when frps is an older version. Please confirm the server has been updated to the latest release; I can't reproduce this locally.
@fatedier commented on GitHub (May 30, 2024):
@YuxuanZuo If you are using the legacy INI format, the default values may be wrong, but from your description you're probably not using INI? Yours may be a separate problem; feel free to open another issue to track it. I will fix the INI problem.
If no one keeps reporting the ssh problem in this issue, I will close it.
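For anyone still on the legacy INI format that fatedier mentions, the equivalent knobs sit under [common]. A sketch with illustrative values (pre-1.0 snake_case key names):

```ini
# frpc.ini (legacy format) - illustrative values only
[common]
heartbeat_interval = 30
heartbeat_timeout = 90
tcp_mux = true
```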
@YuxuanZuo commented on GitHub (May 30, 2024):
Confirmed: frps was indeed using an INI config. After changing it, everything is back to normal. Thanks to the author for the patient help!
@islercn commented on GitHub (Jun 7, 2024):
A while back my network was poor; when the room got hot, the packet loss rate could hit 3%. I recently tuned the network, packet loss is now rare, and this problem hasn't reappeared. So it seems packet loss was causing heartbeats to be lost? Could the heartbeat frequency adapt automatically, or could a retransmission mechanism be added?
@fatedier commented on GitHub (Jun 8, 2024):
@islercn You can try kcp or quic yourself.
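Switching the frp control channel off plain TCP, as suggested, requires matching changes on both sides. A sketch (ports are illustrative; QUIC and KCP both run over UDP, so the corresponding UDP port must be reachable):

```toml
# frps.toml - open a QUIC (or KCP) listener in addition to bindPort
bindPort = 10000
quicBindPort = 10000   # UDP; use kcpBindPort instead for KCP

# frpc.toml - select the matching protocol
serverAddr = "<server IP>"
serverPort = 10000
transport.protocol = "quic"  # or "kcp"
```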
@MaxKingPor commented on GitHub (Jun 8, 2024):
The "try to connect to server" log was appearing once per minute. After adding … on the server side, it hasn't recurred (at least not during roughly 2 hours of testing so far).
@focuseyes360 commented on GitHub (Jan 20, 2025):
1. Symptoms:
frps is deployed on a public-cloud Linux server; three identically configured Linux devices behind the same router run frpc. Under the same network conditions, the first two devices are rock solid while the third keeps acting up. I tried v0.48.0, v0.57.0, and v0.60.0; within a few minutes to half an hour, the frps log invariably shows "accept new mux stream error: keepalive timeout", followed by a successful reconnect, averaging about 2 disconnect/reconnect cycles per hour. On the first two versions I tried the mux-keepalive and heartbeat parameter changes mentioned in this issue, with no effect.
frps logs:
2025/01/20 12:58:44 [D] [control.go:493] [8fd40d4957c2f938] receive heartbeat
2025/01/20 12:58:57 [D] [control.go:493] [8ad2c6c0839b61a4] receive heartbeat
2025/01/20 12:59:07 [D] [control.go:493] [6e811ef4307be7d7] receive heartbeat
2025/01/20 12:59:14 [D] [control.go:493] [8fd40d4957c2f938] receive heartbeat
2025/01/20 12:59:27 [D] [control.go:493] [8ad2c6c0839b61a4] receive heartbeat
2025/01/20 12:59:37 [D] [control.go:493] [6e811ef4307be7d7] receive heartbeat
2025/01/20 12:59:44 [D] [control.go:493] [8fd40d4957c2f938] receive heartbeat
2025/01/20 12:59:57 [D] [service.go:450] Accept new mux stream error: keepalive timeout
2025/01/20 12:59:57 [D] [control.go:334] [6879c9f0ff6df0a4] control connection closed
2025/01/20 12:59:57 [I] [control.go:306] [6879c9f0ff6df0a4] control writer is closing
2025/01/20 12:59:57 [I] [proxy.go:98] [6879c9f0ff6df0a4] [box-19.id] proxy closing
2025/01/20 12:59:57 [D] [proxy.go:326] [6879c9f0ff6df0a4] [box-19.stcp-home-xxx] join connections closed
2025/01/20 12:59:57 [W] [proxy.go:186] [6879c9f0ff6df0a4] [box-19.id] listener is closed: accept tcp [::]:7025: use of closed network connection
2025/01/20 12:59:57 [I] [proxy.go:98] [6879c9f0ff6df0a4] [box-19.web] proxy closing
2025/01/20 12:59:57 [W] [proxy.go:186] [6879c9f0ff6df0a4] [box-19.web] listener is closed: accept tcp [::]:7023: use of closed network connection
2025/01/20 12:59:57 [I] [proxy.go:98] [6879c9f0ff6df0a4] [box-19.asd] proxy closing
2025/01/20 12:59:57 [W] [proxy.go:186] [6879c9f0ff6df0a4] [box-19.asd] listener is closed: accept tcp [::]:7024: use of closed network connection
2025/01/20 12:59:57 [I] [proxy.go:98] [6879c9f0ff6df0a4] [box-19.bm] proxy closing
2025/01/20 12:59:57 [W] [proxy.go:186] [6879c9f0ff6df0a4] [box-19.bm] listener is closed: accept tcp [::]:7026: use of closed network connection
2025/01/20 12:59:57 [I] [proxy.go:98] [6879c9f0ff6df0a4] [box-19.stcp-home-xxx] proxy closing
2025/01/20 12:59:57 [I] [control.go:395] [6879c9f0ff6df0a4] client exit success
2025/01/20 12:59:57 [W] [proxy.go:186] [6879c9f0ff6df0a4] [box-19.stcp-home-xxx] listener is closed: listener closed
2025/01/20 12:59:57 [D] [control.go:493] [8ad2c6c0839b61a4] receive heartbeat
2025/01/20 12:59:57 [T] [service.go:423] start check TLS connection...
2025/01/20 12:59:57 [T] [service.go:432] check TLS connection success, isTLS: true custom: false
2025/01/20 12:59:57 [I] [service.go:500] [6879c9f0ff6df0a4] client login info: ip [49.65.xx.xx:5421] version [0.60.0] hostname [] os [linux] arch [amd64]
2025/01/20 12:59:57 [I] [tcp.go:66] [6879c9f0ff6df0a4] [box-19.asd] tcp proxy listen port [7024]
2025/01/20 12:59:57 [I] [control.go:464] [6879c9f0ff6df0a4] new proxy [box-19.asd] type [tcp] success
2025/01/20 12:59:57 [D] [control.go:218] [6879c9f0ff6df0a4] new work connection registered
2025/01/20 12:59:57 [I] [stcp.go:36] [6879c9f0ff6df0a4] [box-19.stcp-home-xxx] stcp proxy custom listen success
2025/01/20 12:59:57 [I] [control.go:464] [6879c9f0ff6df0a4] new proxy [box-19.stcp-home-xxx] type [stcp] success
2025/01/20 12:59:57 [I] [tcp.go:66] [6879c9f0ff6df0a4] [box-19.id] tcp proxy listen port [7025]
2025/01/20 12:59:57 [I] [control.go:464] [6879c9f0ff6df0a4] new proxy [box-19.id] type [tcp] success
2025/01/20 12:59:57 [I] [tcp.go:66] [6879c9f0ff6df0a4] [box-19.bm] tcp proxy listen port [7026]
2025/01/20 12:59:57 [I] [control.go:464] [6879c9f0ff6df0a4] new proxy [box-19.bm] type [tcp] success
2025/01/20 12:59:57 [I] [tcp.go:66] [6879c9f0ff6df0a4] [box-19.web] tcp proxy listen port [7023]
2025/01/20 12:59:57 [I] [control.go:464] [6879c9f0ff6df0a4] new proxy [box-19.web] type [tcp] success
frpc logs:
2025-01-20 12:59:57.229 [I] [client/service.go:295] [6879c9f0ff6df0a4] try to connect to server...
2025-01-20 12:59:57.229 [D] [proxy/proxy.go:222] [6879c9f0ff6df0a4] [box-19.stcp-home-xxx] join connections closed
2025-01-20 12:59:57.314 [I] [client/service.go:287] [6879c9f0ff6df0a4] login to server success, get run id [6879c9f0ff6df0a4]
2025-01-20 12:59:57.314 [I] [proxy/proxy_manager.go:173] [6879c9f0ff6df0a4] proxy added: [box-19.web box-19.as box-19.ids box-19.bm box-19.stcp-home-xxx]
2025-01-20 12:59:57.342 [I] [client/control.go:168] [6879c9f0ff6df0a4] [box-19.asd] start proxy success
2025-01-20 12:59:57.343 [I] [client/control.go:168] [6879c9f0ff6df0a4] [box-19.stcp-home-xxx] start proxy success
2025-01-20 12:59:57.343 [I] [client/control.go:168] [6879c9f0ff6df0a4] [box-19.id] start proxy success
2025-01-20 12:59:57.343 [I] [client/control.go:168] [6879c9f0ff6df0a4] [box-19.bm] start proxy success
2025-01-20 12:59:57.372 [I] [client/control.go:168] [6879c9f0ff6df0a4] [box-19.web] start proxy success
3. Could someone please help work out what is causing frps to report "Accept new mux stream error: keepalive timeout"? Many thanks!
@focuseyes360 commented on GitHub (Jan 22, 2025):
After a few days of investigation: on the problematic device, the wireless Wi-Fi interface's gateway was the same as the gateway of the router that the wired interface connects to, which caused intermittent routing-table forwarding problems on the device.
@HellowBoy commented on GitHub (Feb 7, 2025):
Here is how I solved it; my testing so far shows no problems.
1. Disable TCP multiplexing on both the client and the server. Only the server-side configuration is shown below; the client has corresponding settings:
transport.tcpMux = false
transport.tcpMuxKeepaliveInterval = 60
transport.heartbeatTimeout = 60
2. Set the registration interval of some client plugins to 0, e.g. in OpenWrt.
Give it a try, and upvote if it helps.
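The client-side counterpart that the comment alludes to but does not show would look roughly like this (a sketch; transport.tcpMux must match on both ends, and once multiplexing is off the mux keepalive interval has no effect):

```toml
# frpc.toml - mirrors the server-side settings above
serverAddr = "<server IP>"
serverPort = 10000
transport.tcpMux = false          # must match the frps setting
transport.heartbeatTimeout = 60   # match the server's timeout
```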
@WLyKan commented on GitHub (Sep 30, 2025):
My ssh sessions used to drop automatically every few tens of seconds after connecting; I solved it through configuration.
@qk-antares commented on GitHub (Nov 25, 2025):
I am running the latest versions, 0.65.0 and 0.62.1, and still hit the problem described above; it was likewise solved by adding transport.heartbeatInterval = 10 to frpc.
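Pulling together the fix reported most often in this thread, a minimal frpc.toml for the original SSH setup would look like this (a sketch assembled from the reports above, not an official recommendation; the address and token are placeholders):

```toml
# frpc.toml - the original proxy config plus the shortened heartbeat
serverAddr = "<server IP>"
serverPort = 10000
auth.method = "token"
auth.token = "<token>"
transport.tls.enable = true
transport.heartbeatInterval = 10  # default is 30; several reporters found 10 reliable

[[proxies]]
name = "ssh-001"
type = "tcp"
localIP = "127.0.0.1"
localPort = 2222
remotePort = 10001
```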