[GH-ISSUE #4223] Frequent SSH disconnections with frpc version 0.58.0 #3325

Closed
opened 2026-05-05 14:08:39 -06:00 by gitea-mirror · 19 comments
Owner

Originally created by @islercn on GitHub (May 17, 2024).
Original GitHub issue: https://github.com/fatedier/frp/issues/4223

Bug Description

frps is deployed on Alibaba Cloud; frpc runs on several Ubuntu 22.04 hosts inside the LAN. Both server and client are version 0.58.0. Since upgrading to 0.58.0, SSH sessions from the public internet to the internal machines disconnect very quickly, within five minutes at most. Reconnecting immediately afterwards works, but the session soon drops again. Keeping the server on 0.58.0 and downgrading only the client to 0.57.0 makes the problem disappear.

frpc Version

0.58.0

frps Version

0.58.0

System Architecture

Ubuntu 22.04

Configurations

Server configuration:

```toml
bindAddr = "0.0.0.0"
bindPort = 10000
auth.method = "token"
auth.token = "Pass5678"
```

Client configuration:

```toml
serverAddr = "<server IP>"
serverPort = 10000
transport.tls.enable = true
auth.method = "token"
auth.token = "Pass5678"

[[proxies]]
name = "ssh-001"
type = "tcp"
localIP = "127.0.0.1"
localPort = 2222
remotePort = 10001
```

Logs

2024-05-17 14:42:30.781 [I] [sub/root.go:142] start frpc service for config file [/home/xxx/frpc.toml]
2024-05-17 14:42:30.781 [I] [client/service.go:294] try to connect to server...
2024-05-17 14:42:30.802 [I] [client/service.go:286] [240bfa9ecca5d40f] login to server success, get run id [240bfa9ecca5d40f]
2024-05-17 14:42:30.802 [I] [proxy/proxy_manager.go:173] [240bfa9ecca5d40f] proxy added: [ssh-001]
2024-05-17 14:42:30.802 [T] [proxy/proxy_wrapper.go:200] [240bfa9ecca5d40f] [ssh-001] change status from [new] to [wait start]
2024-05-17 14:42:30.808 [I] [client/control.go:168] [240bfa9ecca5d40f] [ssh-001] start proxy success
2024-05-17 14:42:33.599 [D] [proxy/proxy_wrapper.go:260] [240bfa9ecca5d40f] [ssh-001] start a new work connection, localAddr: localip:53416 remoteAddr: serverip:10000
2024-05-17 14:42:33.599 [T] [proxy/proxy.go:144] [240bfa9ecca5d40f] [ssh-001] handle tcp work connection, useEncryption: false, useCompression: false
2024-05-17 14:42:33.599 [D] [proxy/proxy.go:210] [240bfa9ecca5d40f] [ssh-001] join connections, localConn(l[127.0.0.1:44216] r[127.0.0.1:2222]) workConn(l[localip:53416] r[serverip:10000])
2024-05-17 14:42:56.595 [D] [proxy/proxy_wrapper.go:260] [240bfa9ecca5d40f] [ssh-001] start a new work connection, localAddr: localip:53416 remoteAddr: serverip:10000
2024-05-17 14:42:56.595 [T] [proxy/proxy.go:144] [240bfa9ecca5d40f] [ssh-001] handle tcp work connection, useEncryption: false, useCompression: false
2024-05-17 14:42:56.595 [D] [proxy/proxy.go:210] [240bfa9ecca5d40f] [ssh-001] join connections, localConn(l[127.0.0.1:44928] r[127.0.0.1:2222]) workConn(l[localip:53416] r[serverip:10000])
2024-05-17 14:42:59.044 [D] [proxy/proxy.go:222] [240bfa9ecca5d40f] [ssh-001] join connections closed
2024-05-17 14:42:59.044 [T] [proxy/proxy.go:224] [240bfa9ecca5d40f] [ssh-001] join connections errors: [writeto tcp 127.0.0.1:44928->127.0.0.1:2222: read tcp 127.0.0.1:44928->127.0.0.1:2222: use of closed network connection]
2024-05-17 14:44:00.858 [T] [client/control.go:145] [240bfa9ecca5d40f] work connection closed before response StartWorkConn message: EOF
2024-05-17 14:44:00.858 [I] [client/service.go:294] [240bfa9ecca5d40f] try to connect to server...
2024-05-17 14:44:00.858 [D] [proxy/proxy.go:222] [240bfa9ecca5d40f] [ssh-001] join connections closed
2024-05-17 14:44:00.858 [T] [proxy/proxy.go:224] [240bfa9ecca5d40f] [ssh-001] join connections errors: [writeto tcp 127.0.0.1:44216->127.0.0.1:2222: read tcp 127.0.0.1:44216->127.0.0.1:2222: use of closed network connection]
2024-05-17 14:44:00.880 [I] [client/service.go:286] [240bfa9ecca5d40f] login to server success, get run id [240bfa9ecca5d40f]
2024-05-17 14:44:00.880 [I] [proxy/proxy_manager.go:173] [240bfa9ecca5d40f] proxy added: [ssh-001]
2024-05-17 14:44:00.880 [T] [proxy/proxy_wrapper.go:200] [240bfa9ecca5d40f] [ssh-001] change status from [new] to [wait start]
2024-05-17 14:44:00.886 [I] [client/control.go:168] [240bfa9ecca5d40f] [ssh-001] start proxy success
2024-05-17 14:44:15.044 [D] [proxy/proxy_wrapper.go:260] [240bfa9ecca5d40f] [ssh-001] start a new work connection, localAddr: localip:33750 remoteAddr: serverip:10000
2024-05-17 14:44:15.044 [T] [proxy/proxy.go:144] [240bfa9ecca5d40f] [ssh-001] handle tcp work connection, useEncryption: false, useCompression: false
2024-05-17 14:44:15.044 [D] [proxy/proxy.go:210] [240bfa9ecca5d40f] [ssh-001] join connections, localConn(l[127.0.0.1:42170] r[127.0.0.1:2222]) workConn(l[localip:33750] r[serverip:10000])
2024-05-17 14:44:18.990 [D] [proxy/proxy_wrapper.go:260] [240bfa9ecca5d40f] [ssh-001] start a new work connection, localAddr: localip:33750 remoteAddr: serverip:10000
2024-05-17 14:44:18.990 [T] [proxy/proxy.go:144] [240bfa9ecca5d40f] [ssh-001] handle tcp work connection, useEncryption: false, useCompression: false
2024-05-17 14:44:18.990 [D] [proxy/proxy.go:210] [240bfa9ecca5d40f] [ssh-001] join connections, localConn(l[127.0.0.1:33576] r[127.0.0.1:2222]) workConn(l[localip:33750] r[serverip:10000])
2024-05-17 14:44:19.054 [D] [proxy/proxy.go:222] [240bfa9ecca5d40f] [ssh-001] join connections closed
2024-05-17 14:44:19.054 [T] [proxy/proxy.go:224] [240bfa9ecca5d40f] [ssh-001] join connections errors: [writeto tcp 127.0.0.1:42170->127.0.0.1:2222: read tcp 127.0.0.1:42170->127.0.0.1:2222: use of closed network connection]
2024-05-17 14:44:21.191 [D] [proxy/proxy.go:222] [240bfa9ecca5d40f] [ssh-001] join connections closed
2024-05-17 14:44:21.191 [T] [proxy/proxy.go:224] [240bfa9ecca5d40f] [ssh-001] join connections errors: [writeto tcp 127.0.0.1:33576->127.0.0.1:2222: read tcp 127.0.0.1:33576->127.0.0.1:2222: use of closed network connection]

Steps to reproduce

  1. Upgrade both server and client to 0.58.0, configure them, and run.
  2. Connect to the client machine over SSH through the frp tunnel.
  3. The SSH session disconnects automatically after roughly 1-3 minutes.

Affected area

  - [ ] Docs
  - [ ] Installation
  - [ ] Performance and Scalability
  - [ ] Security
  - [x] User Experience
  - [ ] Test and Release
  - [ ] Developer Infrastructure
  - [ ] Client Plugin
  - [ ] Server Plugin
  - [ ] Extensions
  - [ ] Others

@swanbylei commented on GitHub (May 18, 2024):

I ran into the same problem. frps is 0.58.0 on the server, frpc is the 0.58.0 Windows build. Connections drop intermittently. With an older frpc, version 0.53.2, there are no issues at all.


@MrLiuGangQiang commented on GitHub (May 19, 2024):

Same problem here. I thought I had misconfigured something. With 0.57.0 it is rock solid; with 0.58.0 it drops almost immediately.


@fatedier commented on GitHub (May 20, 2024):

Please try each of the following configuration changes separately to help narrow this down:

  • After frpc connects to frps, do not make any requests or accesses, then watch both ends for a while to see whether the connection drops on its own.
  • Enable SSH keepalive, e.g. a setting like `ServerAliveInterval 60`.
  • Add `transport.tcpMuxKeepaliveInterval = 30` to frpc.
  • Add `transport.heartbeatInterval = 30` to frpc.
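The frpc-side suggestions above can be sketched as a TOML fragment (a minimal sketch; the interval values are the diagnostic ones suggested in this thread, and each option is meant to be enabled and tested on its own):

```toml
# frpc.toml — diagnostic keepalive settings; test each option separately
transport.tcpMuxKeepaliveInterval = 30  # keepalive on the multiplexed control connection
transport.heartbeatInterval = 30        # application-level heartbeat from frpc to frps
```

The SSH keepalive, by contrast, lives on the connecting machine, e.g. `ssh -o ServerAliveInterval=60 user@host` or the equivalent entry in `~/.ssh/config`.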

@xingtongsf commented on GitHub (May 23, 2024):

I'm not sure whether my problem is the same one. My client is tiny-frpc. Since tiny-frpc connects to the server over SSH, each session proxies normally for about 20 seconds and then drops outright. After a restart it works for another ~20 seconds. The server is 0.58; the client has no options like `transport.tcpMuxKeepaliveInterval = 30` configured.


@fatedier commented on GitHub (May 23, 2024):

@xingtongsf Please file your issue in the tiny-frpc repo.


@YuxuanZuo commented on GitHub (May 28, 2024):

> Please try each of the following configuration changes separately to help narrow this down:
>
> • After frpc connects to frps, do not make any requests or accesses, then watch both ends for a while to see whether the connection drops on its own.
> • Enable SSH keepalive, e.g. a setting like `ServerAliveInterval 60`.
> • Add `transport.tcpMuxKeepaliveInterval = 30` to frpc.
> • Add `transport.heartbeatInterval = 30` to frpc.

I hit much the same problem proxying a Minecraft Java Edition server over TCP. Both client and server are 0.58.0; adding `transport.heartbeatInterval = 30` to frpc fixed it.


@fatedier commented on GitHub (May 29, 2024):

@YuxuanZuo Please also test the other configuration changes so we can pin down the exact cause.


@YuxuanZuo commented on GitHub (May 29, 2024):

> @YuxuanZuo Please also test the other configuration changes so we can pin down the exact cause.

Adding `transport.tcpMuxKeepaliveInterval = 30` alone had no effect; adding `transport.heartbeatInterval = 10` alone, or both together, fixes the problem. Watching the server's trace logs, a `heartbeat timeout` entry shows up roughly every ten-odd seconds, so the client is presumably not sending heartbeats on schedule.


@fatedier commented on GitHub (May 30, 2024):

@YuxuanZuo Could you paste your full configuration? The release notes say this should only happen when frps is an older version; please confirm the server has been updated to the latest release. I cannot reproduce it locally.


@fatedier commented on GitHub (May 30, 2024):

@YuxuanZuo If you are using the legacy INI format, the default values may be wrong, but from your description it sounds like you are not using INI? Yours may be a separate issue; feel free to open a new one to track it, and I will fix the INI problem.

If no one else reports the SSH problem here, I will close this issue.


@YuxuanZuo commented on GitHub (May 30, 2024):

> @YuxuanZuo If you are using the legacy INI format, the default values may be wrong, but from your description it sounds like you are not using INI? Yours may be a separate issue; feel free to open a new one to track it, and I will fix the INI problem.
>
> If no one else reports the SSH problem here, I will close this issue.

I double-checked: frps was indeed using an INI config. After changing it, everything is back to normal. Thanks for the patient help!


@islercn commented on GitHub (Jun 7, 2024):

A while back my network was flaky; when the room got hot, packet loss could reach 3%. After tuning the network recently, packet loss is rare and this problem has not recurred. So it feels like packet loss was causing heartbeats to be dropped? Could the heartbeat interval be adapted automatically, or a retransmission mechanism added?


@fatedier commented on GitHub (Jun 8, 2024):

@islercn You can try kcp or quic yourself.
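For anyone trying that: switching the frpc↔frps transport is a change on both ends (a minimal sketch based on frp's documented options; the port numbers here are illustrative, not from this thread):

```toml
# frps.toml — open a UDP-based listener alongside bindPort
kcpBindPort = 10000     # KCP runs over UDP; it may reuse the same port number as bindPort
# quicBindPort = 10000  # or use QUIC instead

# frpc.toml — select the matching protocol
transport.protocol = "kcp"  # or "quic"
```

Both transports trade some raw throughput for better behavior on lossy links, which is the scenario described above.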


@MaxKingPor commented on GitHub (Jun 8, 2024):

  • On 0.58.1 the client side also logs `try to connect to server` about once a minute. After adding the following on the server side:

```toml
[transport]
tcpKeepalive = 10
```

it has not recurred (about 2 hours of testing so far).


@focuseyes360 commented on GitHub (Jan 20, 2025):

![Image](https://github.com/user-attachments/assets/2f4dd3cf-49e9-4509-bfac-6a85c7c55122)

1. Symptoms:
frps runs on a public Linux cloud server, with three identically configured Linux frpc devices behind the same router. Under the same network conditions, the first two devices are rock solid while the third misbehaves. I tried v0.48.0, v0.57.0, and v0.60.0; within a few minutes to half an hour the frps log always shows `[service.go:450] accept new mux stream error: keepalive timeout`, after which the client disconnects and then reconnects successfully, averaging about 2 disconnect/reconnect cycles per hour. On the first two versions I tried adjusting the mux-keepalive and heartbeat parameters mentioned in this issue, with no effect.

  2. Logs at the time of a disconnect:
    frps log:
    2025/01/20 12:58:44 [D] [control.go:493] [8fd40d4957c2f938] receive heartbeat
    2025/01/20 12:58:57 [D] [control.go:493] [8ad2c6c0839b61a4] receive heartbeat
    2025/01/20 12:59:07 [D] [control.go:493] [6e811ef4307be7d7] receive heartbeat
    2025/01/20 12:59:14 [D] [control.go:493] [8fd40d4957c2f938] receive heartbeat
    2025/01/20 12:59:27 [D] [control.go:493] [8ad2c6c0839b61a4] receive heartbeat
    2025/01/20 12:59:37 [D] [control.go:493] [6e811ef4307be7d7] receive heartbeat
    2025/01/20 12:59:44 [D] [control.go:493] [8fd40d4957c2f938] receive heartbeat
    2025/01/20 12:59:57 [D] [service.go:450] Accept new mux stream error: keepalive timeout
    2025/01/20 12:59:57 [D] [control.go:334] [6879c9f0ff6df0a4] control connection closed
    2025/01/20 12:59:57 [I] [control.go:306] [6879c9f0ff6df0a4] control writer is closing
    2025/01/20 12:59:57 [I] [proxy.go:98] [6879c9f0ff6df0a4] [box-19.id] proxy closing
    2025/01/20 12:59:57 [D] [proxy.go:326] [6879c9f0ff6df0a4] [box-19.stcp-home-xxx] join connections closed
    2025/01/20 12:59:57 [W] [proxy.go:186] [6879c9f0ff6df0a4] [box-19.id] listener is closed: accept tcp [::]:7025: use of closed network connection
    2025/01/20 12:59:57 [I] [proxy.go:98] [6879c9f0ff6df0a4] [box-19.web] proxy closing
    2025/01/20 12:59:57 [W] [proxy.go:186] [6879c9f0ff6df0a4] [box-19.web] listener is closed: accept tcp [::]:7023: use of closed network connection
    2025/01/20 12:59:57 [I] [proxy.go:98] [6879c9f0ff6df0a4] [box-19.asd] proxy closing
    2025/01/20 12:59:57 [W] [proxy.go:186] [6879c9f0ff6df0a4] [box-19.asd] listener is closed: accept tcp [::]:7024: use of closed network connection
    2025/01/20 12:59:57 [I] [proxy.go:98] [6879c9f0ff6df0a4] [box-19.bm] proxy closing
    2025/01/20 12:59:57 [W] [proxy.go:186] [6879c9f0ff6df0a4] [box-19.bm] listener is closed: accept tcp [::]:7026: use of closed network connection
    2025/01/20 12:59:57 [I] [proxy.go:98] [6879c9f0ff6df0a4] [box-19.stcp-home-xxx] proxy closing
    2025/01/20 12:59:57 [I] [control.go:395] [6879c9f0ff6df0a4] client exit success
    2025/01/20 12:59:57 [W] [proxy.go:186] [6879c9f0ff6df0a4] [box-19.stcp-home-xxx] listener is closed: listener closed
    2025/01/20 12:59:57 [D] [control.go:493] [8ad2c6c0839b61a4] receive heartbeat
    2025/01/20 12:59:57 [T] [service.go:423] start check TLS connection...
    2025/01/20 12:59:57 [T] [service.go:432] check TLS connection success, isTLS: true custom: false
    2025/01/20 12:59:57 [I] [service.go:500] [6879c9f0ff6df0a4] client login info: ip [49.65.xx.xx:5421] version [0.60.0] hostname [] os [linux] arch [amd64]
    2025/01/20 12:59:57 [I] [tcp.go:66] [6879c9f0ff6df0a4] [box-19.asd] tcp proxy listen port [7024]
    2025/01/20 12:59:57 [I] [control.go:464] [6879c9f0ff6df0a4] new proxy [box-19.asd] type [tcp] success
    2025/01/20 12:59:57 [D] [control.go:218] [6879c9f0ff6df0a4] new work connection registered
    2025/01/20 12:59:57 [I] [stcp.go:36] [6879c9f0ff6df0a4] [box-19.stcp-home-xxx] stcp proxy custom listen success
    2025/01/20 12:59:57 [I] [control.go:464] [6879c9f0ff6df0a4] new proxy [box-19.stcp-home-xxx] type [stcp] success
    2025/01/20 12:59:57 [I] [tcp.go:66] [6879c9f0ff6df0a4] [box-19.id] tcp proxy listen port [7025]
    2025/01/20 12:59:57 [I] [control.go:464] [6879c9f0ff6df0a4] new proxy [box-19.id] type [tcp] success
    2025/01/20 12:59:57 [I] [tcp.go:66] [6879c9f0ff6df0a4] [box-19.bm] tcp proxy listen port [7026]
    2025/01/20 12:59:57 [I] [control.go:464] [6879c9f0ff6df0a4] new proxy [box-19.bm] type [tcp] success
    2025/01/20 12:59:57 [I] [tcp.go:66] [6879c9f0ff6df0a4] [box-19.web] tcp proxy listen port [7023]
    2025/01/20 12:59:57 [I] [control.go:464] [6879c9f0ff6df0a4] new proxy [box-19.web] type [tcp] success

frpc log:
2025-01-20 12:59:57.229 [I] [client/service.go:295] [6879c9f0ff6df0a4] try to connect to server...
2025-01-20 12:59:57.229 [D] [proxy/proxy.go:222] [6879c9f0ff6df0a4] [box-19.stcp-home-xxx] join connections closed
2025-01-20 12:59:57.314 [I] [client/service.go:287] [6879c9f0ff6df0a4] login to server success, get run id [6879c9f0ff6df0a4]
2025-01-20 12:59:57.314 [I] [proxy/proxy_manager.go:173] [6879c9f0ff6df0a4] proxy added: [box-19.web box-19.as box-19.ids box-19.bm box-19.stcp-home-xxx]
2025-01-20 12:59:57.342 [I] [client/control.go:168] [6879c9f0ff6df0a4] [box-19.asd] start proxy success
2025-01-20 12:59:57.343 [I] [client/control.go:168] [6879c9f0ff6df0a4] [box-19.stcp-home-xxx] start proxy success
2025-01-20 12:59:57.343 [I] [client/control.go:168] [6879c9f0ff6df0a4] [box-19.id] start proxy success
2025-01-20 12:59:57.343 [I] [client/control.go:168] [6879c9f0ff6df0a4] [box-19.bm] start proxy success
2025-01-20 12:59:57.372 [I] [client/control.go:168] [6879c9f0ff6df0a4] [box-19.web] start proxy success

3. Could someone please help figure out what is causing frps to report `Accept new mux stream error: keepalive timeout`? Many thanks!


@focuseyes360 commented on GitHub (Jan 22, 2025):

After a few days of digging: on the problematic device, the wireless NIC's gateway was the same as the gateway of the router its wired NIC was plugged into, which made the device's routing-table forwarding fail intermittently.


@HellowBoy commented on GitHub (Feb 7, 2025):

Here is how I solved it; no problems so far in my testing.
1. Disable TCP multiplexing on both client and server. Only the server-side settings are shown below; the client has the corresponding options:

```toml
transport.tcpMux = false
transport.tcpMuxKeepaliveInterval = 60

transport.heartbeatTimeout = 60
```

2. Set the registration retry interval of some client plugins to 0, e.g. in OpenWrt.

Give it a try, and upvote if it helps.


@WLyKan commented on GitHub (Sep 30, 2025):

My SSH sessions used to drop every few tens of seconds after connecting; the following configuration fixed it:

```toml
[transport]
tcpMuxKeepaliveInterval = 60
heartbeatInterval = 60
```

@qk-antares commented on GitHub (Nov 25, 2025):

I still see the problem described above on the latest 0.65.0 and on 0.62.1; it was likewise resolved by adding `transport.heartbeatInterval = 10` to frpc.
