[GH-ISSUE #3543] Proxy client keeps failing to log in, causing a suspected memory leak in frps #2824
Originally created by @itshaungmu on GitHub (Jul 24, 2023).
Original GitHub issue: https://github.com/fatedier/frp/issues/3543
Bug Description
The frp client keeps trying to log in to the frps server. The frps server reports a successful login, but the client keeps reporting login failures. frps memory grows slowly until it triggers an OOM. At that point the proxied connections are unusable; after killing the client, proxy registration returns to normal.
frpc Version
0.48.0
frps Version
0.48.0
System Architecture
amd64
Configurations
[common]
server_addr = x.x.x.x
server_port = 5000
tls_enable = true
tcp_mux = true
[api]
local_port = 443
type = tcp
local_ip = 127.0.0.1
[web]
local_port = 80
type = tcp
local_ip = 127.0.0.1
Logs
Client: the same messages repeat (log excerpt not included in the issue).
Server: "Replaced by client" appears repeatedly, with no message indicating a successful proxy registration.
Steps to reproduce
No way to reproduce it has been found so far.
Affected area
@fatedier commented on GitHub (Jul 24, 2023):
I can't see the problem from this.
@itshaungmu commented on GitHub (Jul 24, 2023):
Could it be that it keeps blocking here, causing the login request to time out? That would explain why the client shows a timeout while the server keeps spawning goroutines and memory keeps growing.

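(As an illustration of the hypothesis above, here is a minimal standalone Go sketch, not frp code, of how goroutines that block forever pile up and push memory usage upward; all names are made up for the example.)

// goroutine_leak_sketch.go: each simulated login handler blocks forever on a
// channel that nothing ever writes to or closes, so the goroutine count (and
// with it, memory) only ever grows -- the pattern suspected above.
package main

import (
	"fmt"
	"runtime"
	"time"
)

func main() {
	blocked := make(chan struct{}) // never written to, never closed

	for i := 0; i < 1000; i++ {
		go func() {
			<-blocked // blocks forever; this goroutine is never released
		}()
	}

	time.Sleep(100 * time.Millisecond)
	fmt.Println("goroutines:", runtime.NumGoroutine()) // roughly 1001, and it keeps climbing with each new batch
}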
@itshaungmu commented on GitHub (Jul 24, 2023):
This reproduces the same log behavior.
@fatedier commented on GitHub (Jul 24, 2023):
Setting pprof_enable to true enables the pprof endpoint; you can then inspect the relevant memory information.
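(For reference, a hedged sketch of the frps.ini side of this: to my understanding pprof_enable goes in [common] and the profiling endpoints are served on the dashboard address in 0.48.x, so the dashboard settings below are assumed to be required and the port values are only examples.)

# frps.ini (sketch)
[common]
bind_port = 5000
dashboard_addr = 0.0.0.0
dashboard_port = 7500
pprof_enable = true

With that in place, something like go tool pprof http://x.x.x.x:7500/debug/pprof/heap (or the goroutine profile at /debug/pprof/goroutine) should show where memory and goroutines are accumulating; verify the exact endpoints against your frp version.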
@fatedier commented on GitHub (Jul 24, 2023):
The main question is how to construct this scenario locally to test it.
@itshaungmu commented on GitHub (Jul 25, 2023):
The client's default timeout is currently 10s, so the server has to finish handling Login within 10s; if it takes longer, the problem reproduces. During Login, the svr.pluginManager.Login plugin call appears to be synchronous, and I do use a Login plugin on the server. Adding a 60-second sleep in the plugin endpoint reproduced the problem again.
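(To make the reproduction concrete, below is a minimal sketch of a deliberately slow server-manage plugin. The handler path, the op query parameter, the {"reject": false, "unchange": true} response, and the port follow the frp server plugin convention as I understand it; treat all of them as assumptions to verify against your version.)

// slow_plugin.go: a deliberately slow frp server-manage plugin used only to
// reproduce the timeout. It sleeps 60s on the Login op so the client-side
// 10s login timeout is guaranteed to fire.
package main

import (
	"log"
	"net/http"
	"time"
)

func main() {
	http.HandleFunc("/handler", func(w http.ResponseWriter, r *http.Request) {
		if r.URL.Query().Get("op") == "Login" {
			log.Println("Login op received, sleeping 60s")
			time.Sleep(60 * time.Second)
		}
		w.Header().Set("Content-Type", "application/json")
		// "allow the operation, do not change the request content"
		w.Write([]byte(`{"reject": false, "unchange": true}`))
	})
	log.Fatal(http.ListenAndServe("127.0.0.1:9000", nil))
}

It would be wired into frps.ini with something along the lines of a [plugin.slow-login] section containing addr = 127.0.0.1:9000, path = /handler, ops = Login (again from memory; check the server plugin docs).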
@itshaungmu commented on GitHub (Jul 25, 2023):
A log line appears on the server: [control.go:315] [f038074a851789aa] write message to control connection error: stream closed


The problem likely lies here:
ctl.sendCh has a buffer size of 10, while poolCount is set to 20. When ctl.writer exits on an error, ctl.sendCh <- &msg.ReqWorkConn{} blocks indefinitely, which in turn blocks every subsequent login waiting for this control to exit.
I suspect the cause is that the sizes of sendCh and readCh are not kept in line with the poolCount setting.
When I tested with sendCh and readCh sized to poolCount, proxies registered normally after the sleep and the control stopped cleanly.
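(Below is a minimal standalone Go sketch of the failure mode described in this comment. The names sendCh, writer, and poolCount stand in for the frps internals ctl.sendCh, ctl.writer, and the pool_count setting; it is illustrative, not frps code.)

// sendch_block_sketch.go: the buffer holds 10 messages while poolCount is 20;
// the writer goroutine dies after draining one message (as if "write message
// to control connection error: stream closed" had occurred), so filling the
// work-conn pool blocks forever on the channel send.
package main

import (
	"fmt"
	"time"
)

func main() {
	const poolCount = 20
	sendCh := make(chan string, 10) // same size mismatch as ctl.sendCh vs. pool_count

	// Stand-in for ctl.writer: exits early after an error.
	go func() {
		msg := <-sendCh
		fmt.Println("writer wrote", msg, "then hit an error and returned")
	}()

	done := make(chan struct{})
	go func() {
		// Stand-in for the login path pushing ReqWorkConn messages.
		for i := 0; i < poolCount; i++ {
			sendCh <- fmt.Sprintf("ReqWorkConn#%d", i) // blocks once the buffer is full and the writer is gone
		}
		close(done)
	}()

	select {
	case <-done:
		fmt.Println("all messages queued")
	case <-time.After(time.Second):
		fmt.Println("stuck: queued at most", cap(sendCh)+1, "messages; later logins would wait on this control")
	}
}

With cap(sendCh) >= poolCount (or with the writer still draining), the first branch of the select is taken instead, which matches what was observed above when the channels were resized.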
@fatedier commented on GitHub (Jul 25, 2023):
It looks like the combination of these two issues does indeed trigger this bug.