[GH-ISSUE #693] Does frp use short-lived connections to the backend? Large numbers of TIME_WAIT sockets are produced #542

Closed
opened 2026-05-05 12:21:11 -06:00 by gitea-mirror · 5 comments
Owner

Originally created by @bhzhu203 on GitHub (Apr 6, 2018).
Original GitHub issue: https://github.com/fatedier/frp/issues/693

Issues are only used for submitting bug reports and documentation typos. If the same issue already exists or the answer can be found in the documentation, we will close it directly.
(To save time and handle issues efficiently, issues that do not follow the template will be closed directly.)

Use the commands below to provide key information from your environment:
You do NOT have to include this information if this is a FEATURE REQUEST

What version of frp are you using (./frpc -v or ./frps -v)?
0.14.0

What operating system and processor architecture are you using (go env)?

CentOS release 6.8 / 1 CPU core

Configuration you used:
[common]
server_addr = 172.16.0.79
server_port = 8000
protocol = kcp

[ldxq-mqtt1]
type = tcp
local_ip = 127.0.0.1
local_port = 1883
remote_port = 1883
use_compression = true

Steps to reproduce the issue:

Forward an EMQTT service (a long-lived-connection service) maintaining 300~400 persistent connections.

Describe the results you received:
More than 2,000 TIME_WAIT sockets are produced. I run a self-compiled Linux 4.15.15 kernel (for BBR); the newer kernel no longer has net.ipv4.tcp_tw_recycle = 1, so TIME_WAIT sockets can no longer be recycled quickly.

Describe the results you expected:
The old 2.6.32 kernel still had the net.ipv4.tcp_tw_recycle = 1 feature, so frp's mass production of TIME_WAIT sockets never stood out: TIME_WAIT sockets were recycled and reused quickly, and the count stayed around 1~5.

To use BBR, I had no choice but to compile and run a newer kernel.
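As background for the kernel change above: net.ipv4.tcp_tw_recycle was removed from Linux in 4.12, but net.ipv4.tcp_tw_reuse still exists and lets the kernel reuse TIME_WAIT sockets for new outgoing connections from the same host. An illustrative sysctl fragment (values are a sketch, not a recommendation from this thread):

```ini
# Illustrative sysctl settings for kernels >= 4.12, where tcp_tw_recycle is gone.
# tcp_tw_reuse only helps connections initiated from this host (client side),
# which matches frpc dialing the local service.
net.ipv4.tcp_tw_reuse = 1
```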

Additional information you deem important (e.g. issue happens only occasionally):

Can you point out what caused this issue (optional)

frp may be using short-lived connections when it connects to the backend.

Author
Owner

@fatedier commented on GitHub (Apr 9, 2018):

Please check carefully how the services involved are being used; whether connections are created or torn down has nothing to do with frp.

Author
Owner

@bhzhu203 commented on GitHub (Apr 16, 2018):

Does frp support epoll?

Also, is pool_count = 500 a reasonable setting when there are around 400 persistent connections?
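For reference on the pool_count question: in frpc.ini it belongs to the [common] section and controls how many connections frpc pre-establishes to frps, so that a new user connection does not pay the dial latency. It is a setup-latency optimization, not a cap tied to the number of long-lived proxied connections. A hedged sketch (the value 50 is purely illustrative):

```ini
# Illustrative frpc.ini fragment -- not a recommendation from this thread.
[common]
server_addr = 172.16.0.79
server_port = 8000
pool_count = 50  # pre-established frpc<->frps connections; sizing to peak
                 # concurrency (e.g. 500 for 400 clients) is not required
```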

Author
Owner

@bhzhu203 commented on GitHub (Apr 16, 2018):

With no external connections at all, there are more than 100 TIME_WAIT connections between frp and the emqtt service. Whose problem is that?

netstat -n | grep TIME_WAIT
tcp 0 0 127.0.0.1:41060 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:41068 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:41214 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:41132 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:40852 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:41280 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:41244 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:40992 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:40816 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:41240 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:41216 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:40860 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:40996 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:41024 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:41064 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:40968 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:40888 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:41094 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:40814 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:41184 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:40850 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:40962 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:40920 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:41104 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:40778 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:41220 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:41032 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:40970 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:41134 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:41130 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:41136 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:40886 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:40824 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:41170 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:41186 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:41140 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:41176 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:41096 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:41270 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:40806 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:41276 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:40894 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:40990 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:40892 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:40960 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:40932 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:40926 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:41206 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:40854 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:40994 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:40808 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:41180 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:41212 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:41174 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:40934 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:41172 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:41204 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:40776 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:41102 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:41030 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:41058 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:41106 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:41000 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:40890 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:41210 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:41062 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:40858 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:41236 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:41098 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:41242 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:40930 127.0.0.1:1883 TIME_WAIT
tcp 0 0 127.0.0.1:40966 127.0.0.1:1883 TIME_WAIT

Author
Owner

@fatedier commented on GitHub (Apr 19, 2018):

frp is just a forwarder. When a client initiates a connection, frpc establishes the corresponding connection to the internal service, and when the client actively disconnects, frpc closes that connection to the internal service.
If you still have questions, please read the code carefully to understand the architecture and what each component does.

Author
Owner

@Xeath commented on GitHub (May 23, 2018):

This is quite clearly a problem with the software you are using, not with frp.

Reference: github-starred/frp#542