[GH-ISSUE #752] accept4: too many open files: frps becomes unresponsive under heavy traffic and only recovers after restarting frps #588

Closed
opened 2026-05-05 12:22:53 -06:00 by gitea-mirror · 11 comments

Originally created by @immortalt on GitHub (May 7, 2018).
Original GitHub issue: https://github.com/fatedier/frp/issues/752

This issue tracker is only for submitting bug reports and documentation typos. If the same issue already exists, or the answer can be found in the documentation, it will be closed directly.
(To save time and handle issues efficiently, issues that do not follow the template will be closed directly.)

Use the commands below to provide key information from your environment:
You do NOT have to include this information if this is a FEATURE REQUEST

What version of frp are you using (./frpc -v or ./frps -v)?
1.8

What operating system and processor architecture are you using (go env)?
Ubuntu 16.04.4 LTS X64

Configuration you used:
For security, key information has been replaced with xxx.

[common]
server_addr = xxx.xxx.xxx.xxx
server_port = xxxx
privilege_token = xxxxxxxxxxxxxxxxxxxxxxxxx

[aaa]
type = tcp
local_ip = 127.0.0.1
local_port = xxxx
remote_port = xxxx

[bbb]
type = https
local_ip = 127.0.0.1
local_port = xxxx
subdomain = xxxx
use_encryption = true
use_compression = true

[ccc]
type = http
local_ip = 127.0.0.1
local_port = xxx
subdomain = xxxxxx
use_encryption = true
use_compression = true

There are 10 tunnels in total, not all listed here; they only involve the http, https, and tcp types, in the format shown above.

Steps to reproduce the issue:
1. Start frpc and frps normally.
2. A large number of users access the http service over roughly a full daytime period.
3. A small amount of https or tcp traffic may also occur during that time.

Describe the results you received:
Testing in the evening, I found that frps was still running but could no longer be reached. The log showed:
2018/05/03 22:36:43 http: Accept error: accept tcp [::]:80: accept4: too many open files; retrying in 1s
I had to stop it with killall frps and then start frps again.
frpc is configured to check every minute and reconnect after a disconnect, so once frps restarts, the clients reconnect automatically.
The script is as follows:

#!/bin/sh
# Count frpc processes
num=`ps -ax | grep frpc | grep -v grep | grep -v check | wc -l`
if [ $num -ge 1 ]; then
    echo "frpc is running"
else
    echo "stopping all frpc"
    killall frpc
    echo "starting frpc"
    /volume1/frp/frpc -c /volume1/frp/frpc.ini &
fi

If frpc fails to connect, it exits. The script runs once a minute; if frpc has exited and the process count drops below 1, frpc is started again.
Describe the results you expected:
frps keeps running normally without needing scheduled restarts.

Additional information you deem important (e.g. issue happens only occasionally):
This problem is a real headache. I hope it can be resolved.

Can you point out what caused this issue (optional)
I suspect it may be caused by a limit on the number of concurrent connections? For example, I may have several hundred to several thousand concurrent http requests.


@immortalt commented on GitHub (May 7, 2018):

Is there a file-descriptor leak, or is the default limit just too small?
Could the approach in this article solve it?
I am not a Go programmer, so I am not sure whether the article applies.
"Implementing a Go server that accepts http requests, and how to fix leaks when they occur"
http://www.bubuko.com/infodetail-2294580.html


@fatedier commented on GitHub (May 7, 2018):

I suggest using network tools to inspect the connections related to frps.
Some more information would help: the number of concurrent clients accessing the http service, the QPS, whether connections are long-lived or short-lived, and the average response time.
Also check the server's resource limits with uname -a.


@immortalt commented on GitHub (May 8, 2018):

Linux frp 2.6.32-042stab127.2 #1 SMP Thu Jan 4 16:41:44 MSK 2018 x86_64 x86_64 x86_64 GNU/Linux


@fatedier commented on GitHub (May 8, 2018):

My mistake, I meant ulimit -a. Also use netstat or ss to inspect the connection states.
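The diagnostics suggested above (resource limits plus connection-state inspection) can be sketched as a small shell snippet. This is only a sketch, not part of frp; it assumes a Linux host with /proc, pidof, and ss available, and that frps is running:

```shell
#!/bin/sh
# Count open file descriptors held by the frps process
pid=$(pidof frps) && echo "frps open fds: $(ls /proc/"$pid"/fd | wc -l)"

# Tally TCP connection states (header row is skipped, states are counted)
ss -tan | awk 'NR > 1 { print $1 }' | sort | uniq -c
```

A steadily growing fd count, or many connections stuck in CLOSE-WAIT, would point to a descriptor leak rather than a limit that is merely too low.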


@immortalt commented on GitHub (May 8, 2018):

core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 256958
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 256958
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited


@fatedier commented on GitHub (May 8, 2018):

I suggest first looking up resources with a search engine and trying to tune the server's system parameters.


@bob4jcom commented on GitHub (May 8, 2018):

Try changing this system parameter shown by ulimit -a:
open files (-n) 1024
to:
open files (-n) 655350
1024 is too small.
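Raising the limit as suggested can be sketched as follows. Assumptions: the shell-level ulimit only affects processes started from that shell, the frps path is illustrative, and the limits.conf entries require pam_limits (they do not apply to daemons started directly by init):

```shell
#!/bin/sh
# Raise the soft fd limit for this shell; child processes inherit it.
# May fail if the value exceeds the hard limit and we are not root.
ulimit -n 655350 && echo "fd limit now $(ulimit -n)"

# Start frps from the same shell so it inherits the new limit
# (path is an assumption; adjust to your install)
[ -x ./frps ] && ./frps -c ./frps.ini &

# Persistent alternative: append to /etc/security/limits.conf
#   *   soft   nofile   655350
#   *   hard   nofile   655350
```

Note that the new limit only takes effect for frps once it is restarted from an environment that carries the raised value.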


@immortalt commented on GitHub (May 8, 2018):

@bob4jcom Thanks for the suggestion. I have made the change. Let me test for a day or two to see whether the problem is fixed.
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 256958
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 655350
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 256958
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited


@immortalt commented on GitHub (May 12, 2018):

The test passed; the problem seems to be resolved.


@likev commented on GitHub (Sep 29, 2018):

I have the same "too many open files; retrying in 1s" problem. Is it possible to limit the maximum number of connections that frps opens to frpc?


@BosonMao commented on GitHub (Jan 7, 2020):

ps -ef | grep frps
to find the pid, then
cat /proc/<pid>/limits
shows the open-files limit that actually applies to the process, which you can then fix.
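The two steps above can be combined into one line. This is a sketch assuming a Linux /proc filesystem and that frps is running (pgrep is used here instead of ps | grep):

```shell
#!/bin/sh
# Show the open-files limit in effect for the running frps process
pid=$(pgrep -x frps) && grep 'Max open files' /proc/"$pid"/limits
```

If this still shows 1024 after raising ulimit, frps was not restarted from an environment that carries the new limit.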
