Mirror of https://github.com/fatedier/frp.git, synced 2026-05-15 08:05:49 -06:00
[GH-ISSUE #752] accept4: too many open files: frps becomes unresponsive under heavy traffic and only recovers after a restart #588
Originally created by @immortalt on GitHub (May 7, 2018).
Original GitHub issue: https://github.com/fatedier/frp/issues/752
This issue tracker is only used for submitting bug reports and documentation typos. If the same issue already exists, or the answer can be found in the documentation, we will close it directly.
(To save time and handle issues more efficiently, issues that do not follow the template will be closed directly.)
Use the commands below to provide key information from your environment:
You do NOT have to include this information if this is a FEATURE REQUEST
What version of frp are you using (./frpc -v or ./frps -v)?
1.8
What operating system and processor architecture are you using (go env)?
Ubuntu 16.04.4 LTS x64
Configuration you used:
For security, key information has been replaced with xxx.
There are 10 tunnels in total, not listed one by one; they only involve the http, https, and tcp types, in the format above.
Steps to reproduce the issue:
1. Start frpc and frps normally.
2. A large number of users access the http service, for roughly one daytime period.
3. During that time there may also be small amounts of https or tcp traffic.
Describe the results you received:
In the evening I found that frps was still running but could not be accessed. The log showed:
2018/05/03 22:36:43 http: Accept error: accept tcp [::]:80: accept4: too many open files; retrying in 1s
I had to stop it with killall frps and then start frps again.
frpc is configured to check every minute and reconnect after a disconnect, so once frps restarts it reconnects automatically.
The watchdog script works as follows:
If frpc fails to connect, it exits. The script runs once a minute; if the frpc process count is less than 1 (frpc has exited), it starts frpc again.
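The original script was not posted, so here is only a minimal sketch of such a watchdog; the install path, config path, and cron schedule are assumptions:

```shell
#!/bin/sh
# Hypothetical frpc watchdog, run from cron once a minute, e.g.:
#   * * * * * /usr/local/frp/frpc-watchdog.sh
FRPC_BIN="/usr/local/frp/frpc"      # assumed install path
FRPC_CONF="/usr/local/frp/frpc.ini" # assumed config path

# pgrep -c prints the number of processes named exactly "frpc".
# If none are running, start one detached in the background.
if [ "$(pgrep -c -x frpc)" -lt 1 ]; then
    nohup "$FRPC_BIN" -c "$FRPC_CONF" >/dev/null 2>&1 &
fi
```

Note that this restarts frpc, not frps, which is why the server-side hang still required manual intervention.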
Describe the results you expected:
frps keeps running normally without needing scheduled restarts.
Additional information you deem important (e.g. issue happens only occasionally):
This problem is a real headache. I hope it can be solved.
Can you point out what caused this issue (optional)
I suspect it may be caused by some limit on concurrency? For example, I may have several hundred to several thousand simultaneous http requests.
@immortalt commented on GitHub (May 7, 2018):
Is there a file-handle leak? Or is the default limit just too small?
Could the method in this article solve it?
I'm not a Go programmer, so I'm not sure whether the article applies.
"Implementing a server receiving http requests in Go, and a fix for leaks" (in Chinese):
http://www.bubuko.com/infodetail-2294580.html
@fatedier commented on GitHub (May 7, 2018):
I suggest using network tools to inspect the connections related to frps.
Some more information would also help: the number of concurrent clients accessing the http service, the QPS, whether the connections are long-lived or short-lived, and the average request response time.
Also check the server's resource limits:
uname -a

@immortalt commented on GitHub (May 8, 2018):
Linux frp 2.6.32-042stab127.2 #1 SMP Thu Jan 4 16:41:44 MSK 2018 x86_64 x86_64 x86_64 GNU/Linux
@fatedier commented on GitHub (May 8, 2018):
I typed the wrong command; I meant ulimit -a. Also use netstat or ss to debug the state of the connections.

@immortalt commented on GitHub (May 8, 2018):
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 256958
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 256958
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
@fatedier commented on GitHub (May 8, 2018):
I suggest first consulting material from search engines and trying to tune some of the server's system parameters.
@bob4jcom commented on GitHub (May 8, 2018):
Run ulimit -a, then try changing this system parameter:
open files (-n) 1024
to
open files (-n) 655350
1024 is too small.
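As a sketch of applying that change (the value is the one suggested above; the limits.conf approach is the common way to make it persistent, though the thread does not say how frps is launched):

```shell
# Current soft open-files limit (the 1024 seen in the ulimit -a output):
ulimit -Sn

# Raise it for this shell session; frps started from this shell inherits it.
# Without root the soft limit cannot exceed the hard limit, hence the fallback.
ulimit -n 655350 2>/dev/null || ulimit -S -n "$(ulimit -H -n)"

# To make it persistent across logins, add to /etc/security/limits.conf
# and log in again:
#   *  soft  nofile  655350
#   *  hard  nofile  655350
ulimit -Sn
```

A shell-level ulimit only affects processes started from that shell, which is why the persistent limits.conf (or init-system) route matters for a long-running daemon.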
@immortalt commented on GitHub (May 8, 2018):
@bob4jcom Thanks for the suggestion. I've made the change; let me test for a day or two and see whether the problem is fixed.
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 256958
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 655350
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 256958
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
@immortalt commented on GitHub (May 12, 2018):
The test passed; the problem seems to be solved.
@likev commented on GitHub (Sep 29, 2018):
I have the same "too many open files; retrying in 1s" problem. Is it possible to limit the maximum number of connections that frps opens to frpc?
@BosonMao commented on GitHub (Jan 7, 2020):
ps -ef | grep frps
to find the pid, then
cat /proc/<pid>/limits
You can see the open-file limits the process is subject to, and take it from there!
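That check can be put into a few lines of shell (a sketch; it assumes a Linux host where the caller may read the frps /proc entry):

```shell
#!/bin/sh
# Print the effective open-files limit of a running frps process.
pid=$(pgrep -x frps | head -n 1)
if [ -n "$pid" ]; then
    # /proc/<pid>/limits lists soft and hard limits per resource.
    grep "Max open files" "/proc/$pid/limits"
else
    echo "frps is not running"
fi
```

This reports the limit the daemon actually inherited, which can differ from what ulimit -n shows in an interactive shell.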