[GH-ISSUE #1025] frps dashboard unresponsive #812

Closed
opened 2026-05-05 12:31:04 -06:00 by gitea-mirror · 11 comments

Originally created by @guyskk on GitHub (Dec 28, 2018).
Original GitHub issue: https://github.com/fatedier/frp/issues/1025

This issue tracker is only for submitting bug reports and documentation typos. If the same issue already exists, or the answer can be found in the documentation, it will be closed directly.
(To save time and handle issues more efficiently, issues that do not follow the template will be closed directly.)

Use the commands below to provide key information from your environment:
You do NOT have to include this information if this is a FEATURE REQUEST

What version of frp are you using (./frpc -v or ./frps -v)?

0.22.0

What operating system and processor architecture are you using (go env)?

Ubuntu 16.04 x86_64 GNU/Linux

Configuration you used:

frps.ini

[common]
bind_port = 7000
kcp_bind_port = 7000
bind_udp_port = 7001
token = abcabc
dashboard_port = 7500
dashboard_user = admin
dashboard_pwd = admin
allow_ports = 2000-2999,8000-16000

frpc.ini

[common]
server_addr = xx.xx.xx.xx
server_port = 7000
token = abcabc

[ssh]
type = tcp
local_ip = 127.0.0.1
local_port = 22
remote_port = 2022
use_encryption = true
use_compression = true

frpc-second.ini

[common]
server_addr = xx.xx.xx.xx
server_port = 7000
token = abcabc

[ssh_second]
type = tcp
local_ip = 127.0.0.1
local_port = 22
remote_port = 2033
use_encryption = true
use_compression = true

[range:tcp_8000_8999]
type = tcp
local_ip = 127.0.0.1
local_port = 8000-8999
remote_port = 8000-8999

Steps to reproduce the issue:

  1. Run frps.
  2. Run frpc and frpc-second.
  3. Traffic is light, network latency is high, and the dashboard is rarely visited. After 1–2 days the dashboard stops responding.

Describe the results you received:

After running for 1–2 days, the dashboard becomes unresponsive.
frps log:

Dec 27 19:03:41 x-server frps[1368]: 2018/12/27 19:03:41 http: Accept error: accept tcp [::]:7500: accept4: too many open files; retrying in 1s
Dec 27 19:03:43 x-server frps[1368]: 2018/12/27 19:03:43 http: Accept error: accept tcp [::]:7500: accept4: too many open files; retrying in 1s
Dec 27 19:03:44 x-server frps[1368]: 2018/12/27 19:03:44 http: Accept error: accept tcp [::]:7500: accept4: too many open files; retrying in 1s
Dec 27 19:03:46 x-server frps[1368]: 2018/12/27 19:03:46 http: Accept error: accept tcp [::]:7500: accept4: too many open files; retrying in 1s
Dec 27 19:03:47 x-server frps[1368]: 2018/12/27 19:03:47 http: Accept error: accept tcp [::]:7500: accept4: too many open files; retrying in 1s
Dec 27 19:03:49 x-server frps[1368]: 2018/12/27 19:03:49 http: Accept error: accept tcp [::]:7500: accept4: too many open files; retrying in 1s
Dec 27 19:03:50 x-server frps[1368]: 2018/12/27 19:03:50 http: Accept error: accept tcp [::]:7500: accept4: too many open files; retrying in 1s
Dec 27 19:03:52 x-server frps[1368]: 2018/12/27 19:03:52 http: Accept error: accept tcp [::]:7500: accept4: too many open files; retrying in 1s
Dec 27 19:03:53 x-server frps[1368]: 2018/12/27 19:03:53 http: Accept error: accept tcp [::]:7500: accept4: too many open files; retrying in 1s
Dec 27 19:03:55 x-server frps[1368]: 2018/12/27 19:03:55 http: Accept error: accept tcp [::]:7500: accept4: too many open files; retrying in 1s
Dec 27 19:03:56 x-server frps[1368]: 2018/12/27 19:03:56 http: Accept error: accept tcp [::]:7500: accept4: too many open files; retrying in 1s
Dec 27 19:03:58 x-server frps[1368]: 2018/12/27 19:03:58 http: Accept error: accept tcp [::]:7500: accept4: too many open files; retrying in 1s
Dec 27 19:03:59 x-server frps[1368]: 2018/12/27 19:03:59 [W] [control.go:332] [f849a6d61085b767] new proxy [tcp_8000_8999_104] error: port unavailable
Dec 27 19:03:59 x-server frps[1368]: 2018/12/27 19:03:59 http: Accept error: accept tcp [::]:7500: accept4: too many open files; retrying in 1s
Dec 27 19:04:01 x-server frps[1368]: 2018/12/27 19:04:01 http: Accept error: accept tcp [::]:7500: accept4: too many open files; retrying in 1s
Dec 27 19:04:02 x-server frps[1368]: 2018/12/27 19:04:02 http: Accept error: accept tcp [::]:7500: accept4: too many open files; retrying in 1s
Dec 27 19:04:04 x-server frps[1368]: 2018/12/27 19:04:04 http: Accept error: accept tcp [::]:7500: accept4: too many open files; retrying in 1s
Dec 27 19:04:06 x-server frps[1368]: 2018/12/27 19:04:06 http: Accept error: accept tcp [::]:7500: accept4: too many open files; retrying in 1

Describe the results you expected:

The dashboard responds normally.

Additional information you deem important (e.g. issue happens only occasionally):

frps has 5135 file descriptors open in total:

$ sudo lsof -n|awk '{print $2}'|sort|uniq -c |sort -nr|more
   5135 1368
    497 2460
    380 1378

$ cat /proc/sys/fs/file-max
201234

$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 7887
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 102400
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 7887
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
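The per-PID descriptor count above came from post-processing full `lsof` output. A quicker sketch (assuming Linux with `/proc` mounted) counts one process's descriptors directly:

```shell
# Count open file descriptors for a single PID via /proc (Linux).
# 1368 was the frps PID in the logs above; the current shell ($$) is
# used here only as a stand-in PID so the snippet is self-contained.
pid=$$
fd_count=$(ls /proc/"$pid"/fd | wc -l)
echo "PID $pid holds $fd_count open file descriptors"
```

Watching this number grow over time (e.g. from cron) would show whether the leak is gradual or triggered by specific events.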

After restarting frpc-second, frps stopped retrying; but as soon as the dashboard was accessed once more, frps started printing the "retrying" log continuously again.

Can you point out what caused this issue (optional)


@fatedier commented on GitHub (Dec 28, 2018):

Check who is connecting to your dashboard, or restrict the dashboard so it is only reachable from the internal network.


@guyskk commented on GitHub (Dec 28, 2018):

I checked: nobody else is connecting to the dashboard, and the dashboard is only reachable from the internal network.


@fatedier commented on GitHub (Dec 28, 2018):

Check with a tool; each of those error lines in the log means a new connection request arrived.


@fatedier commented on GitHub (Dec 28, 2018):

Since you already know your way around command-line tools, it should be easy to check which connections are actually established; please analyze it yourself first.
Otherwise, without your environment and without detailed information, a single sentence gives us nothing to go on.


@guyskk commented on GitHub (Dec 28, 2018):

OK, I restarted frps as a temporary fix. I'll analyze it more carefully next time and see whether I can reproduce it.


@guyskk commented on GitHub (Jan 2, 2019):

When I restarted last time, I enabled frps debug logging. The problem has now reappeared: the dashboard is unreachable, and SSH through the reverse proxy still works but is several times slower than normal.

CPU usage:
[screenshot: capture-frps-htop]

Network connections, excluding the roughly 1000 listeners on 8xxx ports:

# lsof -nP -p 209691| grep -v '*:8'
COMMAND    PID USER   FD      TYPE             DEVICE SIZE/OFF       NODE NAME
frps    209691 root  cwd       DIR                8,1     4096          2 /
frps    209691 root  rtd       DIR                8,1     4096          2 /
frps    209691 root  txt       REG                8,1 10235584     264515 /home/kanhuang/opt/frp_0.22.0_linux_amd64/frps
frps    209691 root    0r      CHR                1,3      0t0          6 /dev/null
frps    209691 root    1u     unix 0xffff91eaf7e77000      0t0 3962629522 type=STREAM
frps    209691 root    2u     unix 0xffff91eaf7e77000      0t0 3962629522 type=STREAM
frps    209691 root    3w      REG                8,1   431873     310179 /home/kanhuang/opt/frp_0.22.0_linux_amd64/frps.log
frps    209691 root    4u  a_inode               0,13        0       9584 [eventpoll]
frps    209691 root    5u     IPv6         3962629560      0t0        TCP *:7000 (LISTEN)
frps    209691 root    6u     IPv6         3962629564      0t0        UDP *:7000
frps    209691 root    7u     IPv6         3962629565      0t0        UDP *:7001
frps    209691 root    8u     IPv6         3962629566      0t0        TCP *:7500 (LISTEN)
frps    209691 root    9u     IPv6         3962629569      0t0        TCP 10.176.13.48:7000->10.225.193.232:41324 (ESTABLISHED)
frps    209691 root   10u     IPv6         3962629571      0t0        TCP 10.176.13.48:7000->10.225.193.232:41326 (ESTABLISHED)
frps    209691 root   11u     IPv6         3962629574      0t0        TCP *:2022 (LISTEN)
frps    209691 root  640u     IPv6         3962631506      0t0        TCP *:2033 (LISTEN)
frps    209691 root 1013u     IPv6         4006801967      0t0        TCP 10.176.13.48:2022->10.225.230.96:54915 (ESTABLISHED)
frps    209691 root 1014u     IPv6         3962632660      0t0        TCP 10.176.13.48:2022->10.225.230.96:59925 (ESTABLISHED)
frps    209691 root 1015u     IPv6         3989545839      0t0        TCP 10.176.13.48:2022->10.225.230.96:58997 (ESTABLISHED)
frps    209691 root 1016u     IPv6         3989595420      0t0        TCP 10.176.13.48:2022->10.225.230.96:65299 (ESTABLISHED)
frps    209691 root 1017u     IPv6         3990161547      0t0        TCP 10.176.13.48:2022->10.225.224.192:63267 (ESTABLISHED)
frps    209691 root 1018u     IPv6         4006802035      0t0        TCP 10.176.13.48:2022->10.225.230.96:54922 (ESTABLISHED)
frps    209691 root 1019u     IPv6          278448551      0t0        TCP 10.176.13.48:2022->10.225.229.218:65303 (ESTABLISHED)
frps    209691 root 1020u     IPv6          126296052      0t0        TCP 10.176.13.48:8901->10.225.230.96:56262 (ESTABLISHED)
frps    209691 root 1021u     IPv6          528445611      0t0        TCP 10.176.13.48:2022->10.225.230.96:63088 (ESTABLISHED)
frps    209691 root 1022u     IPv6          126392700      0t0        TCP 10.176.13.48:2022->10.225.224.192:49257 (ESTABLISHED)
frps    209691 root 1023u     IPv6          280939021      0t0        TCP 10.176.13.48:2022->10.225.230.96:63063 (ESTABLISHED)

Logs:

# ls -lh frps*.log
-r--r----- 1 root root 224K Dec 27 23:59 frps.2018-12-27.log
-r--r----- 1 root root 540K Dec 28 23:59 frps.2018-12-28.log
-r--r----- 1 root root 440K Dec 29 23:59 frps.2018-12-29.log
-r--r----- 1 root root 1.6M Dec 30 23:59 frps.2018-12-30.log
-r--r----- 1 root root 440K Dec 31 23:59 frps.2018-12-31.log
-rw-rw---- 1 root root 426K Jan  1 21:25 frps.log

No errors at all:

# cat frps*.log | grep '[E]'
(no output)
# cat frps*.log | grep '[W]'
(no output)
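As an aside, `grep '[E]'` is a bracket expression matching any line containing a bare `E`, not the literal `[E]` log tag; that the commands above printed nothing is therefore surprising. A sketch of a fixed-string match that targets the bracketed level tag literally:

```shell
# '[E]' in a grep pattern is a character class (matches any "E");
# use -F to match the bracketed frp log-level tag as a literal string.
printf '%s\n' \
    '2019/01/01 00:00:00 [E] example error line' \
    '2019/01/01 00:00:00 [D] example debug line' \
  | grep -F '[E]'
```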

Log contents (heartbeat lines removed):

# cat frps*.log | grep -v 'receive heartbeat'
2019/01/01 19:43:22 [D] [proxy.go:686] [d62a88fd15dc1bf9] [tcp_8000_8999_901] join connections closed
2019/01/01 19:43:22 [D] [proxy.go:686] [7f9c1972bdd6702c] [ssh] join connections closed
2019/01/01 19:43:22 [D] [proxy.go:122] [7f9c1972bdd6702c] [ssh] get a user connection [10.225.229.218:65154]
2019/01/01 19:43:22 [D] [proxy.go:122] [7f9c1972bdd6702c] [ssh] get a user connection [10.225.229.218:65256]
2019/01/01 19:43:22 [D] [control.go:135] [d62a88fd15dc1bf9] new work connection registered
2019/01/01 19:43:23 [D] [control.go:135] [d62a88fd15dc1bf9] new work connection registered
2019/01/01 19:43:23 [D] [control.go:135] [7f9c1972bdd6702c] new work connection registered
2019/01/01 19:43:23 [I] [proxy.go:87] [7f9c1972bdd6702c] [ssh] get a new work connection: [10.225.193.232:41324]
2019/01/01 19:43:23 [D] [proxy.go:678] [7f9c1972bdd6702c] [ssh] join connections, workConn(l[10.176.13.48:7000] r[10.225.193.232:41324]) userConn(l[10.176.13.48:2022] r[10.225.229.218:65256])
2019/01/01 19:43:23 [D] [control.go:135] [7f9c1972bdd6702c] new work connection registered
2019/01/01 19:43:23 [I] [proxy.go:87] [7f9c1972bdd6702c] [ssh] get a new work connection: [10.225.193.232:41324]
2019/01/01 19:43:23 [D] [proxy.go:678] [7f9c1972bdd6702c] [ssh] join connections, workConn(l[10.176.13.48:7000] r[10.225.193.232:41324]) userConn(l[10.176.13.48:2022] r[10.225.229.218:65154])
2019/01/01 19:43:23 [D] [control.go:135] [7f9c1972bdd6702c] new work connection registered
2019/01/01 19:43:23 [D] [control.go:135] [d62a88fd15dc1bf9] new work connection registered
2019/01/01 19:43:23 [D] [proxy.go:686] [7f9c1972bdd6702c] [ssh] join connections closed
2019/01/01 19:43:23 [D] [proxy.go:122] [7f9c1972bdd6702c] [ssh] get a user connection [10.225.229.218:65303]
2019/01/01 19:43:23 [D] [control.go:162] [7f9c1972bdd6702c] get work connection from pool
2019/01/01 19:43:23 [I] [proxy.go:87] [7f9c1972bdd6702c] [ssh] get a new work connection: [10.225.193.232:41324]
2019/01/01 19:43:23 [D] [proxy.go:678] [7f9c1972bdd6702c] [ssh] join connections, workConn(l[10.176.13.48:7000] r[10.225.193.232:41324]) userConn(l[10.176.13.48:2022] r[10.225.229.218:65303])
2019/01/01 19:43:23 [D] [control.go:135] [7f9c1972bdd6702c] new work connection registered
2019/01/01 19:43:23 [D] [control.go:135] [7f9c1972bdd6702c] new work connection registered
2019/01/01 19:43:23 [D] [control.go:135] [7f9c1972bdd6702c] new work connection registered
2019/01/01 19:43:23 [D] [control.go:135] [7f9c1972bdd6702c] new work connection registered
2019/01/01 19:43:28 [D] [proxy.go:686] [7f9c1972bdd6702c] [ssh] join connections closed
2019/01/01 19:43:28 [D] [proxy.go:122] [7f9c1972bdd6702c] [ssh] get a user connection [10.225.230.96:63063]
2019/01/01 19:43:28 [D] [control.go:162] [7f9c1972bdd6702c] get work connection from pool
2019/01/01 19:43:28 [I] [proxy.go:87] [7f9c1972bdd6702c] [ssh] get a new work connection: [10.225.193.232:41324]
2019/01/01 19:43:28 [D] [proxy.go:678] [7f9c1972bdd6702c] [ssh] join connections, workConn(l[10.176.13.48:7000] r[10.225.193.232:41324]) userConn(l[10.176.13.48:2022] r[10.225.230.96:63063])
2019/01/01 19:43:28 [D] [control.go:135] [7f9c1972bdd6702c] new work connection registered
2019/01/01 19:51:44 [D] [proxy.go:686] [d62a88fd15dc1bf9] [tcp_8000_8999_901] join connections closed
2019/01/01 19:51:44 [D] [proxy.go:122] [7f9c1972bdd6702c] [ssh] get a user connection [10.225.230.96:63088]
2019/01/01 19:51:44 [D] [control.go:162] [7f9c1972bdd6702c] get work connection from pool
2
... (a large amount of similar output omitted)

frps is still running right now, so I can provide more information if needed. I don't know how to solve this — thanks!


@fatedier commented on GitHub (Jan 2, 2019):

What you need to look at are the connections on port 7500.
Also, opening this many ports requires sufficient bandwidth; use the simplest possible configuration to isolate the problem.
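One way to check port-7500 sockets without wading through full `lsof` output is to read `/proc/net/tcp` directly (a sketch assuming Linux; the `local_address` field there encodes the port in hex, and 7500 decimal is 0x1D4C):

```shell
# Count TCP sockets whose local port matches the dashboard_port (7500)
# by scanning /proc/net/tcp and /proc/net/tcp6; field 2 is "HEXADDR:HEXPORT".
port=7500
port_hex=$(printf '%04X' "$port")
count=$(cat /proc/net/tcp /proc/net/tcp6 2>/dev/null \
        | awk -v p=":${port_hex}$" '$2 ~ p {n++} END {print n+0}')
echo "TCP sockets on local port $port: $count"
```

A count that keeps climbing while `lsof` shows nothing on 7500 would point at sockets stuck in non-established states.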


@guyskk commented on GitHub (Jan 2, 2019):

All the connections are listed above; there is no connection on port 7500, and even a request made directly on the server gets no response:

$ curl -v http://127.0.0.1:7500/
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 7500 (#0)
> GET / HTTP/1.1
> Host: 127.0.0.1:7500
> User-Agent: curl/7.47.0
> Accept: */*
>
... hangs indefinitely

As for the 8xxx ports, they are almost all idle; in the connection list above you can see only a single one established: TCP 10.176.13.48:8901->10.225.230.96:56262 (ESTABLISHED).


@fatedier commented on GitHub (Jan 2, 2019):

It's because you configured too large a port range; the dashboard API currently does not handle that well. Remove those ranges and try again. And in that situation, avoid the API endpoint that fetches all the data at once.


@guyskk commented on GitHub (Jan 2, 2019):

OK, I'll reduce the port range first.


@guyskk commented on GitHub (Jan 9, 2019):

Solved: after reducing the port range to 100 ports, frps runs stably.
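For reference, every port in a `range:` section gives frps its own TCP listener (and thus an open file descriptor) even when idle, so the original 8000-8999 range consumed roughly 1000 descriptors before any user traffic. A reduced section along the lines of what resolved this (the port numbers here are illustrative, not taken from the final config):

```ini
[range:tcp_8000_8099]
type = tcp
local_ip = 127.0.0.1
local_port = 8000-8099
remote_port = 8000-8099
```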
