[GH-ISSUE #1289] Stream interruptions and 404 errors under high concurrency and heavy traffic #1018

Closed
opened 2026-05-05 12:39:23 -06:00 by gitea-mirror · 12 comments

Originally created by @kasuganosoras on GitHub (Jun 15, 2019).
Original GitHub issue: https://github.com/fatedier/frp/issues/1289

This issue tracker is only used for submitting bug reports and documentation typos. If the same issue already exists or the answer can be found in the documentation, it will be closed directly.
(To save time and improve the efficiency of handling issues, issues that do not follow the template will be closed directly.)

Use the commands below to provide key information from your environment:
You do NOT have to include this information if this is a FEATURE REQUEST

**What version of frp are you using (`./frpc -v` or `./frps -v`)?**
0.17.0

**What operating system and processor architecture are you using (`go env`)?**
Both frps and frpc run on CentOS 7.6, Linux amd64.
frps server: 2 × L5630, 32 GB RAM, 1 Gbps uplink
frpc server: E5-2698 v3, 64 GB RAM, 100 Mbps uplink

**Configuration you used:**
Frps:

```ini
[common]
bind_addr = 0.0.0.0
bind_port = 2333
kcp_bind_port = 2333
dashboard_port = 8233
dashboard_user = <Username>
dashboard_pwd = <Password>
vhost_http_port = 80
vhost_https_port = 443
log_file = ./frps.log
log_level = info
log_max_days = 3
token = <Token>
max_pool_count = 50
tcp_mux = true
authentication_timeout = 0
bind_udp_port = 7001
```

Frpc:

```ini
[common]
server_addr = <Server IP>
server_port = 2333
admin_addr = 127.0.0.1
admin_port = 7401
tcp_mux = true
authentication_timeout = 0
auth_token = <Password>
token = <Token>

[example_com]
privilege_mode = true
type = http
local_ip = 127.0.0.1
local_port = 80
use_gzip = true
custom_domains = example.com

[ssl_example_com]
privilege_mode = true
type = https
local_ip = 127.0.0.1
local_port = 443
use_gzip = true
custom_domains = example.com
```

**Steps to reproduce the issue:**

  1. Start frps and connect frpc to the server
  2. Begin the stress test
  3. After a while, streams start dropping and web pages return 404

**Describe the results you received:**
Downloads break off mid-transfer, web pages return 404 errors, and the client log shows:

```
[E] [control.go:150] [df3e6b05010cce86] work connection closed, EOF
```

**Describe the results you expected:**
Normal operation.

**Additional information you deem important (e.g. issue happens only occasionally):**
This happens in a production environment handling roughly 1,500 requests per minute and about 3.5 TB of traffic per day; under that load the client frequently logs the `work connection closed, EOF` error.
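(For scale, assuming the traffic is spread evenly over the day, 3.5 TB/day averages to roughly 3.5 × 10¹² B × 8 ÷ 86 400 s ≈ 324 Mbit/s; peaks would be higher.)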

When the failure occurs, the system load on both frps and frpc is normal (CPU, memory, disk I/O). I also tested with HAProxy as the outermost layer, reverse-proxying the backend Nginx directly: the machine still responded normally at 20,000 concurrent connections, so insufficient machine resources can basically be ruled out.

I also used a CC stress-testing tool on another server against the http mapping; the same problem appears once the request rate reaches 2,000 per minute.
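For reference, a minimal Go sketch of the kind of concurrent HTTP load test described above; the target URL, worker count, and request count are placeholders, not the values from the original tests:

```go
// Minimal concurrent HTTP load-test sketch. All parameters are
// illustrative placeholders; this is not the CC tool used above.
package main

import (
	"fmt"
	"io"
	"net/http"
	"sync"
	"sync/atomic"
)

func main() {
	const (
		target  = "http://example.com/" // hypothetical mapped domain
		workers = 100                   // concurrent "threads"
		reqs    = 1000                  // requests per worker
	)
	var failures int64
	var wg sync.WaitGroup
	client := &http.Client{}
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < reqs; j++ {
				resp, err := client.Get(target)
				if err != nil {
					atomic.AddInt64(&failures, 1)
					continue
				}
				// Treat frps' default 404 page as a failure.
				if resp.StatusCode == http.StatusNotFound {
					atomic.AddInt64(&failures, 1)
				}
				io.Copy(io.Discard, resp.Body) // drain so the connection can be reused
				resp.Body.Close()
			}
		}()
	}
	wg.Wait()
	fmt.Printf("failures: %d / %d requests\n", failures, workers*reqs)
}
```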

Because we have made custom modifications to the client, we may not be able to upgrade to the latest frp for now; we are still on 0.17.0 and hope the problem can be fixed without changing versions :)

**Can you point out what caused this issue (optional)**
My guess is that tcp_mux may be responsible. I tried disabling the feature, but it made no obvious difference.
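For context on that guess: with `tcp_mux = true`, frp multiplexes the work connections as logical streams over a single underlying TCP connection (using the hashicorp/yamux library), so if that one connection stalls or breaks, every stream on it is affected at once. A minimal sketch of the mechanism, with a hypothetical address, standing in for rather than reproducing frp's actual code:

```go
// Sketch of TCP stream multiplexing with hashicorp/yamux, the library
// frp uses for tcp_mux. Address and payloads are hypothetical.
package main

import (
	"fmt"
	"log"
	"net"

	"github.com/hashicorp/yamux"
)

func main() {
	conn, err := net.Dial("tcp", "127.0.0.1:2333") // hypothetical yamux server
	if err != nil {
		log.Fatal(err)
	}
	session, err := yamux.Client(conn, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()
	// Many logical streams share the one TCP connection; losing that
	// connection (e.g. under load) kills all of them together.
	for i := 0; i < 3; i++ {
		stream, err := session.Open()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Fprintf(stream, "stream %d\n", i)
		stream.Close()
	}
}
```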


@fatedier commented on GitHub (Jun 16, 2019):

  1. `work connection closed, EOF` is the normal log line for a user connection being closed; it is not necessarily an error.
  2. The achievable QPS depends on various resource limits. Please verify the resource bottleneck through your own testing before asking.

@kasuganosoras commented on GitHub (Jun 16, 2019):

Thanks for the reply. Some additional details:

  1. Tested: when I use a stress-testing tool to run a CC attack against the site, frps intermittently returns 404 errors at 1,000 threads (the default frps 404 page). At 1,500 threads frps even hangs completely, and it only recovers after manually stopping and restarting frpc.
  2. During the stress test, the CPU load on both frps and frpc is normal. Stress-testing the service behind frpc (Nginx) directly, it responds normally at 2,000 threads with no errors.
  3. Replacing frp with HAProxy and stress-testing at 2,500 threads, the site also responds normally with no errors.
  4. Whenever the 404 errors appear, the frpc log shows `work connection closed, EOF`.

What I want to know is which layer the problem is in: frps or frpc?

![img](https://i.zerodream.net/22a485d4409bc2529e020221cd42f031.png)


@fatedier commented on GitHub (Jun 17, 2019):

  1. `work connection closed, EOF` is the normal log line for a user connection being closed; it is not necessarily an error.
  2. Many factors affect QPS and concurrency. Without detailed data and a concrete analysis of the environment, it is hard to jump to conclusions. I suggest running more tests yourself and comparing with controlled variables; that is an effective way to find which factor is actually the bottleneck.

@kasuganosoras commented on GitHub (Jun 26, 2019):

The errors have changed over the past couple of days. This is the client-side output:

```
2019/06/26 13:38:47 [E] [control.go:150] [0bc4ab3884d49c00] work connection closed, broken pipe
2019/06/26 13:38:47 [E] [control.go:150] [0bc4ab3884d49c00] work connection closed, broken pipe
2019/06/26 13:38:47 [E] [control.go:150] [0bc4ab3884d49c00] work connection closed, broken pipe
2019/06/26 13:38:47 [W] [control.go:283] [0bc4ab3884d49c00] read error: broken pipe
2019/06/26 13:38:47 [I] [control.go:303] [0bc4ab3884d49c00] control writer is closing
2019/06/26 13:38:47 [I] [control.go:401] [0bc4ab3884d49c00] try to reconnect to server...
2019/06/26 13:38:47 [I] [control.go:242] [0bc4ab3884d49c00] login to server success, get run id [0bc4ab3884d49c00], server udp port [7001]
```

And this is the server side:

```
2019/06/26 13:41:03 [W] [control.go:172] [7b1c5c8f7f3d0575] no work connections avaiable, control is closed
2019/06/26 13:41:03 [W] [proxy.go:83] [7b1c5c8f7f3d0575] [s2.natfrp.org:60500   7415-19010] failed to get work connection: control is closed
2019/06/26 13:41:03 [W] [control.go:172] [7b1c5c8f7f3d0575] no work connections avaiable, control is closed
2019/06/26 13:41:03 [W] [control.go:172] [7b1c5c8f7f3d0575] no work connections avaiable, control is closed
2019/06/26 13:41:03 [W] [proxy.go:83] [7b1c5c8f7f3d0575] [s2.natfrp.org:11194   7415-22761] failed to get work connection: control is closed
2019/06/26 13:41:03 [W] [proxy.go:83] [7b1c5c8f7f3d0575] [s2.natfrp.org:61701   7415-20141] failed to get work connection: control is closed
2019/06/26 13:41:03 [I] [proxy.go:72] [7b1c5c8f7f3d0575] [s2.natfrp.org:60500   7415-19010] proxy closing
2019/06/26 13:41:03 [W] [control.go:172] [7b1c5c8f7f3d0575] no work connections avaiable, control is closed
2019/06/26 13:41:03 [W] [proxy.go:83] [7b1c5c8f7f3d0575] [s2.natfrp.org:64500   7415-19011] failed to get work connection: control is closed
```

The result is that visitors hit the frps 404 error page even more often; the site is basically down now.
One frps instance currently serves roughly 700-800 clients. Could it simply be too many clients?


@fatedier commented on GitHub (Jun 26, 2019):

That depends on your bandwidth and on how active those clients are. It is also related to the number of proxies you have configured; a single client with many proxies configured has some impact as well.


@kasuganosoras commented on GitHub (Aug 3, 2019):

> That depends on your bandwidth and on how active those clients are. It is also related to the number of proxies you have configured; a single client with many proxies configured has some impact as well.

After a month of investigation, I finally figured out the cause. My application stack is NGINX + php-fpm. Because it used HTTP/2, a large number of long-lived connections were established, and since php-fpm could not keep up, many connections were not closed in time. The connection count kept growing until it dragged the frps server down.

I have now rewritten the backend with Swoole and gone back to HTTP/1.0 (the client software polls, so long-lived connections are unnecessary). Running the high-concurrency test again, frps handles 2,000 threads + 512 connections without any strain, which confirms the error was caused by an excessive number of connections.

Finally, many thanks to the author for the help :)


@fatedier commented on GitHub (Aug 3, 2019):

@kasuganosoras Your investigative spirit deserves praise. 👍

Long-lived connections are not a problem in themselves; the point is that you may want to set an idle timeout appropriate to your workload and release connections that exceed it. Otherwise, short-connection polling adds overhead, and at high QPS it can leave a large number of connections in TIME_WAIT, which also tends to cause problems.
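To illustrate the idle-timeout suggestion: most HTTP servers expose such a knob. Below is a minimal sketch using Go's standard net/http server; it is purely illustrative (the backend in this thread was NGINX + php-fpm and later Swoole, not Go), and the timeout values are assumptions to be adapted to the workload:

```go
// Illustrative only: an HTTP server that closes idle keep-alive
// connections instead of letting them accumulate behind the proxy.
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	srv := &http.Server{
		Addr: "127.0.0.1:8080",
		// Release keep-alive connections idle for more than 60s so the
		// connection count stays bounded when clients hold connections open.
		IdleTimeout:       60 * time.Second,
		ReadHeaderTimeout: 10 * time.Second,
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			fmt.Fprintln(w, "ok")
		}),
	}
	log.Fatal(srv.ListenAndServe())
}
```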


@minringcheng commented on GitHub (Jan 1, 2021):

I am hitting this problem too, on 0.34.3. I cannot even follow how the person above solved it; I am too inexperienced.


@minringcheng commented on GitHub (Jan 1, 2021):

Also, in my tests the stream drops even without HTTP/2: the lightest stress test breaks it, and frpc likewise has to be restarted.


@minringcheng commented on GitHub (Jan 1, 2021):

When the stream drops, the log looks like this: a few `work connection closed before response StartWorkConn message: EOF` lines, then a large number of `join connections closed` lines, followed by a long run of successfully sent and received heartbeats. There is nothing else in between, and the site cannot be accessed.

```
2020/12/31 08:08:21 [E] [control.go:157] [ce79e2a1ed243b0e] work connection closed before response StartWorkConn message: EOF
2020/12/31 08:08:21 [E] [control.go:157] [ce79e2a1ed243b0e] work connection closed before response StartWorkConn message: EOF
2020/12/31 08:08:21 [E] [control.go:157] [ce79e2a1ed243b0e] work connection closed before response StartWorkConn message: EOF
2020/12/31 08:08:21 [E] [control.go:157] [ce79e2a1ed243b0e] work connection closed before response StartWorkConn message: EOF
2020/12/31 08:08:21 [E] [control.go:157] [ce79e2a1ed243b0e] work connection closed before response StartWorkConn message: EOF
2020/12/31 08:08:21 [E] [control.go:157] [ce79e2a1ed243b0e] work connection closed before response StartWorkConn message: EOF
2020/12/31 08:08:21 [E] [control.go:157] [ce79e2a1ed243b0e] work connection closed before response StartWorkConn message: EOF
2020/12/31 08:08:21 [E] [control.go:157] [ce79e2a1ed243b0e] work connection closed before response StartWorkConn message: EOF
2020/12/31 08:08:21 [E] [control.go:157] [ce79e2a1ed243b0e] work connection closed before response StartWorkConn message: EOF
2020/12/31 08:08:21 [E] [control.go:157] [ce79e2a1ed243b0e] work connection closed before response StartWorkConn message: EOF
2020/12/31 08:08:22 [D] [proxy.go:778] [ce79e2a1ed243b0e] [http_tcp] join connections closed
2020/12/31 08:08:23 [D] [proxy.go:778] [ce79e2a1ed243b0e] [http_tcp] join connections closed
2020/12/31 08:08:26 [D] [proxy.go:778] [ce79e2a1ed243b0e] [http_tcp] join connections closed
2020/12/31 08:08:26 [D] [proxy.go:778] [ce79e2a1ed243b0e] [http_tcp] join connections closed
2020/12/31 08:08:27 [D] [proxy.go:778] [ce79e2a1ed243b0e] [http_tcp] join connections closed
2020/12/31 08:08:27 [D] [proxy.go:778] [ce79e2a1ed243b0e] [http_tcp] join connections closed
2020/12/31 08:08:31 [D] [proxy.go:778] [ce79e2a1ed243b0e] [http_tcp] join connections closed
2020/12/31 08:08:38 [D] [proxy.go:778] [ce79e2a1ed243b0e] [http_tcp] join connections closed
```

@fatedier commented on GitHub (Jan 5, 2021):

@qianyuqianhe Do you have an example that would make it easy for others to reproduce this locally?


@minringcheng commented on GitHub (Jan 6, 2021):

> @qianyuqianhe Do you have an example that would make it easy for others to reproduce this locally?

No, and that is the most frustrating part. It just happened again. Last weekend, before putting frp in front, I stress-tested inside the LAN and everything was fine, so I need to keep looking for the cause. One difference: the LAN stress test used plain HTTP, while production is HTTPS over HTTP/2. Combined with what the earlier poster said, I plan to start my analysis from there.
