[GH-ISSUE #3019] Heartbeat seems to block http request #2412

Closed
opened 2026-05-05 13:33:05 -06:00 by gitea-mirror · 2 comments
Owner

Originally created by @N1v0k on GitHub (Jul 15, 2022).
Original GitHub issue: https://github.com/fatedier/frp/issues/3019

Bug Description

I have a simple local HTTP server, and frpc forwards requests to it.

I noticed that many of my requests took around 29 s (sometimes less); Chrome attributed the time to TTFB, so the browser was waiting for the first byte.

I have control over the HTTP server, so I can see when the first byte reaches it.

Essentially, what I'm seeing is this:

2022/07/15 17:05:13 [D] [proxy_wrapper.go:231] [1f2a8fdeee79bb1b] [test-proxy] start a new work connection, localAddr: 10.0.1.17:58084 remoteAddr: ??.??.??.??:21
2022/07/15 17:05:13 [T] [proxy.go:734] [1f2a8fdeee79bb1b] [test-proxy] handle tcp work connection, use_encryption: true, use_compression: false

2022/07/15 17:05:13 [D] [proxy.go:799] [1f2a8fdeee79bb1b] [test-proxy] join connections, localConn(l[127.0.0.1:45896] r[127.0.0.1:4205]) workConn(l[10.0.1.17:58084] r[??.??.??.??:21])

2022/07/15 17:05:39 [D] [control.go:360] [1f2a8fdeee79bb1b] send heartbeat to server
2022/07/15 17:05:39 [D] [control.go:391] [1f2a8fdeee79bb1b] receive heartbeat from server

my http server: 
2022-07-15T17:05:40.194Z: receive data from incoming 387

As you can see from the timestamps, there is about a 30 s delay.
I investigated this and came to the conclusion that the heartbeat may block the request forwarding.

I then set heartbeat_interval = 1, and now requests go through immediately:

2022/07/15 17:45:11 [D] [proxy_wrapper.go:231] [1f2a8fdeee79bb1b] [test-proxy] start a new work connection, localAddr: 10.0.1.17:58084 remoteAddr: ??.??.??.??:21
2022/07/15 17:45:11 [T] [proxy.go:734] [1f2a8fdeee79bb1b] [test-proxy] handle tcp work connection, use_encryption: true, use_compression: false

2022/07/15 17:45:11 [D] [proxy.go:799] [1f2a8fdeee79bb1b] [test-proxy] join connections, localConn(l[127.0.0.1:45896] r[127.0.0.1:4205]) workConn(l[10.0.1.17:58084] r[??.??.??.??:21])

2022/07/15 17:45:11 [D] [control.go:360] [1f2a8fdeee79bb1b] send heartbeat to server
2022/07/15 17:45:12 [D] [control.go:391] [1f2a8fdeee79bb1b] receive heartbeat from server

my server: 
2022-07-15T17:45:12.214Z: receive data from incoming 387

If I disable heartbeats completely, frps just returns a 404 Not Found page after a while.

frpc Version

0.44.0

frps Version

0.44.0

System Architecture

linux/amd64

Configurations

frpc.ini

[common]
server_addr = my.proxy-server.com
server_port = 21
log_level = trace
token=acbdefgh
disable_log_color = true
heartbeat_timeout = -1
heartbeat_interval = 1

[test-proxy]
type = http
custom_domains = my.proxy-actual.com
use_encryption = true
locations = /target1
http_user = user
http_pwd = pass
local_port = 4205

frps.ini

[common]
bind_addr = 0.0.0.0
bind_port = 21
bind_udp_port = 20
kcp_bind_port = 21

vhost_http_port = 81
vhost_https_port = 444

dashboard_addr = 0.0.0.0
dashboard_port = 7521

dashboard_user = user
dashboard_pwd = pass

enable_prometheus = false
log_level = trace
log_max_days = 3
disable_log_color = false
detailed_errors_to_client = true
authentication_method = token
authenticate_heartbeats = false

authenticate_new_work_conns = false
token = abcdefgh
allow_ports = 2000-3000,3001,3003,4000-50000
max_ports_per_client = 0
tls_only = false
tls_enable = false
tcp_keepalive = 60000
udp_packet_size = 1500
pprof_enable = false

Logs

No response

Steps to reproduce

...

Affected area

  • [ ] Docs
  • [ ] Installation
  • [ ] Performance and Scalability
  • [ ] Security
  • [ ] User Experience
  • [ ] Test and Release
  • [ ] Developer Infrastructure
  • [ ] Client Plugin
  • [ ] Server Plugin
  • [ ] Extensions
  • [x] Others
gitea-mirror 2026-05-05 13:33:05 -06:00
Author
Owner

@fatedier commented on GitHub (Jul 16, 2022):

Unable to reproduce.

Can you use the simplest configuration and test it on a local node to see if the problem still exists? You can also test with another public HTTP service.
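For such a local test, a stripped-down pair of configs might look like this (a sketch with hypothetical ports and domain names, not the reporter's actual setup):

```ini
; frps_min.ini - minimal server config
[common]
bind_port = 7000
vhost_http_port = 8080

; frpc_min.ini - minimal client config
[common]
server_addr = 127.0.0.1
server_port = 7000
log_level = trace
; leave heartbeat_interval at its default (30 s) first to see whether
; the ~30 s delay appears, then lower it and compare

[test-proxy]
type = http
custom_domains = test.local
local_port = 4205
```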

Author
Owner

@github-actions[bot] commented on GitHub (Aug 16, 2022):

Issues go stale after 30d of inactivity. Stale issues rot after an additional 7d of inactivity and eventually close.
