[GH-ISSUE #4049] High CPU/Network Usage in frps #3204

Closed
opened 2026-05-05 14:04:12 -06:00 by gitea-mirror · 7 comments
Owner

Originally created by @martinleopold on GitHub (Mar 8, 2024).
Original GitHub issue: https://github.com/fatedier/frp/issues/4049

Bug Description

I'm trying a simple setup (mainly for https), with frps running on a cloud server (Ubuntu 20.04.6).
I am noticing pretty high CPU and network usage – no client connected at all – simply by running frps.

top shows frps constantly at around 40% CPU.
[Screenshot 2024-03-08 at 17 05 11]

In the hosting control panel it looks even worse.  Notice how CPU and network went down near the end of the graph, that's when I killed frps.
[Screenshot 2024-03-08 at 17 16 34 copy]

frpc Version

0.54.0

frps Version

0.54.0

System Architecture

linux/x86_64

Configurations

frps.toml:

```
bindAddr = "tunnel.example.com"
bindPort = 7000
auth.token = "{{ .Envs.FRP_AUTH_TOKEN }}"
subDomainHost = "example.com"
vhostHTTPPort = 80
vhostHTTPSPort = 443
```

Logs

OS/Machine Info:

```
OS: Ubuntu 20.04.6 LTS x86_64
Host: vServer 20171111
Kernel: 5.4.0-173-generic
Uptime: 1 day, 5 hours, 39 mins
Packages: 705 (dpkg), 5 (snap)
Shell: bash 5.0.17
Resolution: 1024x768
Terminal: /dev/pts/0
CPU: Intel Xeon (Skylake, IBRS) (2) @ 2.099GHz
GPU: 00:02.0 Vendor 1234 Device 1111
Memory: 268MiB / 3823MiB
```

Steps to reproduce

  1. Run `frps -c frps.toml` with the above config file
  2. Observe CPU usage

Affected area

  • [ ] Docs
  • [ ] Installation
  • [x] Performance and Scalability
  • [ ] Security
  • [ ] User Experience
  • [ ] Test and Release
  • [ ] Developer Infrastructure
  • [ ] Client Plugin
  • [ ] Server Plugin
  • [ ] Extensions
  • [ ] Others

@martinleopold commented on GitHub (Mar 8, 2024):

Short update: I've noticed the issue goes away when changing `vhostHTTPSPort` to something other than `443`, e.g.

```
vhostHTTPSPort = 444
```

or

```
vhostHTTPSPort = 44344
```

then CPU goes down to 0%.

BTW: In any case – even when CPU usage is high – I can connect with `frpc` just fine and everything works.


@fatedier commented on GitHub (Mar 11, 2024):

It is very normal for services exposed on the public network to be accessed or scanned; you can troubleshoot the traffic on port 443 by capturing packets.
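One way to act on this suggestion is a quick packet capture on the vhost port. A minimal sketch (requires root; the interface choice and packet count are just examples, not from the original report):

```shell
# Capture a sample of traffic hitting the HTTPS vhost port.
# -n: skip DNS lookups, -i any: listen on all interfaces, -c 50: stop after 50 packets.
tcpdump -n -i any -c 50 'tcp port 443'
```

Repeated short bursts from the same sources with no valid TLS ClientHello would be consistent with the scanner traffic described here.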


@martinleopold commented on GitHub (Mar 11, 2024):

You are right, didn't think about that.

There are ~5 incoming TCP connections per second on port 443. Each connection attempt has about 8-9 packets exchanged with about 500 bytes of data total.
[Screenshot 2024-03-11 at 11 47 50 copy]

`frps` with a log level of `debug` or `trace` shows tons of messages like this:

```
2024/03/11 11:20:51 [D] [vhost.go:206] get hostname from http/https request error: tls: first record does not look like a TLS handshake
```

So I am thinking:

  • Is there any way the server can limit the amount of processing required for these invalid connections? (I guess this is unlikely, given that the amount of data per connection already seems very small.)
  • Could you add more comprehensive logging for failed connection or authentication attempts that includes the remote IP (and port)? Then a tool such as fail2ban could be used to ban hosts with repeated failed connection attempts, based on the log file.
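For illustration, if frps did include the peer address in such log lines, a fail2ban filter could key on it. Everything below is hypothetical: a `remote addr` field like this does not exist in frps log output today, and the file paths and jail values are made up:

```
# /etc/fail2ban/filter.d/frps.conf  (hypothetical log format)
[Definition]
failregex = get hostname from http/https request error: .* remote addr: <HOST>
```

```
# /etc/fail2ban/jail.d/frps.local  (illustrative values)
[frps]
enabled  = true
port     = 443
logpath  = /var/log/frps.log
maxretry = 5
findtime = 60
bantime  = 3600
```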

@fatedier commented on GitHub (Mar 11, 2024):

Perhaps there are more specialized tools/proxies available that can be used to identify and configure some simple protection rules.

Currently, frp will not make many changes in this regard; this is more the capability of a WAF gateway.


@martinleopold commented on GitHub (Mar 15, 2024):

Right, this kind of protection is not the responsibility of frp. This is a small/hobby project so I can't invest in extra services – but I've managed to get the CPU down to idle levels by simply rate limiting incoming connections to the frp server.

~~Still, would you be willing to accept a PR to include the IP of failed connections in the log output? I feel this could help elsewhere as well, debugging other issues...~~
EDIT: Looking at the code, might be too hard for me actually, to get this right ;)
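The rate limiting mentioned above can be done at the firewall, before the packets ever reach frps. A minimal sketch using the iptables `recent` match (assumptions: the `frps_https` list name and the 6-new-connections-per-minute threshold are arbitrary choices, not from this thread; requires root):

```shell
# Drop sources that open more than 5 new connections to port 443 within 60 seconds.
iptables -A INPUT -p tcp --dport 443 -m conntrack --ctstate NEW \
    -m recent --name frps_https --update --seconds 60 --hitcount 6 -j DROP
# Record every remaining new connection attempt in the frps_https list.
iptables -A INPUT -p tcp --dport 443 -m conntrack --ctstate NEW \
    -m recent --name frps_https --set
```

The drop rule must come first so that a source already over the threshold is rejected before its attempt is recorded again.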


@fatedier commented on GitHub (Mar 18, 2024):

> Still, would you be willing to accept a PR to include the IP of failed connections in the log output? I feel this could help elsewhere as well, debugging other issues...
> EDIT: Looking at the code, might be too hard for me actually, to get this right ;)

Yes, we have similar plans in the refactoring of the v2 major version. This is a relatively long-term and complex plan, which is also related to other aspects of refactoring. I will write this part of the code myself.

At this stage, the main focus is on gathering requirements. Your feedback will be helpful for how we will refactor in the future.


@github-actions[bot] commented on GitHub (Apr 9, 2024):

Issues go stale after 21d of inactivity. Stale issues rot after an additional 7d of inactivity and eventually close.
