[GH-ISSUE #4382] Parallel requests performance #3460

Closed
opened 2026-05-05 14:13:36 -06:00 by gitea-mirror · 3 comments
Owner

Originally created by @blue-genie on GitHub (Aug 15, 2024).
Original GitHub issue: https://github.com/fatedier/frp/issues/4382

Bug Description

This is not really a bug, but I would like the community's recommendation on how to set this up for my use case.

I have an in-house web server running frpc, with two ISPs: one primary and a second as backup for when the primary goes down.
I also have an AWS server running frps, which handles XXX.ZZZ.com and YYY.ZZZ.com and is supposed to forward requests to the in-house server.
The client and server configurations are below.
I have a page that loads about 30 MB on initial load, in 20-25 images that are supposed to load in parallel.

frpc Version

fatedier/frps:v0.59.0

frps Version

fatedier/frps:v0.59.0

System Architecture

linux/amd64 on all servers

Configurations

```
# frps.toml
bindPort = 7000
vhostHTTPPort = 80
vhostHTTPSPort = 9443
```

```
# frpc.toml
serverAddr = "XXX.XXX.XXX.XXX"
serverPort = 7000

[[proxies]]
name = "XXXX-https"
#type = "tcp"
type = "https"
localIP = "nginx"
localPort = 443
customDomains = ["XXX.ZZZ.com", "YYY.ZZZ.com"]

# now v1 and v2 are supported
transport.proxyProtocolVersion = "v2"

[[proxies]]
name = "XXX-http"
type = "http"
localIP = "nginx"
localPort = 80
customDomains = ["XXX.ZZZ.com", "YYY.ZZZ.com"]
```
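For higher parallel throughput, frp's transport options are worth experimenting with. A hedged sketch of settings to try on the client side (the values are illustrative, not recommendations; check the frp documentation for your version):

```toml
# frpc.toml additions (sketch)
# Pre-open connections to frps so new requests do not pay the dial cost.
transport.poolCount = 20
# TCP multiplexing is on by default; many parallel streams then share one
# underlying TCP connection, so a single slow or lossy path can stall all
# of them. Disabling it gives each proxy connection its own TCP connection,
# at the cost of more sockets.
transport.tcpMux = false
```

Note that `transport.tcpMux` must be set to the same value on both frpc and frps, or the client will fail to connect.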

Logs

No errors; this is just an optimization question.

Steps to reproduce

Using the same client computer, I edit the hosts file to decide whether requests go through AWS & frps or directly to the server:

If my clients go directly to the server, bypassing AWS & frps, a full page load (cache disabled, 33 HTTP requests) is about 3x faster.

If the requests go through frps, I see performance issues. The page that used to load in 3s now loads in 9-10s. My nginx server reports that the HTTP request time, $request_time, was 0.5s, but some images take 6s to load: 3-4s waiting for the server response and 2-3s for content download.
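One way to separate tunnel overhead from server-side time is to time each request on the client side and compare the direct path against the frps path. A minimal sketch using only the Python standard library (the URL list is a placeholder; point it at the real image URLs on your domain):

```python
# Sketch: time N parallel fetches and report per-request latency.
# Run once against the direct server IP and once through frps to compare.
import concurrent.futures
import time
import urllib.request


def fetch(url):
    """Fetch one URL, returning (url, body size, elapsed seconds)."""
    start = time.monotonic()
    with urllib.request.urlopen(url) as resp:
        body = resp.read()
    return url, len(body), time.monotonic() - start


def time_parallel(urls, workers=25):
    """Fetch all URLs concurrently and print per-request and total timings."""
    t0 = time.monotonic()
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(fetch, urls))
    total = time.monotonic() - t0
    for url, size, elapsed in results:
        print(f"{elapsed:6.2f}s  {size:>9} bytes  {url}")
    print(f"total wall time: {total:.2f}s for {len(urls)} requests")
    return total, results
```

If the per-request times through frps are roughly uniform but all slow, the bottleneck is likely path latency or bandwidth; if a few requests are much slower than the rest, the parallel streams may be serializing somewhere in the tunnel.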

Question: Is my configuration optimal for my needs? Is there anything I should change?

Affected area

  • Docs
  • Installation
  • Performance and Scalability
  • Security
  • User Experience
  • Test and Release
  • Developer Infrastructure
  • Client Plugin
  • Server Plugin
  • Extensions
  • Others

@fatedier commented on GitHub (Aug 15, 2024):

This usually depends on your network conditions and bandwidth.


@blue-genie commented on GitHub (Aug 15, 2024):

In general I would agree, but I would assume that AWS has a symmetric 1 Gbps connection, and I have symmetric 1 Gbps on the server.

I understand there will be some extra delay, because the request goes from the local network to AWS and then back to a different local network, instead of just between two local endpoints, but still ...

Is my configuration optimal for high concurrent traffic? I tried adjusting the connection pooling and didn't see a difference.

What would your configuration be for this setup? And how do I get more stats to debug and see whether anything can be optimized?
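For more visibility into what frps is doing, it can expose a dashboard and Prometheus metrics. A sketch of the relevant frps options (addresses and credentials below are placeholders; verify the option names against the documentation for your frp version):

```toml
# frps.toml additions (sketch)
webServer.addr = "0.0.0.0"
webServer.port = 7500
webServer.user = "admin"
webServer.password = "change-me"
# Exposes /metrics on the dashboard port for Prometheus scraping.
enablePrometheus = true
```

The dashboard shows per-proxy traffic and connection counts, which can help confirm whether the image requests are actually being carried in parallel through the tunnel.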


@github-actions[bot] commented on GitHub (Sep 6, 2024):

Issues go stale after 21d of inactivity. Stale issues rot after an additional 7d of inactivity and eventually close.
