mirror of
https://github.com/fatedier/frp.git
synced 2026-05-15 16:15:49 -06:00
[GH-ISSUE #4382] Parallel requests performance #3460
Originally created by @blue-genie on GitHub (Aug 15, 2024).
Original GitHub issue: https://github.com/fatedier/frp/issues/4382
Bug Description
This is not really a bug; I would like the community's recommendation on how to set this up for my use case.
I have an in-house web server running frpc, with two ISPs: one primary and a second as backup for when the primary goes down.
I also have an AWS server running frps, which handles XXX.ZZZ.com and YYY.ZZZ.com and is supposed to forward requests to the in-house server.
The client and server configurations are below.
I have a page that on initial load fetches about 30 MB across 20-25 images; the images are supposed to load in parallel.
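Nothing in the thread pins down where the slowdown comes from, but the symptom reported later (fast per-request `$request_time`, slow overall page) is consistent with parallel image requests being serialized somewhere in the tunnel path. A purely illustrative simulation, not frp code, of how serializing ~20 fixed-latency requests inflates total load time compared to issuing them concurrently:

```python
import time
from concurrent.futures import ThreadPoolExecutor

LATENCY = 0.05   # simulated per-request round-trip, in seconds
N_IMAGES = 20    # roughly matches the 20-25 images on the page

def fetch(i):
    # Stand-in for one HTTP image request over the tunnel.
    time.sleep(LATENCY)
    return i

# Serialized: total time grows linearly with the number of images.
t0 = time.perf_counter()
for i in range(N_IMAGES):
    fetch(i)
sequential = time.perf_counter() - t0

# Concurrent: with enough parallel streams, total time stays near one round-trip.
t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=N_IMAGES) as pool:
    list(pool.map(fetch, range(N_IMAGES)))
parallel = time.perf_counter() - t0

print(f"sequential: {sequential:.2f}s, parallel: {parallel:.2f}s")
```

With these numbers the serialized run takes about one second while the concurrent run finishes in roughly one simulated round-trip, which mirrors the 3s-vs-9-10s gap described below if the tunnel limits effective concurrency.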
frpc Version
fatedier/frps:v0.59.0
frps Version
fatedier/frps:v0.59.0
System Architecture
linux/amd64 on all servers
Configurations
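The original configuration files were not captured in this mirror. As a point of reference only, a minimal sketch of what a setup like the one described typically looks like in frp's TOML format (used since v0.52); the addresses and proxy name are placeholders, not taken from the issue:

```toml
# frps.toml (sketch, placeholder values)
bindPort = 7000
vhostHTTPPort = 80
```

```toml
# frpc.toml (sketch, placeholder values)
serverAddr = "frps.example.com"
serverPort = 7000

[[proxies]]
name = "web"
type = "http"
localIP = "127.0.0.1"
localPort = 80
customDomains = ["XXX.ZZZ.com", "YYY.ZZZ.com"]
```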
Logs
No errors; this is an optimization question.
Steps to reproduce
Using the same client computer, I toggle between sending requests through AWS & frps or directly to the server by editing the hosts file:
If my clients go directly to the server, bypassing AWS & frps, a full page load (cache disabled, 33 HTTP requests) is about 3x faster.
If the requests go through frps, I see performance issues: the page that used to load in 3s now loads in 9-10s. My nginx server reports the HTTP request time, $request_time, as 0.5s, yet some images take 6s to load: 3-4s waiting for the server response and 2-3s content download.
Question: Is my configuration optimal for my needs? Anything I should change?
Affected area
@fatedier commented on GitHub (Aug 15, 2024):
This usually depends on your network conditions and bandwidth.
@blue-genie commented on GitHub (Aug 15, 2024):
In general I would agree, but I would assume AWS has a symmetric 1 Gbps connection, and I have symmetric 1 Gbps on the server.
I understand there will be extra delay, because the request goes from the local network to AWS and then back to a different local machine, instead of just between two local machines, but still...
Is my configuration optimal for high-concurrency traffic? I tried adjusting the connection pooling and didn't see a difference.
What would your configuration be for this setup? And how do I get more stats to debug and see whether anything can be optimized?
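For reference, the pool the comment above refers to is controlled by `transport.poolCount` on the frpc side, and the multiplexing behavior by `transport.tcpMux` (both documented frp options; the values below are illustrative, not from the issue). With tcpMux enabled, which is the default, many parallel proxy streams are multiplexed over a small number of TCP connections, which is one plausible place for parallel transfers to contend:

```toml
# frpc.toml transport section (illustrative values, not from the issue)
transport.poolCount = 5      # connections pre-established to frps
transport.tcpMux = true      # multiplex proxy streams over shared TCP connections (default: true)
transport.protocol = "tcp"   # "kcp" or "quic" are alternatives on lossy links
```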
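Regarding the stats question: frps ships a web dashboard showing per-proxy traffic and connection counts, enabled via the `webServer` section of frps.toml (the credentials below are placeholders):

```toml
# frps.toml (placeholder credentials)
webServer.addr = "0.0.0.0"
webServer.port = 7500
webServer.user = "admin"
webServer.password = "change-me"
```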
@github-actions[bot] commented on GitHub (Sep 6, 2024):
Issues go stale after 21d of inactivity. Stale issues rot after an additional 7d of inactivity and eventually close.