[GH-ISSUE #4685] With one server and one client, how can multiple ports be used for point-to-point HTTP? #3701

Closed
opened 2026-05-05 14:22:20 -06:00 by gitea-mirror · 4 comments
Owner

Originally created by @gowy222 on GitHub (Feb 22, 2025).
Original GitHub issue: https://github.com/fatedier/frp/issues/4685

Describe the feature request

Environment: on the client, a Vue app runs on port 9000 and a Next.js project on port 9001.
These front ends are path-sensitive, so the routing feature at https://gofrp.org/zh-cn/docs/features/http-https/route/ does not apply to them.

Within a single client configuration, is there a way to avoid sharing one serverAddr and serverPort?

```
serverAddr = "XXX.XXX.XXX.XXX"
serverPort = 9000

[[proxies]]
name = "app-vue"
type = "http"
localPort = 9000
localIP = "192.168.66.1"
customDomains = ["XXX.XXX.XXX.XXX"]
```
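As a side note (hedged, based on frp's documented behavior): with `type = "http"` proxies, frps does not expose each proxy on `serverPort`; it serves all of them on its `vhostHTTPPort` and routes requests by domain. A minimal frps.toml sketch under that assumption, with illustrative port numbers:

```
# frps.toml (sketch — port numbers are illustrative)
bindPort = 7000        # control connections from frpc (matches serverPort on the client)
vhostHTTPPort = 8080   # all http-type proxies are served here, routed by Host header
```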

and instead turn into:

```
[[proxies]]
name = "app-vue"
type = "http"
serverAddr = "XXX.XXX.XXX.XXX" # independent per proxy
serverPort = 9000              # independent per proxy
localPort = 9000
localIP = "192.168.66.1"
customDomains = ["XXX.XXX.XXX.XXX"]

[[proxies]]
name = "app-next"
type = "http"
serverAddr = "XXX.XXX.XXX.XXX" # independent per proxy
serverPort = 9001              # independent per proxy
localPort = 9001
localIP = "192.168.66.2"
customDomains = ["XXX.XXX.XXX.XXX"]
```

This way a single client maps each app to the port it needs.
In that case, how should a single server be configured to open both ports 9000 and 9001?
Scaled up this way, even ten apps would still need only one server and one client.
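One possible approach within frp as it exists today, offered as a sketch rather than the project's recommendation: since the apps are path-sensitive and domain/path routing is out, each app can be declared as a `type = "tcp"` proxy with its own `remotePort`. That keeps one frps/frpc pair while giving every app a dedicated public port:

```
# frpc.toml (sketch — addresses and ports copied from the example above)
serverAddr = "XXX.XXX.XXX.XXX"
serverPort = 7000

[[proxies]]
name = "app-vue"
type = "tcp"
localIP = "192.168.66.1"
localPort = 9000
remotePort = 9000   # frps listens on 9000 and forwards to this proxy

[[proxies]]
name = "app-next"
type = "tcp"
localIP = "192.168.66.2"
localPort = 9001
remotePort = 9001   # frps listens on 9001 and forwards to this proxy
```

On the server side, nothing per-port is needed beyond making sure any `allowPorts` restriction in frps.toml (if configured) covers 9000 and 9001; a tenth app would just be one more `[[proxies]]` block with its own `remotePort`.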

Describe alternatives you've considered

No response

Affected area

  • Docs
  • Installation
  • Performance and Scalability
  • Security
  • User Experience
  • Test and Release
  • Developer Infrastructure
  • Client Plugin
  • Server Plugin
  • Extensions
  • Others
gitea-mirror 2026-05-05 14:22:20 -06:00
Author
Owner

@KKKzzz7 commented on GitHub (Feb 24, 2025):

Wouldn't starting two client and server processes solve this?

Author
Owner

@gowy222 commented on GitHub (Feb 26, 2025):

> Wouldn't starting two client and server processes solve this?

That is what I'm doing now, just stacking instances, but I'm wondering whether there is a cleaner solution.

Author
Owner

@sowhyim commented on GitHub (Mar 3, 2025):

In theory it is feasible: for example, prefer the server configured inside [[proxies]] and, when it is absent, fall back to the global server configuration. That should satisfy your requirement.

That said, I haven't read this project's code yet, so I'm not sure how it is actually structured 😓

Author
Owner

@github-actions[bot] commented on GitHub (Mar 18, 2025):

Issues go stale after 14d of inactivity. Stale issues rot after an additional 3d of inactivity and eventually close.
