[GH-ISSUE #3138] [Feature Request] Add documentation to deploy in K8s, scaling parameters and resource consumption #2517

Closed
opened 2026-05-05 13:37:18 -06:00 by gitea-mirror · 9 comments

Originally created by @isurulucky on GitHub (Oct 19, 2022).
Original GitHub issue: https://github.com/fatedier/frp/issues/3138

Describe the feature request

Hello there,

Since there are not many resources related to using frp in K8s, I went ahead and tried a POC with the use case of exposing services privately. I would like to get your feedback and see if we can include this information in the FRP documentation for the benefit of others. In particular, I think we need:

  • A reference architecture
  • Parameters for scaling frps horizontally for a typical scenario
  • Security aspects
  • And any other relevant information.

This is the basic sample architecture I came up with:

![frp-k8s drawio](https://user-images.githubusercontent.com/2777052/196608755-4762deeb-d9bc-4a34-b58f-c99c3e1ed21e.png)

Here, the external secret client visitor (or any external client in the general case) should use TLS to communicate with the ingress controller. In addition, token-based authentication can be used between the frp client and server. Internal communication between frps and frpc within the K8s cluster need not use TLS. The external client uses the websocket protocol to communicate with frps, since the community ingress controller supports websockets out of the box, whereas TCP is not properly supported (TCP with an ingress controller requires a port per ingress, if I am not mistaken).
As next steps, we would need to figure out autoscaling and resource (CPU and memory) requests and limits. Any idea what the resource requirements would be at a general level?
Also, any other improvements to the suggested approach are welcome!
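As a sketch of the client side of this setup, a minimal `frpc.ini` might enable token authentication and the websocket protocol as described above. The server address, token, secret key, and proxy name below are placeholders, not values from this thread:

```ini
# frpc.ini — hypothetical client configuration for the architecture above.
[common]
server_addr = frps.example.com   # placeholder: ingress hostname in front of frps
server_port = 443
protocol = websocket             # external client reaches frps through the ingress via websocket
token = <shared-secret>          # token-based auth between frpc and frps

# Expose an internal service privately using the "secret" (stcp) proxy type,
# so only a visitor knowing the secret key can reach it.
[secret_ssh]
type = stcp
sk = <visitor-secret>
local_ip = 127.0.0.1
local_port = 22
```

The matching external visitor would run its own frpc with a `type = stcp`, `role = visitor` section pointing at the same proxy name and secret key.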

Describe alternatives you've considered

No response

Affected area

  • [x] Docs
  • [ ] Installation
  • [x] Performance and Scalability
  • [ ] Security
  • [ ] User Experience
  • [ ] Test and Release
  • [ ] Developer Infrastructure
  • [ ] Client Plugin
  • [ ] Server Plugin
  • [ ] Extensions
  • [ ] Others

@fatedier commented on GitHub (Oct 19, 2022):

  1. From [development-status](https://github.com/fatedier/frp#development-status), we are developing v2 to replace the current version and won't put a lot of effort into adding features to the current version.
  2. frps is not scalable right now given the current architecture. It's not cloud native. You may encounter a lot of problems.
  3. You're welcome to ask me questions about your POC issues.

@isurulucky commented on GitHub (Oct 19, 2022):

Thanks very much for the reply. I tried the POC out in a minimal K8s cluster, and it did seem to work. My idea was to scale this based on need, e.g. when memory pressure increases, using the K8s autoscaling techniques. But if the current frp implementation does not support scaling, that would not work.
Would you be able to explain a bit why it won't scale as of now? And I assume the next version would address these shortcomings?


@fatedier commented on GitHub (Oct 19, 2022):

It's not stateless. All route configurations and frpc's meta info are stored in memory.

When a request comes in, frps parses it and decides which frpc to forward it to based on the route configurations. If one frpc connects to frps A and you send requests to frps B, frps B will not know how to route the request.


@isurulucky commented on GitHub (Oct 19, 2022):

> It's not stateless. All route configurations and frpc's meta info are stored in memory.
>
> When a request comes in, frps parses it and decides which frpc to forward it to based on the route configurations. If one frpc connects to frps A and you send requests to frps B, frps B will not know how to route the request.

I see, I think I get the idea. Taking exposing a private service as an example: both the secret client and the secret client visitor connect to frps separately. Since it's a point-to-point connection, even if another frps comes up, it will not know about the routing configuration.
So the only way to make this work is to ensure that there is only one frps running. As I see it, scaling frpc is not an issue, since the configurations are read from the frpc.ini file. I assume that if we can guarantee there is exactly one frps running, this setup would work?
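One way to enforce that single-frps constraint in Kubernetes would be a Deployment pinned to one replica with the `Recreate` strategy, so two frps pods never run concurrently during a rollout. The image, tag, and labels below are placeholders; this is a sketch, not an official manifest from the frp project:

```yaml
# Hypothetical manifest: at most one frps pod at any time.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frps
spec:
  replicas: 1          # frps keeps routes in memory, so exactly one instance
  strategy:
    type: Recreate     # terminate the old pod before starting a new one,
                       # avoiding a brief window with two frps instances
  selector:
    matchLabels:
      app: frps
  template:
    metadata:
      labels:
        app: frps
    spec:
      containers:
        - name: frps
          image: example/frps:latest   # placeholder image/tag
          ports:
            - containerPort: 7000      # frps default control port
```

Note that `Recreate` trades availability for correctness: frpc clients will briefly disconnect during every rollout and must reconnect to the new pod.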


@fatedier commented on GitHub (Oct 19, 2022):

Yes.


@isurulucky commented on GitHub (Oct 19, 2022):

Thanks @fatedier. Will close this.


@isurulucky commented on GitHub (Oct 20, 2022):

Re-opening to discuss a point we missed: for frps and frpc, how should we set resources (memory and CPU) in a K8s environment? This is not for scaling but for the initial scheduling of the frps and frpc pods.


@fatedier commented on GitHub (Oct 20, 2022):

It depends on your usage scenarios. Small resource requests are fine for a demo.


@isurulucky commented on GitHub (Oct 21, 2022):

Thanks @fatedier. In a test running for a few hours, I noted that all frp pods were consuming around 50 MB of memory and a couple of millicores of CPU. I think something around this should be a good starting point.
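Based on those observations, a starting point for the container spec might look like the following. The request/limit values are extrapolated from the numbers above with some headroom; they are not recommendations from the frp project and should be tuned against real traffic:

```yaml
# Hypothetical resources block for an frps/frpc container, derived from the
# ~50 MB / few-millicore observation in this thread.
resources:
  requests:
    cpu: 10m        # observed: a couple of millicores under light load
    memory: 64Mi    # observed: ~50 MB resident
  limits:
    cpu: 200m       # headroom for traffic bursts
    memory: 128Mi   # hard cap; the pod is OOM-killed above this
```

Since Go's memory use grows with the number of concurrent connections and proxies, it would be prudent to load-test with a realistic connection count before settling on limits.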
