[GH-ISSUE #5018] FRPC - connectex: No connection could be made because the target machine actively refused it with high load #3951

Closed
opened 2026-05-05 14:30:54 -06:00 by gitea-mirror · 3 comments
Owner

Originally created by @Misiu on GitHub (Oct 13, 2025).
Original GitHub issue: https://github.com/fatedier/frp/issues/5018

Bug Description

I'm building a POC that will allow me to route some endpoints to an application running on a customer's server.

Architecture:
Server:
.NET 9 API with YARP and FRPS in the same container.
Client:
.NET 9 API and FRPC running on a Windows 11 machine.

A user makes a request to my server; the server proxies it to 127.0.0.1:8081 (FRPS), then the request is passed through FRPC to the target API running on port 5000.

I run tests with k6:

import http from 'k6/http';
import { check, sleep } from 'k6';
import { Rate } from 'k6/metrics';

// Custom metrics
const errorRate = new Rate('errors');

export const options = {
  stages: [
    { duration: '30s', target: 100 },
    { duration: '1m', target: 300 },
    { duration: '2m', target: 500 },
    { duration: '30s', target: 0 }, 
  ],
  thresholds: {
    http_req_duration: ['p(95)<2000'],
    http_req_failed: ['rate<0.15'], 
    errors: ['rate<0.25'], 
  },
};

const BASE_URL = 'http://device1.proxy.local/weatherforecast';

export default function () {
  const url = BASE_URL;

  // Configure request parameters
  const params = {
    headers: {
      'Content-Type': 'application/json',
      'User-Agent': 'k6-AGGRESSIVE-stress-test',
    },
    timeout: '30s',
  };

  // Make the HTTP request
  const response = http.get(url, params);

  // Check response
  const result = check(response, {
    'status is 200': (r) => r.status === 200,
    'status is not 404': (r) => r.status !== 404,
    'response time < 500ms': (r) => r.timings.duration < 500,
    'response time < 1000ms': (r) => r.timings.duration < 1000,
    'response has body': (r) => r.body && r.body.length > 0,
  });

  // Track errors
  errorRate.add(!result);

  // Log detailed information for non-200 responses
  if (response.status !== 200) {
    console.log(`\n=== NON-200 RESPONSE [AGGRESSIVE TEST] ===`);
    console.log(`URL: ${url}`);
    console.log(`Status: ${response.status}`);
    console.log(`Status Text: ${response.status_text}`);
    console.log(`Headers:`, JSON.stringify(response.headers, null, 2));
    console.log(`Body: ${response.body}`);
    console.log(`Response Time: ${response.timings.duration}ms`);
    console.log(`Error Code: ${response.error_code || 'None'}`);
    console.log(`Error: ${response.error || 'None'}`);
    console.log(`Current VUs: ${__VU}`);
    console.log(`=== END RESPONSE DETAILS ===\n`);
  }

  // Minimal sleep for MAXIMUM throughput - AGGRESSIVE!
  sleep(0.05); // Only a 50 ms sleep - maximum load!
}

// Setup function (runs once before the test starts)
export function setup() {
  console.log('🚀 Starting AGGRESSIVE stress test against:', BASE_URL);
  console.log('⚠️  WARNING: This is an EXTREME load test up to 500 concurrent users!');
  
  // Optional: Perform a health check before starting the test
  const healthCheck = http.get(BASE_URL);
  if (healthCheck.status !== 200) {
    console.warn(`⚠️  Warning: Health check failed with status ${healthCheck.status}`);
  } else {
    console.log('✅ Health check passed - starting aggressive test!');
  }
  
  return { startTime: new Date().toISOString() };
}

// Teardown function (runs once after the test completes)
export function teardown(data) {
  console.log('🏁 AGGRESSIVE stress test completed!');
  console.log('Started at:', data.startTime);
  console.log('Finished at:', new Date().toISOString());
}

With around 150 VUs everything works perfectly, but when more VUs are added, I start getting errors on the client (FRPC):
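For context, a back-of-envelope estimate of the request rate the k6 script generates at full load (the ~50 ms average response time below is an assumption for illustration, not a measured value):

```javascript
// Rough request-rate estimate for the k6 script above.
const vus = 500;              // peak target from the k6 stages
const sleepMs = 50;           // sleep(0.05) per iteration
const assumedResponseMs = 50; // hypothetical average response time
const iterationMs = sleepMs + assumedResponseMs;
const reqPerSec = Math.round(vus * (1000 / iterationMs));
console.log(`~${reqPerSec} req/s at ${vus} VUs`); // → ~5000 req/s at 500 VUs
```

At that rate, each proxied request triggers its own dial from frpc to the local service, which is the kind of load where loopback connections on Windows can start getting refused.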

info: Frp.Client.<FrpcWatchdog>F8F9C01DE98A2826DD864252014408A81D449FF3C5AA2E6578FB9A00426D170F2__FrpClientWatchdogService[0]
      [FRP Client] 2025-10-13 11:46:26.053 [E] [proxy/proxy.go:199] [166db207656b21b0] [weather-api] connect to local service [127.0.0.1:5000] error: dial tcp 127.0.0.1:5000: connectex: No connection could be made because the target machine actively refused it.
info: Frp.Client.<FrpcWatchdog>F8F9C01DE98A2826DD864252014408A81D449FF3C5AA2E6578FB9A00426D170F2__FrpClientWatchdogService[0]
      [FRP Client] 2025-10-13 11:46:26.057 [E] [proxy/proxy.go:199] [166db207656b21b0] [weather-api] connect to local service [127.0.0.1:5000] error: dial tcp 127.0.0.1:5000: connectex: No connection could be made because the target machine actively refused it.
info: Frp.Client.<FrpcWatchdog>F8F9C01DE98A2826DD864252014408A81D449FF3C5AA2E6578FB9A00426D170F2__FrpClientWatchdogService[0]
      [FRP Client] 2025-10-13 11:46:26.283 [E] [proxy/proxy.go:199] [166db207656b21b0] [weather-api] connect to local service [127.0.0.1:5000] error: dial tcp 127.0.0.1:5000: connectex: No connection could be made because the target machine actively refused it.
info: Frp.Client.<FrpcWatchdog>F8F9C01DE98A2826DD864252014408A81D449FF3C5AA2E6578FB9A00426D170F2__FrpClientWatchdogService[0]
      [FRP Client] 2025-10-13 11:46:26.284 [E] [proxy/proxy.go:199] [166db207656b21b0] [weather-api] connect to local service [127.0.0.1:5000] error: dial tcp 127.0.0.1:5000: connectex: No connection could be made because the target machine actively refused it.
info: Frp.Client.<FrpcWatchdog>F8F9C01DE98A2826DD864252014408A81D449FF3C5AA2E6578FB9A00426D170F2__FrpClientWatchdogService[0]
      [FRP Client] 2025-10-13 11:46:26.284 [E] [proxy/proxy.go:199] [166db207656b21b0] [weather-api] connect to local service [127.0.0.1:5000] error: dial tcp 127.0.0.1:5000: connectex: No connection could be made because the target machine actively refused it.
info: Frp.Client.<FrpcWatchdog>F8F9C01DE98A2826DD864252014408A81D449FF3C5AA2E6578FB9A00426D170F2__FrpClientWatchdogService[0]
      [FRP Client] 2025-10-13 11:46:26.287 [E] [proxy/proxy.go:199] [166db207656b21b0] [weather-api] connect to local service [127.0.0.1:5000] error: dial tcp 127.0.0.1:5000: connectex: No connection could be made because the target machine actively refused it.

frpc Version

0.65.0

frps Version

0.65.0

System Architecture

Server - Debian 12 inside Docker; Client - Windows 11

Configurations

Server settings:

[common]
bind_addr = ${FRPS_BIND_ADDR}
bind_port = ${FRPS_CONTROL_PORT}

# vhost HTTP is internal to the container or pod
vhost_http_port = ${FRPS_VHOST_HTTP_PORT}
vhost_https_port = 0

# Custom 404 page
custom_404_page = /etc/frp/custom_404.html

# Security
authentication_method = token
token = ${FRP_TOKEN}

# Logging
log_file = /dev/stdout
log_level = info
log_max_days = 3

# Performance tuning for EXTREME high load (1k req/s)
max_pool_count = 100
tcp_mux = true
tcp_mux_keepalive_interval = 30
heartbeat_timeout = 90

# Connection limits
max_ports_per_client = 0
allow_ports = 0-0

# Timeouts
user_conn_timeout = 120

Client:

# FRP Client Configuration
serverAddr = "device1.proxy.local"
serverPort = 7000

# Authentication
auth.method = "token"
auth.token = "localtest123"

# Logging
log.to = "console"
log.level = "info"

# Login retry configuration
loginFailExit = false

# Transport settings - optimized for EXTREME high load
transport.tcpMux = true
transport.tcpMuxKeepaliveInterval = 30
transport.heartbeatInterval = 30
transport.heartbeatTimeout = 90

# Connection pool - MAXIMUM for extreme load (500 VUs × 20 req/s)
transport.poolCount = 50

# Dial server timeout
transport.dialServerTimeout = 10
transport.dialServerKeepAlive = 7200

# Proxy for local web application (example: Weather API)
[[proxies]]
name = "api"
type = "http"
localIP = "127.0.0.1"
localPort = 5000
customDomains = ["device1.proxy.local"]

# Health check for the proxy
healthCheck.type = "http"
healthCheck.timeoutSeconds = 3
healthCheck.maxFailed = 3
healthCheck.intervalSeconds = 10
healthCheck.path = "/healthz"
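One thing worth noting (a sketch of frp's documented semantics, not a confirmed fix): `transport.poolCount` only pre-establishes connections on the frpc ↔ frps leg and is capped by the server's `max_pool_count`; it does not pool connections to the local service, so every proxied request still performs a fresh `dial tcp 127.0.0.1:5000`.

```toml
# Hypothetical variant: pool sized up to the server's max_pool_count cap (100).
# This only covers the frpc <-> frps leg; dials to localIP:localPort are
# still made once per user connection.
transport.poolCount = 100
```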

Logs

No response

Steps to reproduce

...

Affected area

  • [ ] Docs
  • [ ] Installation
  • [x] Performance and Scalability
  • [ ] Security
  • [x] User Experience
  • [ ] Test and Release
  • [ ] Developer Infrastructure
  • [ ] Client Plugin
  • [ ] Server Plugin
  • [ ] Extensions
  • [ ] Others
gitea-mirror 2026-05-05 14:30:54 -06:00

@github-actions[bot] commented on GitHub (Oct 28, 2025):

Issues go stale after 14d of inactivity. Stale issues rot after an additional 3d of inactivity and eventually close.


@Misiu commented on GitHub (Oct 28, 2025):

Still relevant.


@github-actions[bot] commented on GitHub (Nov 13, 2025):

Issues go stale after 14d of inactivity. Stale issues rot after an additional 3d of inactivity and eventually close.

Reference: github-starred/frp#3951