“The program was fast when it first launched, so why is it crawling like a snail now?” A student came to me with this question a few days ago. After digging in, I found he was using the default HTTP client from the Go standard library without any tuning. Under high concurrency, connection resources were exhausted and response times skyrocketed.
Optimizing the HTTP client is crucial in a Go microservices architecture. Think about it: a single request might fan out to 5-6 downstream services, and if each call is 100ms slower, the overall latency becomes unbearable. Today, let’s discuss how to make the Go HTTP client fly.
1. Connection Pooling: Reuse is Key
Go’s http.DefaultClient already comes with a connection pool, but many people don’t know how to configure it correctly.
The core principle of connection pooling is simple: establishing a TCP connection is time-consuming (three-way handshake), so it should be reused once established. It’s like going to a restaurant; you wouldn’t change tables every time a new dish is served, right?
The default connection pool settings may not be sufficient in high concurrency scenarios:
customTransport := &http.Transport{
	MaxIdleConns:        100,              // Maximum number of idle connections across the entire pool
	MaxIdleConnsPerHost: 10,               // Maximum number of idle connections per host
	IdleConnTimeout:     90 * time.Second, // How long an idle connection may sit in the pool
}
client := &http.Client{Transport: customTransport}
The key point is MaxIdleConnsPerHost; this is the parameter that truly determines performance! The default value is only 2, which is far too small.
I had a project where I increased this value from 2 to 50, and the QPS tripled. The power of connection reuse is truly remarkable.
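One detail worth spelling out: the pool only helps if every request goes through the same client instance. Below is a minimal sketch of that pattern, reusing a tuned Transport across goroutines; the URL, the concurrency level, and the value of 50 are purely illustrative.

package main

import (
	"fmt"
	"io"
	"net/http"
	"sync"
	"time"
)

// One shared client for the whole process. Creating a new Client (and thus a
// new Transport) per request would throw the connection pool away entirely.
var sharedClient = &http.Client{
	Transport: &http.Transport{
		MaxIdleConns:        100,
		MaxIdleConnsPerHost: 50, // raised from the default of 2
		IdleConnTimeout:     90 * time.Second,
	},
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 20; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			resp, err := sharedClient.Get("https://example.com/") // placeholder URL
			if err != nil {
				fmt.Println("request failed:", err)
				return
			}
			// Draining and closing the body is what allows the connection to return to the pool.
			io.Copy(io.Discard, resp.Body)
			resp.Body.Close()
		}()
	}
	wg.Wait()
}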
2. Timeout Control: Don’t Give Slow Services a Chance
HTTP requests without timeout control are like kites released into the wind; they may never come back. I’ve seen too many cases where servers were overwhelmed due to the lack of timeout settings.
The Go HTTP client by default has no timeout settings! This is practically a ticking time bomb. The correct approach is:
client := &http.Client{
	Timeout: 5 * time.Second, // Overall timeout for the whole request
	Transport: &http.Transport{
		ResponseHeaderTimeout: 2 * time.Second, // Timeout for reading the response headers
		DialContext: (&net.Dialer{
			Timeout: 1 * time.Second, // Timeout for establishing the TCP connection
		}).DialContext,
	},
}
Timeout settings are layered, allowing precise control over each stage. It’s like a date; if the other person hasn’t arrived in 30 minutes, you certainly won’t wait indefinitely, right?
A student responsible for a payment system raised the interface success rate from 99.5% to 99.9% by implementing tiered timeout control. For a payment system, that 0.4-percentage-point improvement means a significant number of orders are no longer interrupted abnormally.
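The settings above are client-wide. For one especially critical call you can also layer a tighter per-request deadline on top via context. A minimal sketch, assuming a helper named fetchWithDeadline and a 3-second budget (both are illustrative, not part of the setup above):

import (
	"context"
	"io"
	"net/http"
	"time"
)

func fetchWithDeadline(client *http.Client, url string) ([]byte, error) {
	// Per-request deadline layered on top of the client-wide Timeout.
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	resp, err := client.Do(req) // the request is cancelled automatically once ctx expires
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	// Read the body before cancel() runs, because cancelling also aborts in-flight body reads.
	return io.ReadAll(resp.Body)
}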
3. Retry Strategy: Never Give Up Easily
The network environment is complex and variable, and temporary fluctuations are common. A robust HTTP client must have a comprehensive retry mechanism.
A simple retry logic can be implemented as follows:
func doRequestWithRetry(client *http.Client, req *http.Request) (*http.Response, error) {
	maxRetries := 3
	for i := 0; i < maxRetries; i++ {
		resp, err := client.Do(req)
		if err == nil && resp.StatusCode < 500 {
			return resp, nil // Success or a 4xx error: do not retry
		}
		if err == nil {
			resp.Body.Close() // Discard the 5xx response so the connection can be reused
		}
		time.Sleep(time.Duration(math.Pow(2, float64(i))) * time.Second) // Exponential backoff: 1s, 2s, 4s
	}
	return nil, fmt.Errorf("reached maximum retry count")
}
The exponential backoff in the retry strategy is crucial, as it prevents retry storms. The first failure waits 1 second, the second waits 2 seconds, the third waits 4 seconds… It’s like fishing; if the fish isn’t biting, you can’t rush it; you need to be patient.
After adding this retry mechanism to an e-commerce project, the order processing success rate improved by 5 percentage points during the Double 11 shopping festival. At critical moments, the retry strategy is your lifeline!
4. Summary of Strategies
There is no silver bullet for HTTP client optimization; it has to be tuned to the business scenario. A robust HTTP client should:
- Appropriately increase the connection pool: Especially the MaxIdleConnsPerHost parameter
- Set reasonable timeouts: Control connection, read/write, and overall timeouts separately
- Implement intelligent retries: Use exponential backoff and distinguish between retryable and non-retryable errors (see the sketch right after this list)
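On that last point, here is a rough sketch of how retryable and non-retryable outcomes might be separated. The helper name isRetryable and the exact status-code list are assumptions to adapt to your own services:

import "net/http"

// isRetryable reports whether a failed attempt is worth repeating.
// It assumes the request is idempotent (e.g. GET); blindly retrying writes is risky.
func isRetryable(resp *http.Response, err error) bool {
	if err != nil {
		// Transport-level failures (timeouts, connection resets) are usually transient.
		return true
	}
	switch resp.StatusCode {
	case http.StatusTooManyRequests, // 429
		http.StatusBadGateway,         // 502
		http.StatusServiceUnavailable, // 503
		http.StatusGatewayTimeout:     // 504
		return true
	}
	return false
}

A 4xx such as 400 or 404 will not get better on a second attempt, so it falls through to false and the caller should give up immediately.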
Another small tip: log key metrics for each request, such as connection time, server processing time, total time, etc. This data will help you identify performance bottlenecks.
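As one possible shape for that logging, net/http/httptrace exposes hooks for the connection and first-byte events. The function name tracedGet and the log format below are illustrative, not a fixed recipe:

import (
	"log"
	"net/http"
	"net/http/httptrace"
	"time"
)

func tracedGet(client *http.Client, url string) (*http.Response, error) {
	var connStart, connDone, firstByte time.Time
	start := time.Now()

	trace := &httptrace.ClientTrace{
		ConnectStart:         func(network, addr string) { connStart = time.Now() },
		ConnectDone:          func(network, addr string, err error) { connDone = time.Now() },
		GotFirstResponseByte: func() { firstByte = time.Now() },
	}

	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))

	resp, err := client.Do(req)
	if err == nil {
		// When an idle connection is reused, ConnectStart/ConnectDone never fire,
		// so the connect duration stays zero: a quick way to confirm the pool is working.
		log.Printf("connect=%v ttfb=%v total=%v",
			connDone.Sub(connStart), firstByte.Sub(start), time.Since(start))
	}
	return resp, err
}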
A few days ago, I helped a gaming company improve the speed of their backend interface by 60%, all thanks to these few tricks. The code didn’t change; just optimizing the configuration yielded immediate results. This is the charm of optimization!
Performance optimization is like the internal martial arts in a wuxia novel; it’s not flashy but practical. Slow down, and you can speed up threefold! Programming in Go is just that interesting.