Efficient HTTP Large Data Requests Using Go Language Streaming Processing

Hi everyone, I am Hu Ge!

Today, let’s talk about something hardcore: Go’s streaming processing. In day-to-day development, we often need to handle large amounts of data, such as big file uploads or content that is generated gradually. If you are working in Go, streaming processing is a great helper here. Simply put, it is about handling this data efficiently in HTTP requests. Below, I will walk through the topic with some examples.

The Magic of Streaming Processing: What Is Streaming Write?

In Go, when you initiate an HTTP request, you usually need to write data into the request body. Most of the time the data volume is small: we simply package it up and send it without feeling any memory pressure. But when faced with huge payloads, such as files of several GB, or data that has to be generated dynamically, trying to load it all into memory at once is basically a recipe for disaster and can crash your service in no time. This is where the concept of streaming write comes into play.

In simple terms, streaming write is a way of generating and sending data on the fly without loading all of it into memory at once. In Go, http.NewRequest accepts any io.Reader as the request body, which means you can gradually “feed” data to the HTTP request in a streaming manner instead of swallowing a whole data block in one go.
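For reference, io.Reader is a tiny interface from the standard library’s io package, and anything implementing this single method can serve as a request body:

type Reader interface {
  Read(p []byte) (n int, err error)
}

Each call to Read fills at most len(p) bytes, so the HTTP client can pull the body out in small chunks instead of demanding everything up front.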

Example Code: Practical Operation of Streaming Write

Enough talk; let’s get straight to the code. Suppose we have a large chunk of data, say a huge string. For demonstration, I will use a repeated string to simulate this situation. Let’s first look at how to stream this large string into the body of an HTTP request:

package main
import (
  "bytes"
  "fmt"
  "io"
  "net/http"
  "strings"
)
func main() {
  // Create a large string to simulate large data
  largeData := strings.Repeat("Hello, World!", 1000000) // About 13MB
  // Convert the large string to an io.Reader
  reader := strings.NewReader(largeData)
  // Create an HTTP request and set the io.Reader as the request body
  req, err := http.NewRequest("POST", "http://example.com/upload", reader)
  if err != nil {
      fmt.Println("Error creating request:", err)
      return
  }
  // Set Content-Type header
  req.Header.Set("Content-Type", "text/plain")
  // Send HTTP request
  client := &http.Client{}
  resp, err := client.Do(req)
  if err != nil {
      fmt.Println("Error sending request:", err)
      return
  }
  defer resp.Body.Close()
  // Read response
  body, err := io.ReadAll(resp.Body)
  if err != nil {
      fmt.Println("Error reading response body:", err)
      return
  }
  fmt.Println("Response status:", resp.Status)
  fmt.Println("Response body:", string(body))
}

This code is straightforward but demonstrates the mechanics of streaming write. First we create a string of about 13MB, then use strings.NewReader to turn it into an io.Reader. This io.Reader is the key to streaming processing: the HTTP client reads from it and sends data gradually as the request goes out. To be fair, in this demo the string itself already sits in memory; the real savings show up when the io.Reader is backed by something that is not fully in memory, such as the file in the next example.
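If you want to watch the gradual reading happen, you can wrap any io.Reader in a small counter. Below is a minimal sketch of that idea; countingReader is my own demo type, and io.Discard stands in for the network side, consuming the body one buffer at a time just as the HTTP client would. The exact chunk sizes depend on internal buffering, so treat the numbers as illustrative:

package main
import (
  "fmt"
  "io"
  "strings"
)
// countingReader wraps an io.Reader and counts how many times
// Read is called, showing that data is pulled out piece by piece.
type countingReader struct {
  r     io.Reader
  calls int
}
func (c *countingReader) Read(p []byte) (n int, err error) {
  n, err = c.r.Read(p)
  c.calls++
  return n, err
}
func main() {
  // Same ~13MB payload as the example above
  data := strings.Repeat("Hello, World!", 1000000)
  cr := &countingReader{r: strings.NewReader(data)}
  // io.Discard consumes the body the way a network sink would
  written, err := io.Copy(io.Discard, cr)
  if err != nil {
      fmt.Println("Error copying:", err)
      return
  }
  fmt.Printf("copied %d bytes in %d Read calls\n", written, cr.calls)
}

Running it, you will see the 13MB moved across hundreds of small Read calls rather than one giant one.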

Using Streaming Processing to Upload Large Files

Of course, in actual development, we are often required to upload large files, not just strings. Here, let’s look at another example of how to read data directly from a file using io.Reader and upload it:

package main
import (
  "fmt"
  "io"
  "net/http"
  "os"
)
func main() {
  // Open a large file
  file, err := os.Open("largefile.txt")
  if err != nil {
      fmt.Println("Error opening file:", err)
      return
  }
  defer file.Close()
  // Create an HTTP request and set the file's io.Reader as the request body
  req, err := http.NewRequest("POST", "http://example.com/upload", file)
  if err != nil {
      fmt.Println("Error creating request:", err)
      return
  }
  // Set Content-Type header
  req.Header.Set("Content-Type", "text/plain")
  // Send HTTP request
  client := &http.Client{}
  resp, err := client.Do(req)
  if err != nil {
      fmt.Println("Error sending request:", err)
      return
  }
  defer resp.Body.Close()
  // Read response
  body, err := io.ReadAll(resp.Body)
  if err != nil {
      fmt.Println("Error reading response body:", err)
      return
  }
  fmt.Println("Response status:", resp.Status)
  fmt.Println("Response body:", string(body))
}

In this example, we open a large file and pass it directly as the body of the HTTP request, which works because *os.File implements the io.Reader interface. The beauty of this approach is that the file’s data is read gradually as the request is sent, without needing to load it all into memory at once, which makes it particularly suitable for large file uploads.
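One detail worth knowing: http.NewRequest only fills in Content-Length automatically when the body is a *bytes.Buffer, *bytes.Reader, or *strings.Reader. With an *os.File body the request goes out with chunked transfer encoding, which some servers handle poorly. If you need an explicit length, you can set it yourself from the file size; here is a sketch of that adjustment, using the same placeholder file and upload URL as above:

package main
import (
  "fmt"
  "net/http"
  "os"
)
func main() {
  file, err := os.Open("largefile.txt")
  if err != nil {
      fmt.Println("Error opening file:", err)
      return
  }
  defer file.Close()
  // Look up the file size so we can advertise an exact Content-Length
  info, err := file.Stat()
  if err != nil {
      fmt.Println("Error stating file:", err)
      return
  }
  req, err := http.NewRequest("POST", "http://example.com/upload", file)
  if err != nil {
      fmt.Println("Error creating request:", err)
      return
  }
  req.Header.Set("Content-Type", "text/plain")
  // http.NewRequest does not know the length of an *os.File body,
  // so set it explicitly to avoid chunked transfer encoding
  req.ContentLength = info.Size()
  resp, err := http.DefaultClient.Do(req)
  if err != nil {
      fmt.Println("Error sending request:", err)
      return
  }
  defer resp.Body.Close()
  fmt.Println("Response status:", resp.Status)
}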

Why Use Streaming Write?

So why is streaming write so important? Simply put, if you are a developer dealing with large data or large files, memory is always a precious resource. Loading a huge volume of data into memory at once not only wastes resources but can also bring the system down. With streaming write, you generate or read data and send it at the same time, which keeps memory usage low. It also makes your program more flexible with large data, especially when the data is produced dynamically.

Extensions and Applications: Not Limited to Strings or Files

It is worth mentioning that the io.Reader interface is not limited to strings and files. You can implement io.Reader yourself to generate and send data dynamically, such as reading rows from a database as you go or forwarding real-time data from sensors; all of it can be handled with streaming write, as the sketch below shows.
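One convenient way to do this is io.Pipe from the standard library, which connects a producer goroutine to a reader you hand to the HTTP request. In the sketch below, generateRows is a made-up stand-in for your real data source, be it a database cursor or a sensor feed, and the upload URL is the same placeholder as before:

package main
import (
  "fmt"
  "io"
  "net/http"
)
// generateRows is a hypothetical producer; imagine each line
// coming from a database cursor or a sensor instead.
func generateRows(w io.Writer) error {
  for i := 0; i < 1000000; i++ {
      if _, err := fmt.Fprintf(w, "row-%d\n", i); err != nil {
          return err
      }
  }
  return nil
}
func main() {
  pr, pw := io.Pipe()
  // The producer runs in its own goroutine and writes into the pipe;
  // the HTTP client reads from the other end as it sends the request,
  // so only a small amount of data is in flight at any moment.
  go func() {
      pw.CloseWithError(generateRows(pw))
  }()
  req, err := http.NewRequest("POST", "http://example.com/upload", pr)
  if err != nil {
      fmt.Println("Error creating request:", err)
      return
  }
  req.Header.Set("Content-Type", "text/plain")
  resp, err := http.DefaultClient.Do(req)
  if err != nil {
      fmt.Println("Error sending request:", err)
      return
  }
  defer resp.Body.Close()
  fmt.Println("Response status:", resp.Status)
}

Since a pipe has no known length, the request is sent with chunked transfer encoding, which is exactly what you want when the total size is not known up front.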

In summary, Go’s streaming processing gives us an efficient way to handle large data. Whether you are uploading files or generating request bodies on the fly, streaming write helps you save memory without sacrificing performance, a real “lifesaver” for big data processing.

Alright, that’s all for today. I hope everyone can better understand the streaming processing feature in Go language through this article and apply it in actual development. Next time you encounter a big data processing issue, don’t forget that streaming processing is a powerful tool!

Currently, students interested in programming and the workplace can contact me on WeChat: golang404, and I will add you to the “Programmer Exchange Group”.

🔥 Hu Ge’s Private Collection Hot Recommendations 🔥

As an old coder, Hu Ge has organized the most comprehensive “GO Backend Development Resource Collection”.

The resources include “IDEA Video Tutorial”, “Most Comprehensive GO Interview Question Bank”, “Most Comprehensive Project Source Code and Video”, and “Graduation Project System Source Code”, totaling up to 650GB, all free to receive! It fully covers the learning needs of programmers at every stage!
