Ruby on Rails 3: Streaming data through Rails to client


I am working on a Ruby on Rails app that communicates with Rackspace Cloud Files (similar to Amazon S3 but lacking some features).

Because per-object access permissions and query-string authentication are unavailable, downloads to users have to be mediated through the application.

In Rails 2.3, it looks like you can dynamically build a response as follows:

# Streams about 180 MB of generated data to the browser.
render :text => proc { |response, output|
  10_000_000.times do |i|
    output.write("This is line #{i}\n")
  end
}

Instead of 10_000_000.times... I could dump my cloudfiles stream generation code in there.
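To illustrate the shape such code would take, here is a hedged standalone sketch of the same proc pattern reading fixed-size chunks from a source stream. StringIO stands in for the remote Cloud Files object and for the output; the real code would pull chunks from the cloudfiles gem instead:

```ruby
require 'stringio'

# Stand-in for the remote Cloud Files stream (assumption: the real code
# would read from the cloudfiles gem, not StringIO).
source = StringIO.new("x" * 25)

# Same shape as the Rails 2.3 proc body: read fixed-size chunks and
# write each one to the output as it arrives, never holding the whole
# payload in memory.
streamer = proc do |response, output|
  while (chunk = source.read(10))
    output.write(chunk)
  end
end

out = StringIO.new
streamer.call(nil, out)
```

Here the 25-byte payload is written out in three chunks of at most 10 bytes each.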

Trouble is, this is the output I get when I attempt to use this technique in Rails 3.


Looks like maybe the proc object's call method is not being called? Any other ideas?


All answers
  •

    It looks like this isn't available in Rails 3

    This appeared to work for me in my controller:

    self.response_body = proc { |response, output|
      output.write "Hello world"
    }

  •

    Assign to response_body an object that responds to #each:

    class Streamer
      def each
        10_000_000.times do |i|
          yield "This is line #{i}\n"
        end
      end
    end

    self.response_body = Streamer.new

    If you are using Ruby 1.9.x or the Backports gem, you can write this more compactly using an Enumerator:

    self.response_body = Enumerator.new do |y|
      10_000_000.times do |i|
        y << "This is line #{i}\n"
      end
    end
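Outside Rails, the same enumerator can be exercised directly, since Rack only needs the body object to respond to #each (a reduced sketch with 3 lines instead of 10,000,000):

```ruby
# Enumerator responds to #each, which is all Rack requires of a
# response body; each yielded string becomes one chunk.
body = Enumerator.new do |y|
  3.times { |i| y << "This is line #{i}\n" }
end

chunks = body.to_a
```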

    Note that when and if the data is flushed depends on the Rack handler and underlying server being used. I have confirmed that Mongrel, for instance, will stream the data, but other users have reported that WEBrick, for instance, buffers it until the response is closed. There is no way to force the response to flush.

    In Rails 3.0.x, there are several additional gotchas:

    • In development mode, doing things such as accessing model classes from within the enumeration can be problematic due to bad interactions with class reloading. This is an open bug in Rails 3.0.x.
    • A bug in the interaction between Rack and Rails causes #each to be called twice for each request. This is another open bug. You can work around it with the following monkey patch:

      class Rack::Response
        def close
          @body.close if @body.respond_to?(:close)
        end
      end

    Both problems are fixed in Rails 3.1, where HTTP streaming is a marquee feature.

    Note that the other common suggestion, self.response_body = proc {|response, output| ...}, does work in Rails 3.0.x, but has been deprecated (and will no longer actually stream the data) in 3.1. Assigning an object that responds to #each works in all Rails 3 versions.

  •

    Thanks to all the posts above, here is fully working code to stream large CSVs. This code:

    1. Does not require any additional gems.
    2. Uses Model.find_each() so as to not bloat memory with all matching objects.
    3. Has been tested on Rails 3.2.5, Ruby 1.9.3 and Heroku using Unicorn, with a single dyno.
    4. Adds a GC.start every 500 rows, so as not to blow the Heroku dyno's allowed memory.
    5. You may need to adjust the GC.start interval depending on your model's memory footprint. I have successfully used this to stream 105K models into a CSV of 9.7 MB without any problems.

    Controller Method:

    def csv_export
      respond_to do |format|
        format.csv {
          @filename = "responses-#{}.csv"
          self.response.headers["Content-Type"] ||= 'text/csv'
          self.response.headers["Content-Disposition"] = "attachment; filename=#{@filename}"
          self.response.headers['Last-Modified'] = Time.now.ctime.to_s
          self.response_body = Enumerator.new do |y|
            i = 0
            Model.find_each do |m|
              y << Model.csv_header.to_csv if i == 0
              y << m.csv_array.to_csv
              i += 1
              GC.start if i % 500 == 0
            end
          end
        }
      end
    end


    Unicorn config (config/unicorn.rb):

    # Set to 3 instead of 4 as per
    worker_processes 3
    # Change timeout to 120s to allow downloading of large streamed CSVs on slow networks
    timeout 120
    # Enable streaming
    port = ENV["PORT"].to_i
    listen port, :tcp_nopush => false


    Model methods:

      def self.csv_header
        ["ID", "Route", "username"]
      end

      def csv_array
        [id, route, username]
      end
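For reference, Array#to_csv (from the csv standard library, loaded with require 'csv') is what renders each of these row arrays as a single CSV line:

```ruby
require 'csv'

# Each array that csv_header / csv_array return becomes one CSV line,
# newline included, ready to be pushed into the response body.
header_line = ["ID", "Route", "username"].to_csv
row_line    = [42, "/download", "alice"].to_csv
```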

  •

    In case you are assigning to response_body an object that responds to the #each method and it's buffering until the response is closed, try this in the action controller:

    self.response.headers['Last-Modified'] = Time.now.ctime.to_s

  •

    Just for the record, Rails >= 3.1 has an easy way to stream data by assigning an object that responds to the #each method to the controller's response.

    Everything is explained here:

  •

    Yes, response_body is the Rails 3 way of doing this for the moment:

  •

    This solved my problem as well - I have gzip'd CSV files that I want to send to the user as unzipped CSV, so I read them a line at a time using a GzipReader.

    These lines are also helpful if you're trying to deliver a big file as a download:

    self.response.headers["Content-Type"] = "application/octet-stream"
    self.response.headers["Content-Disposition"] = "attachment; filename=#{filename}"
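A self-contained sketch of the line-at-a-time read, using an in-memory gzip buffer in place of the file on disk (Zlib.gzip requires Ruby >= 2.4):

```ruby
require 'zlib'
require 'stringio'

# Compress a small CSV in memory, then stream it back out one line at
# a time, the way each line could be pushed into a response body.
gzipped = Zlib.gzip("a,b\n1,2\n3,4\n")

lines = []
Zlib::GzipReader.new(StringIO.new(gzipped)).each_line do |line|
  lines << line  # in a controller enumerator this would be y << line
end
```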

  •

    In addition, you will have to set the 'Content-Length' header yourself.

    If not, Rack will have to wait (buffering the body data into memory) to determine the length, which will ruin your efforts to stream using the methods described above.

    In my case, I could determine the length. In cases where you can't, you need to make Rack start sending the body without a 'Content-Length' header. Try adding "use Rack::Chunked" into config.ru, after the 'require' and before the 'run'. (Thanks arkadiy)
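A hedged sketch of where that middleware line would sit in config.ru (MyApp::Application is a placeholder for your application's constant):

```ruby
# config.ru (sketch; MyApp::Application is a placeholder name)
require ::File.expand_path('../config/environment', __FILE__)

# Wrap bodies in chunked transfer encoding when no Content-Length is set.
use Rack::Chunked
run MyApp::Application
```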

  •

    I commented in the Lighthouse ticket, just wanted to say the self.response_body = proc approach worked for me, though I needed to use Mongrel instead of WEBrick to succeed.


  •

    Applying John's solution along with Exequiel's suggestion worked for me.

    The statement

    self.response.headers['Last-Modified'] = Time.now.ctime.to_s

    marks the response as non-cacheable in Rack.

    After investigating further, I figured one could also use this:

    headers['Cache-Control'] = 'no-cache'

    This, to me, is just slightly more intuitive. It conveys the message to anyone else who may be reading my code. Also, in case a future version of Rack stops checking for Last-Modified, a lot of code may break and it may be a while before folks figure out why.