rendering - What's the concept of and differences between Framebuffer and Renderbuffer in OpenGL?

I'm confused about the concepts of Framebuffer and Renderbuffer. I know that they're required for rendering, but I want to understand them before using them.

I know that some bitmap buffer is required to store the temporary drawing result (the back buffer), and that another buffer has to be shown on screen while that drawing is in progress (the front buffer); then you flip them and draw again. I understand this concept, but it's hard to connect these objects to it.

What is the concept behind each of them, and what are the differences between them?





All answers
  •

    This page has some details which I think explain the difference quite nicely. Firstly:

    The final rendering destination of the OpenGL pipeline is called [the] framebuffer.

    Whereas:

    Renderbuffer Object
    In addition, the renderbuffer object is newly introduced for offscreen rendering. It allows rendering a scene directly to a renderbuffer object, instead of rendering to a texture object. A renderbuffer is simply a data storage object containing a single image of a renderable internal format. It is used to store OpenGL logical buffers that do not have a corresponding texture format, such as the stencil or depth buffer.
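    As a rough sketch of what that quote describes (assuming desktop OpenGL 3.0+ or ARB_framebuffer_object, with a context and function loader already set up, and an arbitrary 640x480 size), rendering directly to a renderbuffer instead of a texture looks roughly like this:

        GLuint fbo, colorRb;

        /* A framebuffer object to render into instead of the window. */
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);

        /* A renderbuffer holding a single renderable color image. */
        glGenRenderbuffers(1, &colorRb);
        glBindRenderbuffer(GL_RENDERBUFFER, colorRb);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, 640, 480);

        /* Attach it as the framebuffer's color buffer. */
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                  GL_RENDERBUFFER, colorRb);

        /* Subsequent draw calls now land in colorRb, not on screen. */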


  •

    The Framebuffer object is not actually a buffer, but an aggregator object that contains one or more attachments, which, in turn, are the actual buffers. You can think of the Framebuffer as a C structure where every member is a pointer to a buffer. Without any attachments, a Framebuffer object has a very low memory footprint.

    Now each buffer attached to a Framebuffer can be a Renderbuffer or a texture.
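    To make the aggregator idea concrete, here is a minimal, hypothetical setup (again assuming OpenGL 3.0+ and an arbitrary 1024x768 size) that attaches one texture and one renderbuffer to the same Framebuffer and then checks that the combination is usable:

        GLuint fbo, colorTex, depthRb;

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);

        /* Attachment 1: a texture, so shaders can read the result later. */
        glGenTextures(1, &colorTex);
        glBindTexture(GL_TEXTURE_2D, colorTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1024, 768, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, colorTex, 0);

        /* Attachment 2: a renderbuffer in a native format for the depth buffer. */
        glGenRenderbuffers(1, &depthRb);
        glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, 1024, 768);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                  GL_RENDERBUFFER, depthRb);

        /* The Framebuffer itself only holds these references. */
        if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
            /* handle the error */
        }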

    The Renderbuffer is an actual buffer (an array of bytes, integers, or pixels). The Renderbuffer stores pixel values in a native format, so it's optimized for offscreen rendering. In other words, drawing to a Renderbuffer can be much faster than drawing to a texture. The drawback is that pixels use a native, implementation-dependent format, so reading from a Renderbuffer is much harder than reading from a texture. Nevertheless, once a Renderbuffer has been painted, you can copy its contents directly to the screen (or to another Renderbuffer, I guess) very quickly using pixel transfer operations. This means that a Renderbuffer can be used to efficiently implement the double-buffer pattern that you mentioned.
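    The "copy to screen" step mentioned above is typically a framebuffer blit. A minimal sketch (assuming OpenGL 3.0+ and an fbo with a 640x480 color attachment, as in the first sketch above) might look like this:

        /* Read from the offscreen framebuffer, draw into the window (FBO 0),
           copying the color buffer with a fast pixel-transfer blit. */
        glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
        glBlitFramebuffer(0, 0, 640, 480,    /* source rectangle      */
                          0, 0, 640, 480,    /* destination rectangle */
                          GL_COLOR_BUFFER_BIT, GL_NEAREST);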

    Renderbuffers are a relatively new concept. Before them, a Framebuffer was used to render to a texture, which can be slower because a texture uses a standard format. It is still possible to render to a texture, and that's quite useful when you need to perform multiple passes over each pixel to build a scene, or to draw one scene on a surface inside another scene!
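    As a sketch of that render-to-texture, multi-pass idea (reusing the hypothetical fbo and colorTex names from the sketch above), the second pass simply samples what the first pass drew:

        /* Pass 1: render the scene into the texture attached to fbo. */
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glViewport(0, 0, 1024, 768);
        /* ... draw the scene ... */

        /* Pass 2: render to the window, reading the first pass's result. */
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glBindTexture(GL_TEXTURE_2D, colorTex);
        /* ... draw geometry whose fragment shader samples colorTex,
           e.g. a full-screen quad for post-processing, or a screen
           object placed inside the second scene ... */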

    The OpenGL wiki has this page that shows more details and links.