Chapter 5. ByteBuf

This chapter covers

  • ByteBuf —Netty’s data container
  • API details
  • Use cases
  • Memory allocation

As we noted earlier, the fundamental unit of network data is always the byte. Java NIO provides ByteBuffer as its byte container, but that class is overly complex and somewhat cumbersome to use.

Netty’s alternative to ByteBuffer is ByteBuf, a powerful implementation that addresses the limitations of the JDK API and provides a better API for network application developers.

In this chapter we’ll illustrate the superior functionality and flexibility of ByteBuf as compared to the JDK’s ByteBuffer. This will also give you a better understanding of Netty’s approach to data handling in general and prepare you for our discussion of ChannelPipeline and ChannelHandler in chapter 6.

5.1. The ByteBuf API

Netty’s API for data handling is exposed through two components—abstract class ByteBuf and interface ByteBufHolder.

These are some of the advantages of the ByteBuf API:

  • It’s extensible to user-defined buffer types.
  • Transparent zero-copy is achieved by a built-in composite buffer type.
  • Capacity is expanded on demand (as with the JDK StringBuilder).
  • Switching between reader and writer modes doesn’t require calling ByteBuffer’s flip() method.
  • Reading and writing employ distinct indices.
  • Method chaining is supported.
  • Reference counting is supported.
  • Pooling is supported.

Other classes are available for managing the allocation of ByteBuf instances and for performing a variety of operations on the container and the data it holds. We’ll explore these features as we study ByteBuf and ByteBufHolder in detail.

5.2. Class ByteBuf—Netty’s data container

Because all network communications involve the movement of sequences of bytes, an efficient and easy-to-use data structure is an obvious necessity. Netty’s ByteBuf implementation meets and exceeds these requirements. Let’s start by looking at how it uses indices to simplify access to the data it contains.

5.2.1. How it works

ByteBuf maintains two distinct indices: one for reading and one for writing. When you read from a ByteBuf, its readerIndex is incremented by the number of bytes read. Similarly, when you write to a ByteBuf, its writerIndex is incremented. Figure 5.1 shows the layout and state of an empty ByteBuf.

Figure 5.1. A 16-byte ByteBuf with its indices set to 0

To understand the relationship between these indices, consider what would happen if you were to read bytes until the readerIndex reached the same value as the writerIndex. At that point, you would have reached the end of readable data. Attempting to read beyond that point would trigger an IndexOutOfBoundsException, just as when you attempt to access data beyond the end of an array.

ByteBuf methods whose names begin with read or write advance the corresponding index, whereas operations that begin with set and get do not. The latter operate on an index that’s passed to the method as an argument.

The maximum capacity of a ByteBuf can be specified, and attempting to move the write index past this value will trigger an exception. (The default limit is Integer.MAX_VALUE.)

5.2.2. ByteBuf usage patterns

While working with Netty, you’ll encounter several common usage patterns built around ByteBuf. As we examine them, it will help to keep figure 5.1 in mind—an array of bytes with distinct indices to control read and write access.

Heap buffers

The most frequently used ByteBuf pattern stores the data in the heap space of the JVM, in what is referred to as a backing array. This pattern provides fast allocation and deallocation in situations where pooling isn’t in use, and it’s well suited to cases where you have to handle legacy data, as shown in listing 5.1.

Listing 5.1. Backing array
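A minimal sketch of the backing-array pattern; handleArray() is a hypothetical helper that processes the raw bytes.

ByteBuf heapBuf = ...;
if (heapBuf.hasArray()) {                                     // Check whether there is a backing array
    byte[] array = heapBuf.array();                           // Get a reference to the array
    int offset = heapBuf.arrayOffset()
            + heapBuf.readerIndex();                          // Calculate the offset of the first readable byte
    int length = heapBuf.readableBytes();                     // Get the number of readable bytes
    handleArray(array, offset, length);                       // Call a (hypothetical) method that uses the data
}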

Note

Attempting to access a backing array when hasArray() returns false will trigger an UnsupportedOperationException. This pattern is similar to uses of the JDK’s ByteBuffer.

Direct buffers

Direct buffer is another ByteBuf pattern. We expect that memory allocated for object creation will always come from the heap, but it doesn’t have to—the ByteBuffer class that was introduced in JDK 1.4 with NIO allows a JVM implementation to allocate memory via native calls. This aims to avoid copying the buffer’s contents to (or from) an intermediate buffer before (or after) each invocation of a native I/O operation.

The Javadoc for ByteBuffer states explicitly, “The contents of direct buffers will reside outside of the normal garbage-collected heap.”[1] This explains why direct buffers are ideal for network data transfer. If your data were contained in a heap-allocated buffer, the JVM would, in fact, copy your buffer to a direct buffer internally before sending it through the socket.

[1] Java Platform, Standard Edition 8 API Specification, java.nio, Class ByteBuffer, http://docs.oracle.com/javase/8/docs/api/java/nio/ByteBuffer.html.

The primary disadvantage of direct buffers is that they’re somewhat more expensive to allocate and release than are heap-based buffers. You may also encounter another drawback if you’re working with legacy code: because the data isn’t on the heap, you may have to make a copy, as shown next.

Listing 5.2. Direct buffer data access
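A sketch of the pattern, again assuming a hypothetical handleArray() helper; because the data isn’t on the heap, it must first be copied into an array.

ByteBuf directBuf = ...;
if (!directBuf.hasArray()) {                                  // Check that the buffer isn't backed by an array (it's direct)
    int length = directBuf.readableBytes();                   // Get the number of readable bytes
    byte[] array = new byte[length];                          // Allocate a new array to hold the copy
    directBuf.getBytes(directBuf.readerIndex(), array);       // Copy the bytes into the array
    handleArray(array, 0, length);                            // Process the copied data
}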

Clearly, this involves a bit more work than using a backing array, so if you know in advance that the data in the container will be accessed as an array, you may prefer to use heap memory.

Composite buffers

The third and final pattern uses a composite buffer, which presents an aggregated view of multiple ByteBufs. Here you can add and delete ByteBuf instances as needed, a feature entirely absent from the JDK’s ByteBuffer implementation.

Netty implements this pattern with a subclass of ByteBuf, CompositeByteBuf, which provides a virtual representation of multiple buffers as a single, merged buffer.

Warning

The ByteBuf instances in a CompositeByteBuf may include both direct and nondirect allocations. If there is only one component, calling hasArray() on a CompositeByteBuf will return the hasArray() value of that component; otherwise it will return false.

To illustrate, let’s consider a message composed of two parts, header and body, to be transmitted via HTTP. The two parts are produced by different application modules and assembled when the message is sent out. The application has the option of reusing the same message body for multiple messages. When this happens, a new header is created for each message.

Because we don’t want to reallocate both buffers for each message, CompositeByteBuf is a perfect fit; it eliminates unnecessary copying while exposing the common ByteBuf API. Figure 5.2 shows the resulting message layout.

Figure 5.2. CompositeByteBuf holding a header and body

The following listing shows how this requirement would be implemented using the JDK’s ByteBuffer. An array of two ByteBuffers is created to hold the message components, and a third one is created to hold a copy of all the data.

Listing 5.3. Composite buffer pattern using ByteBuffer
// Use an array to hold the message parts
ByteBuffer[] message = new ByteBuffer[] { header, body };
// Create a new ByteBuffer and use copy to merge the header and body
ByteBuffer message2 =
    ByteBuffer.allocate(header.remaining() + body.remaining());
message2.put(header);
message2.put(body);
message2.flip();

The allocation and copy operations, along with the need to manage the array, make this version inefficient as well as awkward. The next listing shows a version using CompositeByteBuf.

Listing 5.4. Composite buffer pattern using CompositeByteBuf
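A sketch of the same requirement with CompositeByteBuf; the header and body buffers are assumed to have been produced elsewhere.

CompositeByteBuf messageBuf = Unpooled.compositeBuffer();
ByteBuf headerBuf = ...;                        // Can be a backing array or a direct buffer
ByteBuf bodyBuf = ...;
messageBuf.addComponents(headerBuf, bodyBuf);   // Append the ByteBuf instances
// ...
messageBuf.removeComponent(0);                  // Remove the ByteBuf at index 0 (the header)
for (ByteBuf buf : messageBuf) {                // Loop over all the components
    System.out.println(buf.toString());
}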

CompositeByteBuf may not allow access to a backing array, so accessing the data in a CompositeByteBuf resembles the direct buffer pattern, as shown next.

Listing 5.5. Accessing the data in a CompositeByteBuf
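A sketch of that access pattern, once more assuming a hypothetical handleArray() helper:

CompositeByteBuf compBuf = ...;
int length = compBuf.readableBytes();            // Get the number of readable bytes
byte[] array = new byte[length];                 // Allocate a new array of that length
compBuf.getBytes(compBuf.readerIndex(), array);  // Copy the bytes into the array
handleArray(array, 0, array.length);             // Process the copied data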

Note that Netty optimizes socket I/O operations that employ CompositeByteBuf, eliminating whenever possible the performance and memory usage penalties that are incurred with the JDK’s buffer implementation.[2] This optimization takes place in Netty’s core code and is therefore not exposed, but you should be aware of its impact.

[2] This applies particularly to the JDK’s use of a technique known as Scatter/Gather I/O, defined as “a method of input and output where a single system call writes to a vector of buffers from a single data stream, or, alternatively, reads into a vector of buffers from a single data stream.” Robert Love, Linux System Programming (O’Reilly, 2007).

The CompositeByteBuf API

Beyond the methods it inherits from ByteBuf, CompositeByteBuf offers a great deal of added functionality. Refer to the Netty Javadocs for a full listing of the API.

5.3. Byte-level operations

ByteBuf provides numerous methods beyond the basic read and write operations for modifying its data. In the next sections we’ll discuss the most important of these.

5.3.1. Random access indexing

Just as in an ordinary Java byte array, ByteBuf indexing is zero-based: the index of the first byte is 0 and that of the last byte is always capacity() - 1. The next listing shows that the encapsulation of storage mechanisms makes it very simple to iterate over the contents of a ByteBuf.

Listing 5.6. Access data
ByteBuf buffer = ...;
for (int i = 0; i < buffer.capacity(); i++) {
    byte b = buffer.getByte(i);
    System.out.println((char) b);
}

Note that accessing the data using one of the methods that takes an index argument doesn’t alter the value of either readerIndex or writerIndex. Either can be moved manually if necessary by calling readerIndex(index) or writerIndex(index).

5.3.2. Sequential access indexing

While ByteBuf has both reader and writer indices, the JDK’s ByteBuffer has only one, which is why you have to call flip() to switch between read and write modes. Figure 5.3 shows how a ByteBuf is partitioned by its two indices into three areas.

Figure 5.3. ByteBuf internal segmentation

5.3.3. Discardable bytes

The segment labeled discardable bytes in figure 5.3 contains bytes that have already been read. They can be discarded and the space reclaimed by calling discardReadBytes(). The size of this segment is equal to the readerIndex; it’s initially 0 and increases as read operations are executed (get* operations don’t move the readerIndex).

Figure 5.4 shows the result of calling discardReadBytes() on the buffer shown in figure 5.3. You can see that the space in the discardable bytes segment has become available for writing. Note that there’s no guarantee about the contents of the writable segment after discardReadBytes() has been called.

Figure 5.4. ByteBuf after discarding read bytes

While you may be tempted to call discardReadBytes() frequently to maximize the writable segment, please be aware that this will most likely cause memory copying because the readable bytes (marked CONTENT in the figures) have to be moved to the start of the buffer. We advise doing this only when it’s really needed; for example, when memory is at a premium.

5.3.4. Readable bytes

The readable bytes segment of a ByteBuf stores the actual data. The default value of a newly allocated, wrapped, or copied buffer’s readerIndex is 0. Any operation whose name starts with read or skip will retrieve or skip the data at the current readerIndex and increase it by the number of bytes read.

If the method called takes a ByteBuf argument as a write target and doesn’t have a destination index argument, the destination buffer’s writerIndex will be increased as well; for example,

readBytes(ByteBuf dest);

If an attempt is made to read from the buffer when readable bytes have been exhausted, an IndexOutOfBoundsException is raised.

This listing shows how to read all readable bytes.

Listing 5.7. Read all data
ByteBuf buffer = ...;
while (buffer.isReadable()) {
    System.out.println(buffer.readByte());
}

5.3.5. Writable bytes

The writable bytes segment is an area of memory with undefined contents, ready for writing. The default value of a newly allocated buffer’s writerIndex is 0. Any operation whose name starts with write will start writing data at the current writerIndex, increasing it by the number of bytes written. If the target of the write operation is also a ByteBuf and no source index is specified, the source buffer’s readerIndex will be increased by the same amount. This call would appear as follows:

writeBytes(ByteBuf src);

If an attempt is made to write beyond the target’s capacity, an IndexOutOfBoundsException will be raised.

The following listing is an example that fills the buffer with random integer values until it runs out of space. The method writableBytes() is used here to determine whether there is sufficient space in the buffer.

Listing 5.8. Write data
// Fills the writable bytes of a buffer with random integers.
ByteBuf buffer = ...;
while (buffer.writableBytes() >= 4) {
    buffer.writeInt(random.nextInt());
}

5.3.6. Index management

The JDK’s InputStream defines the methods mark(int readlimit) and reset(). These are used to mark the current position in the stream and to reset the stream to that position, respectively.

Similarly, you can mark and reset the ByteBuf readerIndex and writerIndex by calling markReaderIndex(), markWriterIndex(), resetReaderIndex(), and resetWriterIndex(). These are similar to the InputStream calls, except that there’s no readlimit parameter to specify when the mark becomes invalid.

You can also move the indices to specified positions by calling readerIndex(int) or writerIndex(int). Attempting to set either index to an invalid position will cause an IndexOutOfBoundsException.
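For example, the marker methods are handy when parsing a length-prefixed frame. The following is a minimal sketch (assuming a 4-byte length field) that rewinds the readerIndex if the full frame hasn’t arrived yet.

ByteBuf buffer = ...;
buffer.markReaderIndex();                  // Remember the current readerIndex
int frameLength = buffer.readInt();        // Read the 4-byte length prefix
if (buffer.readableBytes() < frameLength) {
    buffer.resetReaderIndex();             // Not enough data yet; rewind to the marked position
}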

You can set both readerIndex and writerIndex to 0 by calling clear(). Note that this doesn’t clear the contents of memory. Figure 5.5 (which repeats figure 5.3) shows how it works.

Figure 5.5. Before clear() is called

As before, the ByteBuf contains three segments. Figure 5.6 shows the ByteBuf after clear() is called.

Figure 5.6. After clear() is called

Calling clear() is much less expensive than discardReadBytes() because it resets the indices without copying any memory.

5.3.7. Search operations

There are several ways to determine the index of a specified value in a ByteBuf. The simplest of these uses the indexOf() methods. More complex searches can be executed with methods that take a ByteBufProcessor argument. This interface defines a single method,

boolean process(byte value)

which reports whether the input value is the one being sought.

ByteBufProcessor defines numerous convenience implementations targeting common values. Suppose your application needs to integrate with so-called Flash sockets,[3] which have NULL-terminated content. Calling

forEachByte(ByteBufProcessor.FIND_NUL)

consumes the Flash data simply and efficiently, because fewer bounds checks are executed during processing.

[3] Flash sockets are discussed in the Flash ActionScript 3.0 Developer’s Guide, Networking and communication, Sockets page at http://help.adobe.com/en_US/as3/dev/WSb2ba3b1aad8a27b0-181c51321220efd9d1c-8000.html.

This listing shows an example of searching for a carriage return character (\r).

Listing 5.9. Using ByteBufProcessor to find \r
ByteBuf buffer = ...;
int index = buffer.forEachByte(ByteBufProcessor.FIND_CR);

5.3.8. Derived buffers

A derived buffer provides a view of a ByteBuf that represents its contents in a specialized way. Such views are created by the following methods:

  • duplicate()
  • slice()
  • slice(int, int)
  • Unpooled.unmodifiableBuffer(...)
  • order(ByteOrder)
  • readSlice(int)

Each returns a new ByteBuf instance with its own reader, writer, and marker indices. The internal storage is shared just as in a JDK ByteBuffer. This makes a derived buffer inexpensive to create, but it also means that if you modify its contents you are modifying the source instance as well, so beware.

ByteBuf copying

If you need a true copy of an existing buffer, use copy() or copy(int,int). Unlike a derived buffer, the ByteBuf returned by this call has an independent copy of the data.

The next listing shows how to work with a ByteBuf segment using slice(int, int).

Listing 5.10. Slice a ByteBuf
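A minimal sketch of slicing, showing that the slice and the original share the underlying data; the string and indices are only illustrative.

Charset utf8 = Charset.forName("UTF-8");
ByteBuf buf = Unpooled.copiedBuffer("Netty in Action rocks!", utf8);  // Create a ByteBuf that holds the bytes of the string
ByteBuf sliced = buf.slice(0, 15);              // Create a slice starting at index 0 and ending at index 14
System.out.println(sliced.toString(utf8));      // Prints "Netty in Action"
buf.setByte(0, (byte) 'J');                     // Update the byte at index 0 in the original
assert buf.getByte(0) == sliced.getByte(0);     // Succeeds because the data is shared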

Now let’s see how a copy of a ByteBuf segment differs from a slice.

Listing 5.11. Copying a ByteBuf
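A sketch of the same steps using copy(int, int), where the copy’s data is independent of the original:

Charset utf8 = Charset.forName("UTF-8");
ByteBuf buf = Unpooled.copiedBuffer("Netty in Action rocks!", utf8);  // Create a ByteBuf that holds the bytes of the string
ByteBuf copy = buf.copy(0, 15);                 // Create a copy of the segment starting at index 0 and ending at index 14
System.out.println(copy.toString(utf8));        // Prints "Netty in Action"
buf.setByte(0, (byte) 'J');                     // Update the byte at index 0 in the original
assert buf.getByte(0) != copy.getByte(0);       // Succeeds because the data is not shared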

The two cases are identical except for the effect of modifying a slice or a copy of the original ByteBuf. Whenever possible, use slice() to avoid the cost of copying memory.

5.3.9. Read/write operations

As we’ve mentioned, there are two categories of read/write operations:

  • get() and set() operations that start at a given index and leave it unchanged
  • read() and write() operations that start at a given index and adjust it by the number of bytes accessed

Table 5.1 lists the most frequently used get() methods. For a complete list, refer to the API docs.

Table 5.1. get() operations

Name    Description

getBoolean(int) Returns the Boolean value at the given index
getByte(int) Returns the byte at the given index
getUnsignedByte(int) Returns the unsigned byte value at the given index as a short
getMedium(int) Returns the 24-bit medium int value at the given index
getUnsignedMedium(int) Returns the unsigned 24-bit medium int value at the given index
getInt(int) Returns the int value at the given index
getUnsignedInt(int) Returns the unsigned int value at the given index as a long
getLong(int) Returns the long value at the given index
getShort(int) Returns the short value at the given index
getUnsignedShort(int) Returns the unsigned short value at the given index as an int
getBytes(int, ...) Transfers this buffer’s data to a specified destination starting at the given index

Most of these operations have a corresponding set() method. These are listed in table 5.2.

Table 5.2. set() operations

Name    Description

setBoolean(int, boolean) Sets the Boolean value at the given index
setByte(int index, int value) Sets byte value at the given index
setMedium(int index, int value) Sets the 24-bit medium value at the given index
setInt(int index, int value) Sets the int value at the given index
setLong(int index, long value) Sets the long value at the given index
setShort(int index, int value) Sets the short value at the given index

The following listing illustrates the use of get() and set() methods, showing that they don’t alter the read and write indices.

Listing 5.12. get() and set() usage
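A sketch of the behavior just described; the string is only illustrative.

Charset utf8 = Charset.forName("UTF-8");
ByteBuf buf = Unpooled.copiedBuffer("Netty in Action rocks!", utf8);  // Create a ByteBuf that holds the bytes of the string
System.out.println((char) buf.getByte(0));      // Prints the first character, 'N'
int readerIndex = buf.readerIndex();            // Store the current readerIndex
int writerIndex = buf.writerIndex();            // Store the current writerIndex
buf.setByte(0, (byte) 'B');                     // Update the byte at index 0
System.out.println((char) buf.getByte(0));      // Prints 'B'
assert readerIndex == buf.readerIndex();        // Succeeds because get() and set() don't move the indices
assert writerIndex == buf.writerIndex();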

Now let’s examine the read() operations, which act on the current readerIndex. These methods are used to read from the ByteBuf as if it were a stream. Table 5.3 shows the most commonly used methods.

Table 5.3. read() operations

Name    Description

readBoolean() Returns the Boolean value at the current readerIndex and increases the readerIndex by 1.
readByte() Returns the byte value at the current readerIndex and increases the readerIndex by 1.
readUnsignedByte() Returns the unsigned byte value at the current readerIndex as a short and increases the readerIndex by 1.
readMedium() Returns the 24-bit medium value at the current readerIndex and increases the readerIndex by 3.
readUnsignedMedium() Returns the unsigned 24-bit medium value at the current readerIndex and increases the readerIndex by 3.
readInt() Returns the int value at the current readerIndex and increases the readerIndex by 4.
readUnsignedInt() Returns the unsigned int value at the current readerIndex as a long and increases the readerIndex by 4.
readLong() Returns the long value at the current readerIndex and increases the readerIndex by 8.
readShort() Returns the short value at the current readerIndex and increases the readerIndex by 2.
readUnsignedShort() Returns the unsigned short value at the current readerIndex as an int and increases the readerIndex by 2.
readBytes(ByteBuf | byte[] destination, int dstIndex [,int length]) Transfers data from this ByteBuf, starting at the current readerIndex and continuing (if specified) for length bytes, to a destination ByteBuf or byte[], starting at the destination’s dstIndex. The local readerIndex is incremented by the number of bytes transferred.

Almost every read() method has a corresponding write() method, used to append to a ByteBuf. Note that the arguments to these methods, listed in table 5.4, are the values to be written, not index values.

Table 5.4. Write operations

Name    Description

writeBoolean(boolean) Writes the Boolean value at the current writerIndex and increases the writerIndex by 1.
writeByte(int) Writes the byte value at the current writerIndex and increases the writerIndex by 1.
writeMedium(int) Writes the medium value at the current writerIndex and increases the writerIndex by 3.
writeInt(int) Writes the int value at the current writerIndex and increases the writerIndex by 4.
writeLong(long) Writes the long value at the current writerIndex and increases the writerIndex by 8.
writeShort(int) Writes the short value at the current writerIndex and increases the writerIndex by 2.
writeBytes(source ByteBuf | byte[] [,int srcIndex, int length]) Transfers data from the specified source (ByteBuf or byte[]) to this buffer, starting at the current writerIndex. If srcIndex and length are provided, reading starts at srcIndex and proceeds for length bytes. The current writerIndex is incremented by the number of bytes written.

Listing 5.13 shows these methods in use.

Listing 5.13. read() and write() operations on the ByteBuf
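A sketch along the same lines, this time using read() and write() methods, which do move the indices:

Charset utf8 = Charset.forName("UTF-8");
ByteBuf buf = Unpooled.copiedBuffer("Netty in Action rocks!", utf8);  // Create a ByteBuf that holds the bytes of the string
System.out.println((char) buf.readByte());      // Prints the first character, 'N', and advances the readerIndex
int readerIndex = buf.readerIndex();            // Store the current readerIndex
int writerIndex = buf.writerIndex();            // Store the current writerIndex
buf.writeByte((byte) '?');                      // Append '?' to the buffer and advance the writerIndex
assert readerIndex == buf.readerIndex();        // Succeeds: the write didn't move the readerIndex
assert writerIndex != buf.writerIndex();        // Succeeds: the writerIndex has moved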

5.3.10. More operations

Table 5.5 lists additional useful operations provided by ByteBuf.

Table 5.5. Other useful operations

Name    Description

isReadable() Returns true if at least one byte can be read.
isWritable() Returns true if at least one byte can be written.
readableBytes() Returns the number of bytes that can be read.
writableBytes() Returns the number of bytes that can be written.
capacity() Returns the number of bytes the ByteBuf can currently hold; once this limit is reached, the buffer will try to expand until maxCapacity() is reached.
maxCapacity() Returns the maximum number of bytes the ByteBuf can hold.
hasArray() Returns true if the ByteBuf is backed by a byte array.
array() Returns the byte array if the ByteBuf is backed by a byte array; otherwise it throws an UnsupportedOperationException.

5.4. Interface ByteBufHolder

We often find that we need to store a variety of property values in addition to the actual data payload. An HTTP response is a good example; along with the content represented as bytes, there are status code, cookies, and so on.

Netty provides ByteBufHolder to handle this common use case. ByteBufHolder also provides support for advanced features of Netty, such as buffer pooling, where a ByteBuf can be borrowed from a pool and also be released automatically if required.

ByteBufHolder has just a handful of methods for access to the underlying data and reference counting. Table 5.6 lists them (leaving aside those it inherits from ReferenceCounted).

Table 5.6. ByteBufHolder operations

Name    Description

content() Returns the ByteBuf held by this ByteBufHolder
copy() Returns a deep copy of this ByteBufHolder, including an unshared copy of the contained ByteBuf’s data
duplicate() Returns a shallow copy of this ByteBufHolder, including a shared copy of the contained ByteBuf’s data

ByteBufHolder is a good choice if you want to implement a message object that stores its payload in a ByteBuf.
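For instance, a minimal sketch of such a message type might extend Netty’s DefaultByteBufHolder; the CustomMessage name and the statusCode property here are merely illustrative.

public class CustomMessage extends DefaultByteBufHolder {
    private final int statusCode;                // An example of an additional property carried with the payload

    public CustomMessage(int statusCode, ByteBuf payload) {
        super(payload);                          // The payload ByteBuf is managed by DefaultByteBufHolder
        this.statusCode = statusCode;
    }

    public int statusCode() {
        return statusCode;
    }
}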

5.5. ByteBuf allocation

In this section we’ll describe ways of managing ByteBuf instances.

5.5.1. On-demand: interface ByteBufAllocator

To reduce the overhead of allocating and deallocating memory, Netty implements pooling with the interface ByteBufAllocator, which can be used to allocate instances of any of the ByteBuf varieties we’ve described. The use of pooling is an application-specific decision that doesn’t alter the ByteBuf API in any way.

Table 5.7 lists the operations provided by ByteBufAllocator.

Table 5.7. ByteBufAllocator methods

Name    Description

buffer() buffer(int initialCapacity) buffer(int initialCapacity, int maxCapacity) Returns a ByteBuf with heap-based or direct data storage
heapBuffer() heapBuffer(int initialCapacity) heapBuffer(int initialCapacity, int maxCapacity) Returns a ByteBuf with heap-based storage
directBuffer() directBuffer(int initialCapacity) directBuffer(int initialCapacity, int maxCapacity) Returns a ByteBuf with direct storage
compositeBuffer() compositeBuffer(int maxNumComponents) compositeDirectBuffer() compositeDirectBuffer(int maxNumComponents) compositeHeapBuffer() compositeHeapBuffer(int maxNumComponents) Returns a CompositeByteBuf that can be expanded by adding heap-based or direct buffers up to the specified number of components
ioBuffer() Returns a ByteBuf that will be used for I/O operations on a socket

You can obtain a reference to a ByteBufAllocator either from a Channel (each of which can have a distinct instance) or through the ChannelHandlerContext that is bound to a ChannelHandler. The following listing illustrates both of these methods.

Listing 5.14. Obtaining a ByteBufAllocator reference
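A sketch of both approaches; the Channel and ChannelHandlerContext are assumed to be available in context.

Channel channel = ...;
ByteBufAllocator allocator = channel.alloc();    // Get a ByteBufAllocator from a Channel
// ...
ChannelHandlerContext ctx = ...;
ByteBufAllocator allocator2 = ctx.alloc();       // Get a ByteBufAllocator from a ChannelHandlerContext
// ...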

Netty provides two implementations of ByteBufAllocator: PooledByteBufAllocator and UnpooledByteBufAllocator. The former pools ByteBuf instances to improve performance and minimize memory fragmentation. This implementation uses an efficient approach to memory allocation known as jemalloc[4] that has been adopted by a number of modern OSes. The latter implementation doesn’t pool ByteBuf instances and returns a new instance every time it’s called.

[4] Jason Evans, “A Scalable Concurrent malloc(3) Implementation for FreeBSD” (2006), http://people.freebsd.org/~jasone/jemalloc/bsdcan2006/jemalloc.pdf.

Although Netty uses the PooledByteBufAllocator by default, this can be changed easily via the ChannelConfig API or by specifying a different allocator when bootstrapping your application. More details can be found in chapter 8.

5.5.2. Unpooled buffers

There may be situations where you don’t have a reference to a ByteBufAllocator. For this case, Netty provides a utility class called Unpooled, which provides static helper methods to create unpooled ByteBuf instances. Table 5.8 lists the most important of these methods.

Table 5.8. Unpooled methods

Name    Description

buffer() buffer(int initialCapacity) buffer(int initialCapacity, int maxCapacity) Returns an unpooled ByteBuf with heap-based storage
directBuffer() directBuffer(int initialCapacity) directBuffer(int initialCapacity, int maxCapacity) Returns an unpooled ByteBuf with direct storage
wrappedBuffer(...) Returns a ByteBuf that wraps the given data
copiedBuffer(...) Returns a ByteBuf that copies the given data

The Unpooled class also makes ByteBuf available to non-networking projects that can benefit from a high-performance extensible buffer API and that don’t require other Netty components.
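A brief sketch of these helpers in use:

ByteBuf heapBuf = Unpooled.buffer(16);                               // Unpooled ByteBuf with heap-based storage
ByteBuf directBuf = Unpooled.directBuffer(16);                       // Unpooled ByteBuf with direct storage
ByteBuf wrapped = Unpooled.wrappedBuffer(new byte[] { 1, 2, 3 });    // Wraps the given array without copying it
ByteBuf copied = Unpooled.copiedBuffer("data", CharsetUtil.UTF_8);   // Copies the given data into a new buffer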

5.5.3. Class ByteBufUtil

ByteBufUtil provides static helper methods for manipulating a ByteBuf. Because this API is generic and unrelated to pooling, these methods have been implemented outside the allocation classes.

The most valuable of these static methods is probably hexDump(), which returns a hexadecimal representation of the contents of a ByteBuf. This is useful in a variety of situations, such as logging the contents of a ByteBuf for debugging purposes. A hex representation will generally provide a more usable log entry than would a direct representation of the byte values. Furthermore, the hex version can easily be converted back to the actual byte representation.

Another useful method is boolean equals(ByteBuf, ByteBuf), which determines the equality of two ByteBuf instances. You may find other methods of ByteBufUtil useful if you implement your own ByteBuf subclasses.
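A short sketch of both helpers:

ByteBuf buf = Unpooled.copiedBuffer("Netty", CharsetUtil.UTF_8);
System.out.println(ByteBufUtil.hexDump(buf));                    // Prints "4e65747479"
ByteBuf other = Unpooled.copiedBuffer("Netty", CharsetUtil.UTF_8);
System.out.println(ByteBufUtil.equals(buf, other));              // Prints "true" because the contents are equal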

5.6. Reference counting

Reference counting is a technique for optimizing memory use and performance by releasing the resources held by an object when it is no longer referenced by other objects. Netty introduced reference counting in version 4 for ByteBuf and ByteBufHolder, both of which implement interface ReferenceCounted.

The idea behind reference counting isn’t particularly complex; mostly it involves tracking the number of active references to a specified object. A ReferenceCounted implementation instance will normally start out with an active reference count of 1. As long as the reference count is greater than 0, the object is guaranteed not to be released. When the number of active references decreases to 0, the instance will be released. Note that while the precise meaning of release may be implementation-specific, at the very least an object that has been released should no longer be available for use.

Reference counting is essential to pooling implementations, such as PooledByteBufAllocator, which reduces the overhead of memory allocation. Examples are shown in the next two listings.

Listing 5.15. Reference counting
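A sketch of inspecting the reference count of a newly allocated buffer:

Channel channel = ...;
ByteBufAllocator allocator = channel.alloc();    // Get a ByteBufAllocator from a Channel
// ...
ByteBuf buffer = allocator.directBuffer();       // Allocate a ByteBuf from the allocator
assert buffer.refCnt() == 1;                     // Check the reference count; a fresh buffer starts at 1
// ...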

Listing 5.16. Release reference-counted object
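And a sketch of releasing it:

ByteBuf buffer = ...;
boolean released = buffer.release();   // Decrements the reference count; returns true if it reached 0 and the object was released
// ...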

Trying to access a reference-counted object that’s been released will result in an IllegalReferenceCountException.

Note that a specific class can define its reference-counting contract in its own unique way. For example, we can envision a class whose implementation of release() always sets the reference count to zero whatever its current value, thus invalidating all active references at once.

Who is responsible for release?

In general, the last party to access an object is responsible for releasing it. In chapter 6 we’ll explain the relevance of this concept to ChannelHandler and ChannelPipeline.

5.7. Summary

This chapter was devoted to Netty’s data containers, based on ByteBuf. We started out by explaining the advantages of ByteBuf over the implementation provided by the JDK. We also highlighted the APIs of the available variants and indicated which are best suited to specific use cases.

These are the main points we covered:

  • The use of distinct read and write indices to control data access
  • Different approaches to memory usage—backing arrays and direct buffers
  • The aggregate view of multiple ByteBufs using CompositeByteBuf
  • Data-access methods: searching, slicing, and copying
  • The read, write, get, and set APIs
  • ByteBufAllocator pooling and reference counting

In the next chapter, we’ll focus on ChannelHandler, which provides the vehicle for your data-processing logic. Because ChannelHandler makes heavy use of ByteBuf, you’ll begin to see important pieces of the overall architecture of Netty coming together.
