I've been thinking about the problem some more, and I've come up with an alternate approach: simply don't allocate a buffer until the read actually takes place. This is made pretty simple by the BufferAllocator interface in XNIO.
Using this interface, the signature of the asynchronous read method would look like this:
IoFuture<ByteBuffer> asyncRead(BufferAllocator<ByteBuffer> allocator) throws IOException;
The buffer is allocated only when the channel is actually readable. And if an NIO.2-style (or similar) asynchronous read is used "under the covers" for whatever reason, the allocation can simply happen up front.
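To illustrate the idea, here is a minimal, self-contained sketch of deferred allocation. Note that this is not the actual XNIO implementation: the BufferAllocator interface is reduced to a single hypothetical allocate() method, and readWhenReady() stands in for the channel machinery, allocating a buffer only once data is known to be available.

```java
import java.nio.ByteBuffer;

// Hypothetical single-method allocator, standing in for XNIO's BufferAllocator.
interface BufferAllocator<B extends java.nio.Buffer> {
    B allocate();
}

public class DeferredReadSketch {
    // Simulated read: the allocator is invoked only when bytes are available,
    // so an idle channel never ties up a buffer.
    static ByteBuffer readWhenReady(byte[] incoming, BufferAllocator<ByteBuffer> allocator) {
        if (incoming.length == 0) {
            return null; // nothing readable yet, so no buffer is allocated
        }
        ByteBuffer buf = allocator.allocate(); // allocation happens at read time
        buf.put(incoming, 0, Math.min(incoming.length, buf.remaining()));
        buf.flip();
        return buf;
    }

    public static void main(String[] args) {
        final int[] allocations = {0};
        BufferAllocator<ByteBuffer> allocator = () -> {
            allocations[0]++;
            return ByteBuffer.allocate(64);
        };

        // No data yet: the allocator is never called.
        readWhenReady(new byte[0], allocator);
        System.out.println("allocations after empty read: " + allocations[0]);

        // Data available: the allocator runs exactly once.
        ByteBuffer result = readWhenReady("hello".getBytes(), allocator);
        System.out.println("allocations after real read: " + allocations[0]);
        System.out.println("bytes read: " + result.remaining());
    }
}
```

The payoff is the same as with the asyncRead() signature above: a server holding thousands of mostly idle connections pays for a buffer only on the connections that actually have data to read.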
Look for this feature in XNIO 1.1!