Looks like you've got a good understanding of how socket channels work. Took me weeks to understand them
Yes. With TCP (Socket, SocketChannel) the underlying OS/TCP stack decides when data is actually sent and how it is segmented into packets. You can influence this somewhat by calling flush() if you are using a Socket's OutputStream, though even that offers limited guarantees.
The JavaDoc for OutputStream's flush() method mentions:
If the intended destination of this stream is an abstraction provided by the underlying operating system, for example a file, then flushing the stream guarantees only that bytes previously written to the stream are passed to the operating system for writing; it does not guarantee that they are actually written to a physical device such as a disk drive.
However, that caveat is about OutputStream, not SocketChannel's read() and write() methods, so I'm not aware of any documented guarantees about packet boundaries for either method.
If you use a DatagramSocket or DatagramChannel and keep each message under the maximum UDP payload size, then each write()/send() call will almost always produce exactly one datagram, although behavior near the size limit can vary from OS to OS.
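To illustrate the UDP case, here's a small sketch (the port choice and message contents are just placeholders) that sends two messages over loopback with two separate send() calls and receives them as two separate datagrams, rather than one merged blob as TCP might deliver:

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;
import java.nio.charset.StandardCharsets;

public class UdpDatagramBoundary {
    public static void main(String[] args) throws Exception {
        try (DatagramChannel receiver = DatagramChannel.open();
             DatagramChannel sender = DatagramChannel.open()) {
            // Bind the receiver to an OS-assigned loopback port.
            receiver.bind(new InetSocketAddress("127.0.0.1", 0));
            InetSocketAddress dest =
                    (InetSocketAddress) receiver.getLocalAddress();

            // Two separate send() calls -> two separate datagrams.
            sender.send(ByteBuffer.wrap("first".getBytes(StandardCharsets.UTF_8)), dest);
            sender.send(ByteBuffer.wrap("second".getBytes(StandardCharsets.UTF_8)), dest);

            ByteBuffer buf = ByteBuffer.allocate(1024);
            receiver.receive(buf);   // blocks until a datagram arrives
            buf.flip();
            // Only "first" comes out of this receive(); the datagrams
            // are not merged the way TCP stream data can be.
            System.out.println(StandardCharsets.UTF_8.decode(buf));

            buf.clear();
            receiver.receive(buf);
            buf.flip();
            System.out.println(StandardCharsets.UTF_8.decode(buf));
        }
    }
}
```

Note that UDP gives you message boundaries but sacrifices TCP's reliability and ordering, so this is only a fair trade when occasional loss is acceptable.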
So yes, the behavior you are describing is expected: TCP is a byte stream and does not preserve your write boundaries. One way around it is to put a special marker byte/character between messages.
However, if the data in your packets might contain that special marker, then you will need to use an escape marker for your special marker to differentiate between the end of a packet and a marker that happens to be data in the packet.
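A minimal sketch of delimiter-plus-escape framing might look like this. The choice of 0x00 as the delimiter and 0x1B as the escape byte is arbitrary here; any two reserved values work as long as both sides agree:

```java
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class DelimiterFraming {
    static final byte DELIM = 0x00; // marks the end of a message (arbitrary choice)
    static final byte ESC   = 0x1B; // escapes a DELIM or ESC that appears in data

    // Escape any DELIM/ESC bytes inside the payload, then append DELIM.
    static byte[] frame(byte[] payload) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (byte b : payload) {
            if (b == DELIM || b == ESC) out.write(ESC);
            out.write(b);
        }
        out.write(DELIM);
        return out.toByteArray();
    }

    // Split a received byte stream back into complete messages.
    static List<byte[]> deframe(byte[] stream) {
        List<byte[]> messages = new ArrayList<>();
        ByteArrayOutputStream cur = new ByteArrayOutputStream();
        boolean escaped = false;
        for (byte b : stream) {
            if (escaped)          { cur.write(b); escaped = false; }
            else if (b == ESC)    { escaped = true; }
            else if (b == DELIM)  { messages.add(cur.toByteArray()); cur.reset(); }
            else                  { cur.write(b); }
        }
        return messages;
    }

    public static void main(String[] args) {
        byte[] m1 = {1, 2, DELIM, 3}; // payload that contains the delimiter
        byte[] m2 = {4, ESC, 5};      // payload that contains the escape byte
        ByteArrayOutputStream wire = new ByteArrayOutputStream();
        wire.writeBytes(frame(m1));
        wire.writeBytes(frame(m2));
        // Even if TCP delivers both frames glued together, deframe()
        // recovers the original two messages.
        List<byte[]> decoded = deframe(wire.toByteArray());
        System.out.println(decoded.size());
        System.out.println(Arrays.equals(decoded.get(0), m1));
        System.out.println(Arrays.equals(decoded.get(1), m2));
    }
}
```

The downside of this scheme is that every byte of payload has to be scanned on both sides, which is part of why length-prefix framing is often preferred.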
Another solution is to prefix each message with its length. That way you don't need any special markers or escape markers: whenever you receive data, read the length first, keep reading until number-of-bytes-read == length, then read the next message's length, and so on.
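The length-prefix approach can be sketched as follows, here using a 4-byte big-endian int as the prefix (a 2-byte prefix would also work if messages are small):

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class LengthPrefixFraming {
    // Prefix each message with its length as a 4-byte big-endian int.
    static byte[] frame(byte[] payload) {
        return ByteBuffer.allocate(4 + payload.length)
                .putInt(payload.length)
                .put(payload)
                .array();
    }

    // Decode as many complete messages as the buffer holds. A real reader
    // would keep any trailing partial message around and append the next
    // read()'s bytes to it before trying again.
    static List<byte[]> deframe(byte[] stream) {
        List<byte[]> messages = new ArrayList<>();
        ByteBuffer buf = ByteBuffer.wrap(stream);
        while (buf.remaining() >= 4) {
            int len = buf.getInt();
            if (buf.remaining() < len) break; // incomplete message, wait for more
            byte[] msg = new byte[len];
            buf.get(msg);
            messages.add(msg);
        }
        return messages;
    }

    public static void main(String[] args) {
        ByteArrayOutputStream wire = new ByteArrayOutputStream();
        wire.writeBytes(frame("hello".getBytes(StandardCharsets.UTF_8)));
        wire.writeBytes(frame("world!".getBytes(StandardCharsets.UTF_8)));
        // The two messages may arrive glued together in one TCP read;
        // the length prefixes let the receiver split them back apart.
        for (byte[] m : deframe(wire.toByteArray())) {
            System.out.println(new String(m, StandardCharsets.UTF_8));
        }
    }
}
```

This also lets the receiver pre-allocate exactly the right buffer size for each message, which is handy with SocketChannel's ByteBuffer-based API.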