Hi there!
While optimizing/refactoring our logger, I found myself scratching my head about how to deal with large buffers sent to BufferedSerial::write(): should we write the buffer byte by byte or in chunks?
The data is stored in a CircularBuffer (fifo), and the call to BufferedSerial::write() is made through an event queue running in a low-priority thread, so important work isn't blocked and logs only go out when nothing else is happening.
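For context, here's a minimal sketch of that setup, assuming Mbed OS's events::EventQueue and rtos::Thread; logger_queue, logger_thread, start_logger, and on_log_data are made-up names for illustration, and process_fifo is one of the implementations below:

```cpp
#include "mbed.h"

void process_fifo(); // one of the implementations shown below

events::EventQueue logger_queue;
rtos::Thread logger_thread {osPriorityLow};

void start_logger()
{
    // Run the dispatch loop in a low-priority thread so queued log writes
    // only execute when nothing more important is ready
    logger_thread.start(callback(&logger_queue, &events::EventQueue::dispatch_forever));
}

void on_log_data()
{
    // Defer the serial write instead of doing it on the caller's thread
    logger_queue.call(process_fifo);
}
```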
The current implementation, which writes byte after byte, looks like this:
```cpp
inline void process_fifo()
{
    while (!buffer::fifo.empty()) {
        auto c = char {};
        buffer::fifo.pop(c);
        _default_serial.write(&c, 1);
    }
}
```
The chunk implementation looks like this:
```cpp
namespace buffer {
    inline auto process_buffer = std::array<char, 64> {};
}

// ...

inline void process_fifo()
{
    while (!buffer::fifo.empty()) {
        auto length = buffer::fifo.pop(buffer::process_buffer.data(),
                                       std::size(buffer::process_buffer));
        _default_serial.write(buffer::process_buffer.data(), length);
    }
}
```
We first pop the data into a 64-byte std::array, then pass this buffer to BufferedSerial::write().
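One thing worth noting: BufferedSerial::write() returns an ssize_t, and if the serial is ever put in non-blocking mode it can write fewer bytes than requested or return a negative error such as -EAGAIN. A sketch of a more defensive variant under that assumption, reusing the names above:

```cpp
// Defensive variant, assuming non-blocking mode where write() may
// return fewer bytes than requested or a negative error code
inline void process_fifo_checked()
{
    while (!buffer::fifo.empty()) {
        auto length = buffer::fifo.pop(buffer::process_buffer.data(),
                                       std::size(buffer::process_buffer));
        std::size_t written = 0;
        while (written < length) {
            auto ret = _default_serial.write(buffer::process_buffer.data() + written,
                                             length - written);
            if (ret < 0) {
                break; // e.g. -EAGAIN; decide whether to retry or drop
            }
            written += static_cast<std::size_t>(ret);
        }
    }
}
```

In the default blocking mode this isn't needed, since write() blocks until the whole chunk is accepted.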
The assumptions are that using the temporary 64-byte std::array buffer can:

- reduce the number of calls to BufferedSerial::write()
- empty the CircularBuffer (fifo) faster, allowing it to be filled faster as well
- allow the compiler to copy/move bigger chunks of memory and optimize things
But to be honest I'm not sure.
Using chunks adds 64 bytes of RAM and 64 bytes of flash, which is something we can live with.
But are my assumptions correct? Am I optimizing anything? Or doing premature pessimization?
Our test code seems to run the same either way, and the character output is identical:

- input: 1988 characters/ms into the fifo
- output: 12 characters/ms to the serial
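For what it's worth, a rough sketch of how such a throughput number could be timed, assuming Mbed OS 6's Timer (where elapsed_time() returns a std::chrono duration) and a hypothetical fill_fifo_with_test_data() helper:

```cpp
#include "mbed.h"
#include <cstdio>

void fill_fifo_with_test_data(); // hypothetical helper that pre-fills the fifo
void process_fifo();             // either implementation

void time_process_fifo()
{
    fill_fifo_with_test_data();

    mbed::Timer t;
    t.start();
    process_fifo(); // drain the fifo to the serial
    t.stop();

    // On Mbed OS 6, elapsed_time() returns std::chrono::microseconds
    printf("drained fifo in %lld us\n",
           static_cast<long long>(t.elapsed_time().count()));
}
```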
So what do you guys think? Should we make the change?
For reference: