We’re using a serial port to send Modbus commands to some devices. The board uses an RS485 transceiver with those pesky DE and RE pins that have to be controlled in an elaborate dance when writing and receiving data. Our previous board used a better chip and we had no issues with Modbus or some other ad hoc RS485 communication.
So, the issue here is that the moment the last bit of a data frame has been shifted out onto the wire, we need to flip an I/O pin in order to start receiving the response from the other device. If we flip the pin too soon we cut off part of what we’re writing, and if we flip it too late we lose part of the response sent by the device. There is (or should be) a short 3.5-character-time delay before the device sends its response, so there’s only a tiny gap in which we need to switch over to receiving.
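For reference, Modbus RTU specifies that inter-frame gap as 3.5 character times of silence, where one RTU character is 11 bits on the wire, and fixes the gap at 1750 µs above 19200 baud. A quick sketch of the arithmetic (assuming standard RTU framing):

```cpp
#include <cstdint>

// Modbus RTU inter-frame gap (t3.5) in microseconds. One RTU character is
// 11 bits (start + 8 data + parity/stop), so 3.5 characters = 38.5 bit
// times. Above 19200 baud the spec uses a fixed 1750 us instead.
constexpr uint32_t modbus_t35_us(uint32_t baud)
{
    return (baud > 19200) ? 1750u : 38500000u / baud;
}

// e.g. modbus_t35_us(9600) == 4010 and modbus_t35_us(115200) == 1750,
// so the window we have to hit is on the order of a few milliseconds at best.
```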
For this we need to know exactly when all the data has been written out. That rules out BufferedSerial, since it does its own internal buffering and gives us no way to tell when a transfer has actually finished. That leaves UnbufferedSerial, which in theory should be unbuffered, but the fact that a call to UnbufferedSerial::write(const void *buffer, size_t size) has returned doesn’t mean the data is out on the wire. The best I can think of so far is to add some trial-and-error magic delay after write()
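A minimal sketch of that idea, assuming a board where a single GPIO drives both DE and RE (the pin names and the delay value here are made up and would have to be tuned per board and baud rate):

```cpp
#include "mbed.h"

UnbufferedSerial rs485(D1, D0, 9600);  // TX, RX (assumed pins)
DigitalOut       de_re(D2, 0);         // 1 = drive (transmit), 0 = receive

void send_frame(const uint8_t *frame, size_t length)
{
    de_re = 1;                  // enable the RS485 driver
    rs485.write(frame, length); // returns when the last byte has been handed
                                // to the UART, not when it is out on the wire
    wait_us(2000);              // trial-and-error magic delay covering the
                                // UART shift register and the transceiver
    de_re = 0;                  // switch to receive before the reply starts
}
```

The ugly part is that the delay has to cover the worst case, so it eats into the already tight 3.5-character gap before the response.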
There is also this in SerialBase, which could at least give an indication of when Mbed thinks the data has been written: SerialBase::write(const uint8_t *buffer, int length, const event_callback_t &callback, int event). Using that, we could then add a shorter magic delay to account for the data travelling through the chips and out onto the wire.
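A sketch of how that could look, assuming the target supports the asynchronous serial API (DEVICE_SERIAL_ASYNCH, which is what makes UnbufferedSerial expose the SerialBase overload); the pins and the residual delay are placeholders again:

```cpp
#include "mbed.h"

UnbufferedSerial rs485(D1, D0, 9600);  // TX, RX (assumed pins)
DigitalOut       de_re(D2, 0);         // 1 = drive (transmit), 0 = receive
Timeout          turnaround;

void enable_receiver()
{
    de_re = 0;
}

// Called in interrupt context when Mbed considers the transfer complete.
void on_tx_done(int event)
{
    if (event & SERIAL_EVENT_TX_COMPLETE) {
        // Short residual delay for the bits still in the UART shift
        // register and the transceiver, then flip to receive.
        turnaround.attach(enable_receiver, 200us);  // placeholder value
    }
}

void send_frame(const uint8_t *frame, int length)
{
    de_re = 1;
    rs485.write(frame, length, callback(on_tx_done), SERIAL_EVENT_TX_COMPLETE);
}
```

The callback fires when Mbed hands off the last byte, which is still not the same as the last bit leaving the wire, hence the shorter residual delay.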
This however seems like such a basic thing that there surely is a good solution an idiot like me simply isn’t aware of yet. Especially as this type of serial chip seems to be pretty common, although I have not seen anyone mention that they actually like them, so there’s that.