UDPSocket.sendto() in a Ticker interrupt context

I’m using a Ticker to periodically call a function whose purpose is to send a packet to a multicast address every few seconds. It’s a form of simple service announcement.

Ticker announcementTimer;
...
us_timestamp_t announcementInterval = 2 * 1000 * 1000;
announcementTimer.attach_us( callback(sendMulticast), announcementInterval);

This is fine: the sendMulticast method is called and I send a pre-created packet to a UDP socket that was set up earlier and is working fine for other data.

// send a packet to a precreated address
nsapi_error_t status = udpSocket.sendto(address, data, sze);

This however hard faults immediately. In another case I handle a UDP socket using a sigio() callback, and there I can read data from the socket in the interrupt handler and also write replies. What is special about Ticker that it should crash if I write to a socket in its interrupt handler? I handle a few serial buses (RS-485, Modbus) the same way, reading and writing in their interrupt handlers without issues. I also use a Ticker to handle some RS-485 sending and that also works fine. Is UDPSocket (LWIP) somehow special in that it breaks when used according to the docs? The Ticker docs actually do not really say anything about what you can and cannot do in an interrupt handler.

I’m using the branch mbed-os-5.15.1 with an LPC1768 based board.

Oh, and I see nothing on the serial console. It just hard faults and the hardware watchdog reboots the board after a while. Sometimes I do see errors on the serial console, but 95% of crashes just go boom.

One way to get out of an ISR is to use an EventQueue, but that requires a thread too, and we simply do not have memory for more threads. This board has 32kB of memory and we’re constantly shaving off bytes here and there to keep the whole thing alive.
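The classic no-thread workaround here is to defer the work out of the ISR by hand: the interrupt handler only sets a flag, and the main loop does the mutex-protected send. A minimal plain C++ sketch of that shape (not Mbed code; `stackLock` just stands in for LWIP’s internal mutex, and `packetsSent` for the real sendto() call):

```cpp
#include <atomic>
#include <mutex>

// Plain C++ model of the defer-from-ISR pattern (not Mbed code):
// the interrupt handler only sets an atomic flag; the mutex-protected
// send happens later in thread context, where locking is legal.
std::mutex stackLock;                 // stands in for the LWIP stack mutex
std::atomic<bool> sendPending{false};
int packetsSent = 0;

void tickerIsr() {                    // would run in interrupt context
    sendPending.store(true);          // ISR-safe: no locking, no allocation
}

void mainLoopIteration() {            // runs in thread ("user") context
    if (sendPending.exchange(false)) {
        std::lock_guard<std::mutex> g(stackLock);  // safe outside the ISR
        ++packetsSent;                // here the real sendto() would go
    }
}
```

Calling tickerIsr() once and then mainLoopIteration() performs exactly one send, in thread context.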

I did try to use an EventQueue too, as it seems it doesn’t require a thread after all. However the docs are somewhat sparse, so I’m probably using it wrong, as no events get sent. I saw this example in the code:

  #include "mbed.h"
 
   void handler(int data) { ... }
 
   class Device {
      public:
      void handler(int data) { ... }
   };
 
   Device dev;
 
   // queue with not internal storage for dynamic events
   // accepts only user allocated events
   static EventQueue queue(0);
   // Create events
   static auto e1 = make_user_allocated_event(&dev, Device::handler, 2);
   static auto e2 = queue.make_user_allocated_event(handler, 3);
 
   int main()
   {
     e1.call_on(&queue);
     e2.call();
 
     queue.dispatch(1);
   }

This seems like it should be enough: post my event to the queue (one that calls sendMulticast) and then call eventQueue.dispatch(1) to let it dispatch events for 1 ms and thus run the one added event. It doesn’t seem to work that way, though?

EventQueue eventQueue(0);
auto announcementEvent = eventQueue.make_user_allocated_event(sendMulticast);
...

us_timestamp_t announcementInterval = 3 * 1000 * 1000;
announcementTimer.attach_us( callback(announcementCallback), announcementInterval);
...

void announcementCallback() {
    announcementEvent.call();
    eventQueue.dispatch(1);
}

Probably something I don’t grok here, but the docs aren’t particularly good.

Ah, I had a copy-and-paste error and my event called the wrong function. But even then the code hard faults when it tries to send the packet to the socket. So the ISR perhaps isn’t the problem, as the event handler is run in a “user context”. I would assume it should be impossible for a call to UDPSocket.sendto() to crash outright; to me it seems a bit like a bug.

Thank you for the help and for following my adventures this far. I will now try to dig onwards and see what I can come up with.

I believe sendto() cannot be called in an ISR context because it takes a mutex. Maybe I don’t fully understand what you are trying to do… But can’t you just do something like this using an EventQueue?

#include "mbed.h"
EventQueue *queue = mbed_event_queue();
NetworkInterface* net;
Ticker ticker;

void udpSend() {
    UDPSocket sock;
    sock.open(net);
    char out_buffer[] = "time";
    nsapi_error_t status = sock.sendto("time.nist.gov", 37, out_buffer, sizeof(out_buffer));
    printf("status: %d\n\r", status);
    sock.close();
}

void tickerCallback() {
    queue->call(&udpSend); // This defers the execution to a user context
} 

int main() {
    net = NetworkInterface::get_default_instance();
    net->connect();
    ticker.attach(&tickerCallback, 1.0);
    queue->dispatch_forever();
}

I have used sendto() just fine in a sigio() ISR context, so unless the Ticker’s context is special it should work.

I don’t have a spare thread that could do queue->dispatch_forever(), I’m trying to manage without new threads. And based on the example I found it should work just fine.

I have actually no idea how that event queue is supposed to work; I must be holding it totally wrong.

EventQueue eventQueue(0);
auto announcementEvent = eventQueue.make_user_allocated_event(sendMulticast);
uint32_t sendMulticastCounter = 0;
uint32_t announcementCallbackCounter = 0;
...

us_timestamp_t announcementInterval = 3 * 1000 * 1000;
announcementTimer.attach_us( callback(announcementCallback), announcementInterval);
...

void sendMulticast() {
   sendMulticastCounter++;
}

void announcementCallback() {
    announcementCallbackCounter++;
    announcementEvent.call();
    eventQueue.dispatch(0);
}

This works without crashes. But when I print the two counters after running for some minutes, I see that announcementCallbackCounter has incremented to about 200 (running for 400 seconds or so), but sendMulticastCounter is just 5. From what I understand those should be exactly equal. Checking the code for dispatch() I see that eventually this is called:

// Dispatch events
//
// Executes events until the specified milliseconds have passed. If ms is
// negative, equeue_dispatch will dispatch events indefinitely or until
// equeue_break is called on this queue.
//
// When called with a finite timeout, the equeue_dispatch function is
// guaranteed to terminate. When called with a timeout of 0, the
// equeue_dispatch does not wait and is irq safe.
void equeue_dispatch(equeue_t *queue, int ms);

So calling it with a timeout of 0 ms should clear the queue but not wait for any new events. But that definitely does not happen here.
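One property that could produce this kind of undercount (an assumption on my part, not something the docs spell out) is that a user-allocated event owns exactly one slot: call() fails silently while a previous post is still waiting to be dispatched. A plain C++ model of that behaviour, not Mbed code:

```cpp
#include <functional>

// Plain C++ model (not Mbed code) of a single-slot user-allocated event:
// posting with call() fails while a previous post is still waiting to be
// dispatched, so rarely-dispatched queues silently drop most posts.
struct SingleSlotEvent {
    std::function<void()> fn;
    bool pending = false;

    bool call() {              // returns false if the slot is occupied
        if (pending) return false;
        pending = true;
        return true;
    }

    void dispatch() {          // what a queue dispatch pass would do
        if (pending) { pending = false; fn(); }
    }
};
```

If dispatch passes are rare, most call() attempts land on an occupied slot and get dropped, which would explain counters of 200 versus 5.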

And trying it with a thread instead of using the static version seems to not work at all right now?

EventQueue eventQueue(2);
Thread eventQueueThread(osPriorityNormal, 768, NULL, "eventqueue");
...

eventQueueThread.start(callback(&eventQueue, &EventQueue::dispatch_forever));
us_timestamp_t announcementInterval = 3 * 1000 * 1000;
announcementTimer.attach_us( callback(announcementCallback), announcementInterval);
...

void sendMulticast() {
    sendMulticastCounter++;
}

void announcementCallback() {
    announcementCallbackCounter++;
    eventQueue.call(sendMulticast);
}

No, I cannot do any logging in those functions, as that crashes immediately. Can the issue be that I’m doing all this setup in a thread already, rather than in the main context?

All in all, not terribly impressed right now. But will soldier on. Very thankful for all the help so far!

Hello Jan,

This board has 32kB of memory and we’re constantly shaving off bytes here and there to keep the whole thing alive.

The LPC1768 is equipped with 64kB (2 x 32kB). The upper 32kB block is reserved for use by the Ethernet, USB or CAN drivers.

You can use an EventQueue to periodically call a callback rather than a Ticker for example as below. Please notice that it is not intended as a working program.

#include "mbed.h"

EventQueue               queue(32 * EVENTS_EVENT_SIZE);
Thread                   thread;
UDPSocket                udpSocket;
SocketAddress            address("192.168.1.12");
const char               data[] = { 1, 2, 3 };
nsapi_size_t             sze = sizeof(data);
volatile nsapi_error_t   status;

void sendUdpMessage(void)
{
    status = udpSocket.sendto(address, data, sze);
}

int main()
{
    thread.start(callback(&queue, &EventQueue::dispatch_forever));
    queue.call_every(2000, sendUdpMessage);
}

Best regards,

Zoltan

That is a most awesome idea! Works ok too. Now I don’t need a Ticker at least. Thank you!

If I try to use an event queue to handle incoming TCP connections and I/O on accepted TCP sockets, I find that the event queue is slow. Often it takes 1-2 seconds before my “user space” callback is called, and that is too slow. Handling all I/O directly in a sigio() handler gives good response times, but my app seems to freeze after I close() a socket. The docs seem to imply I must call close(), and that it will free resources. Calling delete on the sockets hard faults immediately, but close() freezes, even though I immediately stop using the pointer afterwards.

But from my understanding EventQueue.call() should be pretty much without any delay? The thread that serves the event queue is not doing anything else at all.

Is there really no good way to create a poll()/select() like thing with Mbed? Handling a bit of I/O has turned into a real nightmare of frustrating issues. :frowning:

Yeah, not calling close() on client sockets after the connection has closed seems to fix my freeze. But that’s likely not good, probably a massive resource leak?
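A common way out of this class of freeze (an assumption about the cause, not a diagnosis) is to never call close() from inside the callback the socket itself is driving: the callback only marks the socket dead, and a reap pass running in another context does the actual close. A plain C++ model of the shape, not Mbed code:

```cpp
#include <vector>

// Plain C++ model (not Mbed code) of deferred close(): the socket's own
// callback must not close the socket it is running on, so it only marks
// the socket doomed; a reap pass in another context closes it later.
struct FakeSocket {
    bool doomed = false;   // marked by the callback
    bool closed = false;   // done by the reaper
};

void onSocketError(FakeSocket& s) {  // runs inside the socket's callback
    s.doomed = true;                 // no close() here
}

int reapDoomedSockets(std::vector<FakeSocket>& socks) {  // runs elsewhere
    int n = 0;
    for (auto& s : socks) {
        if (s.doomed && !s.closed) {
            s.closed = true;         // here the real close() would go
            ++n;
        }
    }
    return n;
}
```

The reap pass could run from the event queue thread, so close() never executes on a socket that is mid-callback.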

Trying the EventQueue route again, but it’s just too slow; the client at the other end times out before the Mbed server has responded. It takes 3 seconds from the accept() until the data is received. Looking at Wireshark I see that the packet arrived immediately after the accepting handshake. So something makes this not work sanely at all.

// normal bind(), listen() done
serverSocket.set_timeout( 0 );
serverSocket.set_blocking( false );
serverSocket.sigio(callback(incomingClientCallback));

void incomingClientCallback() {
    getEventQueue().call(&handleIncomingClient);
}

void handleIncomingClient() {
    nsapi_error_t status = NSAPI_ERROR_OK;
    TCPSocket * client = serverSocket.accept(&status);

    // any errors?
    if ( status != NSAPI_ERROR_OK ) {
        if ( client != NULL ) {
            client->close();
        }

        return;
    }

    if ( client == NULL ) {
        return;
    }

    log("handleIncomingClient: incoming client data");

    client->set_timeout( 0 );
    client->set_blocking( false );
    client->sigio(callback(clientDataCallback));
}

void clientDataCallback() {
    getEventQueue().call(&handleClientData);
}

void handleClientData() {
    log("handleClientData: incoming client data");
...
}

The three seconds is between the two log messages. Also my handleIncomingClient is always called twice: the first call accepts the incoming client and the second comes immediately after and fails with -3005 (NSAPI_ERROR_NO_SOCKET). Why would this get triggered twice? Or can sigio not be used as a reliable mechanism for detecting incoming clients?

To me this looks like a “by the book” approach to handling clients, but it can’t be this slow?
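One defensive pattern for sigio-driven accepts (sketched here as a plain C++ model, not the Mbed API; `WOULD_BLOCK` is an assumed stand-in for NSAPI_ERROR_WOULD_BLOCK) is to drain accept() in a loop on every wakeup and treat empty wakeups as spurious rather than as errors:

```cpp
#include <queue>

// Plain C++ model (not Mbed code) of draining accepts on one sigio:
// a wakeup may cover several state changes (or none), so the handler
// loops until the listener would block instead of accepting just once.
constexpr int WOULD_BLOCK = -3001;  // assumed stand-in for NSAPI_ERROR_WOULD_BLOCK

struct FakeListener {
    std::queue<int> backlog;        // pending client ids

    int accept() {                  // returns a client id or WOULD_BLOCK
        if (backlog.empty()) return WOULD_BLOCK;
        int c = backlog.front();
        backlog.pop();
        return c;
    }
};

int drainAccepts(FakeListener& l) {
    int accepted = 0;
    for (int c = l.accept(); c != WOULD_BLOCK; c = l.accept()) {
        ++accepted;                 // real code would sigio() each client here
    }
    return accepted;
}
```

With that shape, a second sigio with nothing to accept just drains zero clients instead of surfacing as a failure.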

Hello Jan,

You can try to poll as follows:

#include "mbed.h"

EventQueue              queue(32 * EVENTS_EVENT_SIZE);
Thread                  sendThread;
UDPSocket               sock;
SocketAddress           address("192.168.1.12");
const char              data[] = { 1, 2, 3 };
nsapi_size_t            sze = sizeof(data);
volatile nsapi_error_t  status;
Thread                  recvThread;
volatile bool           sockStateChanged = false;
SocketAddress           peer;
uint8_t                 buff[256];

void sendUdpMessage()
{
    status = sock.sendto(address, data, sze);
}

void onSockStateChanged()
{
    sockStateChanged = true;
}

void recvPoll()
{
    while (true) {
        if (sockStateChanged) {
            sockStateChanged = false;
            nsapi_size_or_error_t n = sock.recv(buff, sizeof(buff));
            if (n > 0) {
                nsapi_error_t error = sock.getpeername(&peer);
                if (error == NSAPI_ERROR_OK) {
                    printf("Data received from %s:\r\n", peer.get_ip_address());
                    for (int i = 0; i < n; i++) {
                        printf("data[%i] = 0x%.2x\r\n", i, buff[i]);
                    }
                }
            }
        }
        ThisThread::sleep_for(10);
    }
}

int main()
{
    sock.set_blocking(false);
    sock.sigio(callback(onSockStateChanged));
    recvThread.start(callback(recvPoll));
    sendThread.start(callback(&queue, &EventQueue::dispatch_forever));
    queue.call_every(2000, sendUdpMessage);
}

That would however spend most of the CPU time polling. This thing needs to drive an air conditioning unit with all its motors, sensors and other forms of I/O.

You can create as many additional threads for additional tasks, like air conditioning, motor drivers etc., as you need and as the MCU’s RAM allows for. You can also let the other threads run longer by increasing the sleep time of the recvPoll thread. They will all run in parallel.

There isn’t much memory, so keeping threads to a minimum is paramount. We already have to have a TCP/IP stack, file system, USB handling etc. The thread I created for the event queue was about as much as can be afforded now. I even gave it a much higher priority, but I still see delays between accepting a connection and reading the first bytes of anywhere from 300 ms to 3000 ms.