uint16_t swapping bytes vs uint8_t?

Hey everyone,

I’m running into a very peculiar problem. I am trying to read Art-Net DMX packets with a udp_socket.recvfrom() call, and I have noticed that instead of the expected packet length of “512” I keep getting “2”.

The code doing the receiving is here:

int num_bytes_received = udp_socket.recvfrom(&sock_addr, &packet, sizeof(ArtnetPacket));
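For reference, ArtnetPacket is declared roughly like this (a simplified sketch following the ArtDmx layout from the Art-Net spec; the field names are my own):

    #include <cstdint>

    // simplified sketch of the ArtDmx wire layout; packed so the struct
    // matches the packet byte-for-byte
    #pragma pack(push, 1)
    struct ArtnetPacket {
        char     id[8];      // "Art-Net\0"
        uint16_t opcode;     // 0x5000 for ArtDmx
        uint16_t prot_ver;
        uint8_t  sequence;
        uint8_t  physical;
        uint16_t universe;
        uint16_t length;     // the size field in question
        uint8_t  data[512];
    };
    #pragma pack(pop)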

To narrow this problem down I changed my code to print out the first 20 bytes of every packet I receive, and noticed something strange. This is the result of printing the first 20 values of the received data stored in a uint8_t array:
[image: good_bytes_uint8]

This matches what I’d expect to see, as shown with the Wireshark capture below:
[image: Wireshark capture]

41 72 74 2d 4e 65 74 00 is the string “Art-Net\0”. I’ve highlighted the size field “02 00” located underneath it; 0x0200 works out to the expected value of 512.

However, this is the result of printing the first 20 values of a uint16_t array with the same data:
[image: backwards_bytes_uint16]

You’ll note that within each 16-bit value, the two bytes are now reversed relative to their 8-bit counterparts:

41 72 74 2d 4e 65 74 00 becomes 7241 2d74 654e 0074, and 0x0200 becomes 0x0002, or 2.
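For what it’s worth, the same reversal shows up with no networking involved at all; a minimal snippet like this (same bytes, hardcoded) reproduces the exact pairs shown above:

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    int main() {
        // the first 8 bytes of the packet: "Art-Net\0"
        uint8_t bytes[8] = {0x41, 0x72, 0x74, 0x2d, 0x4e, 0x65, 0x74, 0x00};

        // printed as uint8_t, the bytes appear in wire order
        for (int i = 0; i < 8; i++) printf("%02x ", bytes[i]);
        printf("\n");                           // 41 72 74 2d 4e 65 74 00

        // printed as uint16_t, each pair of bytes comes out "reversed"
        uint16_t words[4];
        memcpy(words, bytes, sizeof(words));    // memcpy avoids strict-aliasing issues
        for (int i = 0; i < 4; i++) printf("%04x ", words[i]);
        printf("\n");                           // 7241 2d74 654e 0074 (on x86/ARM)
        return 0;
    }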

I cannot explain this behavior, but it is very annoying. Does anyone know what might be happening here?

the 8-bit “counterparts” have no order to them; each one is just a single byte value. the 16-bit ones need to know which order their two bytes come in (whether the “left” byte is “worth” more or less), something called “endianness” (see also Endianness - Wikipedia).
kinda like how we have the convention that in “double digits” the string “1 2” means “twelve” rather than “twenty-one”, while with “single digits” we don’t need any such convention to know it’s a “one” and a “two”.

Big-endianness is the dominant ordering in networking protocols, such as in the internet protocol suite, where it is referred to as network order, transmitting the most significant byte first. Conversely, little-endianness is the dominant ordering for processor architectures (x86, most ARM implementations, base RISC-V implementations) and their associated memory.

what’s happening there is that you’re transferring data from an environment with one endianness to one with the opposite. the two bytes don’t actually get swapped; when you interpret them as a single 2-byte value, they suddenly mean something different.

to put it in context with my example above: consider a language where people read numbers the other way around. you write down twelve, “12”, but someone else (seeing the same physical representation “12”) reads twenty-one.
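you can check which kind of machine you’re on with a tiny test like this (just a sketch, nothing Art-Net-specific about it):

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    int main() {
        uint16_t value = 0x0200;         // the "512" from the packet
        uint8_t first_byte;
        memcpy(&first_byte, &value, 1);  // peek at the byte stored first in memory
        // little-endian (x86, most ARM): prints 00 -- the low byte is stored first
        // big-endian ("network order"):  prints 02 -- the high byte is stored first
        printf("%02x\n", first_byte);
        return 0;
    }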

since you’re dealing with a network here, consider using the standard socket functions htons() and ntohs() (POSIX rather than C-standard, but available pretty much everywhere) to convert “host to net” (when sending any 16-bit number) / “net to host” (when receiving any 16-bit number).
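the conversion is a no-op on big-endian machines and a byte swap on little-endian ones, so the same code works everywhere. on POSIX systems they live in <arpa/inet.h>; your embedded socket library most likely ships an equivalent:

    #include <arpa/inet.h>  // POSIX home of htons/ntohs; embedded stacks vary
    #include <cstdint>
    #include <cstdio>

    int main() {
        // what the raw bytes "02 00" look like when a little-endian
        // host reads them straight into a uint16_t:
        uint16_t wire_value = 0x0002;
        uint16_t host_value = ntohs(wire_value);  // net-to-host byte swap
        printf("%u\n", host_value);               // 512
        return 0;
    }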

in your concrete example you’ll want to put any 16-bit number inside that packet through ntohs() after receiving it. oh, and pray that whatever Art-Net DMX is follows the IP conventions.
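i.e. something like this, assuming a length field as in the struct sketch from the question (and assuming that field really is in network order, which the Wireshark capture suggests):

    #include <arpa/inet.h>  // or your platform's equivalent for ntohs
    #include <cstdio>

    // called after recvfrom() has filled `packet`
    // (ArtnetPacket as sketched in the question above)
    void handle_packet(const ArtnetPacket& packet) {
        uint16_t dmx_length = ntohs(packet.length);  // raw 02 00 -> 512 on any host
        printf("length = %u\n", dmx_length);         // 512 instead of 2
    }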