Bad performance / large number of reads when using LittleFS

Hello,

I am using LittleFS as the file system for my device.
The purpose of this device is to record and log data to a SPI flash memory (S25FL128L).

There are two files in my memory, a configuration file and a measurement file. A measurement point will be written to the measurement file every second.
Both files are open at the same time so that I can quickly change configuration parameters without having to close or open the other file.

Now that I have set up my code and the logging part is working, I have noticed that the lfs_file_write / lfs_file_sync performance decreases every time the function is called. I have set up counters which increment whenever LittleFS calls the read, prog, or erase functions.

Here you can see how the number of reads and writes increases over time.
Also, there are some huge spikes visible where the number of reads goes > 1000.
For every write there is a block erase being triggered. Sometimes two.

You can also download the raw data file from https://dataspace.presens.de/#/public/shares-downloads/TpDcXEAYSlA9i9MSamyLVBbLJ7SFiCoz

This is my LittleFS configuration:

#define READ_SIZE 64
#define WRITE_SIZE 64
#define BLOCK_SIZE 4096
#define BLOCK_COUNT 4096
#define CACHE_SIZE 64
#define LOOKAHEAD_SIZE 64
#define BLOCK_CYCLES 500
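
For completeness, here is roughly how these defines are wired into my struct lfs_config. The spi_flash_* names stand in for my S25FL128L driver functions; the counting wrappers I actually register are shown after the write routine below:

    #include "lfs.h"

    static uint8_t read_buffer[CACHE_SIZE];
    static uint8_t prog_buffer[CACHE_SIZE];
    static uint8_t lookahead_buffer[LOOKAHEAD_SIZE];

    static const struct lfs_config cfg = {
        // block device driver callbacks (simplified here)
        .read  = spi_flash_read,
        .prog  = spi_flash_prog,
        .erase = spi_flash_erase,
        .sync  = spi_flash_sync,

        // geometry and tuning
        .read_size      = READ_SIZE,
        .prog_size      = WRITE_SIZE,
        .block_size     = BLOCK_SIZE,
        .block_count    = BLOCK_COUNT,
        .cache_size     = CACHE_SIZE,
        .lookahead_size = LOOKAHEAD_SIZE,
        .block_cycles   = BLOCK_CYCLES,

        // statically allocated buffers
        .read_buffer      = read_buffer,
        .prog_buffer      = prog_buffer,
        .lookahead_buffer = lookahead_buffer,
    };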

Here is the code I am using to write a measurement point:

    read_counter = 0;
    write_counter = 0;
    erase_counter = 0;

    // ... write to file
    int32_t err = lfs_file_write(&lfs, &active_measurement_file->file, measurement_row, sizeof(m_lfs_storage_measurement_row_t));
    if (err < 0)
    {
        // return error
    }

    err = lfs_file_sync(&lfs, &active_measurement_file->file);
    if (err < 0)
    {
        // return error
    }

    LOG_INFO("File written %d r, %d w, %d e", read_counter, write_counter, erase_counter);
    //return no error;
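
The counters in the log line are incremented in thin wrappers around the raw driver functions; these wrappers are what I actually register as the read/prog/erase callbacks, roughly like this:

    // counting wrappers around the raw SPI flash driver functions
    static int counted_read(const struct lfs_config *c, lfs_block_t block,
                            lfs_off_t off, void *buffer, lfs_size_t size)
    {
        read_counter++;
        return spi_flash_read(c, block, off, buffer, size);
    }

    static int counted_prog(const struct lfs_config *c, lfs_block_t block,
                            lfs_off_t off, const void *buffer, lfs_size_t size)
    {
        write_counter++;
        return spi_flash_prog(c, block, off, buffer, size);
    }

    static int counted_erase(const struct lfs_config *c, lfs_block_t block)
    {
        erase_counter++;
        return spi_flash_erase(c, block);
    }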

Can you tell me what is causing these massive spikes and the performance degradation over time?
Also, why does LittleFS need to delete and rewrite a whole block every time I append a new measurement point to the end of my file?

Hope you can help me.

Regards
Michael

Hi Michael,

Performance and minimizing read/write overhead are ongoing areas of improvement for LittleFS, so I may not have all the answers.

Looking at your config, it looks like this is v2, is that correct? If not, I would suggest switching to v2. It can be found here and added to an Mbed project without conflicts:

If you’re on v2, you may get better performance by reducing READ_SIZE/WRITE_SIZE to 1 byte, if your flash part supports it. v2 added the CACHE_SIZE config option for performance, so READ_SIZE and WRITE_SIZE can be reduced without a cost. These control the granularity of commits, so smaller sizes mean more commits fit in a block.
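
As a sketch, something like the following, assuming the S25FL128L can be read and programmed at single-byte granularity (if your driver needs a larger program granularity, keep WRITE_SIZE at that size instead):

    // sketch: same geometry, but 1-byte read/prog granularity with a 64-byte cache
    #define READ_SIZE      1
    #define WRITE_SIZE     1
    #define BLOCK_SIZE     4096
    #define BLOCK_COUNT    4096
    #define CACHE_SIZE     64
    #define LOOKAHEAD_SIZE 64
    #define BLOCK_CYCLES   500

The caches still batch most of the actual bus traffic, so the smaller READ_SIZE/WRITE_SIZE mainly changes how finely commits can be packed into a metadata block rather than the size of individual SPI transfers.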

For every write there is a block erase being triggered.

I don’t think this should be happening in v2, though it does happen in v1. Do you close and reopen the file on every write? That may cause this, as LittleFS can’t trust the file’s previous state. Keeping the log file open should let LittleFS know it doesn’t need to erase the last block. And as long as you sync the file, the power-resilience means LittleFS should resume operation just fine after a power-loss.
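
In other words, a logging loop shaped roughly like this should avoid the per-write erase (the path, flags, and error handling here are only placeholders for your own setup):

    // open the log once and keep it open across measurements
    int err = lfs_file_open(&lfs, &active_measurement_file->file, "measurements.log",
                            LFS_O_WRONLY | LFS_O_CREAT | LFS_O_APPEND);
    if (err < 0) {
        // handle error
    }

    for (;;) {
        // ... acquire measurement_row ...

        lfs_ssize_t res = lfs_file_write(&lfs, &active_measurement_file->file,
                                         measurement_row, sizeof(m_lfs_storage_measurement_row_t));
        if (res < 0) {
            // handle error
        }

        // sync so the data survives power-loss, but do not close the file
        err = lfs_file_sync(&lfs, &active_measurement_file->file);
        if (err < 0) {
            // handle error
        }
    }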


The spikes you see are caused by directory-level garbage collection. This happens when the directory fills with commits (file changes).
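
As a rough back-of-envelope with your geometry (not an exact model of the compaction pass): a metadata block is BLOCK_SIZE = 4096 bytes, and with READ_SIZE = 64 a single scan of a full metadata log already costs 4096 / 64 = 64 reads. Compaction makes multiple passes over that log while working out what is still live and rewriting it into the sibling block, so occasional spikes of several hundred or more reads on a single sync are in the range I would expect.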

It’s not a cost that can be removed completely, but we are looking into how to improve it. This issue is currently being tracked here: