We currently use queue from the pico SDK for our ring buffers. The queue is core-safe, so it uses spinlocks. The way we use it, and the way Henry has added asserts to enforce its use, doesn't need the core-safe features.
I’ve been looking at alternatives and this looks pretty good:
It can handle whole arrays, instead of the single bytes we currently feed the queue one at a time.
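For reference, today's pattern is one locked operation per byte. Roughly like this (a sketch with made-up names and sizes, not our actual code):

```c
#include <stdint.h>
#include <stdbool.h>
#include "pico/util/queue.h"

static queue_t uart_rx_queue;

void buffers_init(void) {
    // One byte per element; every add/remove goes through the queue's spinlock.
    queue_init(&uart_rx_queue, sizeof(uint8_t), 256);
}

// Producer side: one locked push per received byte.
bool push_byte(uint8_t b) {
    return queue_try_add(&uart_rx_queue, &b);
}

// Consumer side: one locked pop per byte drained.
bool pop_byte(uint8_t *b) {
    return queue_try_remove(&uart_rx_queue, b);
}
```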
Was there a (performance or other) problem that drove you to look for an alternative to the queue?
Either way, can you help me better understand the expected benefit of switching?
Why I'm asking...
I’m probably just missing context about the problem that was occurring.
From what I understand, if only one core uses a spinlock, it's about the cheapest lock there is: acquire just sets a value to one (release sets it back to zero), and it never actually spins waiting if it's only ever taken from one core.
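For illustration, an uncontended spinlock boils down to something like this in generic C11 (just a sketch; the pico SDK uses the RP2040's hardware spinlock registers rather than C11 atomics, but the uncontended cost is similarly small):

```c
#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;

static inline void sl_acquire(void) {
    // On a single core this test-and-set succeeds on the first try,
    // so the loop body never runs: acquire is one atomic write.
    while (atomic_flag_test_and_set_explicit(&lock, memory_order_acquire)) {
        // would only spin here if another core held the lock
    }
}

static inline void sl_release(void) {
    atomic_flag_clear_explicit(&lock, memory_order_release);  // set back to zero
}
```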
If the benefit is small, also consider whether there might later be a need (even if only rarely) to insert, remove, etc. from the queue on another core. If so, going lock-free could carry a much larger cost at that point.
The issue came up because we might be losing bytes in UART bridge mode at high throughput.
For me personally, I’d like to be able to efficiently transfer multiple byte arrays into the buffer.
At the moment each end of the queue is pinned to one core, so a spinlock acquire/release happens for every byte pushed or popped. It's a speed bump we don't need.
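To make the "arrays, not bytes" point concrete, here's the shape of lock-free SPSC ring I have in mind. A rough, untested sketch (type and function names are made up): a bulk write is one copy plus one atomic index update instead of N lock/unlock pairs.

```c
#include <stdatomic.h>
#include <stdint.h>

#define RB_SIZE 1024u   // capacity; must be a power of two

// Single-producer / single-consumer byte ring. Only the producer
// writes head, only the consumer writes tail, so no lock is needed;
// acquire/release ordering publishes the data between cores.
typedef struct {
    uint8_t buf[RB_SIZE];
    _Atomic uint32_t head;  // free-running write index (producer only)
    _Atomic uint32_t tail;  // free-running read index (consumer only)
} spsc_rb_t;

// Producer: copy up to len bytes in; returns how many actually fit.
static uint32_t rb_write(spsc_rb_t *rb, const uint8_t *src, uint32_t len) {
    uint32_t head = atomic_load_explicit(&rb->head, memory_order_relaxed);
    uint32_t tail = atomic_load_explicit(&rb->tail, memory_order_acquire);
    uint32_t space = RB_SIZE - (head - tail);
    if (len > space) len = space;
    for (uint32_t i = 0; i < len; i++)
        rb->buf[(head + i) & (RB_SIZE - 1u)] = src[i];
    atomic_store_explicit(&rb->head, head + len, memory_order_release);
    return len;
}

// Consumer: copy up to len bytes out; returns how many were copied.
static uint32_t rb_read(spsc_rb_t *rb, uint8_t *dst, uint32_t len) {
    uint32_t tail = atomic_load_explicit(&rb->tail, memory_order_relaxed);
    uint32_t head = atomic_load_explicit(&rb->head, memory_order_acquire);
    uint32_t avail = head - tail;
    if (len > avail) len = avail;
    for (uint32_t i = 0; i < len; i++)
        dst[i] = rb->buf[(tail + i) & (RB_SIZE - 1u)];
    atomic_store_explicit(&rb->tail, tail + len, memory_order_release);
    return len;
}
```

The key property is that each index has exactly one writer, so bulk pushes and pops cost one copy plus one atomic store, with no lock traffic at all.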
To start, I'll just comment out the spinlocks in the existing queue and see whether that affects the data loss (assuming the loss is actually happening: analysis of the TX pin shows no loss, so it could be on the RX side).