The place where random ideas get written down and lost in time.
2025-10-02 - Rust Tasks, Threads, Synchronization in ESP-RS
Category Rust
Some notes on how to perform basic task, thread, and synchronization operations in Rust with ESP-RS. And to clarify, by “ESP-RS” I mean the “std” mode built around the esp-idf-sys/svc/hal crates.
This isn’t meant to be a canonical guide. Most of these are just quick reminders for myself so that I don’t need to re-read the docs every time; it only covers stuff I know I’ll need. I’m not pretending to be exhaustive here, and since I’m a newbie in Rust, I’m not pretending to be right either. Do your own due diligence by reading the official docs.
Tasks & Threads
The canonical example is in this ThreadSpawnConfiguration example:
https://github.com/esp-rs/esp-idf-hal/issues/228#issuecomment-1492483018
I haven’t seen any more official examples of that stuff.
The implementation is here:
https://github.com/esp-rs/esp-idf-hal/blob/master/src/task.rs
This feels like a wrapper around the ESP-pthread API:
https://docs.espressif.com/projects/esp-idf/en/v4.2.2/esp32/api-reference/system/esp_pthread.html
esp_pthread_set_cfg sets a “global” config that indicates how subsequent pthread_create() calls behave. That’s what ThreadSpawnConfiguration does.
Usage example:
use esp_idf_hal::cpu::Core;
use esp_idf_hal::task::thread::ThreadSpawnConfiguration;

ThreadSpawnConfiguration {
    name: Some("thread_name\0".as_bytes()),
    stack_size: 4096,
    priority: 15,
    pin_to_core: Some(Core::Core1),
    ..Default::default()
}
.set()
.unwrap();

let thread_handle = std::thread::Builder::new()
    .stack_size(4096) // the Builder has its own stack size, see the notes below
    .spawn(move || {
        // do stuff
    })
    .unwrap();

thread_handle.join().unwrap();
AFAIK, the default is for threads to not have any CPU affinity.
In the SDB C++ implementation, I was creating all tasks on APP_CPU (core 1) and varying the priority. Here we would just call the set() method above before spawning each “thread”.
Since all threads (a.k.a. tasks) are initialized once when the embedded program starts, in a predetermined order, the “global” aspect of that configuration is not a big issue.
There are a few peculiarities with these calls above:
- The task name as used by ThreadSpawnConfiguration must contain a terminal \0 byte.
- The task name must be 16 characters max.
- The stack size is in bytes but…
- The thread Builder has its own stack size, which defaults to a different value (i.e. it does not use the one from ThreadSpawnConfiguration).
- The default main-task stack size is given by CONFIG_ESP_MAIN_TASK_STACK_SIZE in the “sdkconfig.defaults” file. It defaults to about 3 kB, and the default sample project sets it to 8000 bytes, as shown below.
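For reference, a sketch of the corresponding line in “sdkconfig.defaults” (8000 being the value I’ve seen in the esp-idf project template):

CONFIG_ESP_MAIN_TASK_STACK_SIZE=8000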
Thread Sleep
That one is fairly easy:
use std::thread;

thread::sleep(std::time::Duration::from_millis(500));
How is that different from FreeRtos::delay_ms(1000) ?
(I’m guessing it isn’t…)
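Side by side, for reference. FreeRtos comes from esp_idf_hal::delay, and as far as I can tell both calls end up in vTaskDelay:

use esp_idf_hal::delay::FreeRtos;
use std::thread;
use std::time::Duration;

thread::sleep(Duration::from_millis(500)); // std sleep, implemented on top of the IDF
FreeRtos::delay_ms(500);                   // wraps vTaskDelay (ms converted to FreeRTOS ticks)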
Of note, the “sdkconfig.defaults” file can define a constant to change the frequency of the FreeRTOS tick timer. The file indicates the default frequency is 100 Hz, which translates to a 10 ms granularity at best for the “delay_ms()” calls. It can be raised to 1000 Hz if millisecond precision is needed.
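That override would look like this in “sdkconfig.defaults” (CONFIG_FREERTOS_HZ is the standard ESP-IDF option name for it):

CONFIG_FREERTOS_HZ=1000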
Channel: “mpsc::channel”
“mpsc” stands for Multiple Producer → Single Consumer. That says it all.
Usage:
use std::sync::mpsc;

let (tx, rx) = mpsc::channel();

// in one thread:
tx.send(()).expect("Failed to send start signal");

// in another thread:
rx.recv().expect("Failed to receive start signal");
For a bidirectional exchange, two channels are used.
There are “Channels” and “Sync Channels”. The former is an unbounded queue; the latter is a bounded queue whose send() blocks when the queue is full.
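A minimal sketch of the bounded variant, with an arbitrary capacity of 4:

use std::sync::mpsc;
use std::thread;

let (tx, rx) = mpsc::sync_channel::<i32>(4);

let producer = thread::spawn(move || {
    for i in 0..10 {
        tx.send(i).unwrap(); // blocks while the queue already holds 4 items
    }
});

for value in rx {
    println!("got {value}"); // iteration ends when tx is dropped
}
producer.join().unwrap();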
Classic Mutex
Usage:
use std::sync::{Arc, Mutex};

let shared_data = Arc::new(Mutex::new(SharedData { /* … */ }));
let shared_data2 = Arc::clone(&shared_data);

// in one thread:
{
    let mut data = shared_data.lock().unwrap();
    // use *data here; exiting the scope releases the lock
}

// in another thread:
{
    let mut data = shared_data2.lock().unwrap();
    // use *data here; exiting the scope releases the lock
}
Arc is a thread-safe Rc (reference-counted pointer) for sharing data. So we literally share the Mutex itself in a thread-safe way. The point of Arc/Rc is to share clones of the wrapper and free the underlying resource when nobody owns it anymore.
Mutex::lock() does not return the protected value directly: it returns a “mutex guard” object.
There’s a section in the docs about a “poisoned” Mutex, which is a Mutex left in a potentially invalid state because a thread panicked (a.k.a. crashed) whilst holding the lock.
I was expecting the Mutex to have get/set methods… and it actually does have a Mutex::set method, but I don’t see it used: instead, the MutexGuard returned by lock() implements the “Deref” trait (a.k.a. *foo), and the normal idiom is to access the underlying object as “*mutex_guard” directly instead of using get/set methods on the Mutex itself.
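A tiny sketch of that idiom, reading and writing through the guard:

use std::sync::Mutex;

let counter = Mutex::new(0);
{
    let mut guard = counter.lock().unwrap();
    *guard += 1;          // write via DerefMut
    let current = *guard; // read via Deref
    println!("counter = {current}");
} // the guard is dropped here, releasing the lock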
Wait barrier: Condvar
Usage:
use std::sync::{Arc, Mutex, Condvar};

let pair = Arc::new((Mutex::new(false), Condvar::new()));
let pair2 = Arc::clone(&pair);

// in one thread:
let (lock, cvar) = &*pair;
let mut started = lock.lock().unwrap();
*started = true;
cvar.notify_one();

// in another thread:
let (lock, cvar) = &*pair2;
let mut started = lock.lock().unwrap();
while !*started {
    started = cvar.wait(started).unwrap(); // loop again on spurious wake-ups
}
The type of that pair above is worth pointing out:
Arc< ( Mutex<bool>, Condvar ) >
That surprised me at first. In other words, it’s an Arc wrapping a (Mutex<bool>, Condvar) tuple, except that tuple never appears as an explicit named type anywhere; it stays implicit.
Note that the signal is the “condvar”. The thread waiting on the signal must loop and re-check the condition: the wait is blocking, yet spurious wake-ups can occur. Also note that the value exchanged (in the Mutex) doesn’t have to be a boolean; it can be anything more suitable for the program (e.g. an enum, etc.).
Note how we have “started” vs “*started” above: the Mutex lock returns a “MutexGuard” object, so “started” isn’t really the boolean itself. It’s a “MutexGuard”, and it implements Deref (*foo) to access the underlying object protected by the mutex. That kind of subtlety is going to bite me every time.
static_cell ⇒ OnceLock ⇒ LazyLock
These are ways to define data as “static” (program’s lifetime) so that Arc is not needed to share references; a Mutex is still needed to lock data access. (static_cell is a third-party crate; OnceLock and LazyLock are in the standard library.)
OnceLock is in std::sync starting with Rust 1.70:
use std::sync::{Mutex, OnceLock};

// this is at the top level (doesn’t have to be)
static SHARED_DATA: OnceLock<Mutex<SharedData>> = OnceLock::new();

fn main() {
    let mutex_ref = SHARED_DATA.get_or_init(|| {
        Mutex::new(SharedData { value: 0 })
    });
}

// Threads 1 and 2:
{
    let mutex = SHARED_DATA.get().unwrap();
    let mut data = mutex.lock().unwrap();
    // exiting the scope releases the lock
}
LazyLock is similar, but the value is not initialized until it’s first used: the static’s initializer takes a lambda that runs the first time the value is accessed. LazyLock is “easier” to use if one needs to declare and initialize a value all at once (OnceLock requires separate calls); it landed in std::sync with Rust 1.80. Downside: the LazyLock init lambda must not have any function-local captures (the error I got was that the capture must have a lifetime “greater than 'static”, which is of course impossible for a local, isn’t it?).
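A sketch of that difference, using a hypothetical run_time_value: OnceLock’s get_or_init runs at call time, so its closure can capture locals, whereas the closure stored in a static LazyLock must be 'static:

use std::sync::{Mutex, OnceLock};

static SHARED: OnceLock<Mutex<i32>> = OnceLock::new();

fn main() {
    let run_time_value = 42; // function-local data
    // OK: this closure captures run_time_value, because it runs right here.
    SHARED.get_or_init(|| Mutex::new(run_time_value));
    // A static LazyLock initialized with a lambda capturing run_time_value
    // would not compile: the closure stored in the static must outlive 'static.
}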
Sharing a Simple Counter
A marine walks into a bar and asks “Where’s my counter?” 🤣
Let’s say a thread needs to export a counter and another thread needs to read it.
Option 1 is to use LazyLock:
use std::sync::{LazyLock, Mutex};

static COUNTER: LazyLock<Mutex<i32>> = LazyLock::new(|| Mutex::new(0));

fn increment_counter() {
    let mut num = COUNTER.lock().unwrap();
    *num += 1;
}
LazyLock allows creating and initializing the value in just one line (as long as it’s really a static value that doesn’t require any external data… for that case, use OnceLock).
LazyLock is also nice in that it exposes the value as “just a &T”, so it’s easy to use: the deref is automatic, the user locks to get the object within a scope, and the lock is released when the scope ends.
Option 2 is to use an Atomic type:
For stuff as simple as an i32, there’s an atomic object just for that:
https://doc.rust-lang.org/std/sync/atomic/struct.AtomicI32.html
use std::sync::atomic::AtomicI32;

static ATOMIC_COUNTER: AtomicI32 = AtomicI32::new(0);
When declared as a local variable inside a function, the AtomicI32 is “owned” (moved) by the first closure or thread that captures it. To solve that problem, an Arc<> can be used:
let counter = Arc::new(AtomicI32::new(0)); // for closure or thread A
let counter2 = counter.clone(); // for closure or thread B
Then there are a number of classic atomic operations on it (see the sketch after this list):
- counter.into_inner() → i32 -- consumes the atomic, so this doesn’t work through an Arc()
- counter.load(Ordering::SeqCst or Relaxed) → i32 (with memory-ordering semantics)
- Relaxed: no ordering constraints beyond the atomicity itself.
- Acquire: pairs with a Release store on the same atomic; writes done before the Release are visible after the Acquire load. (I still need to internalize this…)
- SeqCst: all threads see a single, sequentially consistent order of operations.
- counter.store(value, Ordering)
- counter.swap(value, Ordering) → i32
- counter.compare_and_swap (deprecated since Rust 1.50), compare_exchange, compare_exchange_weak
- counter.fetch_add(val_to_add, Ordering) → previous i32
- and fetch_sub, fetch_and (bitwise), fetch_min/max/nand/or/xor (!!)
- counter.fetch_update (stable), and update / try_update (unstable)… they take a function to modify the value.
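A minimal sketch tying a few of these together (the thread count and the Relaxed ordering are arbitrary choices, fine for a plain counter):

use std::sync::atomic::{AtomicI32, Ordering};
use std::sync::Arc;
use std::thread;

let counter = Arc::new(AtomicI32::new(0));

let handles: Vec<_> = (0..4)
    .map(|_| {
        let counter = Arc::clone(&counter);
        thread::spawn(move || {
            counter.fetch_add(1, Ordering::Relaxed); // returns the previous value
        })
    })
    .collect();

for h in handles {
    h.join().unwrap();
}
assert_eq!(counter.load(Ordering::Relaxed), 4);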
There’s also an atomic type for every basic intrinsic type (AtomicBool, AtomicUsize, etc.), but nothing generic. When using a custom struct, it’s back to Arc<Mutex<…>> or Once/LazyLock.
The conclusion: when it comes to synchronization, the Rust standard library and its associated crates are, well OK, the kitchen sink really -- you want it, they have it.