Inter-thread synchronization

In a multi-threaded real-time system, a task is often completed through the coordination of multiple threads. How, then, can multiple threads cooperate so that the task executes without errors? Let's illustrate with an example.

Consider a job involving two threads: one thread receives data from a sensor and writes it to shared memory, while the other thread periodically reads the data from shared memory and sends it to a display. The following figure describes the data transfer between the two threads:

If access to the shared memory is not exclusive, the threads may access it simultaneously, which can cause data consistency problems. For example, if the receiving thread has not finished writing the data before the display thread attempts to read it, the display will contain data sampled at different times, and the displayed data will be garbled.

Receiving thread #1, which writes the sensor data to the shared memory block, and display thread #2, which reads the sensor data from it, both access the same memory block. To prevent data errors, the two threads' accesses must be mutually exclusive: one thread may operate on the shared memory block only after the other thread has completed its operation. In this way, receiving thread #1 and display thread #2 can cooperate to perform the work correctly.

Synchronization means executing in a predetermined order. Thread synchronization means that multiple threads control their execution order through specific mechanisms (such as mutexes, event objects, and critical sections); in other words, synchronization establishes an execution-order relationship between threads. Without synchronization, the threads run in an indeterminate order.

When multiple threads operate on or access the same region, that region is called a critical section; the shared memory block in the example above is a critical section. Thread mutual exclusion refers to exclusive access to critical-section resources: when multiple threads need to use a critical-section resource, at most one thread is allowed to use it at any time, and the other threads must wait until the occupier releases the resource. Thread mutual exclusion can be regarded as a special kind of thread synchronization.

There are many ways to synchronize threads; the core idea is to allow only one (or one type of) thread to run while a critical section is being accessed. There are several ways to enter/exit a critical section:

1) Call rt_hw_interrupt_disable() to enter the critical section, and call rt_hw_interrupt_enable() to exit it; see the global interrupt enable/disable content in "Interrupt Management" for details.

2) Call rt_enter_critical() to enter the critical section and call rt_exit_critical() to exit the critical section.

This chapter introduces several synchronization methods: semaphores, mutexes, and events. After studying this chapter, you will know how to use semaphores, mutexes, and events to synchronize threads.

Semaphore

Let's take a parking lot from everyday life as an example to understand the concept of a semaphore:

① When the parking lot is empty, the parking lot manager finds that there are many empty parking spaces, and will allow cars outside to enter the parking lot one after another to obtain parking spaces;

② When the parking lot is full, the administrator will prohibit cars from entering the parking lot and the cars will queue outside to wait;

③ When a car leaves the parking lot, the administrator will find an empty parking space and allow outside cars to enter the parking lot; after the empty parking spaces are filled, outside vehicles are prohibited from entering.

In this example, the administrator is equivalent to a semaphore, and the number of empty parking spaces in the administrator's hands is the value of the semaphore (a non-negative number that changes dynamically); parking spaces are equivalent to public resources (critical areas), and vehicles are equivalent to threads. Vehicles obtain parking spaces by obtaining permission from the administrator, just like threads access public resources by obtaining semaphores.

A semaphore is a lightweight kernel object used to solve synchronization problems between threads. Threads can acquire or release it to achieve synchronization or mutual exclusion.

The schematic diagram of semaphore operation is shown in the figure below. Each semaphore object has a semaphore value and a thread waiting queue. The value corresponds to the number of available semaphore instances (resources): a value of 5 means 5 instances are available for use. When the number of instances is zero, a thread applying for the semaphore is suspended on the semaphore's waiting queue until an instance (resource) becomes available.

In RT-Thread, the semaphore control block is the data structure used by the operating system to manage semaphores, represented by the structure struct rt_semaphore. The type rt_sem_t is the semaphore handle type, implemented in C as a pointer to the semaphore control block. The detailed definition of the semaphore control block structure is as follows:

struct rt_semaphore
{
   struct rt_ipc_object parent;  /* inherited from the ipc_object class */
   rt_uint16_t value;            /* value of the semaphore */
};
/* rt_sem_t is a pointer type to the semaphore structure */
typedef struct rt_semaphore* rt_sem_t;

The rt_semaphore object is derived from rt_ipc_object and is managed by the IPC container. The maximum value of the semaphore is 65535.

The semaphore control block contains the important parameters related to the semaphore and links its various states. The semaphore-related interfaces are shown in the figure below; the operations on a semaphore include creating/initializing a semaphore, acquiring it, releasing it, and deleting/detaching it.

Creating and Deleting Semaphores

When creating a semaphore, the kernel first creates a semaphore control block and then performs basic initialization on the control block. The following function interface is used to create a semaphore:

 rt_sem_t rt_sem_create(const char *name,
                        rt_uint32_t value,
                        rt_uint8_t flag);

When this function is called, the system first allocates a semaphore object from the object manager, initializes it, and then initializes the parent IPC object and the semaphore-specific parts. Among the creation parameters, the flag parameter determines how multiple waiting threads are queued when the semaphore is unavailable. With RT_IPC_FLAG_FIFO (first in, first out), waiting threads are queued in arrival order, and the first thread to wait obtains the semaphore first; with RT_IPC_FLAG_PRIO (priority order), waiting threads are queued by priority, and the highest-priority waiter obtains the semaphore first.

Note

Note: RT_IPC_FLAG_FIFO is a non-real-time scheduling method. Use RT_IPC_FLAG_FIFO only if the application genuinely requires first-come-first-served behavior and you clearly understand that all threads involved with this semaphore become non-real-time as a result. Otherwise, RT_IPC_FLAG_PRIO is recommended, as it preserves the real-time behavior of threads.

The following table describes the input parameters and return values of this function:

Parameter                                Description
name                                     Name of the semaphore
value                                    Initial value of the semaphore
flag                                     Semaphore flag; valid values: RT_IPC_FLAG_FIFO or RT_IPC_FLAG_PRIO

Return                                   Description
RT_NULL                                  Creation failed
Pointer to the semaphore control block   Created successfully

When the system no longer uses a semaphore, it can be deleted to release system resources; this applies to dynamically created semaphores. A semaphore is deleted with the following function interface:

rt_err_t rt_sem_delete(rt_sem_t sem);

When this function is called, the system deletes the semaphore. If threads are waiting on the semaphore when it is deleted, the deletion first wakes them (each waiting thread returns -RT_ERROR) and then releases the semaphore's memory resources. The following table describes the input parameters and return values of this function:

Parameter    Description
sem          The semaphore object created by rt_sem_create()

Return       Description
RT_EOK       Deleted successfully

Initializing and detaching semaphores

For a static semaphore object, its memory is allocated by the compiler at compile time and placed in the read-write data segment or the uninitialized data segment. Such a semaphore does not need to be created with rt_sem_create; it only needs to be initialized before use, with the following function interface:

rt_err_t rt_sem_init(rt_sem_t       sem,
                     const char     *name,
                     rt_uint32_t    value,
                     rt_uint8_t     flag);

When this function is called, the system initializes the semaphore object, then initializes the IPC object and the semaphore-specific parts. The flag can take the values described for the semaphore creation function above. The following table describes the input parameters and return values of this function:

Parameter    Description
sem          Handle of the semaphore object
name         Name of the semaphore
value        Initial value of the semaphore
flag         Semaphore flag; valid values: RT_IPC_FLAG_FIFO or RT_IPC_FLAG_PRIO

Return       Description
RT_EOK       Initialized successfully

Detaching a semaphore removes the semaphore object from the kernel object manager; this applies to statically initialized semaphores. A semaphore is detached with the following function interface:

rt_err_t rt_sem_detach(rt_sem_t sem);

When this function is called, the kernel first wakes up all threads suspended on the semaphore's waiting queue (each waiting thread returns -RT_ERROR), and then detaches the semaphore from the kernel object manager. The following table describes the input parameters and return values of this function:

Parameter    Description
sem          Handle of the semaphore object

Return       Description
RT_EOK       Detached successfully

Get semaphore

A thread obtains a semaphore resource instance by taking the semaphore. When the semaphore's value is greater than zero, the thread obtains the semaphore and the value is decreased by 1. The semaphore is taken with the following function interface:

rt_err_t rt_sem_take(rt_sem_t sem, rt_int32_t time);

When this function is called and the semaphore's value is zero, the current semaphore resource instance is unavailable, and the thread applying for the semaphore will, according to the time parameter, either return immediately, suspend and wait for a limited period, or wait forever, until another thread or an interrupt releases the semaphore. If the semaphore is still not obtained within the time specified by the time parameter, the thread times out and returns -RT_ETIMEOUT. The following table describes the input parameters and return values of this function:

Parameter       Description
sem             Handle of the semaphore object
time            The specified waiting time, in OS ticks

Return          Description
RT_EOK          Obtained the semaphore successfully
-RT_ETIMEOUT    The semaphore was not obtained within the timeout
-RT_ERROR       Other errors

Acquiring a semaphore without waiting

When the user does not want the thread to suspend and wait on the semaphore, the semaphore can be acquired in a non-blocking manner, using the following function interface:

rt_err_t rt_sem_trytake(rt_sem_t sem);

This function is equivalent to rt_sem_take(sem, RT_WAITING_NO): when the semaphore resource instance requested by the thread is not available, the thread does not wait on the semaphore but returns -RT_ETIMEOUT immediately. The following table describes the input parameters and return values of this function:

Parameter       Description
sem             Handle of the semaphore object

Return          Description
RT_EOK          Obtained the semaphore successfully
-RT_ETIMEOUT    Failed to obtain the semaphore

Release semaphore

Releasing a semaphore can wake up a thread suspended on it. A semaphore is released with the following function interface:

rt_err_t rt_sem_release(rt_sem_t sem);

If the value of the semaphore is zero and a thread is waiting on it, releasing the semaphore wakes the first thread in the waiting queue, which then obtains the semaphore; otherwise, the value of the semaphore is increased by 1. The following table describes the input parameters and return values of this function:

Parameter    Description
sem          Handle of the semaphore object

Return       Description
RT_EOK       Released the semaphore successfully

Here is a semaphore usage example. It creates a dynamic semaphore and initializes two threads: one thread releases the semaphore, and the other performs its work after acquiring it. The code is as follows:

Note: In RT-Thread 5.0 and later, the ALIGN keyword has been changed to rt_align; adjust accordingly when using this example.

Use of semaphores

#include <rtthread.h>

#define THREAD_PRIORITY 25
#define THREAD_TIMESLICE 5

/* exit flag for the semaphore demo */
static rt_bool_t sem_flag = 0;
/* pointer to the semaphore */
static rt_sem_t dynamic_sem = RT_NULL;

ALIGN(RT_ALIGN_SIZE)
static char thread1_stack[1024];
static struct rt_thread thread1;
static void rt_thread1_entry(void *parameter)
{
    static rt_uint8_t count = 0;

    while (1)
    {
        if (count <= 100)
        {
            count++;
        }
        else
        {
            rt_kprintf("thread1 exiting...\n");
            sem_flag = 1;
            rt_sem_release(dynamic_sem);
            count = 0;
            return;
        }

        /* release the semaphore once every 10 counts */
        if (0 == (count % 10))
        {
            rt_kprintf("t1 release a dynamic semaphore.\n");
            rt_sem_release(dynamic_sem);
        }
    }
}

ALIGN(RT_ALIGN_SIZE)
static char thread2_stack[1024];
static struct rt_thread thread2;
static void rt_thread2_entry(void *parameter)
{
    static rt_err_t result;
    static rt_uint8_t number = 0;
    while (1)
    {
        /* wait for the semaphore forever; once obtained, increment number */
        result = rt_sem_take(dynamic_sem, RT_WAITING_FOREVER);
        if (sem_flag && result == RT_EOK)
        {
            rt_kprintf("thread2 exiting...\n");
            rt_sem_delete(dynamic_sem);
            sem_flag = 0;
            number = 0;
            return;
        }
        else
        {
            number++;
            rt_kprintf("t2 take a dynamic semaphore. number = %d\n", number);
        }
    }
}

/* initialization of the semaphore sample */
int semaphore_sample(void)
{
    /* create a dynamic semaphore with an initial value of 0 */
    dynamic_sem = rt_sem_create("dsem", 0, RT_IPC_FLAG_PRIO);
    if (dynamic_sem == RT_NULL)
    {
        rt_kprintf("create dynamic semaphore failed.\n");
        return -1;
    }
    else
    {
        rt_kprintf("create done. dynamic semaphore value = 0.\n");
    }

    rt_thread_init(&thread1,
                   "thread1",
                   rt_thread1_entry,
                   RT_NULL,
                   &thread1_stack[0],
                   sizeof(thread1_stack),
                   THREAD_PRIORITY, THREAD_TIMESLICE);
    rt_thread_startup(&thread1);

    rt_thread_init(&thread2,
                   "thread2",
                   rt_thread2_entry,
                   RT_NULL,
                   &thread2_stack[0],
                   sizeof(thread2_stack),
                   THREAD_PRIORITY - 1, THREAD_TIMESLICE);
    rt_thread_startup(&thread2);

    return 0;
}
/* export to the msh command list */
MSH_CMD_EXPORT(semaphore_sample, semaphore sample);

Simulation results:

 \ | /
- RT -     Thread Operating System
 / | \     4.1.1 build Sep  2 2024 14:52:06
 2006 - 2022 Copyright by RT-Thread team
msh >semaphore_sample
create done. dynamic semaphore value = 0.
msh >thread1 release a dynamic semaphore.
thread2 take a dynamic semaphore. number = 1
thread1 release a dynamic semaphore.
thread2 take a dynamic semaphore. number = 2
thread1 release a dynamic semaphore.
thread2 take a dynamic semaphore. number = 3
thread1 release a dynamic semaphore.
thread2 take a dynamic semaphore. number = 4
thread1 release a dynamic semaphore.
thread2 take a dynamic semaphore. number = 5
thread1 release a dynamic semaphore.
thread2 take a dynamic semaphore. number = 6
thread1 release a dynamic semaphore.
thread2 take a dynamic semaphore. number = 7
thread1 release a dynamic semaphore.
thread2 take a dynamic semaphore. number = 8
thread1 release a dynamic semaphore.
thread2 take a dynamic semaphore. number = 9
thread1 release a dynamic semaphore.
thread2 take a dynamic semaphore. number = 10
thread1 exiting...
thread2 exiting...


As the results above show: thread 1 releases a semaphore whenever count is a multiple of 10 (and exits after count passes 100), and thread 2 increments number each time it acquires the semaphore.

Another application example of semaphores is shown below. It uses 2 threads and 3 semaphores to implement the producer-consumer problem.

The three semaphores are:

① lock: a semaphore acting as a lock. Both threads operate on the same array; the array is a shared resource, and lock protects it.

② empty: the number of empty slots, initialized to 5.

③ full: the number of full slots, initialized to 0.

The two threads are:

① Producer thread: after obtaining an empty slot, it generates a number, writes it into the array (cyclically), and then releases a full slot.

② Consumer thread: after obtaining a full slot, it reads a number from the array and adds it to a running sum, and then releases an empty slot.

Producer Consumer Routines

#include <rtthread.h>

#define THREAD_PRIORITY       6
#define THREAD_STACK_SIZE     512
#define THREAD_TIMESLICE      5

/* at most 5 elements can be produced */
#define MAXSEM 5

/* array holding the produced integers */
rt_uint32_t array[MAXSEM];

/* producer/consumer read-write positions in the array */
static rt_uint32_t set, get;

/* pointers to the thread control blocks */
static rt_thread_t producer_tid = RT_NULL;
static rt_thread_t consumer_tid = RT_NULL;

struct rt_semaphore sem_lock;
struct rt_semaphore sem_empty, sem_full;

/* producer thread entry */
void producer_thread_entry(void *parameter)
{
    int cnt = 0;

    /* run 10 times */
    while (cnt < 10)
    {
        /* take one empty slot */
        rt_sem_take(&sem_empty, RT_WAITING_FOREVER);

        /* lock before modifying the array contents */
        rt_sem_take(&sem_lock, RT_WAITING_FOREVER);
        array[set % MAXSEM] = cnt + 1;
        rt_kprintf("the producer generates a number: %d\n", array[set % MAXSEM]);
        set++;
        rt_sem_release(&sem_lock);

        /* release one full slot */
        rt_sem_release(&sem_full);
        cnt++;

        /* pause for a while */
        rt_thread_mdelay(20);
    }

    rt_kprintf("the producer exit!\n");
    cnt = 0;
}

/* consumer thread entry */
void consumer_thread_entry(void *parameter)
{
    rt_uint32_t sum = 0;

    while (1)
    {
        /* take one full slot */
        rt_sem_take(&sem_full, RT_WAITING_FOREVER);

        /* critical section: lock before operating */
        rt_sem_take(&sem_lock, RT_WAITING_FOREVER);
        sum += array[get % MAXSEM];
        rt_kprintf("the consumer[%d] get a number: %d\n", (get % MAXSEM), array[get % MAXSEM]);
        get++;
        rt_sem_release(&sem_lock);

        /* release one empty slot */
        rt_sem_release(&sem_empty);

        /* the producer stops after 10 numbers; the consumer stops accordingly */
        if (get == 10) break;

        /* pause briefly */
        rt_thread_mdelay(50);
    }

    rt_kprintf("the consumer sum is: %d\n", sum);
    rt_kprintf("the consumer exit!\n");
    rt_sem_detach(&sem_lock);
    rt_sem_detach(&sem_empty);
    rt_sem_detach(&sem_full);
    sum = 0;
}

int producer_consumer(void)
{
    set = 0;
    get = 0;

    /* initialize the 3 semaphores */
    rt_sem_init(&sem_lock, "lock",     1,      RT_IPC_FLAG_PRIO);
    rt_sem_init(&sem_empty, "empty",   MAXSEM, RT_IPC_FLAG_PRIO);
    rt_sem_init(&sem_full, "full",     0,      RT_IPC_FLAG_PRIO);

    /* create the producer thread */
    producer_tid = rt_thread_create("producer",
                                    producer_thread_entry, RT_NULL,
                                    THREAD_STACK_SIZE,
                                    THREAD_PRIORITY - 1,
                                    THREAD_TIMESLICE);
    if (producer_tid != RT_NULL)
    {
        rt_thread_startup(producer_tid);
    }
    else
    {
        rt_kprintf("create thread producer failed\n");
        return -1;
    }

    /* create the consumer thread */
    consumer_tid = rt_thread_create("consumer",
                                    consumer_thread_entry, RT_NULL,
                                    THREAD_STACK_SIZE,
                                    THREAD_PRIORITY + 1,
                                    THREAD_TIMESLICE);
    if (consumer_tid != RT_NULL)
    {
        rt_thread_startup(consumer_tid);
    }
    else
    {
        rt_kprintf("create thread consumer failed\n");
        return -1;
    }

    return 0;
}

/* export to the msh command list */
MSH_CMD_EXPORT(producer_consumer, producer_consumer sample);

The simulation results of this routine are as follows:

 \ | /
- RT -     Thread Operating System
 / | \     4.1.1 build Sep  2 2024 18:24:30
 2006 - 2022 Copyright by RT-Thread team
msh >producer_consumer
the producer generates a number: 1
the consumer[0] get a number: 1
msh >the producer generates a number: 2
the producer generates a number: 3
the consumer[1] get a number: 2
the producer generates a number: 4
the producer generates a number: 5
the consumer[2] get a number: 3
the producer generates a number: 6
the producer generates a number: 7
the producer generates a number: 8
the consumer[3] get a number: 4
the producer generates a number: 9
the consumer[4] get a number: 5
the producer generates a number: 10
the producer exit!
the consumer[0] get a number: 6
the consumer[1] get a number: 7
the consumer[2] get a number: 8
the consumer[3] get a number: 9
the consumer[4] get a number: 10
the consumer sum is: 55
the consumer exit!


This routine can be understood as producers producing products and putting them into the warehouse, and consumers taking products from the warehouse.

(1) Producer thread:

1) Take one empty slot (a place to put the produced number); the empty count decreases by 1;

2) Lock; the number generated this round is cnt + 1, and it is stored into the array cyclically; then unlock;

3) Release one full slot (a product has been placed in the warehouse, giving it one more full slot); the full count increases by 1.

(2) Consumer thread:

1) Take one full slot (obtain a produced number); the full count decreases by 1;

2) Lock; read the number the producer just placed from the array and add it to the running sum; then unlock;

3) Release one empty slot (a product was taken from the warehouse, giving it one more empty slot); the empty count increases by 1.

The producer generates 10 numbers in turn, and the consumer takes them in turn and sums them. The semaphore lock protects the array critical-section resource, guaranteeing that each number is taken exclusively, while empty and full synchronize the two threads.

The semaphore is a very flexible synchronization method that can be used in many situations. It can implement locks, synchronization, resource counting, and other relationships, and it is convenient for thread-to-thread as well as interrupt-to-thread synchronization.

Thread synchronization

Thread synchronization is the simplest type of semaphore application. For example, when using a semaphore to synchronize two threads, the semaphore value is initialized to 0, indicating that there are 0 semaphore resource instances; and the thread that attempts to obtain the semaphore will directly wait on this semaphore.

When the thread holding the semaphore completes the work it is processing, it releases the semaphore, which can wake up the thread waiting on the semaphore and allow it to perform the next part of the work. This type of situation can also be seen as using semaphores to mark the completion of work: the thread holding the semaphore completes its own work, and then notifies the thread waiting for the semaphore to continue the next part of the work.

Lock (for understanding only)

A lock is typically used when multiple threads access the same shared resource (i.e., a critical section). When a semaphore is used as a lock, its value should normally be initialized to 1, meaning that one resource is available by default; because the value then always alternates between 1 and 0, this kind of lock is also called a binary semaphore. As shown in the figure below, when a thread needs to access the shared resource, it must first obtain the resource lock. Once one thread holds the lock, other threads that try to access the resource are suspended, because the lock is already locked (the semaphore value is 0). When the holding thread finishes and leaves the critical section, it releases the semaphore, unlocking the lock; the first thread suspended on the lock is then woken up and gains access to the critical section.

Note

Note: In the history of operating systems, binary semaphores were originally used to protect critical sections. However, around 1990 researchers found that protecting critical sections with semaphores can lead to unbounded priority inversion, and the mutex was proposed as a result. Today binary semaphores are no longer used to protect critical sections; mutexes have replaced them.

Interrupt and thread synchronization

Semaphores can also be conveniently used for synchronization between interrupts and threads. For example, when an interrupt is triggered, the interrupt service routine needs to notify the thread to perform corresponding data processing. At this time, the initial value of the semaphore can be set to 0. When the thread tries to hold this semaphore, since the initial value of the semaphore is 0, the thread directly hangs on this semaphore until the semaphore is released. When an interrupt is triggered, hardware-related actions are performed first, such as reading the corresponding data from the hardware I/O port, confirming the interrupt to clear the interrupt source, and then releasing a semaphore to wake up the corresponding thread for subsequent data processing. For example, the processing method of FinSH thread is shown in the figure below.

The semaphore value is initially 0. When the FinSH thread tries to obtain the semaphore, it will be suspended because the semaphore value is 0. When the console device has data input, an interrupt is generated, and the interrupt service routine is entered. In the interrupt service routine, it reads the data from the console device and puts the read data into the UART buffer for buffering, and then releases the semaphore. The operation of releasing the semaphore will wake up the shell thread. After the interrupt service routine is completed, if there is no ready thread with a higher priority than the shell thread in the system, the shell thread will hold the semaphore and run to obtain the input data from the UART buffer.

Note

Note: Mutual exclusion between an interrupt and a thread cannot be achieved with a semaphore (lock); instead, it is achieved by disabling and enabling interrupts.

Resource Count

A semaphore can also be regarded as a counter that is incremented and decremented; note that its value is always non-negative. For example, if a semaphore's value is initialized to 5, it can be taken up to 5 times in succession before the counter reaches 0. Resource counting suits situations where thread processing speeds do not match: the semaphore counts the work items completed by the faster thread, and when the slower thread is scheduled it can process several items in one continuous run. For example, in the producer-consumer problem, the producer can release the semaphore several times, and the consumer can then process several semaphore resources at once when it is next scheduled.

Note

Note: Resource-counting scenarios are usually hybrid forms of thread synchronization, because individual resources are still accessed by multiple threads; accessing and processing a single resource must therefore still be made mutually exclusive with a lock.

Mutex

A mutex, also known as a mutually exclusive semaphore, is a special binary semaphore. A mutex is similar to a parking lot with only one space: when one car enters, the gate is locked and other cars wait outside; when the car leaves, the gate opens and the next car can enter.

The differences between a mutex and a semaphore are: the mutex has an owner (the thread that holds it), it supports recursive access, and it can prevent priority inversion; moreover, a mutex can only be released by the thread that holds it, whereas a semaphore can be released by any thread.

A mutex has only two states, unlocked or locked (two state values). When a thread holds it, the mutex is locked and that thread owns it; conversely, when that thread releases it, the mutex is unlocked and ownership is lost. While a thread holds a mutex, no other thread can unlock or hold it, but the holding thread can take the lock again without being suspended, as shown in the figure below. This is a key difference from an ordinary binary semaphore: with a binary semaphore, a recursive take finds no resource instance available, so the thread suspends itself waiting (eventually producing a deadlock).
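The ownership and hold-count behavior can be modeled in a few lines of plain C. `toy_mutex` is a hypothetical illustration, not the `struct rt_mutex` shown later; a real kernel suspends a blocked caller rather than returning an error:

```c
/* toy model of mutex ownership: owner identity plus recursion depth */
typedef struct {
    int owner;              /* owning thread id, -1 when unlocked */
    int hold;               /* how many times the owner has taken it */
} toy_mutex;

void toy_mutex_init(toy_mutex *m) { m->owner = -1; m->hold = 0; }

int toy_mutex_take(toy_mutex *m, int tid)
{
    if (m->owner == -1) {           /* unlocked: caller becomes the owner */
        m->owner = tid;
        m->hold  = 1;
        return 0;
    }
    if (m->owner == tid) {          /* recursive take by the owner succeeds */
        m->hold++;
        return 0;
    }
    return -1;                      /* held by another thread: would block */
}

int toy_mutex_release(toy_mutex *m, int tid)
{
    if (m->owner != tid || m->hold == 0)
        return -1;                  /* only the holding thread may release */
    if (--m->hold == 0)
        m->owner = -1;              /* last release: unlocked again */
    return 0;
}
```

The model shows both properties from the text: the owner can take the lock recursively without blocking, and only the owner can release it.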

Another potential problem with semaphores is priority inversion. Priority inversion occurs when a high-priority thread tries to access a shared resource through the semaphore mechanism while the semaphore is held by a low-priority thread, and that low-priority thread may in turn be preempted by medium-priority threads; the high-priority thread is thus blocked behind many lower-priority threads, and its real-time response can no longer be guaranteed. As the following figure illustrates: there are three threads A, B and C, with priority A > B > C. Threads A and B are suspended, waiting for events, and thread C is running. Thread C begins using a shared resource M. While it is doing so, the event thread A is waiting for arrives and A becomes ready; since A has a higher priority than C, it runs immediately. But when A tries to use the shared resource M, it finds M in use by thread C, so A is suspended and C runs again. If the event thread B is waiting for now arrives, B becomes ready; since B has a higher priority than C and does not use resource M, B preempts C and runs to completion, and C resumes only afterwards. Only when C finally releases resource M can thread A execute. The priorities have effectively been inverted: thread B runs before thread A, so the response time of the high-priority thread cannot be guaranteed.

In the RT-Thread operating system, mutexes solve the priority inversion problem by implementing the priority inheritance protocol (Sha, 1990). Priority inheritance raises the priority of thread C to the priority of thread A for the period during which A is suspended trying to obtain the shared resource; this prevents C (and indirectly A) from being preempted by B, as shown in the figure below. In general, priority inheritance means raising the priority of the low-priority thread that occupies a resource to the priority of the highest-priority thread among all threads waiting for that resource, and letting it execute; when the low-priority thread releases the resource, its priority returns to its original setting. A thread that inherits a priority in this way therefore cannot be preempted by any intermediate-priority thread while it holds the resource.
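The inheritance rule itself is just a minimum computation over the waiters' priorities. A sketch in plain C, assuming RT-Thread's convention that a smaller number means a higher priority (the function name is hypothetical, not an RT-Thread API):

```c
/* effective priority of a mutex holder under priority inheritance:
 * the highest priority (smallest number) among its own original priority
 * and the priorities of all threads currently waiting for the mutex */
unsigned char inherited_priority(unsigned char original,
                                 const unsigned char *waiter_prio, int n)
{
    unsigned char p = original;
    for (int i = 0; i < n; i++)
        if (waiter_prio[i] < p)
            p = waiter_prio[i];     /* inherit the higher (smaller) priority */
    return p;
}
```

In the A/B/C scenario above: if the holder's original priority is 11 and a waiter arrives at priority 9, the holder runs at 9 until it releases the resource, so a medium-priority thread at 10 can no longer preempt it.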

Note

Note: After obtaining a mutex, release it as soon as possible; and while holding the mutex, do not change the priority of the holding thread, otherwise unbounded priority inversion may be artificially introduced.

In RT-Thread, the mutex control block is the data structure the operating system uses to manage mutexes; it is represented by the structure struct rt_mutex. The type rt_mutex_t represents a mutex handle and is implemented in C as a pointer to the mutex control block. The detailed definition of the mutex control block structure is shown in the following code:

struct rt_mutex
{
    struct rt_ipc_object parent;                /* inherited from the ipc_object class */

    rt_uint16_t          value;                 /* value of the mutex */
    rt_uint8_t           original_priority;     /* original priority of the holding thread */
    rt_uint8_t           hold;                  /* hold count of the holding thread */
    struct rt_thread    *owner;                 /* thread that currently owns the mutex */
};
/* rt_mutex_t is the pointer type to the mutex control block */
typedef struct rt_mutex* rt_mutex_t;

The rt_mutex object is derived from rt_ipc_object and is managed by the IPC container.

The mutex control block contains important parameters related to mutex, which plays an important role in the implementation of mutex function. The mutex related interface is shown in the figure below. The operation of a mutex includes: creating/initializing a mutex, acquiring a mutex, releasing a mutex, and deleting/detaching a mutex.

Creating and Deleting Mutexes

When creating a mutex, the kernel first creates a mutex control block and then completes the initialization of the control block. The following function interface is used to create a mutex:

rt_mutex_t rt_mutex_create (const char* name, rt_uint8_t flag);

You can call the rt_mutex_create function to create a mutex whose name is specified by name. When this function is called, the system first allocates a mutex object from the object manager and initializes it, and then initializes the parent IPC object and the mutex-specific parts. The mutex flag has been deprecated: whichever of RT_IPC_FLAG_PRIO or RT_IPC_FLAG_FIFO the user passes, the kernel handles the mutex as RT_IPC_FLAG_PRIO. The following table describes the input parameters and return values of this function:

| Parameter | Description |
|---|---|
| name | The name of the mutex |
| flag | This flag has been deprecated; whether the user passes RT_IPC_FLAG_PRIO or RT_IPC_FLAG_FIFO, the kernel handles it as RT_IPC_FLAG_PRIO |
| **Return** | —— |
| Mutex handle | Created successfully |
| RT_NULL | Creation failed |

When a mutex is no longer used, deleting it releases its system resources. This applies to dynamically created mutexes. The following function interface deletes a mutex:

rt_err_t rt_mutex_delete (rt_mutex_t mutex);

When a mutex is deleted, all threads waiting on it are woken up, and each waiting thread receives the return value -RT_ERROR. The system then removes the mutex from the kernel object manager's linked list and frees the memory occupied by the mutex. The following table describes the input parameters and return values of this function:

| Parameter | Description |
|---|---|
| mutex | Handle of the mutex object |
| **Return** | —— |
| RT_EOK | Deleted successfully |

Initializing and Detaching a Mutex

The memory of a static mutex object is allocated by the compiler at build time and is usually placed in the read-write data segment or the uninitialized data segment. Such static mutex objects must be initialized before use. The following function interface initializes a mutex:

rt_err_t rt_mutex_init (rt_mutex_t mutex, const char* name, rt_uint8_t flag);

When using this function interface, you need to specify the handle of the mutex object (i.e. the pointer to the mutex control block), the mutex name and the mutex flag. The mutex flag can be the flag mentioned in the above mutex creation function. The following table describes the input parameters and return values ​​of this function:

| Parameter | Description |
|---|---|
| mutex | Handle of the mutex object, provided by the user and pointing to the memory block of the mutex object |
| name | The name of the mutex |
| flag | This flag has been deprecated; whether the user passes RT_IPC_FLAG_PRIO or RT_IPC_FLAG_FIFO, the kernel handles it as RT_IPC_FLAG_PRIO |
| **Return** | —— |
| RT_EOK | Initialized successfully |

Detaching a mutex removes the mutex object from the kernel object manager; this applies to statically initialized mutexes. The following function interface detaches a mutex:

rt_err_t rt_mutex_detach (rt_mutex_t mutex);

When this interface is used, the kernel first wakes up all threads suspended on the mutex (each thread's return value is -RT_ERROR), and then detaches the mutex from the kernel object manager. The following table describes the input parameters and return values of this function:

| Parameter | Description |
|---|---|
| mutex | Handle of the mutex object |
| **Return** | —— |
| RT_EOK | Success |

Acquiring a Mutex

Once a thread acquires a mutex, it has ownership of it; that is, a mutex can be held by only one thread at a time. The following function interface acquires a mutex:

rt_err_t rt_mutex_take (rt_mutex_t mutex, rt_int32_t time);

If the mutex is not held by another thread, the requesting thread acquires it successfully. If the mutex is already held by the current thread, the mutex's hold count is incremented by 1 and the current thread does not wait. If the mutex is held by another thread, the current thread suspends and waits on the mutex until the other thread releases it or the wait exceeds the specified timeout. The following table describes the input parameters and return values of this function:

| Parameter | Description |
|---|---|
| mutex | Handle of the mutex object |
| time | The specified waiting time |
| **Return** | —— |
| RT_EOK | Mutex acquired successfully |
| -RT_ETIMEOUT | Timed out |
| -RT_ERROR | Failed to acquire |

Acquiring a Mutex Without Waiting

When the user does not want the thread to suspend and wait on the requested mutex, the mutex can be acquired without waiting, using the following function interface:

rt_err_t rt_mutex_trytake(rt_mutex_t mutex);

This function behaves the same as rt_mutex_take(mutex, RT_WAITING_NO): when the mutex requested by the thread is not available, it does not wait on the mutex but returns -RT_ETIMEOUT immediately. The following table describes the input parameters and return values of this function:

| Parameter | Description |
|---|---|
| mutex | Handle of the mutex object |
| **Return** | —— |
| RT_EOK | Mutex acquired successfully |
| -RT_ETIMEOUT | Failed to acquire |

Releasing a Mutex

When a thread completes access to a mutex resource, it should release the mutex it occupies as soon as possible so that other threads can obtain the mutex in time. To release a mutex, use the following function interface:

rt_err_t rt_mutex_release(rt_mutex_t mutex);

When using this function interface, only the thread that already has control over the mutex can release it. Each time the mutex is released, its holding count is reduced by 1. When the holding count of the mutex is zero (that is, the holding thread has released all holding operations), it becomes available and the thread waiting on the mutex will be awakened. If the running priority of the thread is raised by the mutex, then when the mutex is released, the thread returns to the priority before holding the mutex. The following table describes the input parameters and return values ​​of this function:

| Parameter | Description |
|---|---|
| mutex | Handle of the mutex object |
| **Return** | —— |
| RT_EOK | Success |

The following is a mutex application routine. A mutex is a means of protecting a shared resource: while a thread holds the mutex, it can keep the shared resource from being corrupted by other threads. The example uses two threads, thread 1 and thread 2, each of which increments two shared numbers by 1. The mutex ensures that each thread's update of the two numbers is not interrupted, as shown in the following code:

Mutex Routines

#include <rtthread.h>

#define THREAD_PRIORITY 8
#define THREAD_TIMESLICE 5

/* pointer to the mutex */
static rt_mutex_t dynamic_mutex = RT_NULL;
static rt_uint8_t number1, number2 = 0;
/* thread exit flag */
static rt_bool_t thread_exit_flag = RT_FALSE;

ALIGN(RT_ALIGN_SIZE)
static char thread1_stack[1024];
static struct rt_thread thread1;

static void rt_thread_entry1(void *parameter)
{
    while (1)
    {
        /* thread 1 checks that the mutex still exists before taking it */
        if (dynamic_mutex == RT_NULL || thread_exit_flag)
        {
            number1 = 0;
            number2 = 0;

            /* reset the exit flag */
            thread_exit_flag = RT_FALSE;
            break; /* exit the thread */
        }

        /* take the mutex and operate on the shared data */
        if (rt_mutex_take(dynamic_mutex, RT_WAITING_FOREVER) == RT_EOK)
        {
            number1++;
            number2++;
            rt_kprintf("thread1 mutex protect, number1 = number2 is %d\n", number1);
            rt_mutex_release(dynamic_mutex);
            rt_thread_mdelay(10);
        }
    }
}

ALIGN(RT_ALIGN_SIZE)
static char thread2_stack[1024];
static struct rt_thread thread2;
static void rt_thread_entry2(void *parameter)
{
    while (1)
    {
        /* take the mutex */
        if (rt_mutex_take(dynamic_mutex, RT_WAITING_FOREVER) == RT_EOK)
        {
            if (number1 != number2)
            {
                rt_kprintf("not protect. number1 = %d, number2 = %d\n", number1, number2);
            }
            else
            {
                rt_kprintf("mutex protect, number1 = number2 is %d\n", number1);
            }

            number1++;
            number2++;
            rt_mutex_release(dynamic_mutex);

            /* check whether the exit condition has been reached */
            if (number1 >= 50)
            {
                thread_exit_flag = RT_TRUE;

                /* delete the mutex */
                rt_mutex_delete(dynamic_mutex);
                dynamic_mutex = RT_NULL;

                break; /* exit the thread */
            }
        }
    }
}

/* initialization of the mutex sample */
int mutex_sample(void)
{
    /* create a dynamic mutex */
    dynamic_mutex = rt_mutex_create("dmutex", RT_IPC_FLAG_PRIO);
    if (dynamic_mutex == RT_NULL)
    {
        rt_kprintf("create dynamic mutex failed.\n");
        return -1;
    }

    rt_thread_init(&thread1,
                   "thread1",
                   rt_thread_entry1,
                   RT_NULL,
                   &thread1_stack[0],
                   sizeof(thread1_stack),
                   THREAD_PRIORITY, THREAD_TIMESLICE);
    rt_thread_startup(&thread1);

    rt_thread_init(&thread2,
                   "thread2",
                   rt_thread_entry2,
                   RT_NULL,
                   &thread2_stack[0],
                   sizeof(thread2_stack),
                   THREAD_PRIORITY - 1, THREAD_TIMESLICE);
    rt_thread_startup(&thread2);

    return 0;
}

/* export to the MSH command list */
MSH_CMD_EXPORT(mutex_sample, mutex sample);

Thread 1 and thread 2 both use the mutex to protect their operations on the two numbers (if the statements that take and release the mutex in thread 1 are commented out, thread 1 no longer protects the numbers). The simulation results are as follows:

\ | /
- RT -     Thread Operating System
 / | \     4.1.1 build Sep  2 2024 19:21:00
 2006 - 2022 Copyright by RT-Thread team
msh >mutex_sample
msh >mutex protect, number1 = number2 is 1
mutex protect, number1 = number2 is 2
mutex protect, number1 = number2 is 3
mutex protect, number1 = number2 is 4
mutex protect, number1 = number2 is 5
mutex protect, number1 = number2 is 6
mutex protect, number1 = number2 is 7
mutex protect, number1 = number2 is 8
mutex protect, number1 = number2 is 9
mutex protect, number1 = number2 is 10
mutex protect, number1 = number2 is 11
mutex protect, number1 = number2 is 12
mutex protect, number1 = number2 is 13
mutex protect, number1 = number2 is 14
mutex protect, number1 = number2 is 15
mutex protect, number1 = number2 is 16
mutex protect, number1 = number2 is 17
mutex protect, number1 = number2 is 18
mutex protect, number1 = number2 is 19
mutex protect, number1 = number2 is 20
mutex protect, number1 = number2 is 21
mutex protect, number1 = number2 is 22
mutex protect, number1 = number2 is 23
mutex protect, number1 = number2 is 24
mutex protect, number1 = number2 is 25
mutex protect, number1 = number2 is 26
mutex protect, number1 = number2 is 27
mutex protect, number1 = number2 is 28
mutex protect, number1 = number2 is 29
mutex protect, number1 = number2 is 30
mutex protect, number1 = number2 is 31
mutex protect, number1 = number2 is 32
mutex protect, number1 = number2 is 33
mutex protect, number1 = number2 is 34
mutex protect, number1 = number2 is 35
mutex protect, number1 = number2 is 36
mutex protect, number1 = number2 is 37
mutex protect, number1 = number2 is 38
mutex protect, number1 = number2 is 39
mutex protect, number1 = number2 is 40
mutex protect, number1 = number2 is 41
mutex protect, number1 = number2 is 42
mutex protect, number1 = number2 is 43
mutex protect, number1 = number2 is 44
mutex protect, number1 = number2 is 45
mutex protect, number1 = number2 is 46
mutex protect, number1 = number2 is 47
mutex protect, number1 = number2 is 48
mutex protect, number1 = number2 is 49
msh >mutex_sample
msh >mutex protect, number1 = number2 is 1
mutex protect, number1 = number2 is 2
mutex protect, number1 = number2 is 3
mutex protect, number1 = number2 is 4
mutex protect, number1 = number2 is 5
mutex protect, number1 = number2 is 6
mutex protect, number1 = number2 is 7
mutex protect, number1 = number2 is 8
mutex protect, number1 = number2 is 9
mutex protect, number1 = number2 is 10
mutex protect, number1 = number2 is 11
mutex protect, number1 = number2 is 12
mutex protect, number1 = number2 is 13
mutex protect, number1 = number2 is 14
mutex protect, number1 = number2 is 15
mutex protect, number1 = number2 is 16
mutex protect, number1 = number2 is 17
mutex protect, number1 = number2 is 18
mutex protect, number1 = number2 is 19
mutex protect, number1 = number2 is 20
mutex protect, number1 = number2 is 21
mutex protect, number1 = number2 is 22
mutex protect, number1 = number2 is 23
mutex protect, number1 = number2 is 24
mutex protect, number1 = number2 is 25
mutex protect, number1 = number2 is 26
mutex protect, number1 = number2 is 27
mutex protect, number1 = number2 is 28
mutex protect, number1 = number2 is 29
mutex protect, number1 = number2 is 30
mutex protect, number1 = number2 is 31
mutex protect, number1 = number2 is 32
mutex protect, number1 = number2 is 33
mutex protect, number1 = number2 is 34
mutex protect, number1 = number2 is 35
mutex protect, number1 = number2 is 36
mutex protect, number1 = number2 is 37
mutex protect, number1 = number2 is 38
mutex protect, number1 = number2 is 39
mutex protect, number1 = number2 is 40
mutex protect, number1 = number2 is 41
mutex protect, number1 = number2 is 42
mutex protect, number1 = number2 is 43
mutex protect, number1 = number2 is 44
mutex protect, number1 = number2 is 45
mutex protect, number1 = number2 is 46
mutex protect, number1 = number2 is 47
mutex protect, number1 = number2 is 48
mutex protect, number1 = number2 is 49

The threads use a mutex to protect the operations on the two numbers so that the number values ​​remain consistent.

Another mutex example is shown in the code below. It creates three dynamic threads to check whether, while a mutex is held, the priority of the holding thread is raised to the highest priority among the threads waiting for that mutex.

Priority Inversion Prevention Feature Routine

#include <rtthread.h>

/* pointers to the thread control blocks */
static rt_thread_t tid1 = RT_NULL;
static rt_thread_t tid2 = RT_NULL;
static rt_thread_t tid3 = RT_NULL;
static rt_mutex_t mutex = RT_NULL;


#define THREAD_PRIORITY       10
#define THREAD_STACK_SIZE     512
#define THREAD_TIMESLICE       5

/* entry of thread 1 */
static void thread1_entry(void *parameter)
{
    /* let the lower-priority threads run first */
    rt_thread_mdelay(100);

    /* at this point thread3 holds the mutex and thread2 is waiting for it */

    /* check the priorities of thread2 and thread3 */
    if (tid2->current_priority != tid3->current_priority)
    {
        /* priorities differ, the test failed */
        rt_kprintf("the priority of thread2 is: %d\n", tid2->current_priority);
        rt_kprintf("the priority of thread3 is: %d\n", tid3->current_priority);
        rt_kprintf("test failed.\n");
        return;
    }
    else
    {
        rt_kprintf("the priority of thread2 is: %d\n", tid2->current_priority);
        rt_kprintf("the priority of thread3 is: %d\n", tid3->current_priority);
        rt_kprintf("test OK.\n");
    }
}

/* entry of thread 2 */
static void thread2_entry(void *parameter)
{
    rt_err_t result;

    rt_kprintf("the priority of thread2 is: %d\n", tid2->current_priority);

    /* let the lower-priority thread run first */
    rt_thread_mdelay(50);

    /*
     * try to take the mutex; thread3 holds it at this point, so thread3's
     * priority should be raised to the same priority as thread2
     */
    result = rt_mutex_take(mutex, RT_WAITING_FOREVER);

    if (result == RT_EOK)
    {
        /* release the mutex */
        rt_mutex_release(mutex);
    }
}

/* entry of thread 3 */
static void thread3_entry(void *parameter)
{
    rt_tick_t tick;
    rt_err_t result;

    rt_kprintf("the priority of thread3 is: %d\n", tid3->current_priority);

    result = rt_mutex_take(mutex, RT_WAITING_FOREVER);
    if (result != RT_EOK)
    {
        rt_kprintf("thread3 take a mutex, failed.\n");
        return; /* do not release a mutex that was not taken */
    }

    /* busy-loop for a long time, about 500 ms */
    tick = rt_tick_get();
    while (rt_tick_get() - tick < (RT_TICK_PER_SECOND / 2)) ;

    rt_mutex_release(mutex);
}

int pri_inversion(void)
{
    /* create the mutex */
    mutex = rt_mutex_create("mutex", RT_IPC_FLAG_PRIO);
    if (mutex == RT_NULL)
    {
        rt_kprintf("create dynamic mutex failed.\n");
        return -1;
    }

    /* create thread 1 */
    tid1 = rt_thread_create("thread1",
                            thread1_entry,
                            RT_NULL,
                            THREAD_STACK_SIZE,
                            THREAD_PRIORITY - 1, THREAD_TIMESLICE);
    if (tid1 != RT_NULL)
        rt_thread_startup(tid1);

    /* create thread 2 */
    tid2 = rt_thread_create("thread2",
                            thread2_entry,
                            RT_NULL,
                            THREAD_STACK_SIZE,
                            THREAD_PRIORITY, THREAD_TIMESLICE);
    if (tid2 != RT_NULL)
        rt_thread_startup(tid2);

    /* create thread 3 */
    tid3 = rt_thread_create("thread3",
                            thread3_entry,
                            RT_NULL,
                            THREAD_STACK_SIZE,
                            THREAD_PRIORITY + 1, THREAD_TIMESLICE);
    if (tid3 != RT_NULL)
        rt_thread_startup(tid3);

    return 0;
}

/* export to the msh command list */
MSH_CMD_EXPORT(pri_inversion, prio_inversion sample);

The simulation results are as follows:

 \ | /
- RT -     Thread Operating System
 / | \     3.1.0 build Aug 27 2018
 2006 - 2018 Copyright by rt-thread team
msh >pri_inversion
the priority of thread2 is: 10
the priority of thread3 is: 11
the priority of thread2 is: 10
the priority of thread3 is: 10
test OK.

This example demonstrates how a mutex is used. Thread 3 holds the mutex first, and then thread 2 tries to take it; at that point thread 3's priority is raised to be the same as thread 2's.

Note

Note: It is important to remember that mutexes cannot be used in interrupt service routines.

Mutexes are simple to use because a mutex is a kind of semaphore that exists in the form of a lock. A mutex starts in the unlocked state when initialized, and becomes locked as soon as a thread takes it. Mutexes are better suited to the following situations:

(1) A thread needs to hold the lock recursively. With a mutex, taking it multiple times from the same thread does not cause the deadlock that recursive holding of a semaphore would.

(2) Priority inversion may occur among the threads being synchronized.

Event Set

An event set is another mechanism for inter-thread synchronization. One event set can contain multiple events, and event sets can be used to achieve one-to-many or many-to-many synchronization between threads. Let us use taking a bus as an analogy. When waiting at a bus stop, the following situations may arise:

① P1 takes a bus to some destination, and only one bus route reaches it; P1 simply waits for that bus to depart.

② P1 takes a bus to some destination, and three bus routes reach it; P1 can wait for any one of the three to depart.

③ P1 arranges to travel somewhere together with another person, P2. P1 must wait until both conditions, "companion P2 has arrived at the bus stop" and "the bus has arrived at the bus stop", are met before setting off.

Here, P1 going somewhere can be regarded as a thread, and "the bus arrives at the bus stop" and "companion P2 arrives at the bus stop" can be regarded as events. Case ① is a particular event waking up the thread; case ② is any single event waking up the thread; case ③ is the thread waking up only when multiple events have all occurred.

Event sets are mainly used for synchronization between threads; unlike semaphores, they can provide one-to-many and many-to-many synchronization. The relationship between a thread and multiple events can be set so that any one of the events wakes up the thread, or so that the thread wakes up for further processing only after all of several events have arrived; similarly, multiple threads can synchronize on multiple events. Such a collection of events can be represented by a 32-bit unsigned integer variable in which each bit represents one event. A thread associates one or more events through a "logical AND" or "logical OR" to form an event combination. The "logical OR" of events is also called independent synchronization, meaning the thread synchronizes with any one of the events; the "logical AND" of events is also called associative synchronization, meaning the thread synchronizes with several events together.

The event set defined by RT-Thread has the following characteristics:

1) Events are only related to threads, and events are independent of each other: Each thread can have 32 event flags, which are recorded using a 32-bit unsigned integer, and each bit represents an event;

2) Events are only used for synchronization and do not provide data transmission function;

3) There is no queueing for events, that is, sending the same event to a thread multiple times (if the thread has not had time to read it) is equivalent to sending it only once.
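The third property follows from events being single bits: sending an event just ORs its bit into the set, so sending the same event repeatedly before it is received changes nothing. A one-line sketch in plain C (a hypothetical helper, not the RT-Thread API):

```c
/* sending an event sets its bit in the 32-bit event set; sending the
 * same event again before it is received is equivalent to sending once */
unsigned int toy_event_send(unsigned int set, unsigned int bits)
{
    return set | bits;
}
```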

In RT-Thread, each thread has an event information flag, which has three attributes: RT_EVENT_FLAG_AND (logical AND), RT_EVENT_FLAG_OR (logical OR), and RT_EVENT_FLAG_CLEAR (clear flag). When a thread waits for event synchronization, it can use 32 event flags and this event information flag to determine whether the currently received event meets the synchronization condition.

As shown in the figure above, the 1st and 30th bits of the event flag of thread #1 are set. If the event information flag is set to logical AND, it means that thread #1 will be triggered to wake up only after both event 1 and event 30 occur. If the event information flag is set to logical OR, any occurrence of event 1 or event 30 will trigger the wake-up of thread #1. If the information flag is also set to the clear flag bit, when thread #1 wakes up, it will actively clear event 1 and event 30 to zero, otherwise the event flag will still exist (that is, set to 1).
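The matching rule described above can be sketched as a small plain-C function. The TOY_FLAG_* names and toy_event_recv are illustrative stand-ins for RT_EVENT_FLAG_AND / RT_EVENT_FLAG_OR / RT_EVENT_FLAG_CLEAR and rt_event_recv, which additionally handle thread suspension and timeouts:

```c
#define TOY_FLAG_AND   0x01
#define TOY_FLAG_OR    0x02
#define TOY_FLAG_CLEAR 0x04

/* check whether the pending event set satisfies the wanted combination;
 * on success, optionally clear the matched bits (the CLEAR behavior) */
int toy_event_recv(unsigned int *set, unsigned int wanted, int option)
{
    int satisfied = (option & TOY_FLAG_AND)
                        ? ((*set & wanted) == wanted)  /* all events needed */
                        : ((*set & wanted) != 0);      /* any event suffices */
    if (!satisfied)
        return -1;                                     /* would block */
    if (option & TOY_FLAG_CLEAR)
        *set &= ~wanted;                               /* consume the events */
    return 0;
}
```

For thread #1 above, wanted would be (1u << 1) | (1u << 30): with AND both bits must be set before the thread wakes, while with OR either bit alone wakes it.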

In RT-Thread, the event set control block is the data structure the operating system uses to manage events; it is represented by the structure struct rt_event. The type rt_event_t represents an event set handle and is implemented in C as a pointer to the event set control block. The detailed definition of the event set control block structure is shown in the following code:

struct rt_event
{
    struct rt_ipc_object parent;    /* inherited from the ipc_object class */

    /* event set: each bit represents one event, and the bit value marks whether that event has occurred */
    rt_uint32_t set;
};
/* rt_event_t is the pointer type to the event structure */
typedef struct rt_event* rt_event_t;

The rt_event object is derived from rt_ipc_object and is managed by the IPC container.

The event set control block contains important parameters related to the event set, and plays an important role in the realization of the event set function. The event set related interface is shown in the figure below. The operations on an event set include: creating/initializing an event set, sending events, receiving events, and deleting/leaving an event set.

Creating and Deleting Event Sets

When creating an event set, the kernel first creates an event set control block, and then performs basic initialization on the event set control block. The following function interface is used to create an event set:

rt_event_t rt_event_create(const char* name, rt_uint8_t flag);

When calling this function interface, the system will allocate an event set object from the object manager, initialize this object, and then initialize the parent class IPC object. The following table describes the input parameters and return values ​​of this function:

| Parameter | Description |
|---|---|
| name | The name of the event set |
| flag | The flag of the event set; may be RT_IPC_FLAG_FIFO or RT_IPC_FLAG_PRIO |
| **Return** | —— |
| RT_NULL | Creation failed |
| Handle of the event object | Created successfully |

Note

Note: RT_IPC_FLAG_FIFO is a non-real-time scheduling policy. Use it only if the application genuinely needs first-come-first-served behavior and you clearly understand that all threads involved with this event set will effectively become non-real-time. Otherwise, RT_IPC_FLAG_PRIO is recommended, as it preserves the real-time behavior of the threads.

When the system no longer uses the event set object created by rt_event_create(), it releases system resources by deleting the event set object control block. You can use the following function interface to delete an event set:

rt_err_t rt_event_delete(rt_event_t event);

When calling the rt_event_delete function to delete an event set object, you should first ensure the event set is no longer in use. Before deletion, all threads suspended on the event set are woken up (each thread's return value is -RT_ERROR), and then the memory block occupied by the event set object is freed. The following table describes the input parameters and return values of this function:

| Parameter | Description |
|---|---|
| event | Handle of the event set object |
| **Return** | —— |
| RT_EOK | Success |

Initializing and Detaching Event Sets

The memory of the static event set object is allocated by the compiler when the system is compiled, and is generally placed in the read-write data segment or the uninitialized data segment. Before using the static event set object, it needs to be initialized. To initialize the event set, use the following function interface:

rt_err_t rt_event_init(rt_event_t event, const char* name, rt_uint8_t flag);

When calling this interface, you need to specify the handle of the static event set object (i.e. the pointer to the event set control block), and then the system will initialize the event set object and add it to the system object container for management. The following table describes the input parameters and return values ​​of this function:

| Parameter | Description |
|---|---|
| event | Handle of the event set object |
| name | The name of the event set |
| flag | The flag of the event set; may be RT_IPC_FLAG_FIFO or RT_IPC_FLAG_PRIO |
| **Return** | —— |
| RT_EOK | Success |

When the system no longer uses an event set object initialized by rt_event_init(), it releases the associated system resources by detaching the event set object control block, that is, removing the event set object from the kernel object manager. The following function interface detaches an event set:

rt_err_t rt_event_detach(rt_event_t event);

When the user calls this function, the system first wakes up all threads suspended on the event set's waiting queue (each thread's return value is -RT_ERROR), and then detaches the event set from the kernel object manager. The following table describes the input parameters and return values of this function:

| Parameter | Description |
|---|---|
| event | Handle of the event set object |
| **Return** | —— |
| RT_EOK | Succeeded |

Sending events

The event send function sends one or more events to an event set, with the following function interface:

```c
rt_err_t rt_event_send(rt_event_t event, rt_uint32_t set);
```

When this interface is used, the event flag value of the event set object is set according to the event flags specified by the set parameter. The list of threads waiting on the event set object is then traversed to determine whether any thread's activation condition matches the current event flag value; if so, that thread is woken up. The following table describes the input parameters and return value of this function:

| Parameter | Description |
|---|---|
| event | Handle of the event set object |
| set | Flag value(s) of the event(s) to send |
| **Return** | —— |
| RT_EOK | Succeeded |

Receiving Events

The kernel uses a 32-bit unsigned integer to identify an event set; each bit represents one event, so an event set object can wait on up to 32 events at the same time. The kernel decides how to activate a thread through the "logical AND" or "logical OR" option: with "logical AND", the thread is activated only when all the awaited events have occurred; with "logical OR", the thread is activated as soon as any one of the awaited events occurs. Events are received with the following function interface:

```c
rt_err_t rt_event_recv(rt_event_t event,
                       rt_uint32_t set,
                       rt_uint8_t option,
                       rt_int32_t timeout,
                       rt_uint32_t* recved);
```

When this interface is called, the system first determines, based on the set parameter and the receive option, whether the events to be received have already occurred. If they have, it checks whether RT_EVENT_FLAG_CLEAR is set in the option parameter to decide whether to reset the corresponding event flags, and then returns (with the received events reported through the recved parameter). If they have not occurred, the awaited set and option values are stored in the thread's own structure, and the thread is suspended on the event set until the awaited condition is satisfied or the wait exceeds the specified timeout. If timeout is set to zero and the awaited events do not meet the condition, the function does not wait but returns -RT_ETIMEOUT immediately. The following table describes the input parameters and return values of this function:

| Parameter | Description |
|---|---|
| event | Handle of the event set object |
| set | Event(s) the thread is interested in receiving |
| option | Receive option |
| timeout | Timeout value |
| recved | Pointer used to return the received event(s) |
| **Return** | —— |
| RT_EOK | Succeeded |
| -RT_ETIMEOUT | Timed out |
| -RT_ERROR | Error |

The possible values of option are:

```c
/* Choose to receive events with "logical OR" or "logical AND" */
RT_EVENT_FLAG_OR
RT_EVENT_FLAG_AND

/* Choose to clear (reset) the event flag bits after reception */
RT_EVENT_FLAG_CLEAR
```

The following is an application example of an event set. The example initializes one event set and two threads: one thread waits for the events it is interested in, and the other thread sends events, as shown in the following code:

Event set usage routine

```c
#include <rtthread.h>

#define THREAD_PRIORITY      9
#define THREAD_TIMESLICE     5

#define EVENT_FLAG3 (1 << 3)
#define EVENT_FLAG5 (1 << 5)

/* Event control block */
static struct rt_event event;

ALIGN(RT_ALIGN_SIZE)
static char thread1_stack[1024];
static struct rt_thread thread1;

/* Thread #1 entry function */
static void thread1_recv_event(void *param)
{
    rt_uint32_t e;

    /* First reception: either event 3 or event 5 can trigger thread #1;
     * clear the event flags after reception */
    if (rt_event_recv(&event, (EVENT_FLAG3 | EVENT_FLAG5),
                      RT_EVENT_FLAG_OR | RT_EVENT_FLAG_CLEAR,
                      RT_WAITING_FOREVER, &e) == RT_EOK)
    {
        rt_kprintf("thread1: OR recv event 0x%x\n", e);
    }

    rt_kprintf("thread1: delay 1s to prepare the second event\n");
    rt_thread_mdelay(1000);

    /* Second reception: thread #1 is triggered only when both event 3 and
     * event 5 have occurred; clear the event flags after reception */
    if (rt_event_recv(&event, (EVENT_FLAG3 | EVENT_FLAG5),
                      RT_EVENT_FLAG_AND | RT_EVENT_FLAG_CLEAR,
                      RT_WAITING_FOREVER, &e) == RT_EOK)
    {
        rt_kprintf("thread1: AND recv event 0x%x\n", e);
    }
    /* Detach the event set when done; otherwise re-running the sample
     * would initialize an event set that is already registered */
    rt_event_detach(&event);
    rt_kprintf("thread1 leave.\n");
}


ALIGN(RT_ALIGN_SIZE)
static char thread2_stack[1024];
static struct rt_thread thread2;

/* Thread #2 entry */
static void thread2_send_event(void *param)
{
    rt_kprintf("thread2: send event3\n");
    rt_event_send(&event, EVENT_FLAG3);
    rt_thread_mdelay(200);

    rt_kprintf("thread2: send event5\n");
    rt_event_send(&event, EVENT_FLAG5);
    rt_thread_mdelay(200);

    rt_kprintf("thread2: send event3\n");
    rt_event_send(&event, EVENT_FLAG3);
    rt_kprintf("thread2 leave.\n");
}

int event_sample(void)
{
    rt_err_t result;

    /* Initialize the event set object */
    result = rt_event_init(&event, "event", RT_IPC_FLAG_PRIO);
    if (result != RT_EOK)
    {
        rt_kprintf("init event failed.\n");
        return -1;
    }

    rt_thread_init(&thread1,
                   "thread1",
                   thread1_recv_event,
                   RT_NULL,
                   &thread1_stack[0],
                   sizeof(thread1_stack),
                   THREAD_PRIORITY - 1, THREAD_TIMESLICE);
    rt_thread_startup(&thread1);

    rt_thread_init(&thread2,
                   "thread2",
                   thread2_send_event,
                   RT_NULL,
                   &thread2_stack[0],
                   sizeof(thread2_stack),
                   THREAD_PRIORITY, THREAD_TIMESLICE);
    rt_thread_startup(&thread2);

    return 0;
}

/* Export to the msh command list */
MSH_CMD_EXPORT(event_sample, event sample);
```

The simulation results are as follows:

```
 \ | /
- RT -     Thread Operating System
 / | \     4.1.1 build Sep  5 2024 15:53:21
 2006 - 2022 Copyright by RT-Thread team
msh >event_sample
thread2: send event3
thread1: OR recv event 0x8
thread1: delay 1s to prepare the second event
msh >thread2: send event5
thread2: send event3
thread2 leave.
thread1: AND recv event 0x28
thread1 leave.
```

The example demonstrates how to use an event set: thread #1 receives events twice, first with "logical OR" and then with "logical AND".

Event sets can be used in many situations. To some extent they can replace semaphores for synchronization between threads: a thread or an interrupt service routine sends an event to the event set object, and the waiting thread is woken up to process the corresponding event. Unlike semaphores, however, sending an event is not cumulative before the event is cleared, whereas semaphore releases are cumulative. Another feature of events is that one receiving thread can wait on multiple events, i.e., multiple events can correspond to one thread or to multiple threads, and depending on the parameters the thread waits with, the trigger can use either "logical OR" or "logical AND". Semaphores offer neither feature: a semaphore can only recognize a single release action and cannot wait for several kinds of release at the same time. The following figure shows a schematic diagram of multi-event reception:

An event set contains 32 events, and a given thread waits for and receives only the events it is interested in. One thread can wait for several events to arrive (threads #1 and #2 both wait for multiple events, which may trigger the thread using "AND" or "OR" logic), and several threads can wait for the same event (event 25). When an event a thread is interested in occurs, the thread is woken up and performs its subsequent processing.


Assoc. Prof. Wiroon Sriborrirux, Founder of Advance Innovation Center (AIC) and Bangsaen Design House (BDH), Electrical Engineering Department, Faculty of Engineering, Burapha University