9 changes: 9 additions & 0 deletions include/sys/task.h
@@ -288,6 +288,15 @@ uint64_t mo_uptime(void);
*/
void _sched_block(queue_t *wait_q);

/* Atomically blocks the current task and invokes the scheduler.
 *
 * This internal kernel primitive is the mutex counterpart of _sched_block():
 * it adds the current task to a list_t wait list (rather than a queue_t wait
 * queue), marks it blocked, and yields.
 *
 * @waiters : The wait list to which the current task will be added
 */
void mutex_block_atomic(list_t *waiters);
Contributor:
Is it necessary to expose this function?

Collaborator (Author):
The transition of a task into the blocked state, whether triggered by a semaphore wait or a mutex lock, is conceptually the same.

In the semaphore path, mo_sem_wait() uses the internal API _sched_block() declared in task.h. The mutex path, however, currently uses its own file-local helper, mutex_block_atomic().

My intention is to align both task-blocking paths so that semaphore and mutex follow a consistent blocking workflow. However, _sched_block() and mutex_block_atomic() differ slightly in their input parameters, and the mutex path does not need to process deferred tasks, so I cannot directly replace the mutex helper with _sched_block() even though their high-level functionality (blocking the current task) is largely the same.

Because of that, I moved mutex_block_atomic() into task.c so that both the semaphore and mutex blocking primitives live in the same task state transition module, keeping the call paths consistent.
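
A minimal sketch of one possible unification, assuming a caller-supplied enqueue callback; enqueue_fn_t and sched_block_generic are hypothetical names, not part of this PR:

/* Hypothetical: one blocking primitive shared by the semaphore (queue_t)
 * and mutex (list_t) paths, parameterized over the enqueue step. */
typedef bool (*enqueue_fn_t)(void *wait_obj, tcb_t *task);

static void sched_block_generic(void *wait_obj, enqueue_fn_t enqueue)
{
    if (unlikely(!wait_obj || !kcb || !kcb->task_current ||
                 !kcb->task_current->data))
        panic(ERR_SEM_OPERATION);

    tcb_t *self = kcb->task_current->data;

    /* Add the current task to the caller's wait container */
    if (unlikely(!enqueue(wait_obj, self)))
        panic(ERR_SEM_OPERATION);

    /* Block and yield atomically; NOSCHED is released on context switch */
    self->state = TASK_BLOCKED;
    _yield();
}

Each caller would then pass a thin adapter that pushes onto its own container type, which keeps the state transition itself in exactly one place.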

Collaborator:

Hi, regarding the consistency between _sched_block() and mutex_block_atomic():

I noticed that Semaphore operates on a fixed-size queue_t (Ring Buffer), whereas Mutex uses a list_t (Linked List).

Since the underlying data structures are fundamentally different, I wonder if moving the function achieves architectural consistency, or if it just collocates two distinct implementations? I thought this distinction might be worth considering.

Collaborator (Author):

> Hi, regarding the consistency between _sched_block() and mutex_block_atomic():
>
> I noticed that Semaphore operates on a fixed-size queue_t (Ring Buffer), whereas Mutex uses a list_t (Linked List).
>
> Since the underlying data structures are fundamentally different, I wonder if moving the function achieves architectural consistency, or if it just collocates two distinct implementations? I thought this distinction might be worth considering.

Hi @HeatCrab,

I’m still a bit unclear about the intended role of _sched_block().

Currently, _sched_block() is declared in task.h and used by the semaphore path to atomically block the current task. Mutex, however, uses its own internal helper to block the current task instead of calling _sched_block().

According to the comment in task.h:

/* Atomically blocks the current task and invokes the scheduler.
 *
 * This internal kernel primitive is the basis for all blocking operations. It
 * must be called from within a NOSCHED_ENTER critical section. It enqueues the
 * current task, sets its state to blocked, and calls the scheduler. The
 * scheduler lock is NOT released by this function; it is released implicitly
 * when the kernel context switches to a new task.
 *
 * @wait_q : The wait queue to which the current task will be added
 */

I'm trying to figure out why the blocking process in mutex uses an internal helper function rather than this _sched_block() API. I think having different blocking paths makes the task state transitions more complex.

But the difficulty I'm encountering now is, as you say, that _sched_block() operates on a different data structure from the one the mutex blocking code operates on.
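
For reference, the two primitives as they stand in this PR differ only in the wait-container type (both signatures taken from this diff):

void _sched_block(queue_t *wait_q);        /* semaphore path: fixed-size ring buffer */
void mutex_block_atomic(list_t *waiters);  /* mutex path: linked list */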


/* Application Entry Point */

/* The main entry point for the user application.
18 changes: 0 additions & 18 deletions kernel/mutex.c
@@ -64,24 +64,6 @@ static bool remove_self_from_waiters(list_t *waiters)
return false;
}

/* Atomic block operation with enhanced error checking */
static void mutex_block_atomic(list_t *waiters)
{
if (unlikely(!waiters || !kcb || !kcb->task_current ||
!kcb->task_current->data))
panic(ERR_SEM_OPERATION);

tcb_t *self = kcb->task_current->data;

/* Add to waiters list */
if (unlikely(!list_pushback(waiters, self)))
panic(ERR_SEM_OPERATION);

/* Block and yield atomically */
self->state = TASK_BLOCKED;
_yield(); /* This releases NOSCHED when we context switch */
}

int32_t mo_mutex_init(mutex_t *m)
{
if (unlikely(!m))
19 changes: 19 additions & 0 deletions kernel/task.c
@@ -1041,3 +1041,22 @@ void _sched_block(queue_t *wait_q)
self->state = TASK_BLOCKED;
_yield();
}

/* Atomic block operation with enhanced error checking, mainly used in mutex
* operations */
void mutex_block_atomic(list_t *waiters)
{
if (unlikely(!waiters || !kcb || !kcb->task_current ||
!kcb->task_current->data))
panic(ERR_SEM_OPERATION);

tcb_t *self = kcb->task_current->data;

/* Add to waiters list */
if (unlikely(!list_pushback(waiters, self)))
panic(ERR_SEM_OPERATION);

/* Block and yield atomically */
self->state = TASK_BLOCKED;
_yield(); /* This releases NOSCHED when we context switch */
}
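
For context, a minimal sketch of how a mutex lock slow path might call this primitive; mo_mutex_lock's actual body is not part of this diff, and the mutex_t field names (owner, waiters) and NOSCHED_LEAVE() are assumptions for illustration:

/* Illustrative only: assumed slow-path call site inside mo_mutex_lock(). */
NOSCHED_ENTER();
if (m->owner) {
    /* Contended: enqueue ourselves and yield. NOSCHED is released
     * implicitly when the kernel switches to another task; the unlock
     * path wakes this task when the mutex is handed over. */
    mutex_block_atomic(&m->waiters);
} else {
    /* Uncontended: take ownership and exit the critical section. */
    m->owner = kcb->task_current->data;
    NOSCHED_LEAVE();
}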