mirror of https://github.com/0xAX/linux-insides
synced 2024-11-17 15:29:56 +00:00
fix broken links related with 'sync, syscall, timer'
This commit is contained in:
parent 3e07f7cf28
commit fdbf6857a1
@ -76,7 +76,7 @@ In the first case for the `blocking notifier chains`, callbacks will be called/e
The second type, `SRCU notifier chains`, represents an alternative form of the `blocking notifier chains`. In the first case, blocking notifier chains use the `rw_semaphore` synchronization primitive to protect the chain links. `SRCU` notifier chains run in process context too, but use a special form of the [RCU](https://en.wikipedia.org/wiki/Read-copy-update) mechanism which permits blocking in a read-side critical section.

-In the third case for the `atomic notifier chains` runs in interrupt or atomic context and protected by [spinlock](https://0xax.gitbooks.io/linux-insides/content/SyncPrim/sync-1.html) synchronization primitive. The last `raw notifier chains` provides special type of notifier chains without any locking restrictions on callbacks. This means that protection rests on the shoulders of caller side. It is very useful when we want to protect our chain with very specific locking mechanism.
+In the third case for the `atomic notifier chains` runs in interrupt or atomic context and protected by [spinlock](https://0xax.gitbooks.io/linux-insides/content/SyncPrim/linux-sync-1.html) synchronization primitive. The last `raw notifier chains` provides special type of notifier chains without any locking restrictions on callbacks. This means that protection rests on the shoulders of caller side. It is very useful when we want to protect our chain with very specific locking mechanism.

If we look at the implementation of the `notifier_block` structure, we will see that it contains a pointer to the `next` element of a notification chain list, but no head. Actually, the head of such a list is kept in a separate structure which depends on the type of the notification chain. For example, for the `blocking notifier chains`:
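For reference, the head structure for blocking notifier chains lives in [include/linux/notifier.h](https://github.com/torvalds/linux/blob/master/include/linux/notifier.h); as far as I recall it looks roughly like the sketch below, so treat the exact field layout as an approximation rather than the patched source:

```C
struct blocking_notifier_head {
	struct rw_semaphore	rwsem;		/* protects modifications of the chain */
	struct notifier_block	__rcu *head;	/* first notifier_block in the chain */
};
```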
@ -118,7 +118,7 @@ which defines head for loadable modules blocking notifier chain. The `BLOCKING_N
} while (0)
```

-So we may see that it takes name of a name of a head of a blocking notifier chain and initializes read/write [semaphore](https://0xax.gitbooks.io/linux-insides/content/SyncPrim/sync-3.html) and set head to `NULL`. Besides the `BLOCKING_INIT_NOTIFIER_HEAD` macro, the Linux kernel additionally provides `ATOMIC_INIT_NOTIFIER_HEAD`, `RAW_INIT_NOTIFIER_HEAD` macros and `srcu_init_notifier` function for initialization atomic and other types of notification chains.
+So we may see that it takes name of a name of a head of a blocking notifier chain and initializes read/write [semaphore](https://0xax.gitbooks.io/linux-insides/content/SyncPrim/linux-sync-3.html) and set head to `NULL`. Besides the `BLOCKING_INIT_NOTIFIER_HEAD` macro, the Linux kernel additionally provides `ATOMIC_INIT_NOTIFIER_HEAD`, `RAW_INIT_NOTIFIER_HEAD` macros and `srcu_init_notifier` function for initialization atomic and other types of notification chains.
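As an illustration (not part of this commit), a subsystem would typically declare and initialize such a head with something like the following sketch; `my_chain`, `my_head` and `my_subsystem_init` are made-up names:

```C
/* Static declaration and initialization in one step. */
static BLOCKING_NOTIFIER_HEAD(my_chain);

/* Or, for a head embedded elsewhere, initialization at run time. */
static struct blocking_notifier_head my_head;

static int __init my_subsystem_init(void)
{
	BLOCKING_INIT_NOTIFIER_HEAD(&my_head);
	return 0;
}
```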
After the head of a notification chain is initialized, a subsystem which wants to receive notifications from the given notification chain should register with a certain function which depends on the type of the notification chain. If you look in the [include/linux/notifier.h](https://github.com/torvalds/linux/blob/master/include/linux/notifier.h) header file, you will see the following four functions for this:
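In current mainline these registration helpers look roughly like the following (written from memory, so the pinned revision may differ in minor details):

```C
extern int atomic_notifier_chain_register(struct atomic_notifier_head *nh,
					  struct notifier_block *nb);
extern int blocking_notifier_chain_register(struct blocking_notifier_head *nh,
					     struct notifier_block *nb);
extern int raw_notifier_chain_register(struct raw_notifier_head *nh,
				       struct notifier_block *nb);
extern int srcu_notifier_chain_register(struct srcu_notifier_head *nh,
					struct notifier_block *nb);
```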
@ -331,7 +331,7 @@ static struct notifier_block tracepoint_module_nb = {
};
```

-When one of the `MODULE_STATE_LIVE`, `MODULE_STATE_COMING` or `MODULE_STATE_GOING` events occurred. For example the `MODULE_STATE_LIVE` the `MODULE_STATE_COMING` notifications will be sent during execution of the [init_module](http://man7.org/linux/man-pages/man2/init_module.2.html) [system call](https://0xax.gitbooks.io/linux-insides/content/SysCall/syscall-1.html). Or for example `MODULE_STATE_GOING` will be sent during execution of the [delete_module](http://man7.org/linux/man-pages/man2/delete_module.2.html) `system call`:
+When one of the `MODULE_STATE_LIVE`, `MODULE_STATE_COMING` or `MODULE_STATE_GOING` events occurred. For example the `MODULE_STATE_LIVE` the `MODULE_STATE_COMING` notifications will be sent during execution of the [init_module](http://man7.org/linux/man-pages/man2/init_module.2.html) [system call](https://0xax.gitbooks.io/linux-insides/content/SysCall/linux-syscall-1.html). Or for example `MODULE_STATE_GOING` will be sent during execution of the [delete_module](http://man7.org/linux/man-pages/man2/delete_module.2.html) `system call`:

```C
SYSCALL_DEFINE2(delete_module, const char __user *, name_user,
@ -359,11 +359,11 @@ Links
* [API](https://en.wikipedia.org/wiki/Application_programming_interface)
* [callback](https://en.wikipedia.org/wiki/Callback_(computer_programming))
* [RCU](https://en.wikipedia.org/wiki/Read-copy-update)
-* [spinlock](https://0xax.gitbooks.io/linux-insides/content/SyncPrim/sync-1.html)
+* [spinlock](https://0xax.gitbooks.io/linux-insides/content/SyncPrim/linux-sync-1.html)
* [loadable modules](https://en.wikipedia.org/wiki/Loadable_kernel_module)
-* [semaphore](https://0xax.gitbooks.io/linux-insides/content/SyncPrim/sync-3.html)
+* [semaphore](https://0xax.gitbooks.io/linux-insides/content/SyncPrim/linux-sync-3.html)
* [tracepoints](https://www.kernel.org/doc/Documentation/trace/tracepoints.txt)
-* [system call](https://0xax.gitbooks.io/linux-insides/content/SysCall/syscall-1.html)
+* [system call](https://0xax.gitbooks.io/linux-insides/content/SysCall/linux-syscall-1.html)
* [init_module system call](http://man7.org/linux/man-pages/man2/init_module.2.html)
* [delete_module](http://man7.org/linux/man-pages/man2/delete_module.2.html)
* [previous part](https://0xax.gitbooks.io/linux-insides/content/Concepts/initcall.html)
@ -6,7 +6,7 @@ Introduction
Despite the fact that [linux-insides](https://www.gitbook.com/book/0xax/linux-insides/details) describes mostly Linux kernel related stuff, I have decided to write this part, which is mostly related to userspace.

-There is already fourth [part](https://0xax.gitbooks.io/linux-insides/content/SysCall/syscall-4.html) of [System calls](https://en.wikipedia.org/wiki/System_call) chapter which describes what does the Linux kernel do when we want to start a program. In this part I want to explore what happens when we run a program on a Linux machine from userspace perspective.
+There is already fourth [part](https://0xax.gitbooks.io/linux-insides/content/SysCall/linux-syscall-4.html) of [System calls](https://en.wikipedia.org/wiki/System_call) chapter which describes what does the Linux kernel do when we want to start a program. In this part I want to explore what happens when we run a program on a Linux machine from userspace perspective.

I don't know about you, but at my university I learned that a `C` program starts executing from the function called `main`. And that's partly true. Whenever we start to write a new program, we start it with the following lines of code:
@ -123,7 +123,7 @@ Ok, everything looks pretty good up to now. You may already know that there is a
> The exec() family of functions replaces the current process image with a new process image.

-All the `exec*` functions are simple frontends to the [execve](http://man7.org/linux/man-pages/man2/execve.2.html) system call. If you have read the fourth [part](https://0xax.gitbooks.io/linux-insides/content/SysCall/syscall-4.html) of the chapter which describes [system calls](https://en.wikipedia.org/wiki/System_call), you may know that the [execve](http://linux.die.net/man/2/execve) system call is defined in the [files/exec.c](https://github.com/torvalds/linux/blob/08e4e0d0456d0ca8427b2d1ddffa30f1c3e774d7/fs/exec.c#L1888) source code file and looks like:
+All the `exec*` functions are simple frontends to the [execve](http://man7.org/linux/man-pages/man2/execve.2.html) system call. If you have read the fourth [part](https://0xax.gitbooks.io/linux-insides/content/SysCall/linux-syscall-4.html) of the chapter which describes [system calls](https://en.wikipedia.org/wiki/System_call), you may know that the [execve](http://linux.die.net/man/2/execve) system call is defined in the [files/exec.c](https://github.com/torvalds/linux/blob/08e4e0d0456d0ca8427b2d1ddffa30f1c3e774d7/fs/exec.c#L1888) source code file and looks like:

```C
SYSCALL_DEFINE3(execve,
@ -135,7 +135,7 @@ SYSCALL_DEFINE3(execve,
}
```

-It takes an executable file name, set of command line arguments, and set of enviroment variables. As you may guess, everything is done by the `do_execve` function. I will not describe the implementation of the `do_execve` function in detail because you can read about this in [here](https://0xax.gitbooks.io/linux-insides/content/SysCall/syscall-4.html). But in short words, the `do_execve` function does many checks like `filename` is valid, limit of launched processes is not exceed in our system and etc. After all of these checks, this function parses our executable file which is represented in [ELF](https://en.wikipedia.org/wiki/Executable_and_Linkable_Format) format, creates memory descriptor for newly executed executable file and fills it with the appropriate values like area for the stack, heap and etc. When the setup of new binary image is done, the `start_thread` function will set up one new process. This function is architecture-specific and for the [x86_64](https://en.wikipedia.org/wiki/X86-64) architecture, its definition will be located in the [arch/x86/kernel/process_64.c](https://github.com/torvalds/linux/blob/08e4e0d0456d0ca8427b2d1ddffa30f1c3e774d7/arch/x86/kernel/process_64.c#L239) source code file.
+It takes an executable file name, set of command line arguments, and set of enviroment variables. As you may guess, everything is done by the `do_execve` function. I will not describe the implementation of the `do_execve` function in detail because you can read about this in [here](https://0xax.gitbooks.io/linux-insides/content/SysCall/linux-syscall-4.html). But in short words, the `do_execve` function does many checks like `filename` is valid, limit of launched processes is not exceed in our system and etc. After all of these checks, this function parses our executable file which is represented in [ELF](https://en.wikipedia.org/wiki/Executable_and_Linkable_Format) format, creates memory descriptor for newly executed executable file and fills it with the appropriate values like area for the stack, heap and etc. When the setup of new binary image is done, the `start_thread` function will set up one new process. This function is architecture-specific and for the [x86_64](https://en.wikipedia.org/wiki/X86-64) architecture, its definition will be located in the [arch/x86/kernel/process_64.c](https://github.com/torvalds/linux/blob/08e4e0d0456d0ca8427b2d1ddffa30f1c3e774d7/arch/x86/kernel/process_64.c#L239) source code file.

The `start_thread` function sets a new value to the [segment registers](https://en.wikipedia.org/wiki/X86_memory_segmentation) and the program execution address. From this point, our new process is ready to start. Once the [context switch](https://en.wikipedia.org/wiki/Context_switch) is done, control returns to userspace with the new values of registers and the new executable starts to execute.
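From the userspace side, all of this is triggered by a single library call. A minimal illustration (not part of the patched files) might look like this:

```C
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char *argv[] = { "ls", "-l", NULL };
	char *envp[] = { NULL };

	/* On success execve() does not return: the current process
	 * image is replaced by /bin/ls. */
	execve("/bin/ls", argv, envp);

	/* We only get here if execve() failed. */
	perror("execve");
	return 1;
}
```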
@ -22,7 +22,7 @@ clocksource_select();
mutex_unlock(&clocksource_mutex);
```

-from the [kernel/time/clocksource.c](https://github.com/torvalds/linux/master/kernel/time/clocksource.c) source code file. This code is from the `__clocksource_register_scale` function which adds the given [clocksource](https://0xax.gitbooks.io/linux-insides/content/Timers/timers-2.html) to the clock sources list. This function produces different operations on a list with registered clock sources. For example, the `clocksource_enqueue` function adds the given clock source to the list with registered clocksources - `clocksource_list`. Note that these lines of code wrapped to two functions: `mutex_lock` and `mutex_unlock` which takes one parameter - the `clocksource_mutex` in our case.
+from the [kernel/time/clocksource.c](https://github.com/torvalds/linux/master/kernel/time/clocksource.c) source code file. This code is from the `__clocksource_register_scale` function which adds the given [clocksource](https://0xax.gitbooks.io/linux-insides/content/Timers/linux-timers-2.html) to the clock sources list. This function produces different operations on a list with registered clock sources. For example, the `clocksource_enqueue` function adds the given clock source to the list with registered clocksources - `clocksource_list`. Note that these lines of code wrapped to two functions: `mutex_lock` and `mutex_unlock` which takes one parameter - the `clocksource_mutex` in our case.
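The same lock/unlock pattern can be sketched in a generic form; the names here (`my_list_mutex`, `my_list`, `my_item`, `my_enqueue`) are invented for illustration and are not from the kernel sources:

```C
static DEFINE_MUTEX(my_list_mutex);
static LIST_HEAD(my_list);

struct my_item {
	struct list_head node;
};

static void my_enqueue(struct my_item *item)
{
	mutex_lock(&my_list_mutex);		/* only one task may modify the list */
	list_add_tail(&item->node, &my_list);
	mutex_unlock(&my_list_mutex);		/* let other writers proceed */
}
```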
These functions represent locking and unlocking based on the [mutex](https://en.wikipedia.org/wiki/Mutual_exclusion) synchronization primitive. Once `mutex_lock` is executed, it prevents the situation when two or more threads execute this code while `mutex_unlock` has not yet been executed by the process owning the mutex. In other words, we prevent parallel operations on the `clocksource_list`. Why do we need a `mutex` here? What if two parallel processes try to register a clock source? As we already know, the `clocksource_enqueue` function adds the given clock source to the `clocksource_list` list right after a clock source in the list which has the biggest rating (a registered clock source which has the highest frequency in the system):
@ -417,7 +417,7 @@ Links
* [Concurrent computing](https://en.wikipedia.org/wiki/Concurrent_computing)
* [Synchronization](https://en.wikipedia.org/wiki/Synchronization_%28computer_science%29)
-* [Clocksource framework](https://0xax.gitbooks.io/linux-insides/content/Timers/timers-2.html)
+* [Clocksource framework](https://0xax.gitbooks.io/linux-insides/content/Timers/linux-timers-2.html)
* [Mutex](https://en.wikipedia.org/wiki/Mutual_exclusion)
* [Race condition](https://en.wikipedia.org/wiki/Race_condition)
* [Atomic operations](https://en.wikipedia.org/wiki/Linearizability)
@ -4,9 +4,9 @@ Synchronization primitives in the Linux kernel. Part 2.
Queued Spinlocks
--------------------------------------------------------------------------------

-This is the second part of the [chapter](https://proninyaroslav.gitbooks.io/linux-insides-ru/content/SyncPrim/index.html) which describes synchronization primitives in the Linux kernel and in the first [part](https://proninyaroslav.gitbooks.io/linux-insides-ru/content/SyncPrim/sync-1.html) of this chapter we met the first - [spinlock](https://en.wikipedia.org/wiki/Spinlock). We will continue to learn this synchronization primitive in this part. If you have read the previous part, you may remember that besides normal spinlocks, the Linux kernel provides special type of `spinlocks` - `queued spinlocks`. In this part we will try to understand what does this concept represent.
+This is the second part of the [chapter](https://0xax.gitbooks.io/linux-insides/content/SyncPrim/index.html) which describes synchronization primitives in the Linux kernel and in the first [part](https://0xax.gitbooks.io/linux-insides/content/SyncPrim/linux-sync-1.html) of this chapter we met the first - [spinlock](https://en.wikipedia.org/wiki/Spinlock). We will continue to learn this synchronization primitive in this part. If you have read the previous part, you may remember that besides normal spinlocks, the Linux kernel provides special type of `spinlocks` - `queued spinlocks`. In this part we will try to understand what does this concept represent.

-We saw [API](https://en.wikipedia.org/wiki/Application_programming_interface) of `spinlock` in the previous [part](https://proninyaroslav.gitbooks.io/linux-insides-ru/content/SyncPrim/sync-1.html):
+We saw [API](https://en.wikipedia.org/wiki/Application_programming_interface) of `spinlock` in the previous [part](https://0xax.gitbooks.io/linux-insides/content/SyncPrim/linux-sync-1.html):

* `spin_lock_init` - produces initialization of the given `spinlock`;
* `spin_lock` - acquires given `spinlock`;
@ -97,7 +97,7 @@ int unlock(lock)
The first thread will execute `test_and_set`, which will set the `lock` to `1`. When the second thread calls the `lock` function, it will spin in the `while` loop until the first thread calls the `unlock` function and the `lock` becomes `0`. This implementation is not very good for performance, because it has at least two problems. The first problem is that this implementation may be unfair: a thread from one processor may have a long waiting time, even if it called `lock` before other threads which are also waiting for the free lock. The second problem is that all threads which want to acquire a lock must execute many `atomic` operations like `test_and_set` on a variable in shared memory. This leads to cache invalidation, as the cache of a processor will store `lock=1`, while the value of the `lock` in memory may no longer be `1` after a thread releases this lock.
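Written with compiler atomics instead of pseudocode, such a naive test-and-set spinlock could be sketched like this (a userspace-style illustration with GCC builtins, not the kernel implementation):

```C
/* Naive test-and-set spinlock: every waiter hammers the same shared variable. */
static int lock_var;

static void naive_lock(void)
{
	/* Atomically swap in 1; the previous value tells us whether it was free. */
	while (__atomic_exchange_n(&lock_var, 1, __ATOMIC_ACQUIRE))
		;	/* spin until we observe that it was 0 */
}

static void naive_unlock(void)
{
	__atomic_store_n(&lock_var, 0, __ATOMIC_RELEASE);
}
```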
-In the previous [part](https://proninyaroslav.gitbooks.io/linux-insides-ru/content/SyncPrim/sync-1.html) we saw the second type of spinlock implementation - `ticket spinlock`. This approach solves the first problem and may guarantee order of threads which want to acquire a lock, but still has a second problem.
+In the previous [part](https://0xax.gitbooks.io/linux-insides/content/SyncPrim/linux-sync-1.html) we saw the second type of spinlock implementation - `ticket spinlock`. This approach solves the first problem and may guarantee order of threads which want to acquire a lock, but still has a second problem.
The topic of this part is `queued spinlocks`. This approach may help to solve both of these problems. `Queued spinlocks` allow each processor to use its own memory location to spin on. The basic principle of a queue-based spinlock can best be understood by studying a classic queue-based spinlock implementation called the [MCS](http://www.cs.rochester.edu/~scott/papers/1991_TOCS_synch.pdf) lock. Before we look at the implementation of `queued spinlocks` in the Linux kernel, we will try to understand what an `MCS` lock is.
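To make the "spin on your own memory location" idea concrete, here is a simplified MCS lock sketch using GCC atomic builtins; it illustrates the classic algorithm and is not the kernel's qspinlock code:

```C
struct mcs_node {
	struct mcs_node *next;
	int locked;
};

static void mcs_lock(struct mcs_node **tail, struct mcs_node *node)
{
	struct mcs_node *prev;

	node->next = NULL;
	node->locked = 0;

	/* Put ourselves at the tail of the waiters queue. */
	prev = __atomic_exchange_n(tail, node, __ATOMIC_ACQ_REL);
	if (prev) {
		prev->next = node;
		/* Spin only on our own node, not on a shared variable. */
		while (!__atomic_load_n(&node->locked, __ATOMIC_ACQUIRE))
			;
	}
}

static void mcs_unlock(struct mcs_node **tail, struct mcs_node *node)
{
	struct mcs_node *expected = node;

	if (!node->next) {
		/* No known successor: try to reset the tail to "unlocked". */
		if (__atomic_compare_exchange_n(tail, &expected, NULL, 0,
						__ATOMIC_RELEASE, __ATOMIC_RELAXED))
			return;
		/* A successor is linking itself in; wait for it to appear. */
		while (!__atomic_load_n(&node->next, __ATOMIC_ACQUIRE))
			;
	}
	/* Hand the lock to the next waiter by flipping its private flag. */
	__atomic_store_n(&node->next->locked, 1, __ATOMIC_RELEASE);
}
```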
@ -462,7 +462,7 @@ That's all.
Conclusion
--------------------------------------------------------------------------------

-This is the end of the second part of the [synchronization primitives](https://en.wikipedia.org/wiki/Synchronization_%28computer_science%29) chapter in the Linux kernel. In the previous [part](https://proninyaroslav.gitbooks.io/linux-insides-ru/content/SyncPrim/sync-1.html) we already met the first synchronization primitive `spinlock` provided by the Linux kernel which is implemented as `ticket spinlock`. In this part we saw another implementation of the `spinlock` mechanism - `queued spinlock`. In the next part we will continue to dive into synchronization primitives in the Linux kernel.
+This is the end of the second part of the [synchronization primitives](https://en.wikipedia.org/wiki/Synchronization_%28computer_science%29) chapter in the Linux kernel. In the previous [part](https://0xax.gitbooks.io/linux-insides/content/SyncPrim/linux-sync-1.html) we already met the first synchronization primitive `spinlock` provided by the Linux kernel which is implemented as `ticket spinlock`. In this part we saw another implementation of the `spinlock` mechanism - `queued spinlock`. In the next part we will continue to dive into synchronization primitives in the Linux kernel.

If you have questions or suggestions, feel free to ping me on Twitter [0xAX](https://twitter.com/0xAX), drop me an [email](anotherworldofworld@gmail.com) or just create an [issue](https://github.com/0xAX/linux-insides/issues/new).
@ -484,4 +484,4 @@ Links
* [NOP instruction](https://en.wikipedia.org/wiki/NOP)
* [PREFETCHW instruction](http://www.felixcloutier.com/x86/PREFETCHW.html)
* [x86_64](https://en.wikipedia.org/wiki/X86-64)
-* [Previous part](https://proninyaroslav.gitbooks.io/linux-insides-ru/content/SyncPrim/sync-1.html)
+* [Previous part](https://0xax.gitbooks.io/linux-insides/content/SyncPrim/linux-sync-1.html)
@ -4,7 +4,7 @@ Synchronization primitives in the Linux kernel. Part 3.
Semaphores
--------------------------------------------------------------------------------

-This is the third part of the [chapter](https://proninyaroslav.gitbooks.io/linux-insides-ru/content/SyncPrim/index.html) which describes synchronization primitives in the Linux kernel and in the previous part we saw special type of [spinlocks](https://en.wikipedia.org/wiki/Spinlock) - `queued spinlocks`. The previous [part](https://proninyaroslav.gitbooks.io/linux-insides-ru/content/SyncPrim/sync-2.html) was the last part which describes `spinlocks` related stuff. So we need to go ahead.
+This is the third part of the [chapter](https://0xax.gitbooks.io/linux-insides/content/SyncPrim/index.html) which describes synchronization primitives in the Linux kernel and in the previous part we saw special type of [spinlocks](https://en.wikipedia.org/wiki/Spinlock) - `queued spinlocks`. The previous [part](https://0xax.gitbooks.io/linux-insides/content/SyncPrim/linux-sync-2.html) was the last part which describes `spinlocks` related stuff. So we need to go ahead.

The next [synchronization primitive](https://en.wikipedia.org/wiki/Synchronization_%28computer_science%29) after the `spinlock` which we will see in this part is the [semaphore](https://en.wikipedia.org/wiki/Semaphore_%28programming%29). We will start from the theoretical side and learn what a `semaphore` is, and only after that we will see how it is implemented in the Linux kernel, as we did in the previous part.
@ -70,7 +70,7 @@ as we may see, the `DEFINE_SEMAPHORE` macro provides ability to initialize only
}
```

-The `__SEMAPHORE_INITIALIZER` macro takes the name of the future `semaphore` structure and does initialization of the fields of this structure. First of all we initialize a `spinlock` of the given `semaphore` with the `__RAW_SPIN_LOCK_UNLOCKED` macro. As you may remember from the [previous](https://proninyaroslav.gitbooks.io/linux-insides-ru/content/SyncPrim/sync-1.html) parts, the `__RAW_SPIN_LOCK_UNLOCKED` is defined in the [include/linux/spinlock_types.h](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/include/linux/spinlock_types.h) header file and expands to the `__ARCH_SPIN_LOCK_UNLOCKED` macro which just expands to zero or unlocked state:
+The `__SEMAPHORE_INITIALIZER` macro takes the name of the future `semaphore` structure and does initialization of the fields of this structure. First of all we initialize a `spinlock` of the given `semaphore` with the `__RAW_SPIN_LOCK_UNLOCKED` macro. As you may remember from the [previous](https://0xax.gitbooks.io/linux-insides/content/SyncPrim/linux-sync-1.html) parts, the `__RAW_SPIN_LOCK_UNLOCKED` is defined in the [include/linux/spinlock_types.h](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/include/linux/spinlock_types.h) header file and expands to the `__ARCH_SPIN_LOCK_UNLOCKED` macro which just expands to zero or unlocked state:

```C
#define __ARCH_SPIN_LOCK_UNLOCKED { { 0 } }
@ -106,7 +106,7 @@ The first two functions: `down` and `up` are for acquiring and releasing of the
The `down_killable` function does the same as the `down_interruptible` function, but sets the `TASK_KILLABLE` flag for the current process. This means that the waiting process may be interrupted by a kill signal.

-The `down_trylock` function is similar on the `spin_trylock` function. This function tries to acquire a lock and exit if this operation was unsuccessful. In this case the process which wants to acquire a lock, will not wait. The last `down_timeout` function tries to acquire a lock. It will be interrupted in a waiting state when the given timeout will be expired. Additionally, you may notice that the timeout is in [jiffies](https://proninyaroslav.gitbooks.io/linux-insides-ru/content/Timers/timers-1.html)
+The `down_trylock` function is similar on the `spin_trylock` function. This function tries to acquire a lock and exit if this operation was unsuccessful. In this case the process which wants to acquire a lock, will not wait. The last `down_timeout` function tries to acquire a lock. It will be interrupted in a waiting state when the given timeout will be expired. Additionally, you may notice that the timeout is in [jiffies](https://0xax.gitbooks.io/linux-insides/content/Timers/linux-timers-1.html)
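Putting the `semaphore` API together, a typical usage pattern might be sketched as follows; `my_sem`, `my_init` and `my_do_work` are of course made-up names for illustration:

```C
static struct semaphore my_sem;

static int __init my_init(void)
{
	sema_init(&my_sem, 1);	/* binary semaphore: one holder at a time */
	return 0;
}

static int my_do_work(void)
{
	/* Sleep until the semaphore is available; a signal may wake us up. */
	if (down_interruptible(&my_sem))
		return -EINTR;

	/* ... critical section protected by my_sem ... */

	up(&my_sem);		/* release, waking up one waiter if any */
	return 0;
}
```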
We just saw the definitions of the `semaphore` [API](https://en.wikipedia.org/wiki/Application_programming_interface). We will start with the `down` function. This function is defined in the [kernel/locking/semaphore.c](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/kernel/locking/semaphore.c) source code file. Let's look at the implementation of this function:
@ -342,8 +342,8 @@ Links
* [preemption](https://en.wikipedia.org/wiki/Preemption_%28computing%29)
* [deadlocks](https://en.wikipedia.org/wiki/Deadlock)
* [scheduler](https://en.wikipedia.org/wiki/Scheduling_%28computing%29)
-* [Doubly linked list in the Linux kernel](https://proninyaroslav.gitbooks.io/linux-insides-ru/content/DataStructures/dlist.html)
-* [jiffies](https://proninyaroslav.gitbooks.io/linux-insides-ru/content/Timers/timers-1.html)
+* [Doubly linked list in the Linux kernel](https://0xax.gitbooks.io/linux-insides/content/DataStructures/dlist.html)
+* [jiffies](https://0xax.gitbooks.io/linux-insides/content/Timers/linux-timers-1.html)
* [interrupts](https://en.wikipedia.org/wiki/Interrupt)
* [per-cpu](https://proninyaroslav.gitbooks.io/linux-insides-ru/content/Concepts/per-cpu.html)
* [bitmask](https://en.wikipedia.org/wiki/Mask_%28computing%29)
@ -351,4 +351,4 @@ Links
* [errno](https://en.wikipedia.org/wiki/Errno.h)
* [API](https://en.wikipedia.org/wiki/Application_programming_interface)
* [mutex](https://en.wikipedia.org/wiki/Mutual_exclusion)
-* [Previous part](https://proninyaroslav.gitbooks.io/linux-insides-ru/content/SyncPrim/sync-2.html)
+* [Previous part](https://0xax.gitbooks.io/linux-insides/content/SyncPrim/linux-sync-2.html)
@ -13,7 +13,7 @@ So, let's start.
Concept of `mutex`
--------------------------------------------------------------------------------

-We already familiar with the [semaphore](https://en.wikipedia.org/wiki/Semaphore_%28programming%29) synchronization primitive from the previous [part](https://proninyaroslav.gitbooks.io/linux-insides-ru/content/SyncPrim/sync-3.html). It represented by the:
+We already familiar with the [semaphore](https://en.wikipedia.org/wiki/Semaphore_%28programming%29) synchronization primitive from the previous [part](https://0xax.gitbooks.io/linux-insides/content/SyncPrim/linux-sync-3.html). It represented by the:

```C
struct semaphore {
@ -77,7 +77,7 @@ struct mutex_waiter {
};
```

-structure from the [include/linux/mutex.h](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/include/linux/mutex.h) header file and will sleep. Before we will consider [API](https://en.wikipedia.org/wiki/Application_programming_interface) which is provided by the Linux kernel for manipulation with `mutexes`, let's consider the `mutex_waiter` structure. If you have read the [previous part](https://proninyaroslav.gitbooks.io/linux-insides-ru/content/SyncPrim/sync-3.html) of this chapter, you may notice that the `mutex_waiter` structure is similar to the `semaphore_waiter` structure from the [kernel/locking/semaphore.c](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/kernel/locking/semaphore.c) source code file:
+structure from the [include/linux/mutex.h](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/include/linux/mutex.h) header file and will be sleep. Before we will consider [API](https://en.wikipedia.org/wiki/Application_programming_interface) which is provided by the Linux kernel for manipulation with `mutexes`, let's consider the `mutex_waiter` structure. If you have read the [previous part](https://0xax.gitbooks.io/linux-insides/content/SyncPrim/linux-sync-3.html) of this chapter, you may notice that the `mutex_waiter` structure is similar to the `semaphore_waiter` structure from the [kernel/locking/semaphore.c](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/kernel/locking/semaphore.c) source code file:

```C
struct semaphore_waiter {
@ -403,7 +403,7 @@ That's all. We have considered main `API` for manipulation with `mutexes`: `mute
* `mutex_lock_killable`;
* `mutex_trylock`.

-and corresponding versions of `unlock` prefixed functions. This part will not describe this `API`, because it is similar to corresponding `API` of `semaphores`. More about it you may read in the [previous part](https://proninyaroslav.gitbooks.io/linux-insides-ru/content/SyncPrim/sync-3.html).
+and corresponding versions of `unlock` prefixed functions. This part will not describe this `API`, because it is similar to corresponding `API` of `semaphores`. More about it you may read in the [previous part](https://0xax.gitbooks.io/linux-insides/content/SyncPrim/linux-sync-3.html).
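For completeness, here is a rough sketch of how this `mutex` API is normally used; `my_mutex`, `shared_counter` and the two helper functions are invented names, not code from the patched files:

```C
static DEFINE_MUTEX(my_mutex);
static unsigned long shared_counter;

static int my_update(void)
{
	/* Sleep until the mutex is ours; a fatal signal may interrupt us. */
	if (mutex_lock_killable(&my_mutex))
		return -EINTR;

	shared_counter++;	/* protected by my_mutex */

	mutex_unlock(&my_mutex);
	return 0;
}

static bool my_try_update(void)
{
	/* Non-blocking variant: returns 1 on success, 0 if already locked. */
	if (!mutex_trylock(&my_mutex))
		return false;

	shared_counter++;
	mutex_unlock(&my_mutex);
	return true;
}
```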
That's all.
@ -437,4 +437,4 @@ Links
* [JNS instruction](http://unixwiz.net/techtips/x86-jumps.html)
* [preemption](https://en.wikipedia.org/wiki/Preemption_%28computing%29)
* [Unix signals](https://en.wikipedia.org/wiki/Unix_signal)
-* [Previous part](https://proninyaroslav.gitbooks.io/linux-insides-ru/content/SyncPrim/sync-3.html)
+* [Previous part](https://0xax.gitbooks.io/linux-insides/content/SyncPrim/linux-sync-3.html)
@ -15,7 +15,7 @@ Reader/Writer semaphore
Actually, there are two types of operations that may be performed on data. We may read data and we may make changes to data - the two fundamental operations, `read` and `write`. Usually (but not always), the `read` operation is performed more often than the `write` operation. In this case, it would be logical to lock data in such a way that several processes may read the locked data at the same time, on the condition that no one changes the data. The [readers/writer lock](https://en.wikipedia.org/wiki/Readers%E2%80%93writer_lock) allows us to get this kind of lock.
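Before diving into the implementation, here is a rough sketch of how such a lock is typically used with the kernel's `rw_semaphore` API; the names `my_rwsem`, `shared_data`, `my_read` and `my_write` are invented for illustration:

```C
static DECLARE_RWSEM(my_rwsem);
static int shared_data;

static int my_read(void)
{
	int value;

	down_read(&my_rwsem);	/* many readers may hold the lock at once */
	value = shared_data;
	up_read(&my_rwsem);

	return value;
}

static void my_write(int value)
{
	down_write(&my_rwsem);	/* writers get exclusive access */
	shared_data = value;
	up_write(&my_rwsem);
}
```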
-When a process which wants to write something into data, all other `writer` and `reader` processes will be blocked until the process which acquired a lock, will release it. When a process reads data, other processes which want to read the same data too, will not be locked and will be able to do this. As you may guess, implementation of the `reader/writer semaphore` is based on the implementation of the `normal semaphore`. We already familiar with the [semaphore](https://en.wikipedia.org/wiki/Semaphore_%28programming%29) synchronization primitive from the third [part]((https://proninyaroslav.gitbooks.io/linux-insides-ru/content/SyncPrim/sync-4.html) of this chapter. From the theoretical side everything looks pretty simple. Let's look how `reader/writer semaphore` is represented in the Linux kernel.
+When a process which wants to write something into data, all other `writer` and `reader` processes will be blocked until the process which acquired a lock, will not release it. When a process reads data, other processes which want to read the same data too, will not be locked and will be able to do this. As you may guess, implementation of the `reader/writer semaphore` is based on the implementation of the `normal semaphore`. We already familiar with the [semaphore](https://en.wikipedia.org/wiki/Semaphore_%28programming%29) synchronization primitive from the third [part]((https://0xax.gitbooks.io/linux-insides/content/SyncPrim/linux-sync-4.html) of this chapter. From the theoretical side everything looks pretty simple. Let's look how `reader/writer semaphore` is represented in the Linux kernel.
The `semaphore` is represented by the:
@ -70,7 +70,7 @@ The `count` field of a `rw_semaphore` structure may have following values:
* `0xffffffff00000000` - represents the situation when readers or writers are queued, but no one is active or is in the process of acquiring a lock;
* `0xfffffffe00000001` - a writer is active or attempting to acquire a lock and waiters are in queue.

-So, besides the `count` field, all of these fields are similar to fields of the `semaphore` structure. Last three fields depend on the two configuration options of the Linux kernel: the `CONFIG_RWSEM_SPIN_ON_OWNER` and `CONFIG_DEBUG_LOCK_ALLOC`. The first two fields may be familiar us by declaration of the [mutex](https://en.wikipedia.org/wiki/Mutual_exclusion) structure from the [previous part](https://proninyaroslav.gitbooks.io/linux-insides-ru/content/SyncPrim/sync-4.html). The first `osq` field represents [MCS lock](http://www.cs.rochester.edu/~scott/papers/1991_TOCS_synch.pdf) spinner for `optimistic spinning` and the second represents process which is current owner of a lock.
+So, besides the `count` field, all of these fields are similar to fields of the `semaphore` structure. Last three fields depend on the two configuration options of the Linux kernel: the `CONFIG_RWSEM_SPIN_ON_OWNER` and `CONFIG_DEBUG_LOCK_ALLOC`. The first two fields may be familiar us by declaration of the [mutex](https://en.wikipedia.org/wiki/Mutual_exclusion) structure from the [previous part](https://0xax.gitbooks.io/linux-insides/content/SyncPrim/linux-sync-4.html). The first `osq` field represents [MCS lock](http://www.cs.rochester.edu/~scott/papers/1991_TOCS_synch.pdf) spinner for `optimistic spinning` and the second represents process which is current owner of a lock.

The last field of the `rw_semaphore` structure - `dep_map` - is debugging related, and as I already wrote in previous parts, we will skip debugging related stuff in this chapter.
@ -193,7 +193,7 @@ void __sched down_write(struct rw_semaphore *sem)
}
```

-We already met the `might_sleep` macro in the [previous part](https://proninyaroslav.gitbooks.io/linux-insides-ru/content/SyncPrim/sync-4.html). In short words, Implementation of the `might_sleep` macro depends on the `CONFIG_DEBUG_ATOMIC_SLEEP` kernel configuration option and if this option is enabled, this macro just prints a stack trace if it was executed in [atomic](https://en.wikipedia.org/wiki/Linearizability) context. As this macro is mostly for debugging purpose we will skip it and will go ahead. Additionally we will skip the next macro from the `down_read` function - `rwsem_acquire` which is related to the [lock validator](https://www.kernel.org/doc/Documentation/locking/lockdep-design.txt) of the Linux kernel, because this is topic of other part.
+We already met the `might_sleep` macro in the [previous part](https://0xax.gitbooks.io/linux-insides/content/SyncPrim/linux-sync-4.html). In short words, Implementation of the `might_sleep` macro depends on the `CONFIG_DEBUG_ATOMIC_SLEEP` kernel configuration option and if this option is enabled, this macro just prints a stack trace if it was executed in [atomic](https://en.wikipedia.org/wiki/Linearizability) context. As this macro is mostly for debugging purpose we will skip it and will go ahead. Additionally we will skip the next macro from the `down_read` function - `rwsem_acquire` which is related to the [lock validator](https://www.kernel.org/doc/Documentation/locking/lockdep-design.txt) of the Linux kernel, because this is topic of other part.

The only two things that remain in the `down_write` function are the call of the `LOCK_CONTENDED` macro, which is defined in the [include/linux/lockdep.h](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/include/linux/lockdep.h) header file, and the setting of the owner of the lock with the `rwsem_set_owner` function, which sets the owner to the currently running process:
@ -292,7 +292,7 @@ if (rwsem_optimistic_spin(sem))
return sem;
```

-We will skip implementation of the `rwsem_optimistic_spin` function, as it is similar on the `mutex_optimistic_spin` function which we saw in the [previous part](https://proninyaroslav.gitbooks.io/linux-insides-ru/content/SyncPrim/sync-4.html). In short words we check existence other tasks ready to run that have higher priority in the `rwsem_optimistic_spin` function. If there are such tasks, the process will be added to the [MCS](http://www.cs.rochester.edu/~scott/papers/1991_TOCS_synch.pdf) `waitqueue` and start to spin in the loop until a lock will be able to be acquired. If `optimistic spinning` is disabled, a process will be added to the and marked as waiting for write:
+We will skip implementation of the `rwsem_optimistic_spin` function, as it is similar on the `mutex_optimistic_spin` function which we saw in the [previous part](https://0xax.gitbooks.io/linux-insides/content/SyncPrim/linux-sync-4.html). In short words we check existence other tasks ready to run that have higher priority in the `rwsem_optimistic_spin` function. If there are such tasks, the process will be added to the [MCS](http://www.cs.rochester.edu/~scott/papers/1991_TOCS_synch.pdf) `waitqueue` and start to spin in the loop until a lock will be able to be acquired. If `optimistic spinning` is disabled, a process will be added to the and marked as waiting for write:

```C
waiter.task = current;
@ -325,7 +325,7 @@ while (true) {
}
```

-I will skip explanation of this loop as we already met similar functional in the [previous part](https://proninyaroslav.gitbooks.io/linux-insides-ru/content/SyncPrim/sync-4.html).
+I will skip explanation of this loop as we already met similar functional in the [previous part](https://0xax.gitbooks.io/linux-insides/content/SyncPrim/linux-sync-4.html).

That's all. From this moment, whether our `writer` process will acquire the lock or not depends on the value of the `rw_semaphore->count` field. Now, if we look at the implementation of the `down_read` function, which tries to acquire a lock, we will see actions similar to those we saw in the `down_write` function. This function calls different debugging and lock validator related functions/macros:
@ -430,4 +430,4 @@ Links
* [Inline assembly](https://proninyaroslav.gitbooks.io/linux-insides-ru/content/Theory/asm.html)
* [XADD instruction](http://x86.renejeschke.de/html/file_module_x86_id_327.html)
* [LOCK instruction](http://x86.renejeschke.de/html/file_module_x86_id_159.html)
-* [Previous part](https://proninyaroslav.gitbooks.io/linux-insides-ru/content/SyncPrim/sync-4.html)
+* [Previous part](https://0xax.gitbooks.io/linux-insides/content/SyncPrim/linux-sync-4.html)
@ -6,7 +6,7 @@ Introduction
This is the sixth part of the chapter which describes [synchronization primitives](https://en.wikipedia.org/wiki/Synchronization_%28computer_science%29) in the Linux kernel. In the previous parts we finished considering the different [readers-writer lock](https://en.wikipedia.org/wiki/Readers%E2%80%93writer_lock) synchronization primitives. We will continue to learn synchronization primitives in this part and start to consider a similar synchronization primitive which can be used to avoid the `writer starvation` problem. The name of this synchronization primitive is `seqlock` or `sequential locks`.

-We know from the previous [part](https://0xax.gitbooks.io/linux-insides/content/SyncPrim/sync-5.html) that [readers-writer lock](https://en.wikipedia.org/wiki/Readers%E2%80%93writer_lock) is a special lock mechanism which allows concurrent access for read-only operations, but an exclusive lock is needed for writing or modifying data. As we may guess, it may lead to a problem which is called `writer starvation`. In other words, a writer process can't acquire a lock as long as at least one reader process which acquired a lock holds it. So, in the situation when contention is high, it will lead to situation when a writer process which wants to acquire a lock will wait for it for a long time.
+We know from the previous [part](https://0xax.gitbooks.io/linux-insides/content/SyncPrim/linux-sync-5.html) that [readers-writer lock](https://en.wikipedia.org/wiki/Readers%E2%80%93writer_lock) is a special lock mechanism which allows concurrent access for read-only operations, but an exclusive lock is needed for writing or modifying data. As we may guess, it may lead to a problem which is called `writer starvation`. In other words, a writer process can't acquire a lock as long as at least one reader process which acquired a lock holds it. So, in the situation when contention is high, it will lead to situation when a writer process which wants to acquire a lock will wait for it for a long time.

The `seqlock` synchronization primitive can help solve this problem.
@ -17,9 +17,9 @@ So, let's start.
|
|||||||
Sequential lock
|
Sequential lock
|
||||||
--------------------------------------------------------------------------------
|
--------------------------------------------------------------------------------
|
||||||
|
|
||||||
So, what is a `seqlock` synchronization primitive and how does it work? Let's try to answer on these questions in this paragraph. Actually `sequential locks` were introduced in the Linux kernel 2.6.x. Main point of this synchronization primitive is to provide fast and lock-free access to shared resources. Since the heart of `sequential lock` synchronization primitive is [spinlock](https://proninyaroslav.gitbooks.io/linux-insides-ru/content/SyncPrim/sync-1.html) synchronization primitive, `sequential locks` work in situations where the protected resources are small and simple. Additionally write access must be rare and also should be fast.
|
So, what is a `seqlock` synchronization primitive and how does it work? Let's try to answer on these questions in this paragraph. Actually `sequential locks` were introduced in the Linux kernel 2.6.x. Main point of this synchronization primitive is to provide fast and lock-free access to shared resources. Since the heart of `sequential lock` synchronization primitive is [spinlock](https://0xax.gitbooks.io/linux-insides/content/SyncPrim/linux-sync-1.html) synchronization primitive, `sequential locks` work in situations where the protected resources are small and simple. Additionally write access must be rare and also should be fast.
|
||||||
|
|
||||||
Work of this synchronization primitive is based on the sequence of events counter. Actually a `sequential lock` allows free access to a resource for readers, but each reader must check existence of conflicts with a writer. This synchronization primitive introduces a special counter. The main algorithm of work of `sequential locks` is simple: Each writer which acquired a sequential lock increments this counter and additionally acquires a [spinlock](https://proninyaroslav.gitbooks.io/linux-insides-ru/content/SyncPrim/sync-1.html). When this writer finishes, it will release the acquired spinlock to give access to other writers and increment the counter of a sequential lock again.
|
Work of this synchronization primitive is based on the sequence of events counter. Actually a `sequential lock` allows free access to a resource for readers, but each reader must check existence of conflicts with a writer. This synchronization primitive introduces a special counter. The main algorithm of work of `sequential locks` is simple: Each writer which acquired a sequential lock increments this counter and additionally acquires a [spinlock](https://0xax.gitbooks.io/linux-insides/content/SyncPrim/linux-sync-1.html). When this writer finishes, it will release the acquired spinlock to give access to other writers and increment the counter of a sequential lock again.
Read-only access works on the following principle: a reader gets the value of the `sequential lock` counter before it enters the [critical section](https://en.wikipedia.org/wiki/Critical_section) and compares it with the value of the same counter at the exit of the critical section. If the values are equal, there were no writers during this period. If the values are not equal, a writer has incremented the counter during the [critical section](https://en.wikipedia.org/wiki/Critical_section). This conflict means that the read of the protected data must be repeated.
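To make this pattern more concrete, here is a minimal sketch of how it typically looks with the kernel's seqlock API (the `foo_lock` and `foo_time` names are hypothetical and are used only for illustration):

```C
#include <linux/seqlock.h>

static DEFINE_SEQLOCK(foo_lock);  /* hypothetical seqlock protecting foo_time */
static u64 foo_time;              /* hypothetical shared data */

/* Writer: takes the spinlock and bumps the sequence counter around the update. */
static void foo_update(u64 now)
{
	write_seqlock(&foo_lock);
	foo_time = now;
	write_sequnlock(&foo_lock);
}

/* Reader: lock-free; retries the read if a writer was active in the meantime. */
static u64 foo_read(void)
{
	unsigned int seq;
	u64 val;

	do {
		seq = read_seqbegin(&foo_lock);
		val = foo_time;
	} while (read_seqretry(&foo_lock, seq));

	return val;
}
```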

@@ -54,7 +54,7 @@ typedef struct {
} seqlock_t;
```

As we may see, the `seqlock_t` provides two fields. These fields represent a sequential lock counter, the description of which we saw above, and a [spinlock](https://0xax.gitbooks.io/linux-insides/content/SyncPrim/linux-sync-1.html) which will protect the data from other writers. Note that the `seqcount` counter is represented by the `seqcount` type. The `seqcount` is a structure:

```C
typedef struct seqcount {
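	/* Abridged sketch of the rest of the definition; the full version lives in
	 * include/linux/seqlock.h of the kernel tree this book follows. */
	unsigned sequence;           /* the sequence counter itself */
#ifdef CONFIG_DEBUG_LOCK_ALLOC
	struct lockdep_map dep_map;  /* lock validator bookkeeping, skipped in this part */
#endif
} seqcount_t;
```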

@@ -114,7 +114,7 @@ So we just initialize counter of the given sequential lock to zero and additiona

#endif
```

As I already wrote in previous parts of this [chapter](https://0xax.gitbooks.io/linux-insides/content/SyncPrim/), we will not consider [debugging](https://en.wikipedia.org/wiki/Debugging) and [lock validator](https://www.kernel.org/doc/Documentation/locking/lockdep-design.txt) related stuff in this part. So for now we just skip the `SEQCOUNT_DEP_MAP_INIT` macro. The second field of the given `seqlock_t` is `lock`, initialized with the `__SPIN_LOCK_UNLOCKED` macro which is defined in the [include/linux/spinlock_types.h](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/include/linux/spinlock_types.h) header file. We will not consider the implementation of this macro here, as it just initializes a [raw spinlock](https://0xax.gitbooks.io/linux-insides/content/SyncPrim/linux-sync-1.html) with architecture-specific methods (more about spinlocks you may read in the first parts of this [chapter](https://0xax.gitbooks.io/linux-insides/content/SyncPrim/)).
We have considered the first way to initialize a sequential lock. Let's consider second way to do the same, but do it dynamically. We can initialize a sequential lock with the `seqlock_init` macro which is defined in the same [include/linux/seqlock.h](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/include/linux/seqlock.h) header file.

@@ -149,7 +149,7 @@ static inline void __seqcount_init(seqcount_t *s, const char *name,
}
```

just initializes counter of the given `seqcount_t` with zero. The second call from the `seqlock_init` macro is the call of the `spin_lock_init` macro which we saw in the [first part](https://0xax.gitbooks.io/linux-insides/content/SyncPrim/linux-sync-1.html) of this chapter.
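
Putting the two initialization paths next to each other, a minimal sketch (the lock names here are hypothetical) looks like this:

```C
#include <linux/init.h>
#include <linux/seqlock.h>

/* First way: static initialization at compile time. */
static DEFINE_SEQLOCK(static_seqlock);

/* Second way: dynamic initialization at run time. */
static seqlock_t dynamic_seqlock;

static int __init example_init(void)
{
	seqlock_init(&dynamic_seqlock);
	return 0;
}
```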
So, now we know how to initialize a `sequential lock`, now let's look at how to use it. The Linux kernel provides following [API](https://en.wikipedia.org/wiki/Application_programming_interface) to manipulate `sequential locks`:

@@ -223,7 +223,7 @@ static inline int __read_seqcount_retry(const seqcount_t *s, unsigned start)

which just compares value of the counter of the given `sequential lock` with the initial value of this counter. If the initial value of the counter which is obtained from `read_seqbegin()` function is odd, this means that a writer was in the middle of updating the data when our reader began to act. In this case the value of the data can be in inconsistent state, so we need to try to read it again.
This is a common pattern in the Linux kernel. For example, you may remember the `jiffies` concept from the [first part](https://0xax.gitbooks.io/linux-insides/content/Timers/linux-timers-1.html) of the [timers and time management in the Linux kernel](https://0xax.gitbooks.io/linux-insides/content/Timers/) chapter. The sequential lock is used to obtain value of `jiffies` at [x86_64](https://en.wikipedia.org/wiki/X86-64) architecture:

```C
u64 get_jiffies_64(void)
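{
	unsigned long seq;
	u64 ret;

	/* The hunk above is truncated; roughly, the body reads jiffies_64 under
	 * the jiffies seqlock and retries if a writer updated it concurrently. */
	do {
		seq = read_seqbegin(&jiffies_lock);
		ret = jiffies_64;
	} while (read_seqretry(&jiffies_lock, seq));
	return ret;
}
```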

@@ -337,9 +337,9 @@ If you have questions or suggestions, feel free to ping me in twitter [0xAX](htt

Links
--------------------------------------------------------------------------------

* [synchronization primitives](https://en.wikipedia.org/wiki/Synchronization_\(computer_science\))
* [readers-writer lock](https://en.wikipedia.org/wiki/Readers%E2%80%93writer_lock)
* [spinlock](https://0xax.gitbooks.io/linux-insides/content/SyncPrim/linux-sync-1.html)
* [critical section](https://en.wikipedia.org/wiki/Critical_section)
* [lock validator](https://www.kernel.org/doc/Documentation/locking/lockdep-design.txt)
* [debugging](https://en.wikipedia.org/wiki/Debugging)

@@ -349,4 +349,4 @@ Links

* [interrupt handlers](https://en.wikipedia.org/wiki/Interrupt_handler)
* [softirq](https://0xax.gitbooks.io/linux-insides/content/Interrupts/linux-interrupts-9.html)
* [IRQ](https://en.wikipedia.org/wiki/Interrupt_request_\(PC_architecture\))
* [Previous part](https://0xax.gitbooks.io/linux-insides/content/SyncPrim/linux-sync-5.html)

@@ -4,7 +4,7 @@ System calls in the Linux kernel. Part 2.

How does the Linux kernel handle a system call
--------------------------------------------------------------------------------

The previous [part](https://0xax.gitbooks.io/linux-insides/content/SysCall/linux-syscall-1.html) was the first part of the chapter that describes the [system call](https://en.wikipedia.org/wiki/System_call) concepts in the Linux kernel.
In the previous part we learned what a system call is in the Linux kernel, and in operating systems in general. This was introduced from a user-space perspective, and part of the [write](http://man7.org/linux/man-pages/man2/write.2.html) system call implementation was discussed. In this part we continue our look at system calls, starting with some theory before moving onto the Linux kernel code.
A user application does not usually make a system call directly. For example, we did not write the `Hello world!` program like:
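The original listing is not included in this excerpt; a sketch of the kind of "raw" system call program meant here could look like the following (illustrative only, using the `syscall` wrapper from glibc):

```C
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	/* Invoke the write system call directly, bypassing printf/puts. */
	syscall(SYS_write, 1, "Hello world!\n", 13);
	return 0;
}
```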
@@ -378,7 +378,7 @@ That's all.

Conclusion
--------------------------------------------------------------------------------

This is the end of the second part about the system calls concept in the Linux kernel. In the previous [part](https://0xax.gitbooks.io/linux-insides/content/SysCall/linux-syscall-1.html) we saw theory about this concept from the user application view. In this part we continued to dive into the stuff which is related to the system call concept and saw what the Linux kernel does when a system call occurs.
If you have questions or suggestions, feel free to ping me in twitter [0xAX](https://twitter.com/0xAX), drop me [email](anotherworldofworld@gmail.com) or just create [issue](https://github.com/0xAX/linux-insides/issues/new).

@@ -406,4 +406,4 @@ Links

* [general purpose registers](https://en.wikipedia.org/wiki/Processor_register)
* [ABI](https://en.wikipedia.org/wiki/Application_binary_interface)
* [x86_64 C ABI](http://www.x86-64.org/documentation/abi.pdf)
* [previous chapter](https://0xax.gitbooks.io/linux-insides/content/SysCall/linux-syscall-1.html)

@@ -4,7 +4,7 @@ System calls in the Linux kernel. Part 3.

vsyscalls and vDSO
--------------------------------------------------------------------------------

This is the third part of the [chapter](http://0xax.gitbooks.io/linux-insides/content/SysCall/index.html) that describes system calls in the Linux kernel and we saw preparations after a system call caused by a userspace application and process of handling of a system call in the previous [part](https://0xax.gitbooks.io/linux-insides/content/SysCall/linux-syscall-2.html). In this part we will look at two concepts that are very close to the system call concept, they are called `vsyscall` and `vdso`.
We already know what `system call`s are. They are special routines in the Linux kernel which userspace applications ask to do privileged tasks, like to read or to write to a file, to open a socket, etc. As you may know, invoking a system call is an expensive operation in the Linux kernel, because the processor must interrupt the currently executing task and switch context to kernel mode, subsequently jumping again into userspace after the system call handler finishes its work. These two mechanisms - `vsyscall` and `vdso` are designed to speed up this process for certain system calls and in this part we will try to understand how these mechanisms work.
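
As a hedged illustration of why this matters to ordinary programs: a call like the one below is typically serviced through the `vDSO` by glibc, so in the common case no switch to kernel mode happens at all (the exact dispatch depends on the libc and kernel configuration):

```C
#include <stdio.h>
#include <time.h>

int main(void)
{
	struct timespec ts;

	/* On most x86_64 systems glibc resolves clock_gettime() via the vDSO. */
	clock_gettime(CLOCK_MONOTONIC, &ts);
	printf("%ld.%09ld\n", (long)ts.tv_sec, ts.tv_nsec);
	return 0;
}
```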
@@ -370,7 +370,7 @@ That's all.

Conclusion
--------------------------------------------------------------------------------

This is the end of the third part about the system calls concept in the Linux kernel. In the previous [part](https://0xax.gitbooks.io/linux-insides/content/SysCall/linux-syscall-2.html) we discussed the implementation of the preparation from the Linux kernel side, before a system call will be handled and implementation of the `exit` process from a system call handler. In this part we continued to dive into the stuff which is related to the system call concept and learned two new concepts that are very similar to the system call - the `vsyscall` and the `vDSO`.
After all of these three parts, we know almost all things that are related to system calls, we know what system call is and why user applications need them. We also know what occurs when a user application calls a system call and how the kernel handles system calls.

@@ -399,5 +399,5 @@ Links

* [instruction pointer](https://en.wikipedia.org/wiki/Program_counter)
* [stack pointer](https://en.wikipedia.org/wiki/Stack_register)
* [uname](https://en.wikipedia.org/wiki/Uname)
* [Linkers](http://0xax.gitbooks.io/linux-insides/content/Misc/linkers.html)
* [Previous part](https://0xax.gitbooks.io/linux-insides/content/SysCall/linux-syscall-2.html)

@@ -4,7 +4,7 @@ System calls in the Linux kernel. Part 4.

How does the Linux kernel run a program
--------------------------------------------------------------------------------

This is the fourth part of the [chapter](http://0xax.gitbooks.io/linux-insides/content/SysCall/index.html) that describes [system calls](https://en.wikipedia.org/wiki/System_call) in the Linux kernel and as I wrote in the conclusion of the [previous](https://0xax.gitbooks.io/linux-insides/content/SysCall/linux-syscall-3.html) - this part will be last in this chapter. In the previous part we stopped at the two new concepts:

* `vsyscall`;
* `vDSO`;

@@ -73,7 +73,7 @@ So, a user application (`bash` in our case) calls the system call and as we alre

execve system call
--------------------------------------------------------------------------------

We saw preparation before a system call called by a user application and after a system call handler finished its work in the second [part](https://0xax.gitbooks.io/linux-insides/content/SysCall/linux-syscall-2.html) of this chapter. We stopped at the call of the `execve` system call in the previous paragraph. This system call defined in the [fs/exec.c](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/fs/exec.c) source code file and as we already know it takes three arguments:

```
SYSCALL_DEFINE3(execve,
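		/* The hunk is truncated here; in the kernel version this book follows,
		 * the definition continues roughly as follows: */
		const char __user *, filename,
		const char __user *const __user *, argv,
		const char __user *const __user *, envp)
{
	return do_execve(getname(filename), argv, envp);
}
```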

@@ -427,4 +427,4 @@ Links

* [Linkers](https://proninyaroslav.gitbooks.io/linux-insides-ru/content/Misc/linkers.html)
* [Processor register](https://en.wikipedia.org/wiki/Processor_register)
* [instruction pointer](https://en.wikipedia.org/wiki/Program_counter)
* [Previous part](https://0xax.gitbooks.io/linux-insides/content/SysCall/linux-syscall-3.html)

@@ -49,7 +49,7 @@ So let's start.

Definition of the open system call
--------------------------------------------------------------------------------

If you have read the [fourth part](https://github.com/0xAX/linux-insides/blob/master/SysCall/linux-syscall-4.md) of the [linux-insides](https://0xax.gitbooks.io/linux-insides/content/index.html) book, you should know that system calls are defined with the help of `SYSCALL_DEFINE` macro. So, the `open` system call is not exception.
Definition of the `open` system call is located in the [fs/open.c](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/fs/open.c) source code file and looks pretty small for the first view:
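
The definition itself is not shown in this hunk; for the kernel version this book follows it is roughly the following (a sketch, not a verbatim quote):

```C
SYSCALL_DEFINE3(open, const char __user *, filename, int, flags, umode_t, mode)
{
	if (force_o_largefile())
		flags |= O_LARGEFILE;

	return do_sys_open(AT_FDCWD, filename, flags, mode);
}
```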
@@ -400,4 +400,4 @@ Links

* [inode](https://en.wikipedia.org/wiki/Inode)
* [RCU](https://www.kernel.org/doc/Documentation/RCU/whatisRCU.txt)
* [read](http://man7.org/linux/man-pages/man2/read.2.html)
* [previous part](https://0xax.gitbooks.io/linux-insides/content/SysCall/linux-syscall-4.html)

@@ -4,7 +4,7 @@ Timers and time management in the Linux kernel. Part 1.

Introduction
--------------------------------------------------------------------------------

This is yet another post that opens a new chapter in the [linux-insides](http://0xax.gitbooks.io/linux-insides/content/) book. The previous [part](https://0xax.gitbooks.io/linux-insides/content/SysCall/linux-syscall-4.html) described [system call](https://en.wikipedia.org/wiki/System_call) concepts, and now it's time to start a new chapter. As one might understand from the title, this chapter will be devoted to `timers` and `time management` in the Linux kernel. The choice of topic for the current chapter is not accidental. Timers (and, generally, time management) are very important and widely used in the Linux kernel. The Linux kernel uses timers for various tasks, for example different timeouts in the [TCP](https://en.wikipedia.org/wiki/Transmission_Control_Protocol) implementation, keeping track of the current time, scheduling asynchronous functions, next event interrupt scheduling and many more.
So, we will start to learn implementation of the different time management related stuff in this part. We will see different types of timers and how different Linux kernel subsystems use them. As always, we will start from the earliest part of the Linux kernel and go through the initialization process of the Linux kernel. We already did it in the special [chapter](https://proninyaroslav.gitbooks.io/linux-insides-ru/content/Initialization/index.html) which describes the initialization process of the Linux kernel, but as you may remember we missed some things there. And one of them is the initialization of timers.

@@ -4,7 +4,7 @@ Timers and time management in the Linux kernel. Part 2.

Introduction to the `clocksource` framework
--------------------------------------------------------------------------------

The previous [part](https://0xax.gitbooks.io/linux-insides/content/Timers/linux-timers-1.html) was the first part in the current [chapter](https://0xax.gitbooks.io/linux-insides/content/Timers/index.html) that describes timers and time management related stuff in the Linux kernel. We got acquainted with two concepts in the previous part:

* `jiffies`
* `clocksource`

@@ -92,7 +92,7 @@ Within this framework, each clock source is required to maintain a representatio

The clocksource structure
--------------------------------------------------------------------------------

The foundation of the `clocksource` framework is the `clocksource` structure that is defined in the [include/linux/clocksource.h](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/include/linux/clocksource.h) header file. We already saw some fields that are provided by the `clocksource` structure in the previous [part](https://0xax.gitbooks.io/linux-insides/content/Timers/linux-timers-1.html). Let's look at the full definition of this structure and try to describe all of its fields:

```C
struct clocksource {
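	/* Abridged sketch of the most important fields; the full definition with
	 * all fields lives in include/linux/clocksource.h of the kernel tree
	 * referenced by this chapter. */
	cycle_t (*read)(struct clocksource *cs); /* returns the current cycle value */
	cycle_t mask;                            /* bitmask for non-64-bit counters */
	u32 mult;                                /* cycle -> nanoseconds multiplier */
	u32 shift;                               /* cycle -> nanoseconds shift */
	const char *name;                        /* clock source name */
	int rating;                              /* quality rating of the clock source */
	/* ... enable/disable/suspend/resume callbacks, flags and list linkage ... */
};
```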

@@ -197,7 +197,7 @@ That's all. From this moment we know all fields of the `clocksource` structure.

New clock source registration
--------------------------------------------------------------------------------

We saw only one function from the `clocksource` framework in the previous [part](https://0xax.gitbooks.io/linux-insides/content/Timers/linux-timers-1.html). This function was - `__clocksource_register`. This function defined in the [include/linux/clocksource.h](https://github.com/torvalds/linux/tree/master/include/linux/clocksource.h) header file and as we can understand from the function's name, main point of this function is to register new clocksource. If we will look on the implementation of the `__clocksource_register` function, we will see that it just makes call of the `__clocksource_register_scale` function and returns its result:

```C
static inline int __clocksource_register(struct clocksource *cs)
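{
	/* The truncated body simply forwards to __clocksource_register_scale()
	 * with scale = 1 and freq = 0, as the text below describes. */
	return __clocksource_register_scale(cs, 1, 0);
}
```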

@@ -241,7 +241,7 @@ int __clocksource_register_scale(struct clocksource *cs, u32 scale, u32 freq)
}
```

First of all we can see that the `__clocksource_register_scale` function starts with the call of the `__clocksource_update_freq_scale` function that is defined in the same source code file and updates the given clock source with the new frequency. Let's look at the implementation of this function. In the first step we need to check the given frequency and, if it was not passed as `zero`, we need to calculate the `mult` and `shift` parameters for the given clock source. Why do we need to check the value of the `frequency`? Actually, it can be zero. If you attentively looked at the implementation of the `__clocksource_register` function, you may have noticed that we passed `frequency` as `0`. We will do it only for some clock sources that have self-defined `mult` and `shift` parameters. Look in the previous [part](https://0xax.gitbooks.io/linux-insides/content/Timers/linux-timers-1.html) and you will see that we saw the calculation of the `mult` and `shift` for `jiffies`. The `__clocksource_update_freq_scale` function will do it for us for other clock sources.
So in the start of the `__clocksource_update_freq_scale` function we check the value of the `frequency` parameter and if is not zero we need to calculate `mult` and `shift` for the given clock source. Let's look on the `mult` and `shift` calculation:
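
The calculation itself is not part of this hunk, but the idea behind the two parameters can be illustrated with a tiny, self-contained sketch: `mult` and `shift` are chosen so that a raw cycle count is converted to nanoseconds with one multiply and one shift (the numbers below are hypothetical):

```C
#include <stdint.h>
#include <stdio.h>

/* Illustration only: convert a raw counter delta to nanoseconds the way the
 * clocksource framework does, i.e. ns = (cycles * mult) >> shift. */
static uint64_t cycles_to_ns(uint64_t cycles, uint32_t mult, uint32_t shift)
{
	return (cycles * mult) >> shift;
}

int main(void)
{
	/* Hypothetical 1 MHz clock: mult/shift chosen so that 1 cycle == 1000 ns. */
	printf("%llu\n", (unsigned long long)cycles_to_ns(5, 1000 << 10, 10));
	return 0;
}
```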
@@ -448,4 +448,4 @@ Links

* [clock rate](https://en.wikipedia.org/wiki/Clock_rate)
* [mutex](https://en.wikipedia.org/wiki/Mutual_exclusion)
* [sysfs](https://en.wikipedia.org/wiki/Sysfs)
* [previous part](https://0xax.gitbooks.io/linux-insides/content/Timers/linux-timers-1.html)

@@ -4,7 +4,7 @@ Timers and time management in the Linux kernel. Part 3.

The tick broadcast framework and dyntick
--------------------------------------------------------------------------------

This is third part of the [chapter](https://0xax.gitbooks.io/linux-insides/content/Timers/index.html) which describes timers and time management related stuff in the Linux kernel and we stopped on the `clocksource` framework in the previous [part](https://0xax.gitbooks.io/linux-insides/content/Timers/linux-timers-2.html). We have started to consider this framework because it is closely related to the special counters which are provided by the Linux kernel. One of these counters which we already saw in the first [part](https://0xax.gitbooks.io/linux-insides/content/Timers/linux-timers-1.html) of this chapter is - `jiffies`. As I already wrote in the first part of this chapter, we will consider time management related stuff step by step during the Linux kernel initialization. Previous step was call of the:

```C
register_refined_jiffies(CLOCK_TICK_RATE);
```

@@ -441,4 +441,4 @@ Links

* [APIC](https://en.wikipedia.org/wiki/Advanced_Programmable_Interrupt_Controller)
* [percpu](https://proninyaroslav.gitbooks.io/linux-insides-ru/content/Concepts/per-cpu.html)
* [context switches](https://en.wikipedia.org/wiki/Context_switch)
* [Previous part](https://0xax.gitbooks.io/linux-insides/content/Timers/linux-timers-2.html)

@@ -4,7 +4,7 @@ Timers and time management in the Linux kernel. Part 4.

Timers
--------------------------------------------------------------------------------

This is fourth part of the [chapter](https://0xax.gitbooks.io/linux-insides/content/Timers/index.html) which describes timers and time management related stuff in the Linux kernel and in the previous [part](https://0xax.gitbooks.io/linux-insides/content/Timers/linux-timers-3.html) we knew about the `tick broadcast` framework and `NO_HZ` mode in the Linux kernel. We will continue to dive into the time management related stuff in the Linux kernel in this part and will be acquainted with yet another concept in the Linux kernel - `timers`. Before we will look at timers in the Linux kernel, we have to learn some theory about this concept. Note that we will consider software timers in this part.
The Linux kernel provides a `software timer` concept to allow kernel functions to be invoked at a future moment. Timers are widely used in the Linux kernel. For example, look in the [net/netfilter/ipset/ip_set_list_set.c](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/net/netfilter/ipset/ip_set_list_set.c) source code file. This source code file provides the implementation of the framework for managing groups of [IP](https://en.wikipedia.org/wiki/Internet_Protocol) addresses.
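
As a hedged sketch of what such a software timer looks like from the API side (using the pre-4.15 timer interface that matches the kernel sources referenced in this chapter; the `my_timer` and `my_timer_callback` names are hypothetical):

```C
#include <linux/jiffies.h>
#include <linux/printk.h>
#include <linux/timer.h>

static struct timer_list my_timer;                 /* hypothetical timer */

static void my_timer_callback(unsigned long data)  /* runs in softirq context */
{
	pr_info("timer fired, data=%lu\n", data);
}

static void my_timer_start(void)
{
	setup_timer(&my_timer, my_timer_callback, 0);
	/* ask for the callback to run roughly two seconds (2 * HZ jiffies) from now */
	mod_timer(&my_timer, jiffies + 2 * HZ);
}
```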

@@ -45,7 +45,7 @@ Now let's continue to research source code of Linux kernel which is related to t

Introduction to dynamic timers in the Linux kernel
--------------------------------------------------------------------------------

As I already wrote, we knew about the `tick broadcast` framework and `NO_HZ` mode in the previous [part](https://0xax.gitbooks.io/linux-insides/content/Timers/linux-timers-3.html). They will be initialized in the [init/main.c](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/init/main.c) source code file by the call of the `tick_init` function. If we will look at this source code file, we will see that the next time management related function is:

```C
init_timers();
```

@@ -142,7 +142,7 @@ The `tvec_bases` represents [per-cpu](https://proninyaroslav.gitbooks.io/linux-i

```C
static DEFINE_PER_CPU(struct tvec_base, tvec_bases);
```

First of all we're getting the address of the `tvec_bases` for the given processor to `base` variable and as we got it, we are starting to initialize some of the `tvec_base` fields in the `init_timer_cpu` function. After initialization of the `per-cpu` dynamic timers with the [jiffies](https://0xax.gitbooks.io/linux-insides/content/Timers/linux-timers-1.html) and the number of a possible processor, we need to initialize a `tstats_lookup_lock` [spinlock](https://en.wikipedia.org/wiki/Spinlock) in the `init_timer_stats` function:

```C
void __init init_timer_stats(void)
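{
	int cpu;

	/* Hedged sketch of the truncated body: it walks the possible CPUs and
	 * initializes the per-cpu `tstats_lookup_lock` spinlock mentioned above. */
	for_each_possible_cpu(cpu)
		raw_spin_lock_init(&per_cpu(tstats_lookup_lock, cpu));
}
```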

@@ -256,7 +256,7 @@ static void run_timer_softirq(struct softirq_action *h)
}
```

At the beginning of the `run_timer_softirq` function we get a `dynamic` timer for a current processor and compares the current value of the [jiffies](https://0xax.gitbooks.io/linux-insides/content/Timers/linux-timers-1.html) with the value of the `timer_jiffies` for the current structure by the call of the `time_after_eq` macro which is defined in the [include/linux/jiffies.h](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/include/linux/jiffies.h) header file:

```C
#define time_after_eq(a,b) \
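	(typecheck(unsigned long, a) && \
	 typecheck(unsigned long, b) && \
	 ((long)((a) - (b)) >= 0))
/* (The hunk is truncated; this reconstruction type-checks both values and
 * compares their difference so that jiffies wrap-around is handled correctly.) */
```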

@@ -420,8 +420,8 @@ Links

* [network](https://en.wikipedia.org/wiki/Computer_network)
* [cpumask](https://proninyaroslav.gitbooks.io/linux-insides-ru/content/Concepts/cpumask.html)
* [interrupt](https://en.wikipedia.org/wiki/Interrupt)
* [jiffies](https://0xax.gitbooks.io/linux-insides/content/Timers/linux-timers-1.html)
* [per-cpu](https://0xax.gitbooks.io/linux-insides/content/Concepts/per-cpu.html)
* [spinlock](https://en.wikipedia.org/wiki/Spinlock)
* [procfs](https://en.wikipedia.org/wiki/Procfs)
* [previous part](https://0xax.gitbooks.io/linux-insides/content/Timers/linux-timers-3.html)

@@ -4,7 +4,7 @@ Timers and time management in the Linux kernel. Part 5.

Introduction to the `clockevents` framework
--------------------------------------------------------------------------------

This is fifth part of the [chapter](https://0xax.gitbooks.io/linux-insides/content/Timers/index.html) which describes timers and time management related stuff in the Linux kernel. As you might noted from the title of this part, the `clockevents` framework will be discussed. We already saw one framework in the [second](https://0xax.gitbooks.io/linux-insides/content/Timers/linux-timers-2.html) part of this chapter. It was `clocksource` framework. Both of these frameworks represent timekeeping abstractions in the Linux kernel.
At first, let's refresh your memory and try to remember what the `clocksource` framework is and what its purpose is. The main goal of the `clocksource` framework is to provide a `timeline`. As described in the [documentation](https://github.com/0xAX/linux/blob/0a07b238e5f488b459b6113a62e06b6aab017f71/Documentation/timers/timekeeping.txt):

@@ -412,4 +412,4 @@ Links

* [CPU masks in the Linux kernel](https://proninyaroslav.gitbooks.io/linux-insides-ru/content/Concepts/cpumask.html)
* [deadlock](https://en.wikipedia.org/wiki/Deadlock)
* [CPU hotplug](https://www.kernel.org/doc/Documentation/cpu-hotplug.txt)
* [previous part](https://0xax.gitbooks.io/linux-insides/content/Timers/linux-timers-3.html)

@@ -4,7 +4,7 @@ Timers and time management in the Linux kernel. Part 6.

x86_64 related clock sources
--------------------------------------------------------------------------------

This is the sixth part of the [chapter](https://0xax.gitbooks.io/linux-insides/content/Timers/index.html) which describes timers and time management related stuff in the Linux kernel. In the previous [part](https://0xax.gitbooks.io/linux-insides/content/Timers/linux-timers-5.html) we saw the `clockevents` framework and now we will continue to dive into time management related stuff in the Linux kernel. This part will describe the implementation of [x86](https://en.wikipedia.org/wiki/X86) architecture related clock sources (more about the `clocksource` concept you can read in the [second part](https://0xax.gitbooks.io/linux-insides/content/Timers/linux-timers-2.html) of this chapter).
First of all we must know what clock sources may be used at `x86` architecture. It is easy to know from the [sysfs](https://en.wikipedia.org/wiki/Sysfs) or from content of the `/sys/devices/system/clocksource/clocksource0/available_clocksource`. The `/sys/devices/system/clocksource/clocksourceN` provides two special files to achieve this:

@@ -31,7 +31,7 @@ $ cat /sys/devices/system/clocksource/clocksource0/current_clocksource

tsc
```

For me it is [Time Stamp Counter](https://en.wikipedia.org/wiki/Time_Stamp_Counter). As we may know from the [second part](https://0xax.gitbooks.io/linux-insides/content/Timers/linux-timers-2.html) of this chapter, which describes internals of the `clocksource` framework in the Linux kernel, the best clock source in a system is a clock source with the best (highest) rating or in other words with the highest [frequency](https://en.wikipedia.org/wiki/Frequency).
The frequency of the [ACPI](https://en.wikipedia.org/wiki/Advanced_Configuration_and_Power_Interface) power management timer is `3.579545 MHz`. The frequency of the [High Precision Event Timer](https://en.wikipedia.org/wiki/High_Precision_Event_Timer) is at least `10 MHz`. And the frequency of the [Time Stamp Counter](https://en.wikipedia.org/wiki/Time_Stamp_Counter) depends on the processor. For example, on older processors the `Time Stamp Counter` was counting internal processor clock cycles, which means its frequency changed when the processor's frequency scaling changed. The situation has changed for newer processors: they have an `invariant Time Stamp Counter` that increments at a constant rate in all operational states of the processor. Actually, we can get its frequency in the output of `/proc/cpuinfo`. For example, for the first processor in the system:

@@ -410,4 +410,4 @@ Links

* [IRQ0](https://en.wikipedia.org/wiki/Interrupt_request_%28PC_architecture%29#Master_PIC)
* [i8259](https://en.wikipedia.org/wiki/Intel_8259)
* [initcall](http://www.compsoc.man.ac.uk/~moz/kernelnewbies/documents/initcall/kernel.html)
* [previous part](https://0xax.gitbooks.io/linux-insides/content/Timers/linux-timers-5.html)
|
||||||
|
@ -4,7 +4,7 @@ Timers and time management in the Linux kernel. Part 7.

Time related system calls in the Linux kernel
--------------------------------------------------------------------------------
This is the seventh and last part of the [chapter](https://proninyaroslav.gitbooks.io/linux-insides-ru/content/Timers/index.html) which describes timers and time management related stuff in the Linux kernel. In the previous [part](https://proninyaroslav.gitbooks.io/linux-insides-ru/content/Timers/timers-6.html), we discussed timers in the context of [x86_64](https://en.wikipedia.org/wiki/X86-64): the [High Precision Event Timer](https://en.wikipedia.org/wiki/High_Precision_Event_Timer) and the [Time Stamp Counter](https://en.wikipedia.org/wiki/Time_Stamp_Counter). Internal time management is an interesting part of the Linux kernel, but of course not only the kernel needs the `time` concept; our programs also need to know the time. In this part, we will consider the implementation of some time management related [system calls](https://en.wikipedia.org/wiki/System_call). These system calls are:

This is the seventh and last part of the [chapter](https://0xax.gitbooks.io/linux-insides/content/Timers/index.html) which describes timers and time management related stuff in the Linux kernel. In the previous [part](https://0xax.gitbooks.io/linux-insides/content/Timers/linux-timers-6.html), we discussed timers in the context of [x86_64](https://en.wikipedia.org/wiki/X86-64): the [High Precision Event Timer](https://en.wikipedia.org/wiki/High_Precision_Event_Timer) and the [Time Stamp Counter](https://en.wikipedia.org/wiki/Time_Stamp_Counter). Internal time management is an interesting part of the Linux kernel, but of course not only the kernel needs the `time` concept; our programs also need to know the time. In this part, we will consider the implementation of some time management related [system calls](https://en.wikipedia.org/wiki/System_call). These system calls are:
* `clock_gettime`;
* `gettimeofday`;
@ -57,7 +57,7 @@ The second parameter of the `gettimeofday` function is a pointer to the `timezon

Current date/time: 03-26-2016/16:42:02
```
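
The example program that prints this line sits above the hunk and is not reproduced here. A minimal, hedged sketch of the same idea (the format string and variable names are illustrative, not the book's original listing):

```C
#include <stdio.h>
#include <time.h>
#include <sys/time.h>

int main(void)
{
	char buf[40];
	struct timeval tv;
	struct tm tm;

	/* gettimeofday() fills tv with seconds and microseconds since the Epoch. */
	if (gettimeofday(&tv, NULL) == -1) {
		perror("gettimeofday");
		return 1;
	}

	/* Convert the seconds part to broken-down local time and format it
	 * in the same month-day-year/hour:minute:second style as above. */
	localtime_r(&tv.tv_sec, &tm);
	strftime(buf, sizeof(buf), "%m-%d-%Y/%H:%M:%S", &tm);
	printf("Current date/time: %s\n", buf);

	return 0;
}
```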
As you may already know, a userspace application usually does not invoke a system call directly. Before the actual system call entry is reached, we call a function from the standard library. In my case it is [glibc](https://en.wikipedia.org/wiki/GNU_C_Library), so I will consider this case. The implementation of the `gettimeofday` function is located in the [sysdeps/unix/sysv/linux/x86/gettimeofday.c](https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/unix/sysv/linux/x86/gettimeofday.c;h=36f7c26ffb0e818709d032c605fec8c4bd22a14e;hb=HEAD) source code file. The `gettimeofday` function, however, is not a usual system call: it lives in a special area called the `vDSO` (you can read more about it in the [part](https://proninyaroslav.gitbooks.io/linux-insides-ru/content/SysCall/syscall-3.html) which describes this concept).

As you may already know, a userspace application usually does not invoke a system call directly. Before the actual system call entry is reached, we call a function from the standard library. In my case it is [glibc](https://en.wikipedia.org/wiki/GNU_C_Library), so I will consider this case. The implementation of the `gettimeofday` function is located in the [sysdeps/unix/sysv/linux/x86/gettimeofday.c](https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/unix/sysv/linux/x86/gettimeofday.c;h=36f7c26ffb0e818709d032c605fec8c4bd22a14e;hb=HEAD) source code file. The `gettimeofday` function, however, is not a usual system call: it lives in a special area called the `vDSO` (you can read more about it in the [part](https://0xax.gitbooks.io/linux-insides/content/SysCall/linux-syscall-3.html) which describes this concept).
The `glibc` implementation of `gettimeofday` tries to resolve the given symbol, in our case `__vdso_gettimeofday`, by calling the `_dl_vdso_vsym` internal function. If the symbol cannot be resolved, `_dl_vdso_vsym` returns `NULL` and we fall back to the usual system call:
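
The glibc code itself is cut off by this hunk, but the pattern it implements can be sketched as follows (a hedged illustration only, not the real glibc source; `vdso_gettimeofday` stands for whatever pointer `_dl_vdso_vsym` resolved):

```C
#include <stddef.h>
#include <sys/time.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Filled in earlier from the vDSO symbol table; NULL if resolution failed. */
static int (*vdso_gettimeofday)(struct timeval *tv, struct timezone *tz);

int my_gettimeofday(struct timeval *tv, struct timezone *tz)
{
	if (vdso_gettimeofday)
		/* fast path: call straight into the vDSO, no kernel entry */
		return vdso_gettimeofday(tv, tz);

	/* slow path: fall back to the real system call */
	return syscall(SYS_gettimeofday, tv, tz);
}
```

The point of the fast path is that a call into the `vDSO` never enters the kernel, which is why `gettimeofday` is so cheap compared to an ordinary system call.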
@ -369,7 +369,7 @@ static inline bool timespec_valid(const struct timespec *ts)

}
```
which just checks that the given `timespec` does not represent a date before `1970` and that its nanoseconds value does not overflow `1` second. The `nanosleep` function ends with a call to the `hrtimer_nanosleep` function from the same source code file. The `hrtimer_nanosleep` function creates a [timer](https://proninyaroslav.gitbooks.io/linux-insides-ru/content/Timers/timers-4.html) and calls the `do_nanosleep` function, which does the main job for us. This function provides the following loop:

which just checks that the given `timespec` does not represent a date before `1970` and that its nanoseconds value does not overflow `1` second. The `nanosleep` function ends with a call to the `hrtimer_nanosleep` function from the same source code file. The `hrtimer_nanosleep` function creates a [timer](https://0xax.gitbooks.io/linux-insides/content/Timers/linux-timers-4.html) and calls the `do_nanosleep` function, which does the main job for us. This function provides the following loop:
```C
do {
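	/* The loop body is outside this hunk. Roughly (a hedged paraphrase,
	 * not the verbatim kernel code): the task is marked interruptible,
	 * the high-resolution timer is (re)started, the task schedules away,
	 * and after wake-up the loop checks whether the timer really expired
	 * or the sleep was interrupted by a signal. */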
@ -412,10 +412,10 @@ Links

* [register](https://en.wikipedia.org/wiki/Processor_register)
* [System V Application Binary Interface](http://www.x86-64.org/documentation/abi.pdf)
* [context switch](https://en.wikipedia.org/wiki/Context_switch)
* [Introduction to timers in the Linux kernel](https://proninyaroslav.gitbooks.io/linux-insides-ru/content/Timers/timers-4.html)
* [Introduction to timers in the Linux kernel](https://0xax.gitbooks.io/linux-insides/content/Timers/linux-timers-4.html)
* [uptime](https://en.wikipedia.org/wiki/Uptime#Using_uptime)
* [system calls table for x86_64](https://github.com/torvalds/linux/blob/16f73eb02d7e1765ccab3d2018e0bd98eb93d973/arch/x86/entry/syscalls/syscall_64.tbl)
* [High Precision Event Timer](https://en.wikipedia.org/wiki/High_Precision_Event_Timer)
* [Time Stamp Counter](https://en.wikipedia.org/wiki/Time_Stamp_Counter)
* [x86_64](https://en.wikipedia.org/wiki/X86-64)
* [previous part](https://proninyaroslav.gitbooks.io/linux-insides-ru/content/Timers/timers-6.html)
* [previous part](https://0xax.gitbooks.io/linux-insides/content/Timers/linux-timers-6.html)