README.ISOLCPUS
# Copyright (C) 2005-2017 The RTAI project
# This file is free software; the RTAI project
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE.
*** EXPLOITING CPUs ISOLATION ***
Written by:
Bernhard Pfund <[email protected]>
and
Paolo Mantegazza <[email protected]>
RTAI can take advantage of the possibility Linux affords to isolate CPUs
from any of its scheduling activity on multiprocessor (MP) machines.
Contents:
=========
1. Isolcpus
2. IsolCpusMask in RTAI
3. Cpusets & CPU hotplug
4. Example init script
1. Isolcpus
-----------
There should be no better explanation of what it is than the following
excerpts from Linux 'Documentation/kernel-parameters.txt':
isolcpus= [KNL,SMP] Isolate CPUs from the general scheduler.
Format:
<cpu number>,...,<cpu number>
or
<cpu number>-<cpu number>
(must be a positive range in ascending order) or a mixture
<cpu number>,...,<cpu number>-<cpu number>
This option can be used to specify one or more CPUs
to isolate from the general SMP balancing and scheduling
algorithms. The only way to move a process onto or off
an "isolated" CPU is via the CPU affinity syscalls.
<cpu number> begins at 0 and the maximum value is
"number of CPUs in system - 1".
This option is the preferred way to isolate CPUs. The
alternative -- manually setting the CPU mask of all
tasks in the system -- can cause problems and
suboptimal load balancer performance.
Add the following to the above, from the very same source:
acpi_irq_nobalance [HW,ACPI]
ACPI will not move active IRQs (default)
default in PIC mode
IRQ balancing can be disabled directly in the kernel configuration and has
little effect on what follows. Nonetheless, it is better to make sure that
Linux does not manipulate hardware-related settings on its own. So if you
forgot to disable IRQ balancing when you built your kernel, there is no need
to rebuild it; just disable it at boot time.
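As a concrete illustration, on a quad-core machine where CPUs 1-3 are to be
isolated, the two parameters above can be combined on the kernel command
line. The bootloader file path below is only an assumption (Debian-style
GRUB); adapt it to your distribution:

```shell
# /etc/default/grub (illustrative path; adapt to your bootloader)
# Isolate CPUs 1-3 from the Linux scheduler and disable ACPI IRQ balancing.
GRUB_CMDLINE_LINUX_DEFAULT="isolcpus=1-3 acpi_irq_nobalance"
# Then regenerate the bootloader configuration, e.g. with update-grub.
```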
From the above it should be easy to infer that by using "isolcpus" you can
be sure that Linux will have none of its tasks running on the isolated CPUs.
That is not entirely true, since a few kernel threads and kworkers are
replicated and assigned to each of the available CPUs. It is assumed that
they will have very little to do if no interrupt arrives on the isolated
CPUs. Thus, if RTAI takes care to avoid any hard interrupt on the isolated
CPUs, they will be fairly well isolated from Linux.
Nevertheless, to further help isolation, RTAI removes any kernel thread and
kworker from the isolated CPUs when its HAL module is loaded (insmod). Even
if such an action does not prevent their later dynamic creation, there is
some evidence that it is useful anyhow. It would also be possible to block
such dynamic creations, an action that is postponed until more experience is
gathered with the use of the above.
To easily check whether anything has later been dynamically assigned to the
isolated CPUs, the following might be of help:
cat /proc/*/task/*/status | grep "allowed:" | grep <mask>
where <mask> is the affinity mask you want (or do not want) to see.
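The check above can be wrapped in a small helper. The sketch below is only
illustrative: 'masks_overlap' tests whether two hex affinity masks share at
least one CPU bit, and 'list_intruders' (a hypothetical name) scans /proc
for threads still allowed on the isolated CPUs, whose mask is passed in:

```shell
#!/bin/sh
# Print "yes" if two hex CPU masks (e.g. 0xE and 0x2) share at least one bit.
masks_overlap()
{
    if [ $(( $1 & $2 )) -ne 0 ]; then
        echo yes
    else
        echo no
    fi
}

# Hypothetical helper: list the status files of all threads whose
# Cpus_allowed mask intersects the isolated mask passed as $1.
list_intruders()
{
    for st in /proc/[0-9]*/task/[0-9]*/status; do
        mask=$(awk '/^Cpus_allowed:/ { gsub(",", "", $2); print "0x" $2 }' "$st" 2>/dev/null)
        [ -n "$mask" ] || continue
        [ "$(masks_overlap "$mask" "$1")" = yes ] && echo "$st"
    done
}
```

For example, 'list_intruders 0xE' prints the status file of every thread
still allowed on CPUs 1-3.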
With the above in place, only Linux rescheduling interrupts should appear on
the isolated CPUs. They are not difficult to block either, but it does not
seem that much more is to be gained by doing so. Therefore, like the point
above, it is set aside for now as a possible future TODO action.
Finally, by combining all of the above with the possibility of forcing RTAI
enabled Linux processes/threads/kthreads to stay on the isolated CPUs, the
latter will find themselves processing just RTAI real time duties, with a
significant reduction of latencies/jitters. In the case of many RTAI tasks
running on the isolated CPUs, all of the issues producing latency/jitter
will still have an effect, but it will be reduced to the least possible,
without any added bus/pipe/cache interference from Linux.
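Pinning Linux-side processes onto, or away from, the isolated CPUs is done
through the CPU affinity syscalls; from the shell, the taskset utility wraps
them. A couple of illustrative invocations (the PID and program names below
are made up):

```shell
# Launch a helper already confined to the isolated CPUs 1-3
taskset -c 1-3 ./my_rt_helper

# Move an already running process (hypothetical PID 1234) onto CPU 2
taskset -pc 2 1234

# Keep an ordinary daemon off the isolated CPUs, i.e. on CPU 0 only
taskset -c 0 ./my_ordinary_daemon
```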
It should be noticed, however, that a single interrupt will remain managed
by Linux, i.e. the one for the local APIC timer. Such an interrupt will not
come from the hardware but from a software inter-processor interrupt (IPI)
generated by RTAI to keep Linux happy. It is likely that such an IPI could
be sent only to non-isolated CPUs, but no testing of such a solution has
been done up to now. The possible change requires just the substitution of a
single line of code, but I'm afraid it could damage Linux somehow.
2. IsolCpusMask in RTAI
-----------------------
What you then have to do on the RTAI side is to load the core RTAI HAL
module using something like: "insmod rtai_hal.ko IsolCpusMask=<xxx>", where
<xxx> is the mask of isolated CPUs. Please notice that Linux uses a list of
isolated CPUs while RTAI requires the corresponding mask (I'm lazy and let
you do the conversion).
Suppose you have a quadcore system with cores #0 - #3 and want everything but
core #0 to be used by RTAI. In that case, just like in the cpuset example below,
your IsolCpusMask would be 0xE (1110).
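Since the Linux "isolcpus" list has to be translated into a mask by hand, a
small converter may help. The sketch below is only illustrative (the
function name 'list_to_mask' is made up):

```shell
#!/bin/sh
# Convert a Linux CPU list such as "1-3" or "0,2-3" into the
# corresponding hex mask, e.g. "1-3" becomes 0xE.
list_to_mask()
{
    mask=0
    for part in $(echo "$1" | tr ',' ' '); do
        # Each comma-separated part is either a range "lo-hi" or a single CPU
        case "$part" in
            *-*) lo=${part%-*}; hi=${part#*-} ;;
            *)   lo=$part;      hi=$part      ;;
        esac
        i=$lo
        while [ "$i" -le "$hi" ]; do
            mask=$(( mask | (1 << i) ))
            i=$(( i + 1 ))
        done
    done
    printf '0x%X\n' "$mask"
}
```

For example, 'list_to_mask 1-3' prints 0xE, ready to be passed as
IsolCpusMask=0xE.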
What will happen next is that RTAI will divert all of Linux interrupts away
from the isolated CPUs, keeping those requested for RTAI (rt_request_irq)
on the isolated CPUs, without you having to care for it. In such a way there
will be no Linux activity on the isolated CPUs whatsoever and they will remain
within complete RTAI ownership.
Finally it is up to you to exploit such a feature by assigning all of your
tasks to the isolated CPUs, according to your needs, by using:
"rt_task_init_cpuid", "rt_thread_init_cpuid", "rt_task_init_schmod".
Notice that you have also the possibility of further specialising your CPUs
isolation scheme by diverting any real time interrupt you use to a single
specific CPU, or CPU cluster within the isolated CPUs, by using the RTAI
function: rt_assign_irq_to_cpu.
Naturally you can set RTAI "IsolCpusMask" even without setting the Linux
"isolcpus" list, still with some beneficial effects though, for sure, they
will not be as good as with the complete isolation setting described
previously.
Starting from RTAI-4.1, rtai_hal.ko can inherit the CPU isolation mask from
the kernel, so there is no longer any need to set IsolCpusMask when
rtai_hal.ko is loaded. Nonetheless, if IsolCpusMask is not null it will
supersede the one assigned by Linux at boot time.
3. Cpusets & CPU hotplug
------------------------
If you want to make sure nothing unwanted is _ever_ scheduled on a specific
CPU or core, the hotplug system is your friend. (Logically) offlined CPUs
are removed from the Linux scheduler and thus no longer get tasks assigned.
Once brought online again and assigned to a cpuset, these CPUs can run RTAI
tasks. See 'Documentation/cpusets.txt' and 'Documentation/cgroup.txt'.
Consequently you should inform RTAI about the allowed CPUs (1-3 in the example)
and initialise your tasks accordingly (see section 2).
The example below shows a quadcore CPU partitioned into three sections.
+-Quadcore CPU---------------------+
| |
| +-Production-+ |
| +--------+ | +--------+ | |
| | | | | | | |
| | Core 0 | | | Core 2 | | |
| | | | | | | |
| +--------+ | +--------+ | |
| | | |
| | | |
| +-RTnet------+ | | |
| | +--------+ | | +--------+ | |
| | | | | | | | | |
| | | Core 1 | | | | Core 3 | | |
| | | | | | | | | |
| | +--------+ | | +--------+ | |
| +------------+ +------------+ |
+----------------------------------+
- The root set where only core #0 remains. The kernel and all non real-time
tasks run on that single core.
- The 'RTnet' cpuset on core #1, on which all RTDM-related tasks are
scheduled.
- The 'Production' cpuset where the real-time controller tasks are executed.
This cpuset has load-balancing enabled, hence tasks assigned to that set are
scheduled on either core #2 or #3 and can be moved within the scheduler
domain if necessary.
Memory isolation is not used in this example, and no overlapping sets exist;
overlap is even actively prevented by using the cpu_exclusive attribute.
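Once the cpusets exist (the init script in section 4 creates them under
/dev/cpuset), a task is attached to a set by writing its PID into the set's
tasks file. A sketch, assuming the mount point and set names from the
example:

```shell
# Move the current shell (and thus its future children) into 'production'
echo $$ > /dev/cpuset/production/tasks

# Move a hypothetical RTnet-related task (PID 1234) into the 'rtnet' set
echo 1234 > /dev/cpuset/rtnet/tasks
```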
By appropriately exploiting both isolation methods illustrated above, it is
usually possible to reduce latencies to single-digit microsecond figures,
especially if the hard timer periodic mode is adopted.
4. Example init script
----------------------
############# START OF SCRIPT #############
# Environment
MKDIR=/bin/mkdir
MOUNT=/bin/mount
ECHO=/bin/echo
CPUSET=/dev/cpuset
DEVICES=/sys/devices/system/cpu
SLEEP=/bin/sleep
NAME=cpuset

reset_cpus()
{
    # Shutdown CPU cores 1 - 3
    $ECHO 0 > $DEVICES/cpu1/online
    $ECHO 0 > $DEVICES/cpu2/online
    $ECHO 0 > $DEVICES/cpu3/online

    # Give the cores some time to settle
    $SLEEP 1

    # Enable CPU cores 1 - 3
    $ECHO 1 > $DEVICES/cpu1/online
    $ECHO 1 > $DEVICES/cpu2/online
    $ECHO 1 > $DEVICES/cpu3/online
}

mount_cpuset()
{
    $MKDIR -p $CPUSET

    # Configure the root set
    if [ -e $CPUSET ]; then
        $MOUNT -t cgroup -ocpuset cpuset $CPUSET
        # Disable load balancing in the root set
        $ECHO 0 > $CPUSET/cpuset.sched_load_balance
    fi
}

create_rtnet_set()
{
    # Configure the RTnet cpuset
    if [ -e $CPUSET ]; then
        $MKDIR $CPUSET/rtnet
        $ECHO 1 > $CPUSET/rtnet/cpuset.cpus
        # A cpuset also needs a memory node before tasks can be attached
        $ECHO 0 > $CPUSET/rtnet/cpuset.mems
        $ECHO 1 > $CPUSET/rtnet/cpuset.cpu_exclusive
        # Disable load balancing in the rtnet set
        $ECHO 0 > $CPUSET/rtnet/cpuset.sched_load_balance
    fi
}

create_production_set()
{
    # Configure the production cpuset
    if [ -e $CPUSET ]; then
        $MKDIR $CPUSET/production
        $ECHO 2,3 > $CPUSET/production/cpuset.cpus
        # A cpuset also needs a memory node before tasks can be attached
        $ECHO 0 > $CPUSET/production/cpuset.mems
        $ECHO 1 > $CPUSET/production/cpuset.cpu_exclusive
        # Enable load balancing in the production set
        $ECHO 1 > $CPUSET/production/cpuset.sched_load_balance
    fi
}

case "$1" in
    start)
        reset_cpus
        mount_cpuset
        create_rtnet_set
        create_production_set
        ;;
    stop)
        reset_cpus
        ;;
    *)
        echo "Usage: /etc/init.d/$NAME {start|stop}" >&2
        exit 2
        ;;
esac

exit 0
############# END OF SCRIPT #############