<!DOCTYPE html>
<html>
<head>
<title>Ch. 2 - Let's get you a robot</title>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<link rel="canonical" href="http://manipulation.csail.mit.edu/robot.html" />
<script src="https://hypothes.is/embed.js" async></script>
<script type="text/javascript" src="chapters.js"></script>
<script type="text/javascript" src="htmlbook/book.js"></script>
<script src="htmlbook/mathjax-config.js" defer></script>
<script type="text/javascript" id="MathJax-script" defer
src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-chtml.js">
</script>
<script>window.MathJax || document.write('<script type="text/javascript" src="htmlbook/MathJax/es5/tex-chtml.js" defer><\/script>')</script>
<link rel="stylesheet" href="htmlbook/highlight/styles/default.css">
<script src="htmlbook/highlight/highlight.pack.js"></script> <!-- http://highlightjs.readthedocs.io/en/latest/css-classes-reference.html#language-names-and-aliases -->
<script>hljs.initHighlightingOnLoad();</script>
<link rel="stylesheet" type="text/css" href="htmlbook/book.css" />
</head>
<body onload="loadChapter('manipulation');">
<div data-type="titlepage" pdf="no">
<header>
<h1><a href="index.html" style="text-decoration:none;">Robotic Manipulation</a></h1>
<p data-type="subtitle">Perception, Planning, and Control</p>
<p style="font-size: 18px;"><a href="http://people.csail.mit.edu/russt/">Russ Tedrake</a></p>
<p style="font-size: 14px; text-align: right;">
© Russ Tedrake, 2020-2022<br/>
Last modified <span id="last_modified"></span>.<br/>
<script>
var d = new Date(document.lastModified);
document.getElementById("last_modified").innerHTML = d.getFullYear() + "-" + (d.getMonth()+1) + "-" + d.getDate();</script>
<a href="misc.html">How to cite these notes, use annotations, and give feedback.</a><br/>
</p>
</header>
</div>
<p pdf="no"><b>Note:</b> These are working notes used for <a
href="http://manipulation.csail.mit.edu/Fall2022/">a course being taught
at MIT</a>. They will be updated throughout the Fall 2022 semester. <!-- <a
href="https://www.youtube.com/channel/UChfUOAhz7ynELF-s_1LPpWg">Lecture videos are available on YouTube</a>.--></p>
<table style="width:100%;" pdf="no"><tr style="width:100%">
<td style="width:33%;text-align:left;"><a class="previous_chapter" href=intro.html>Previous Chapter</a></td>
<td style="width:33%;text-align:center;"><a href=index.html>Table of contents</a></td>
<td style="width:33%;text-align:right;"><a class="next_chapter" href=pick.html>Next Chapter</a></td>
</tr></table>
<script type="text/javascript">document.write(notebook_header('robot'))
</script>
<!-- EVERYTHING ABOVE THIS LINE IS OVERWRITTEN BY THE INSTALL SCRIPT -->
<chapter style="counter-reset: chapter 1"><h1>Let's get you a robot</h1>
<p>In this chapter we're going to outfit your <a
href="https://en.wikipedia.org/wiki/Mecha">mech</a>. I want to make sure you
understand the robot hardware that we've selected for these notes, and how it
compares to the other hardware available today. You should also come away
with an understanding of how we simulate the robot and what commands you
can send to the robot interface.</p>
<section><h1>Robot description files</h1>
<p>Although we are going to focus primarily on one particular set of
hardware for the remainder of these notes, in this chapter I'll provide
software examples with a number of different robots. One of the great
things about modern robotics is that many of the tools we will develop over
the course of these notes are quite general, and can be transferred from
one robot to another easily. I could imagine a future version of these
notes where you really do get to build out your robot in this chapter, and
use your customized robot for the remaining chapters!</p>
<p>The ability to easily simulate/control a variety of robots is made
possible in part by the proliferation of common file formats for describing
our robots. Unfortunately, the field has not converged on a single
preferred format (yet), and each of them have their quirks. Drake currently
loads <a href="http://wiki.ros.org/urdf">Universal Robot Description
Format</a> (URDF), <a href="http://sdformat.org/">Simulation Description
Format</a> (SDF), and has limited support for the <a
href="https://mujoco.readthedocs.io/en/latest/XMLreference.html">MuJoCo
format</a> (MJCF). The Drake developers have been trying to upstream
improvements to SDF rather than start yet another format, but we do have a
very simple YAML specification called <a
href="https://drake.mit.edu/doxygen_cxx/structdrake_1_1multibody_1_1parsing_1_1_model_directives.html">Drake
Model Directives</a> which makes it very quick and easy to load multiple
robots/objects from these different file formats into one simulation; you
saw an example of this in the
<script>document.write(notebook_link('force', deepnote, "introduction chapter notebook"))</script>.</p>
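<p>As a minimal sketch (hedged: the YAML file name here is hypothetical, and the exact parsing function signatures have shifted across Drake versions), loading a model directives file into a plant looks roughly like this:</p>
<pre><code class="language-python">from pydrake.multibody.parsing import (LoadModelDirectives, Parser,
                                       ProcessModelDirectives)
from pydrake.multibody.plant import MultibodyPlant

# "my_station.yaml" (a hypothetical file) would list models and welds, e.g.:
#   directives:
#   - add_model:
#       name: iiwa
#       file: package://drake/manipulation/models/iiwa_description/sdf/iiwa7_no_collision.sdf
#   - add_weld:
#       parent: world
#       child: iiwa::iiwa_link_0

plant = MultibodyPlant(time_step=1e-3)
parser = Parser(plant)
directives = LoadModelDirectives("my_station.yaml")
ProcessModelDirectives(directives, plant, parser)
plant.Finalize()
</code></pre>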
</section>
<section><h1>Arms</h1>
    <p>There are a lot of robotic arms available on the market. So how
    does one choose? Cost, reliability, usability, payload, range of
    motion, ...; there are many important considerations, and the choices
    that make sense for a research lab can be very different from the
    choices that make sense for a startup.</p>
<example><h1>Robot arms</h1>
<p>I've put together a simple example to let you explore some of the
various robot arms that are popular today. Let me know if your favorite arm isn't on the list yet!</p>
<script>document.write(notebook_link('robot', d=deepnote, link_text="", notebook='inspector'))</script>
</example>
    <p>There is one particular requirement that, if we insist our robot
    satisfy it, quickly winnows the field to only a few viable platforms:
    joint-torque sensing and control. Out of the torque-controlled robots
    on the market, I've chosen the Kuka LBR iiwa to use throughout these
    notes (I will try to use the lower case "iiwa" to be <a
    href="https://www.kuka.com/en-us/products/robotics-systems/industrial-robots/lbr-iiwa">consistent
    with the manufacturer</a>, but it looks wrong to me every time!). </p>
<figure>
<img width="60%" src="https://www.robots.com/images/robots/KUKA/Collaborative/KUKA_LBR_IIWA_7_0001.png"/>
<figcaption><a href="https://www.kuka.com/en-us/products/robotics-systems/industrial-robots/lbr-iiwa">Kuka LBR iiwa robot</a>. This one has a 7kg payload.</figcaption>
</figure>
<p>It's not absolutely clear that the joint-torque sensing and control
feature is required, even for very advanced applications, but as a
researcher who cares a great deal about the contact interactions between my
robots and the world, I prefer to have the capability and explore whether I
need it rather than wonder what life might have been like. To better
understand why, let us start by understanding the difference between most
robots, which are position-controlled, and the handful of robots that have
accepted the additional cost and complexity to provide torque sensing and
control.</p>
<subsection><h1>Position-controlled robots</h1>
<figure>
<img width="35%" src="https://www.robots.com/images/robots/Universal/Universal_UR10_0002.jpg" />
<img width="55%" src="https://www07.abb.com/api/ir/getimage/36fb710b-54e0-4e53-8383-aacde553ec56/1" />
      <figcaption>Two popular position-controlled manipulators. (Left) The UR10 from Universal Robots. (Right) The ABB YuMi.</figcaption>
</figure>
<p>Most robot arms today are "position controlled" -- given a desired
joint position (or joint trajectory), the robot executes it with
relatively high precision. Basically all arms can be position controlled
-- if the robot offers a torque control interface (with sufficiently high
bandwidth) then we can certainly regulate positions, too. In practice,
calling a robot "position controlled" is a polite way of saying that it
does not offer torque control. Do you know why position control and not
torque control is the norm?</p>
<p>Lightweight arms like the examples above are actuated with electric
motors. For a reasonably high-quality electric motor (with windings
designed to minimize torque ripple, etc), we expect the torque that the
motor outputs to be directly proportional to the current that we apply:
$$\tau_{motor} = k_t i,$$ where $\tau_{motor}$ is the motor torque, $i$ is
the applied current, and $k_t$ is the "<a
href="https://en.wikipedia.org/wiki/Motor_constants">motor torque
constant</a>". (Similarly, applied voltage has a simple (affine)
relationship with the motor's steady-state velocity). If we can control
the current, then why can we not control the torque?
</p>
<p>The short answer is that to achieve reasonable cost and weight, we
typically choose small electric motors with large gear reductions, and
gear reductions come with a number of dynamic effects that are very
difficult to model -- including backlash, vibration, and friction. So the
simple relationship between current and torque breaks down. Conventional
wisdom is that for large gear ratios (say $\gg 10$), the unmodeled terms
are significant enough that they cannot be ignored, and torque is no
longer simply related to current.</p>
<subsubsection><h1>Position Control.</h1>
<p>How can we overcome this challenge of not having a good model of the
transmission dynamics? Regulating the current or speed <i>of the
motor</i> only requires sensors on the motor side of the transmission.
      To accurately regulate the joint, we typically need to add sensors on
the output side of the transmission. Importantly, although the torques
due to the transmission are not known precisely, they are also not
arbitrary -- for instance they will never add energy into the system.
Most importantly, we can be confident that there is a <i>monotonically
increasing</i> relationship between the current that we put into the
motor and the torque at the joint, and ultimately the acceleration of
the joint. Note that I chose the term monotonic carefully, meaning
"non-decreasing" but <i>not</i>
implying "strictly increasing", because, for instance, when a joint is
starting from rest, static friction will resist small torques without
having any acceleration at the output.</p>
<p>The most common sensor to add to the joint is a position sensor --
typically an encoder or potentiometer -- these are inexpensive,
accurate, and robust. In practice, we think of these as providing
(after some signal filtering/conditioning) accurate measurements of the
joint position and joint velocity -- joint accelerations can also be
      obtained by differentiating twice, but are generally considered
      noisier and less suitable for use in tight feedback loops.  Position
sensors are sufficient for accurately tracking desired position
trajectories of the arm. For each joint, if we denote the joint
position as $q$ and we are given a desired trajectory $q^d(t)$, then I
can track this using <a
href="https://en.wikipedia.org/wiki/PID_controller">proportional-integral-derivative
(PID) control</a>: $$\tau = k_p (q^d - q) + k_d (\dot{q}^d - \dot{q}) +
k_i \int (q^d - q) dt,$$ with $k_p$, $k_d$, and $k_i$ being the
position, velocity, and integral gains. PID control has a rich theory,
and a trove of knowledge about how to choose the gains, which I will
not reproduce here. I will note, however, that when we simulate
position-controlled robots we often need to use different gains for the
physical robot and for our simulations. This is due to the transmission
dynamics, but also the fact that PID controllers in hardware typically
output voltage commands (via <a
href="https://en.wikipedia.org/wiki/Pulse-width_modulation">pulse-width
modulation</a>) instead of current commands. Closing this modeling gap
has traditionally not been a priority for robot simulation -- there are
enough other details to get right which dominate the "sim-to-real" gap
-- but I suspect that as the field matures the mainstream robotics
simulators will eventually capture this, too.</p>
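      <p>To make the control law concrete, here is a minimal sketch of one
      discrete-time PID update for a single joint; the gains are arbitrary
      placeholders, not tuned values for any particular robot:</p>
<pre><code class="language-python">def pid_torque(q_desired, v_desired, q, v, integral, dt,
               kp=100.0, kd=10.0, ki=1.0):
    """One discrete-time PID update for a single joint.

    Returns the commanded torque and the updated integral state.
    """
    error = q_desired - q
    integral += error * dt
    tau = kp * error + kd * (v_desired - v) + ki * integral
    return tau, integral
</code></pre>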
<p>Some of you are thinking, "I can train a neural network to model
<i>anything</i>, I'm not afraid of difficult-to-model transmissions!" I
do think there is reason to be optimistic about this approach; there
are a number of initial demonstrations in this direction (e.g.
<elib>Hwangbo19</elib>). This is not quite as useful as if we can have
a first-principles model that can generalize to new actuators from a
few parameters in a description file, but could be very productive.</p>
<todo>Think through the implications of the PWM voltage command instead
of direct motor current.</todo>
<todo>Add a simulation of a single joint against gravity with PID
control gains on sliders, following a sinusoidal trajectory.</todo>
</subsubsection>
<subsubsection><h1>An aside: link dynamics with a transmission.</h1>
<p>One thing that might be surprising is that, despite the fact that
the joint dynamics of a manipulator are highly coupled and state
dependent, the PID gains are often chosen independently for each joint,
and are constant (not <a
href="https://en.wikipedia.org/wiki/Gain_scheduling">gain-scheduled</a>
      ). Wouldn't you expect that the motor commands required for, e.g., a
      robot arm at full extension holding a milk jug would be very different
      from the commands required when it is unloaded in a vertical hanging
      position? Surprisingly, the required gains/commands might not be as
      different as one would think.</p>
      <p>Electric motors are most efficient at high speeds (often hundreds or
      thousands of rotations per minute). We probably don't actually want our robots
to move that fast even if they could! So nearly all electric robots
have fairly substantial gear reductions, often on the order of 100:1;
the transmission output turns one revolution for every 100 rotations of
the motor, and the output torque is 100 times greater than the motor
torque. For a gear ratio, $n$, actuating a joint $q$, we have
$$q_{motor} = n q,\quad \dot{q}_{motor} = n \dot{q}, \quad
\ddot{q}_{motor} = n \ddot{q}, \qquad \tau_{motor} = \frac{1}{n} \tau.$$
Interestingly, this has a fairly profound impact on the resulting
dynamics (given by $f = ma$), even for a single joint. Writing the
relationship between joint torque and joint acceleration (no motors
yet), we can write $ma = \sum f$ in the rotational coordinates as
$$I_{arm} \ddot{q} = \tau_{gravity} + \tau,$$ where $I_{arm}$ is the
moment of inertia. For example, for a <a
href="http://underactuated.mit.edu/pend.html"
target="underactuated">simple pendulum</a>, we might have $$ml^2
\ddot{q} = - mgl\sin{q} + \tau.$$ But the applied joint torque $\tau$
actually comes from the motor -- if we write this equation in terms of
motor coordinates we get: $$\frac{I_{arm}}{n} \ddot{q}_{motor} =
\tau_{gravity} + n\tau_{motor}.$$ If we divide through by $n$, and take
into account the fact that the motor itself has inertia (e.g. from the
large spinning magnets) that is not affected by the gear ratio, then we
obtain: $$\left(I_{motor} + \frac{I_{arm}}{n^2}\right) \ddot{q}_{motor}
= \frac{\tau_{gravity}}{n} + \tau_{motor}.$$</p>
<p>It's interesting to note that, even though the mass of the motors
might make up only a small fraction of the total mass of the robot, for
highly geared robots they can play a significant role in the dynamics
of the joints. We use the term <i>reflected inertia</i> to denote the
inertial load that is felt on the opposite side of a transmission, due
to the scaling effect of the transmission. The "reflected inertia" of
the arm at the motor is cut by the square of the gear ratio; or the
"reflected inertia" of the motor at the arm is multiplied by the square
of the gear ratio. This has interesting consequences -- as we move to
the multi-link case, we will see that $I_{arm}$ is a <a
href="http://underactuated.mit.edu/multibody.html"
target="underactuated">state-dependent function that captures the
inertia of the actuated link and also the inertial coupling of the
other joints in the manipulator</a>. $I_{motor}$, on the other hand, is
      constant and only affects the local joint. For large gear ratios, the
$I_{motor}$ terms dominate the other terms, which has two important
effects: 1) it effectively diagonalizes the manipulator equations (the
inertial coupling terms are relatively small), and 2) the dynamics are
relatively constant throughout the workspace (the state-dependent terms
are relatively small). These effects make it relatively easy to tune
constant feedback gains for each joint individually that perform well
in all configurations.</p>
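      <p>A quick numerical sketch makes the point; the numbers below are
      illustrative assumptions, not iiwa specifications:</p>
<pre><code class="language-python">n = 100.0        # gear ratio
I_motor = 1e-4   # rotor inertia [kg m^2]
I_arm = 1.0      # (configuration-dependent) arm inertia at the joint [kg m^2]

# Viewed from the joint side, the motor inertia is multiplied by n^2, so a
# rotor with 10,000x less inertia than the arm contributes an equal share:
I_joint_side = n**2 * I_motor + I_arm   # = 2.0 kg m^2
# Equivalently, from the motor side, the arm inertia is divided by n^2:
I_motor_side = I_motor + I_arm / n**2   # = 2e-4 kg m^2
</code></pre>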
<todo>The WSG is a great example of reflected inertia!</todo>
</subsubsection>
</subsection>
<subsection><h1>Torque-controlled robots</h1>
<p>Although not as common, there are a number of robots that do support
direct control of the joint torques. There are a handful of ways that
this capability can be realized.</p>
<p>It <i>is</i> possible to actuate a robot using electric motors that
require only a small gear reduction (e.g. $\le$ 10:1) where the frictional
forces are negligible. In the past, these "direct-drive
robots"<elib>Asada87</elib> had enormous motors and limited payloads.
More recently, robots like the <a
href="https://robots.ieee.org/robots/wam/">Barrett WAM</a> arm used cable
      drives to keep the arm light while locating the large motors in the base.  And
just in the last few years, we've seen progress in high-torque outrunner
and frameless motors bringing in a new generation of low-cost,
"quasi-direct-drive" robots: e.g. MIT Cheetah <elib>Wensing17</elib>, <a
href="http://rll.berkeley.edu/blue/">Berkeley Blue</a>, and <a
href="https://www.halodi.com/">Halodi Eve</a>.</p>
<p>Hydraulic actuators provide another solution for generating large
torques without large transmissions. Sarcos had a series of <a
href="https://www.youtube.com/watch?v=VDxWHNtZvyI">torque-controlled
arms</a> (and humanoids), and many of the most famous robots from <a
href="https://www.bostondynamics.com/robots">Boston Dynamics</a> are based
on hydraulics (though there is an increasing trend towards electric
motors). These robots typically have a single central pump and each
actuator has a (lightweight) valve that can shunt fluid through the
actuator or through a bypass; the differential pressure across the
actuator is at least approximately related to the resulting
force/torque.</p>
<p>Another approach to torque control is to keep the large gear-ratio
motors, but add sensors to directly measure the torque at the joint side
of the actuator. This is the approach used by the Kuka iiwa robot that we
use in the example throughout this text; the iiwa actuators have <a
href="https://en.wikipedia.org/wiki/Strain_gauge">strain gauges</a>
      integrated into the transmission.  However, there is a trade-off between
the stiffness of the transmission and the accuracy of the force/torque
measurement <elib>Kashiri17</elib> -- the iiwa transmission includes an
explicit "Flex Spline" with a stiffness around 5000 Nm/rad
<elib>Wedler12</elib>. Taking this idea to an extreme, Gill Pratt
proposed "series-elastic actuators" that have even lower stiffness springs
in the transmission, and proposed measuring joint position on both the
motor and joint sides of the transmission to estimate the applied torques
<elib>Pratt95b</elib>. For example, the <a
href="https://en.wikipedia.org/wiki/Baxter_(robot)">Baxter</a> and Sawyer
robots from Rethink used series-elastic actuators; I don't think they
ever published the spring stiffness values but similarly-motivated
series-elastic actuators from <a
href=http://docs.hebi.us/hardware.html>HEBI robotics are closer to 100
Nm/rad</a>. Even for the iiwa actuators, the joint elasticity is
      significant enough that the low-level controllers go to great lengths to
take it into account explicitly in order to achieve high-performance
control of the joints<elib>Albu-Schaffer07</elib>. We will discuss these
details when we get to the chapter covering <a href="force.html">force
control</a>.</p>
</subsection>
<subsection><h1>A proliferation of hardware</h1>
<p>The low-cost torque-controlled arms that I mentioned above are just
      the beginning of what promises to be a massive proliferation of robotic
arms. During the pandemic, I saw a number of people using inexpensive
robots like the <a
href="https://www.ufactory.cc/xarm-collaborative-robot">xArm</a>
at home. As demand increases, costs will continue to come down.</p>
<p>Let me just say that, compared to working on legged robots, where for
decades we did our research on laboratory prototypes built by graduate
students (and occasionally professors!) in the machine shop down the hall,
the availability of professionally engineered, high-quality, high-uptime
hardware is an absolute treat. This also means that we can test
algorithms in one lab and have another lab perhaps at another university
testing algorithms on almost identical hardware; this facilitates levels
of repeatability and sharing that were impossible before. The fact that
the prices are coming down, which will mean many more similar robots in
many more labs/environments, is one of the big reasons why I am so
optimistic about the next few years in the field.</p>
<p>It's a good time to be working on manipulation!</p>
</subsection>
<subsection><h1>Simulating the Kuka iiwa</h1>
<p>It's time to simulate our chosen robotic arm. The first step is to
obtain a robot description file (typically URDF or SDF). For convenience,
we <a
href="https://github.com/RobotLocomotion/drake/tree/master/manipulation/models">ship</a>
the models for a few robots, including iiwa, with Drake. If you're
interested in simulating a different robot, you can find either a URDF or
SDF describing most commercial robots somewhere online. But a word of
warning: the quality of these models can vary wildly. We've seen
surprising errors in even the kinematics (link lengths, geometries, etc),
but the dynamics properties (inertia, friction, etc) in particular are
often not accurate at all. Sometimes they are not even mathematically
consistent (e.g. it is possible to specify an inertial matrix in URDF/SDF
which is not physically realizable by any rigid body). Drake will
complain if you ask it to load a file with this sort of violation; we
would rather alert you early than start generating bogus simulations.
There is also increasingly good support for exporting to a robot format
directly from CAD software like <a
href="http://wiki.ros.org/sw_urdf_exporter">Solidworks</a>.</p>
<p>Now we have to import this robot description file into our physics
engine. In Drake, the physics engine is called
<code>MultibodyPlant</code>. The term "plant" may seem odd but it is
pervasive; it is the word used in the controls literature to represent a
physical system to be controlled, which originated in the control of
chemical plants. This connection to control theory is very important to
me. Not many physics engines in the world go to the lengths that Drake
does to make the physics engine compatible with control-theoretic design
and analysis.</p>
<p>The <a
href="https://drake.mit.edu/doxygen_cxx/classdrake_1_1multibody_1_1_multibody_plant.html"><code>MultibodyPlant</code></a>
has a class interface with a rich library of methods to work with the
kinematics and dynamics of the robot. If you need to compute the location
of the center of mass, or a kinematic Jacobian, or any similar queries,
then you'll be using this class interface. A
<code>MultibodyPlant</code> also implements the interface to be used as a
<code>System</code>, with input and output ports, in Drake's <a
href="https://medium.com/toyotaresearch/drake-model-based-design-in-the-age-of-robotics-and-machine-learning-59938c985515">systems
framework</a>. In order to simulate, or analyze, the combination of a
      <code>MultibodyPlant</code> with other systems like our perception, planning, and
control systems, we will be assembling <a
href="https://en.wikibooks.org/wiki/Control_Systems/Block_Diagrams">block
diagrams</a>.</p>
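      <p>As a small, hedged example of that class interface (assuming a
      finalized <code>plant</code> and a corresponding
      <code>plant_context</code> already exist; "iiwa_link_7" is the name of
      the last link in the shipped iiwa model):</p>
<pre><code class="language-python">import numpy as np
from pydrake.multibody.tree import JacobianWrtVariable

# Center of mass of the full model, expressed in the world frame.
p_WCom = plant.CalcCenterOfMassPositionInWorld(plant_context)
# 6 x num_velocities spatial-velocity Jacobian of the last link's origin.
J_W = plant.CalcJacobianSpatialVelocity(
    plant_context, JacobianWrtVariable.kV,
    plant.GetFrameByName("iiwa_link_7"), np.zeros(3),
    plant.world_frame(), plant.world_frame())
</code></pre>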
<div>
<script src="htmlbook/js-yaml.min.js"></script>
<script type="text/javascript">
var sys = jsyaml.load(`
name: MultibodyPlant
input_ports:
- applied_generalized_force
- applied_spatial_force
- <em style="color:gray">model_instance_name[i]</em>_actuation
- <span style="color:green">geometry_query</span>
output_ports:
- continuous_state
- body_poses
- body_spatial_velocities
- body_spatial_accelerations
- generalized_acceleration
- reaction_forces
- contact_results
- <em style="color:gray">model_instance_name[i]</em>_continuous_state
- '<em style="color:gray">
model_instance_name[i]</em>_generalized_acceleration'
- '<em style="color:gray">
model_instance_name[i]</em>_generalized_contact_forces'
- <span style="color:green">geometry_pose</span>`);
document.write(system_html(sys, "https://drake.mit.edu/doxygen_cxx/classdrake_1_1multibody_1_1_multibody_plant.html"));
</script>
</div>
<p>As you might expect for something as complex and general as a physics
engine, it has many input and output ports; most of them are optional.
I'll illustrate the mechanics of using these in the following example.</p>
<example><h1>Simulating the passive iiwa</h1>
<p>It's worth spending a few minutes with this example, which should
help you understand not only the physics engine, but some of the basic
mechanics of working with simulations in Drake.</p>
<script>document.write(notebook_link('robot', d=deepnote, link_text="",notebook='simulation'))</script>
</example>
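      <p>For readers who want the skeleton inline, here is a minimal sketch
      of the passive simulation (hedged: the <code>Parser</code> method name
      and the model path have changed across Drake versions; the notebook
      above has a maintained, working version):</p>
<pre><code class="language-python">import numpy as np
from pydrake.multibody.parsing import Parser
from pydrake.multibody.plant import AddMultibodyPlantSceneGraph
from pydrake.systems.analysis import Simulator
from pydrake.systems.framework import DiagramBuilder

builder = DiagramBuilder()
# Adds both the MultibodyPlant and the SceneGraph, and wires them together.
plant, scene_graph = AddMultibodyPlantSceneGraph(builder, time_step=1e-4)
Parser(plant).AddModelsFromUrl(
    "package://drake/manipulation/models/iiwa_description/sdf/"
    "iiwa7_no_collision.sdf")
plant.WeldFrames(plant.world_frame(), plant.GetFrameByName("iiwa_link_0"))
plant.Finalize()
diagram = builder.Build()

simulator = Simulator(diagram)
plant_context = plant.GetMyContextFromRoot(simulator.get_mutable_context())
# Fix the actuation input to zero torques; the arm simply falls.
plant.get_actuation_input_port().FixValue(plant_context, np.zeros(7))
simulator.AdvanceTo(5.0)
</code></pre>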
<p>The best way to visualize the results of a physics engine is with a 2D
or 3D visualizer. For that, we need to add the system which curates the
geometry of a scene; in Drake we call it the <code>SceneGraph</code>.
Once we have a <code>SceneGraph</code>, then there are a number of
different visualizers and sensors that we can add to the system to
actually render the scene.</p>
<div>
<script type="text/javascript">
var sys = jsyaml.load(`
name: SceneGraph
input_ports:
- source_pose{0}
- ...
- source_pose{N-1}
output_ports:
- lcm_visualization
- query`);
document.write(system_html(sys, "https://drake.mit.edu/doxygen_cxx/classdrake_1_1geometry_1_1_scene_graph.html"));
</script>
</div>
<example><h1>Visualizing the scene</h1>
<p>This example is far more interesting to watch. Now we have the 3D visualization!</p>
<script>document.write(notebook_link('robot', d=deepnote, link_text="",notebook='simulation'))</script>
</example>
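      <p>Adding a visualizer to the sketch above takes only a couple of
      lines; this assumes the Meshcat-based workflow used in the course
      notebooks, and must happen before <code>builder.Build()</code>:</p>
<pre><code class="language-python">from pydrake.geometry import MeshcatVisualizer, StartMeshcat

meshcat = StartMeshcat()  # prints a URL to open in your browser
MeshcatVisualizer.AddToBuilder(builder, scene_graph, meshcat)
</code></pre>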
<p>You might wonder why <code>MultibodyPlant</code> doesn't handle the
geometry of the scene as well. Well, there are many applications in
which we'd like to render complex scenes, and use complex sensors, but
supply custom dynamics instead of using the default physics engine.
Autonomous driving is a great example; in that case we want to populate a
<code>SceneGraph</code> with all of the geometry of the vehicles and
environment, but we often want to simulate the vehicles with very simple
vehicle models that stop well short of adding tire mechanics into our
physics engine. We also have a number of examples of this workflow in my
<a href="http://underactuated.mit.edu">Underactuated Robotics</a> course,
where we make extensive use of "simple models".</p>
<p>We now have a basic simulation of the iiwa, but already some subtleties
emerge. The physics engine needs to be told what torques to apply at the
joints. In our example, we apply zero torque, and the robot falls down.
In reality, that never happens; in fact there is essentially never a
situation where the physical iiwa robot experiences zero torque at the
joints, even when the controller is turned off. Like many mature
industrial robot arms, iiwa has mechanical brakes at each joint that are
engaged whenever the controller is turned off. To simulate the robot with
the controller turned off, we would need to tell our physics engine about
the torques produced by these brakes.</p>
<p>In fact, even when the controller is turned on, and despite the fact
that it is a torque-controlled robot, we can never actually send zero
torques to the motors. The iiwa software interface accepts "feed-forward
torque" commands, but it will always add these as additional torques to
its low-level controller which is compensating for gravity and the
      motor/transmission mechanics.  This often feels frustrating, but we
      probably don't actually want to get into the details of simulating the
      drive mechanics.</p>
<p>As a result, the simplest reasonable simulation we can provide of the
iiwa must include a simulation of Kuka's low-level controller. We will
use the iiwa's "joint impedance control" mode, and will describe the
details of that once they become important for getting the robot to
perform better. For now, we can treat it as given, and produce our
simplest reasonable iiwa simulation.</p>
<example><h1>Adding the iiwa low-level controller</h1>
<p>This example adds the iiwa controller and sets the desired <i>positions</i> (no longer the desired torques) to be the current state of the robot. It's a more faithful simulation of the real robot. I'm sorry that it is boring once again!</p>
<script>document.write(notebook_link('robot', d=deepnote, link_text="",notebook='simulation'))</script>
</example>
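      <p>One hedged sketch of what "adding a low-level controller" can look
      like in Drake uses an <code>InverseDynamicsController</code> as a
      stand-in for Kuka's controller (this assumes the plant contains only
      the iiwa, the gains are placeholders, and
      <code>desired_state_source</code> is some system producing the desired
      joint state):</p>
<pre><code class="language-python">import numpy as np
from pydrake.systems.controllers import InverseDynamicsController

kp = np.full(7, 100.0)
ki = np.full(7, 1.0)
kd = 2 * np.sqrt(kp)  # a common "critically damped" heuristic
controller = builder.AddSystem(
    InverseDynamicsController(plant, kp, ki, kd,
                              has_reference_acceleration=False))
builder.Connect(plant.get_state_output_port(),
                controller.get_input_port_estimated_state())
builder.Connect(desired_state_source.get_output_port(),
                controller.get_input_port_desired_state())
builder.Connect(controller.get_output_port_control(),
                plant.get_actuation_input_port())
</code></pre>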
<p>As a final note, you might think that simulating the <i>dynamics</i> of
the robot is overkill, if our only goal is to simulate manipulation tasks
where the robot is moving only relatively slowly, and effects of mass,
inertia and forces might be less important than just the positions that
the robot (and the objects) occupy in space. I would actually agree with
you. But it's surprisingly tricky to get a <i>kinematic</i> simulation to
respect the basic rules of interaction; e.g. to know when the object gets
picked up or when it does not (see, for instance <elib>Pang18</elib>).
Currently, in Drake, we mostly use the full physics engine for simulation,
but often use simpler models for manipulation planning and control.</p>
</subsection>
</section>
<section><h1>Hands</h1>
<p>You might have noticed that the iiwa model does not actually have a hand
attached; the robot ships with a mounting plate so that you can attach the
"end-effector" of your choice (and some options on access ports so you can
connect your end-effector to the computer without wires running down the
outside of the robot). So now we have another decision to make: what hand
should we use?</p>
<example><h1>Robot hands</h1>
<p>We can explore different hand models in Drake using the same sort of
interface we used for the arms, though I don't have as many hand models
here yet. Let me know if your favorite hand isn't on the list!</p>
<script>document.write(notebook_link('robot', d=deepnote, link_text="",notebook='inspector'))</script>
</example>
<p>It is interesting that, when it comes to robot end effectors, researchers
in manipulation tend to partition themselves into a few distinct camps.</p>
<subsection><h1>Dexterous hands</h1>
<figure>
<table><tr><td>
<img style="height:250px" src="figures/shadow_dexterous_hand.jpg"/>
</td><td style="width:50px"></td><td>
<img style="height:250px" src="figures/allegro_hand.png"/>
</td></tr>
</table>
<figcaption>Dexterous hands. Left: the <a href="https://www.shadowrobot.com/products/dexterous-hand/">Shadow Dexterous Hand</a>. Right: the <a href="http://www.wonikrobotics.com/Allegro-Hand.htm">Allegro Hand</a>.</figcaption>
</figure>
<p>Of course, our fascination with the human hand is well placed, and we
dream of building robotic hands that are as dexterous and sensor-rich.
But the reality is that we aren't there yet. Some people choose to pursue
this dream and work with the best dexterous hands on the market, and
struggle with the complexity and lack of robustness that ensues. The famous <a href="https://openai.com/blog/learning-dexterity/">"learning dexterity"</a> project from OpenAI used the Shadow hand for playing with a Rubik's cube, and the work that had to go into the hand in order to support the endurance learning experiments was definitely a part of the story. There is a chance that new manufacturing techniques could really disrupt this space -- videos like <a href="https://www.youtube.com/watch?v=cZuzXdMyJsA">this one of FLLEX v2</a> look amazing<elib>Kim19</elib> -- and I am very optimistic that we'll have more capable and robust dexterous hands in the not-so-distant future.</p>
</subsection>
<subsection><h1>Simple grippers</h1>
<figure>
<iframe width="420" height="330" src="https://www.youtube.com/embed/oyHWkQcin7I" frameborder="0" allow="accelerometer; encrypted-media; gyroscope; picture-in-picture" allowfullscreen pdf="no"></iframe>
<p pdf="only"><a href="https://www.youtube.com/embed/oyHWkQcin7I">Click here to watch the video.</a></p>
<figcaption>This video of tele-operation with the PR1 from Ken
Salisbury's group is now a classic example of doing amazingly useful
things with a very simple hand. Check out their <a
href="https://sr.stanford.edu/?page_id=509">website</a> for more
videos, including sweeping, fetching a beer, and unloading a
dishwasher.</figcaption>
</figure>
<p><img style="width:150px;float:right;margin-left:10px"
src="figures/toy_robot_hand.jpg"/>Another camp points out that dexterous
hands are not necessary -- I can give you a simple gripper from the toy
store and you can still accomplish amazingly useful tasks around the home.
The PR1 videos above are a great demonstration of this.</p>
<p>Another important argument in favor of simple hands is the elegance
      and clarity that come from reducing the complexity.  If thinking clearly
about simple grippers helps us understand more deeply <i>why</i> we need
more dexterous hands (I think it will), then great. For most of these
notes, a simple two-fingered gripper will serve our pedagogical goals the
best. In particular, I've selected the Schunk WSG 050, which we have
used extensively in our research over the last few years. We'll also
explore a few different end-effectors in later chapters, when they help
to explain the concepts.</p>
<p>To be clear: just because a hand is simple (few degrees of freedom)
does not mean that it is low quality. On the contrary, the Schunk WSG is
a very high-quality gripper with force control and force measurement at
its single degree of freedom that surpasses the fidelity of the Kuka. It
would be hard to achieve the same in a dexterous hand with many
joints.</p>
</subsection>
    <subsection><h1>Soft/underactuated hands</h1>
<p>Finally, the third and newest camp is promoting clever mechanical
designs for hands, which are often called "underactuated hands". The
basic idea is that, for many tasks, you might not need as many actuators
    in your hand as you have joints.  Many underactuated hands use a
cable-drive mechanism to close the fingers, where a single tendon can
cause multiple joints in the finger to bend. When designed correctly,
these mechanisms can allow the finger to <a
href="https://www.youtube.com/watch?v=C340gbK3sZc">conform passively to
the shape of an object being grasped</a> with no change in the actuator
command (c.f. <elib>Odhner14</elib>). Cables are not required for this
    concept to work; qualitatively similar behavior can be achieved using
clever rigid mechanical linkages, as well.</p>
<figure>
<table><tr><td>
<img style="height:250px" src="figures/RHR_Reflex_s.png"/>
</td><td style="width:50px"></td><td>
<img style="height:250px" src="figures/robotiq-3-finger-gripper.jpeg"/>
</td></tr>
</table>
<figcaption>Underactuated hands. Left: the RightHand Robotics Reflex2 is a descendant of the i-HY hand<elib>Odhner14</elib>. Right: the Robotiq 3-fingered gripper.</figcaption>
</figure>
<figure>
<img width="560" src="figures/robotiq_3_finger_mechanism.jpeg">
<figcaption>A <a
href="https://blog.robotiq.com/3-finger-adaptive-gripper-simulation-data">clever mechanical linkage</a> allows the underactuated Robotiq 3-fingered gripper to comply to an object being grasped.</figcaption>
</figure>
<p>Taking the idea of underactuation and passive compliance to an
extreme, recent years have also seen a number of hands (or at least
fingers) that are completely soft. The "soft robotics community" is
rapidly changing the state of the art in terms of robot fabrication, with
appendages, actuators, sensors, and even power sources that can be
completely soft. These technologies promise to improve durability,
decrease cost, and potentially be more safe for operating around
people.</p>
<figure>
<table><tr><td>
<img style="width:250px" src="figures/truby_soft_hand.jpeg"/>
</td><td style="width:50px"></td><td>
<img style="height:200px" src="figures/rbohand_disney.png"/>
</td></tr>
</table>
      <figcaption>Soft hands. Left: A <a href="https://doi.org/10.1038/d41586-018-02778-5">3D-printed soft hand from Harvard</a> (Image credit: Ryan Truby). Right: The <a href="http://www.robotics.tu-berlin.de/menue/research/soft_hands/">RBO Hand 2</a> (Image credit: Disney Research Zurich).</figcaption>
</figure>
<p>Underactuated hands can be excellent examples of mechanical design
reducing the burden on the actuators / control system. Often these hands
    are amazingly good at a range of tasks (most often "enveloping
grasps"), but not as general purpose. It would be very hard to use one
of these to, for instance, button my shirt. They are, however, becoming more and more dexterous; check out the video below!</p>
<figure>
<iframe width="560" height="315" src="https://www.youtube.com/embed/Z6ECG3KHibI" frameborder="0"
allow="accelerometer; encrypted-media; gyroscope; picture-in-picture" allowfullscreen pdf="no"></iframe>
<p pdf="only"><a href="https://www.youtube.com/embed/Z6ECG3KHibI">Click here to watch the video.</a></p>
    </figure>
    </subsection>
<subsection><h1>Other end effectors</h1>
<p>Not all end effectors need to operate like dexterous or simplified
human hands. Many industrial applications these days are doing a form of
pick and place manipulation using vacuum grippers (also known as
      suction-cup grippers).  Suction cups work extremely well on many but not all objects.  Some objects are too soft or porous to be suctioned effectively; others are too fragile or heavy to be lifted by a vacuum at the top of the object, and must be supported from below.  Some hands have suction in the palms to achieve an initial pick, but still use more traditional fingers to stabilize the grasp.</p>
<p>There are numerous other clever gripper technologies. One of my
favorites is the <a
href="https://www.creativemachineslab.com/jamming-gripper.html">jamming
gripper</a>. These grippers are made of a balloon filled with coffee
grounds, or some other granular media; pushing down the balloon around an
object allows the granular media to flow around the object, but then
applying a vacuum to the balloon causes the granular media to "jam",
quickly hardening around the object to make a stable grasp
<elib>Brown10</elib>. </p>
<figure>
<iframe width="420" height="315" src="https://www.youtube.com/embed/bFW7VQpY-Ik" frameborder="0"
allow="accelerometer; encrypted-media; gyroscope; picture-in-picture" allowfullscreen pdf="no"></iframe>
        <p pdf="only"><a href="https://www.youtube.com/embed/bFW7VQpY-Ik">Click here to watch the video.</a></p>
</figure>
<p><a href="https://www.youtube.com/watch?v=r_HaJfANyT8">Here</a> is another clever design with actuated rollers at the finger tips to help with in-hand reorientation.</p>
<p>Finally, a reasonable argument against dexterous hands is that even
humans often do some of their most interesting manipulation not with the
hand directly, but through tools. I particularly liked the response that
Matt Mason, one of the main advocates for simple grippers throughout the
years, gave to
<a href="https://youtu.be/LfWiBdOc2FI?t=4025">a question at the end of
one of our robotics seminars</a>: he argued that useful robots in e.g.
the kitchen will probably have special purpose tools that can be changed
quickly. In applications where the primary job of the dexterous hand is
to change tools, we might skip the complexity by mounting a <a
href="https://blog.robotiq.com/bid/72926/Top-Manufacturers-of-Robotic-Tool-Changers">"tool
changer"</a>
directly to the robot and using tool-changer-compatible tools.</p>
</subsection>
<subsection><h1>If you haven't seen it...</h1>
<p>One time I was attending an event where the registration form asked us
"what is your favorite robot of all time, real or fictional". That is a
tough question for someone who loves robots! But the answer I gave was a super cool "high-speed multifingered hand" by
      the <a href="http://ishikawa-vision.org/fusion/index-e.html">Ishikawa group</a>, a project that started turning out
amazing results back in 2004! They "overclocked" the hand -- sending more current for short durations than would be
reasonable for any longer applications -- and also used high-speed cameras to achieve these results. And they had a <a
href="http://ishikawa-vision.org/fusion/RubikManipulation/index-e.html">Rubik's cube demo</a>, too, in 2017.</p>
<figure>
<iframe width="560" height="315" src="https://www.youtube.com/embed/-KxjVlaLBmk" frameborder="0"
allow="accelerometer; encrypted-media; gyroscope; picture-in-picture" allowfullscreen pdf="no"></iframe>
<p pdf="only"><a href="https://www.youtube.com/embed/-KxjVlaLBmk">Click here to watch the video.</a></p>
</figure>
<p>So good!</p>
</subsection>
</section>
<todo>Section on mobile manipulators. PR-2. HSR. Fetch. Everyday robot. TTT.</todo>
<section><h1>Sensors</h1>
<p>I haven't said much yet about sensors. In fact, sensors are going to be
a major topic for us when we get to perception with (depth) cameras, and
when we think about <a
href="https://en.wikipedia.org/wiki/Tactile_sensor">tactile sensing</a>. But
I will defer those topics until we need them.</p>
<p>For now, let us focus on the joint sensors on the robot. Both the iiwa and the Schunk WSG provide joint feedback -- the iiwa driver gives "measured position", "estimated velocity", and "measured torque" at each of its seven joints; remember that joint accelerations are typically considered too noisy to rely on. Similarly the Schunk WSG outputs "measured state" (position + velocity) and "measured force". We can make all of these available as ports in a block diagram.</p>
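    <p>As a small, hedged example (assuming a <code>station</code> system
    like the one assembled in the next section, and a root
    <code>context</code>), any of these measurements can be read by
    evaluating the corresponding output port:</p>
<pre><code class="language-python">station_context = station.GetMyContextFromRoot(context)
q_measured = station.GetOutputPort("iiwa_position_measured").Eval(
    station_context)
tau_measured = station.GetOutputPort("iiwa_torque_measured").Eval(
    station_context)
</code></pre>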
</section>
<section><h1>Putting it all together</h1>
<p>If you've worked through the examples, you've seen that a proper
simulation of our robot is more than just a physics engine -- it requires
assembling physics, actuator and sensor models, and low-level robot
controllers into a common framework. In practice, in Drake, that means that
we are assembling increasingly sophisticated block diagrams.</p>
<p>One of the best things about the block-diagram modeling paradigm is the
power of abstraction and encapsulation. We can assemble a
<code>Diagram</code> that contains all of the components necessary to
simulate our hardware platform and its environment, which we will refer to
affectionately as the "Manipulation Station". All together,
the <code>ManipulationStation</code> system looks like this:</p>
<div id="manipulation_station">
<script type="text/javascript">
var sys = jsyaml.load(`
name: ManipulationStation
input_ports:
- iiwa_position
- iiwa_feedforward_torque (optional)
- wsg_position
- wsg_force_limit (optional)
output_ports:
- iiwa_position_commanded
- iiwa_position_measured
- iiwa_velocity_estimated
- iiwa_state_estimated
- iiwa_torque_commanded
- iiwa_torque_measured
- iiwa_torque_external
- wsg_state_measured
- wsg_force_measured
- camera_[NAME]_rgb_image
- camera_[NAME]_depth_image
- <b style="color:orange">camera_[NAME]_label_image</b>
- ...
- camera_[NAME]_rgb_image
- camera_[NAME]_depth_image
- <b style="color:orange">camera_[NAME]_label_image</b>
- <b style="color:orange">query_object</b>
- <b style="color:orange">contact_results</b>
- <b style="color:orange">plant_continuous_state</b>
- <b style="color:orange">body_poses</b>`);
document.write(system_html(sys, "https://github.com/RussTedrake/manipulation/blob/ceb817b527cbf1826c5b9a573ffbef415cb0f013/manipulation/scenarios.py#L453"));
</script>
</div>
<p>This diagram itself can then be used as a <code>System</code> in
    additional diagrams, which can include our perception, planning, and
higher-level control systems. This model also defines the abstraction
between the simulation and the real hardware. We offer an almost identical
system, the
    <code>ManipulationStationHardwareInterface</code>.  If you substitute it
    in place of the <code>ManipulationStation</code>, then the same
    code you've developed in simulation can be run directly on the real robot.
The ports that are available only in simulation, but not in reality, are
colored <b><span style="color:orange">orange</span></b> on the
<code>ManipulationStation</code> system.</p>
<div>
<script type="text/javascript">
var sys = jsyaml.load(`
name: ManipulationStationHardwareInterface
input_ports:
- iiwa_position
- iiwa_feedforward_torque
- wsg_position
- wsg_force_limit (optional)
output_ports:
- iiwa_position_commanded
- iiwa_position_measured
- iiwa_velocity_estimated
- iiwa_torque_commanded
- iiwa_torque_measured
- iiwa_torque_external
- wsg_state_measured
- wsg_force_measured
- camera_[NAME]_rgb_image
- camera_[NAME]_depth_image
- ...
- camera_[NAME]_rgb_image
- camera_[NAME]_depth_image`);
document.write(system_html(sys, "https://drake.mit.edu/doxygen_cxx/classdrake_1_1examples_1_1manipulation__station_1_1_manipulation_station_hardware_interface.html"));
</script>
</div>
<p>The <code>ManipulationStationHardwareInterface</code> is also a diagram,
but rather than being made up of the simulation components like
<code>MultibodyPlant</code> and <code>SceneGraph</code>, it is made up of
systems that perform network message passing to interface with the small
executables that talk to the individual hardware drivers. If you dig under
the covers, you will see that we use <a
href="https://lcm-proj.github.io/">LCM</a> for this instead of ROS messages,
precisely because LCM is a lighter-weight dependency for our public
repository. But many Drake developers/users use <a href="https://github.com/RobotLocomotion/drake-ros">Drake in a ROS/ROS2 ecosystem</a>.</p>
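    <p>The swap itself can be a one-line change.  In the hedged sketch
    below, <code>iiwa_position_source</code> and
    <code>wsg_position_source</code> are placeholder systems producing the
    commanded positions, and the two station constructors stand in for
    however you build/configure the stations; the port names are the shared
    ones listed above:</p>
<pre><code class="language-python"># The downstream wiring only depends on the shared port names.
if use_hardware:
    station = builder.AddSystem(ManipulationStationHardwareInterface())
else:
    station = builder.AddSystem(ManipulationStation())

builder.Connect(iiwa_position_source.get_output_port(),
                station.GetInputPort("iiwa_position"))
builder.Connect(wsg_position_source.get_output_port(),
                station.GetInputPort("wsg_position"))
</code></pre>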
<p>If you do have your own similar robot hardware available, and want to run
the hardware interface on your machines, I've started putting together a list
of drivers and bill of materials <a href="station.html">in the
appendix</a>.</p>
</section>
<section><h1>Exercises</h1>
<exercise><h1>Role of Reflected Inertia</h1>
<p> For this exercise you will investigate the effect of reflected inertia on the joint-space dynamics of the robot, and how it affects simple position control laws. You will work exclusively in <script>document.write(notebook_link('reflected_inertia', deepnote['exercises/robot'], link_text='this notebook'))</script>. You will be asked to complete the following steps: </p>
<ol type="a">
<li> Derive the first-order state-space dynamics $\dot{\bx} = f(\bx, \bu)$ of a simple pendulum with a motor and gearbox.
</li>
<li> Compare the behavior of the direct-driven simple pendulum and the simple pendulum with a high-ratio gearbox, under the same position control law.</li>
</ol>
</exercise>
<exercise><h1>Input and Output Ports on the Manipulation Station</h1>
<p> For this exercise you will investigate how a manipulation station is abstracted in Drake's system-level framework. You will work exclusively in <script>document.write(notebook_link('manipulation_station_io', deepnote['exercises/robot'], link_text='this notebook'))</script>. You will be asked to complete the following steps: </p>
<ol type="a">
      <li> Learn how to probe the input and output ports of the manipulation station and evaluate their contents.
</li>
<li> Explore what different ports correspond to by probing their values.</li>
</ol>
</exercise>
<exercise><h1>Direct Joint Teleop in Drake</h1>
<p> For this exercise you will implement a method for controlling the joints of a robot in Drake. You will work exclusively in <script>document.write(notebook_link('direct_joint_control', deepnote['exercises/robot'], link_text='this notebook'))</script>, and should use the <script>document.write(notebook_link('intro', deepnote, link_text='example notebook in chapter 1'))</script> as a reference. You will be asked to complete the following steps: </p>
<ol type="a">
<li> Replace the teleop interface in the chapter 1 example with different Drake functions that allow for directly controlling the joints of the robot.</li>
</ol>
</exercise>
</section>
</chapter>
<!-- EVERYTHING BELOW THIS LINE IS OVERWRITTEN BY THE INSTALL SCRIPT -->
<div id="references"><section><h1>References</h1>
<ol>
<li id=Hwangbo19>
<span class="author">Jemin Hwangbo and Joonho Lee and Alexey Dosovitskiy and Dario Bellicoso and Vassilios Tsounis and Vladlen Koltun and Marco Hutter</span>,
<span class="title">"Learning agile and dynamic motor skills for legged robots"</span>,
<span class="publisher">Science Robotics</span>, vol. 4, no. 26, pp. eaau5872, <span class="year">2019</span>.
</li><br>
<li id=Asada87>
<span class="author">Haruhiko Asada and Kamal Youcef-Toumi</span>,
<span class="title">"Direct-Drive Robots - Theory and Practice"</span>, The MIT Press
, <span class="year">1987</span>.
</li><br>
<li id=Wensing17>
<span class="author">P. M. Wensing and A. Wang and S. Seok and D. Otten and J. Lang and S. Kim</span>,
<span class="title">"Proprioceptive Actuator Design in the MIT Cheetah: Impact Mitigation and High-Bandwidth Physical Interaction for Dynamic Legged Robots"</span>,
<span class="publisher">IEEE Transactions on Robotics</span>, vol. 33, no. 3, pp. 509-522, June, <span class="year">2017</span>.
</li><br>
<li id=Kashiri17>
<span class="author">Navvab Kashiri and Jörn Malzahn and Nikos Tsagarakis</span>,
<span class="title">"On the Sensor Design of Torque Controlled Actuators: A Comparison Study of Strain Gauge and Encoder Based Principles"</span>,
<span class="publisher">IEEE Robotics and Automation Letters</span>, vol. PP, 02, <span class="year">2017</span>.
</li><br>
<li id=Wedler12>
<span class="author">A Wedler and M Chalon and K Landzettel and M Görner and E Krämer and R Gruber and A Beyer and HJ Sedlmayr and B Willberg and W Bertleff and others</span>,
<span class="title">"DLRs dynamic actuator modules for robotic space applications"</span>,
<span class="publisher">Proceedings of the 41st Aerospace Mechanisms Symposium</span> , May 16-18, <span class="year">2012</span>.
</li><br>
<li id=Pratt95b>
<span class="author">G. A. Pratt and M. M. Williamson</span>,
<span class="title">"Series elastic actuators"</span>,
<span class="publisher">Proceedings 1995 IEEE/RSJ International Conference on Intelligent Robots and Systems. Human Robot Interaction and Cooperative Robots</span> , vol. 1, pp. 399-406 vol.1, Aug, <span class="year">1995</span>.
</li><br>
<li id=Albu-Schaffer07>
<span class="author">Alin Albu-Schaffer and Christian Ott and Gerd Hirzinger</span>,
<span class="title">"A unified passivity-based control framework for position, torque and impedance control of flexible joint robots"</span>,
<span class="publisher">The international journal of robotics research</span>, vol. 26, no. 1, pp. 23--39, <span class="year">2007</span>.
</li><br>
<li id=Pang18>
<span class="author">Tao Pang and Russ Tedrake</span>,
<span class="title">"A Robust Time-Stepping Scheme for Quasistatic Rigid Multibody Systems"</span>,
<span class="publisher">IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)</span> , <span class="year">2018</span>.
[ <a href="http://groups.csail.mit.edu/robotics-center/public_papers/Pang18.pdf">link</a> ]
</li><br>
<li id=Kim19>
<span class="author">Yong-Jae Kim and Junsuk Yoon and Young-Woo Sim</span>,
<span class="title">"Fluid Lubricated Dexterous Finger Mechanism for Human-Like Impact Absorbing Capability"</span>,
<span class="publisher">IEEE Robotics and Automation Letters</span>, vol. 4, no. 4, pp. 3971--3978, <span class="year">2019</span>.
</li><br>
<li id=Odhner14>
<span class="author">Lael U. Odhner and Leif P. Jentoft and Mark R. Claffee and Nicholas Corson and Yaroslav Tenzer and Raymond R. Ma and Martin Buehler and Robert Kohout and Robert D. Howe and Aaron M. Dollar</span>,
<span class="title">"A Compliant, Underactuated Hand for Robust Manipulation"</span>,
<span class="publisher">International Journal of Robotics Research (IJRR)</span>, vol. 33, no. 5, pp. 736-752, <span class="year">2014</span>.
</li><br>
<li id=Brown10>
<span class="author">Eric Brown and Nicholas Rodenberg and John Amend and Annan Mozeika and Erik Steltz and Mitchell R Zakin and Hod Lipson and Heinrich M Jaeger</span>,