\section{Limitations}
Even though our system has produced a number of successful dressing
animations, our current approach has some limitations. One such
limitation stems from the nature of the feedback about the cloth state.
In our system, this feedback is computed using visibility calculations.
When the control system is guiding a hand into a sleeve, it does so by
finding the point on the cloth closest to the sleeve entry that is
visible from the tip of the hand. It is likely that when real people
dress themselves, much of their knowledge of the cloth state comes from
tactile feedback rather than from visual information.
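
For concreteness, the sketch below illustrates the kind of
visibility-based query described above. It is not our implementation:
the function names, the NumPy data layout, and the brute-force occlusion
test against every cloth triangle are illustrative assumptions.

\begin{verbatim}
# Sketch of a visibility-based cloth feedback query (illustrative only).
# Given the cloth mesh, the sleeve-entry location, and the fingertip
# position, return the cloth vertex nearest to the sleeve entry that is
# visible (unoccluded by the cloth itself) from the fingertip.
import numpy as np

def segment_hits_triangle(p0, p1, tri, eps=1e-9):
    """Moller-Trumbore test: does segment p0->p1 cross triangle tri (3x3)?"""
    d = p1 - p0
    e1, e2 = tri[1] - tri[0], tri[2] - tri[0]
    h = np.cross(d, e2)
    a = np.dot(e1, h)
    if abs(a) < eps:                 # segment parallel to triangle plane
        return False
    f = 1.0 / a
    s = p0 - tri[0]
    u = f * np.dot(s, h)
    if u < 0.0 or u > 1.0:
        return False
    q = np.cross(s, e1)
    v = f * np.dot(d, q)
    if v < 0.0 or u + v > 1.0:
        return False
    t = f * np.dot(e2, q)
    return eps < t < 1.0 - eps       # hit strictly inside the segment

def closest_visible_point(verts, tris, sleeve_entry, hand_tip):
    """verts: (n,3) cloth vertices; tris: (m,3) vertex indices."""
    order = np.argsort(np.linalg.norm(verts - sleeve_entry, axis=1))
    for i in order:                  # candidates nearest the entry first
        blocked = False
        for tri in tris:
            if i in tri:             # ignore triangles touching vertex i
                continue
            if segment_hits_triangle(hand_tip, verts[i], verts[tri]):
                blocked = True
                break
        if not blocked:
            return verts[i]          # first visible candidate
    return None                      # no cloth point visible from the hand
\end{verbatim}
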
Another limitation of our dressing controller is that it uses kinematic
motion instead of calculating the dynamics of the human body. This has
two repercussions. First, our system is one-way coupled, so that the
cloth does not exert forces on the human. Certain character motions can
occasionally cause the cloth to exceed its strain limit, producing
unrealistic cloth behavior. This problem could be eliminated using
feedback forces from the cloth. A second consequence is that the
kinematically controlled human has no notion of balance, and thus may
carry out physically impossible motions. This is especially important in
dressing tasks such as putting on a pair of pants while standing up. The
lack of balance control could also have more subtle effects on the stance
of the person during upper body dressing.
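
As an example of the kind of signal that tighter coupling would provide,
the sketch below flags cloth edges stretched past a strain limit; in our
one-way coupled system such violations go uncorrected because the cloth
cannot push back on the character. The edge representation, the 10\%
threshold, and the function name are illustrative assumptions rather
than details of our implementation.

\begin{verbatim}
# Sketch of a strain-limit check over cloth edges (illustrative only).
import numpy as np

def strained_edges(verts, edges, rest_lengths, max_strain=0.10):
    """Return indices of edges stretched past (1 + max_strain) x rest length.

    verts:        (n,3) current cloth vertex positions
    edges:        (m,2) vertex index pairs
    rest_lengths: (m,)  undeformed edge lengths
    """
    cur = np.linalg.norm(verts[edges[:, 0]] - verts[edges[:, 1]], axis=1)
    strain = cur / rest_lengths - 1.0
    return np.nonzero(strain > max_strain)[0]
\end{verbatim}

A system with feedback forces could use such violations to push back on
the offending limb; even a kinematic controller could slow or pause the
limb's motion while violations persist.
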
A subset of human dressing motions involves long-term planning with
approximate knowledge of the future state of the garment. This subset
includes motions such as re-gripping, navigating the end effector around
a moving garment, and catching a thrown or dropped garment. Our current
algorithm is shortsighted and therefore relies on user input to handle
this class of dressing motions. It is reasonable to imagine
incorporating faster, more approximate cloth simulation models to
increase planning accuracy in these situations.

In all of our examples, the initial configuration of the cloth has been
set in a way that is favorable to dressing. If the garment were
initially tangled, our dressing actions would most likely fail. A
possible avenue for addressing this may be found in robotics research
on picking up and folding cloth~\cite{Cusumano:2011:BCD}.

In our current implementation, actions are executed sequentially under
the assumption that the user input is sensible enough for each action to
be completed successfully. As such, our current system does not
automatically respond to failures. Similarly, our system requires the
user to specify the set of actions for a given dressing task. When given
a new garment, the system has no way of determining a sequence of
actions that will successfully place it on a body. We can imagine a more
sophisticated system that would address both of these limitations by
analyzing a garment, forming a plan of actions to properly dress a
character, and executing this plan with backtracking, re-planning, and
other corrective procedures in place to respond to failures or adjust to
environmental perturbations.
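
The control flow of such a system might resemble the following sketch,
in which a planned sequence of dressing actions is executed and, on
failure, the remainder of the plan is rebuilt from the current state.
The action interface, the success test, and the replan routine are
entirely hypothetical placeholders.

\begin{verbatim}
# Hypothetical control loop for the more sophisticated system imagined
# above; execute(), succeeded(), and replan() are placeholder interfaces.
def dress(character, garment, plan, replan, max_attempts=3):
    """Run dressing actions in order; backtrack and re-plan on failure."""
    step, attempts = 0, 0
    while step < len(plan):
        action = plan[step]
        state = action.execute(character, garment)  # advance the simulation
        if action.succeeded(state):
            step, attempts = step + 1, 0            # move to the next action
        elif attempts < max_attempts:
            attempts += 1
            plan = plan[:step] + replan(state, plan[step:])
        else:
            return False                            # give up on this garment
    return True
\end{verbatim}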