### Adaptive Grasp Policies

The minimum grasp force required to pick up an object is
bounded between object slip and the gripper's upward acceleration (2.5 m/s² for the UR5 robot arm),
given the object's mass, *m*, and friction coefficient, *µ*.
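As a concrete illustration of the lower bound, the slip condition for a parallel-jaw gripper can be sketched as follows. This is a minimal example under assumed simplifications (two frictional contacts, rigid object, the 2.5 m/s² UR5 upward acceleration from the text); the function name and the two-finger friction model are our own, not the authors'.

```python
# Minimal sketch: lower-bounding the grasp force via the slip condition.
# Assumes a two-finger (parallel-jaw) gripper, so friction acts at two
# contacts: 2 * mu * F >= m * (g + a)  =>  F_min = m * (g + a) / (2 * mu).

G = 9.81      # gravitational acceleration, m/s^2
A_MAX = 2.5   # gripper upward acceleration for the UR5 arm, m/s^2

def min_grasp_force(m: float, mu: float, a: float = A_MAX) -> float:
    """Smallest normal force per finger that prevents slip while lifting."""
    return m * (G + a) / (2.0 * mu)

# Example: a 0.3 kg object with friction coefficient 0.6.
print(min_grasp_force(0.3, 0.6))  # force in newtons
```

Heavier or more slippery objects (larger *m*, smaller *µ*) push this bound up, which is why both quantities must be estimated before grasping.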

Assuming a highly capable reasoning agent can estimate these two values,
we can define a closed-loop controller to achieve a minimally-deforming grasp:
increasing gripper output force *F*_{out} and decreasing gripper aperture *x*
until sensing a contact force *F*_{c} greater than the target *F*_{min}.
To determine the gain terms, ∆*F*_{out} and ∆*x*, i.e., how fast we close the
gripper and ratchet up force, the controller uses an agent-determined *k* and
∆*x* to compute ∆*F*_{out} = *c* · *k*∆*x* (manual damping constant *c* = 0.1).
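The control loop above can be sketched as follows. The sensor and actuator hooks (`read_contact_force`, `set_gripper`) and the toy linear-stiffness contact model are hypothetical placeholders, not the authors' interfaces; *k* and ∆*x* are the agent-determined quantities from the text.

```python
# Sketch of the closed-loop minimally-deforming grasp controller:
# decrease aperture x and increase output force F_out until the sensed
# contact force F_c reaches the target F_min.

C = 0.1  # manual damping constant c from the text

def grasp_controller(f_min, k, dx, read_contact_force, set_gripper,
                     x0=0.085, max_steps=1000):
    """Run the closure loop; returns the final aperture and force command."""
    x, f_out = x0, 0.0
    for _ in range(max_steps):
        if read_contact_force() >= f_min:
            break                    # target contact force reached
        x = max(0.0, x - dx)         # decrease gripper aperture
        f_out += C * k * dx          # dF_out = c * k * dx, per the text
        set_gripper(x, f_out)
    return x, f_out

# Toy usage with a linear-stiffness contact model (illustrative only):
# contact force appears once the aperture closes below a 5 cm object.
state = {"f_c": 0.0}
def read_contact_force():
    return state["f_c"]
def set_gripper(x, f_out):
    state["f_c"] = max(0.0, (0.05 - x) * 400.0)

x, f_out = grasp_controller(f_min=3.0, k=50.0, dx=0.001,
                            read_contact_force=read_contact_force,
                            set_gripper=set_gripper)
```

A larger *k* ramps force faster per unit of closure, trading grasp speed against the risk of overshooting *F*_{min} and deforming the object.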

We task an LLM (GPT-4) with predicting these quantities for an arbitrary object.
To generate grasp policies, we leverage a
dual-prompt structure similar to that of Language to Rewards,
with an initial grasp “descriptor” prompt that estimates
object characteristics and, where needed, special accommodations from
the input object description. The “descriptor” prompt produces a
structured description, which the subsequent “coder”
prompt translates into an executable Python grasp policy that
modulates gripper compliance, force, and aperture according
to the controller described above.
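The dual-prompt pipeline can be sketched schematically as below. Here `call_llm` stands in for a GPT-4 API call, and the prompt strings are illustrative placeholders, not the authors' actual prompts; the JSON schema (`mass`, `mu`, `k`, `dx`, `notes`) is likewise an assumption for the example.

```python
# Schematic of the dual-prompt structure: a "descriptor" prompt produces a
# structured object description, which a "coder" prompt translates into an
# executable Python grasp policy.

import json

DESCRIPTOR_PROMPT = (
    "Estimate the object's mass (kg), friction coefficient, gain k, and "
    "aperture step dx, plus any special accommodations. Reply as JSON "
    "with keys: mass, mu, k, dx, notes.\nObject: "
)
CODER_PROMPT = (
    "Translate this structured grasp description into an executable Python "
    "policy that modulates gripper compliance, force, and aperture:\n"
)

def generate_grasp_policy(object_description, call_llm):
    # Stage 1: "descriptor" prompt -> structured object characteristics.
    description = json.loads(call_llm(DESCRIPTOR_PROMPT + object_description))
    # Stage 2: "coder" prompt -> Python grasp policy source code.
    policy_source = call_llm(CODER_PROMPT + json.dumps(description))
    return description, policy_source
```

Splitting estimation from code generation lets the intermediate structured description be inspected (or corrected) before any policy is executed on the robot.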