Proprioceptive Sensing and Locomotion

Introduction

The sensory capabilities of a biological organism are at the core of its intelligence: all the input required for any sort of perception comes from the sensory modalities available to the organism. These modalities can be grouped into three categories: exteroception, interoception, and proprioception. In this experiment, we will focus on the last category. According to Wikipedia, proprioception is "… the sensory modality that provides feedback solely on the status of the body internally". What distinguishes proprioception from interoception is that proprioceptive sensors can also provide insight into the interactions between the organism and its environment.

Two important sets of proprioceptive sensors available on the Junior platform are the optical encoders (one per actuator) and an inertial measurement unit (IMU) located at the geometric center of the platform. There are various attempts in the literature that focus on using these two types of sensory input to obtain more intelligent behavior [1], [2], [3], in addition to several attempts at fusing proprioceptive sensors with exteroceptive ones to combine the advantages of both [4].

Encoders are already at the heart of the low-level per-leg position control. For more information about how such sensors work, you may refer to the Wikipedia page on rotary encoders; for an introduction to how a generic control loop feedback mechanism works, see the PID Controller page. One recent work available on our platform is that of Johnson et al. on using a leg observer to detect disturbances caused by the environment or by mechanical failure [1]. A specific application of their work, given a proper interpretation of the leg state observer, is to use the legs of RHex-family robots to detect obstacles such as walls. The Junior platform is currently equipped with this capability.

The IMUs used on our robotic platforms are built by MicroStrain. The model on the Junior platform provides relative body pose estimation given an initial reference; this body pose information is reported in terms of yaw, pitch, and roll.

Lab Task: Integrate the given sensory capabilities, wall detection and body pose estimation, with the locomotive skills developed in the previous experiments to implement a simple planning scheme that lets the Junior platform traverse a known path.

Prelab

[Figure path1.png: The Path for Scenario 1]

The goal of this prelab exercise is to prepare you for working on the actual paths you will be planning over. We will focus on two different scenarios.

Scenario 1

Assume you are given the path shown in the corresponding figure, where the path width is 2 feet. Note that there is NO absolute position information (in other words, no localization) that you can rely on. Your goal is to have your robot traverse this path. Write a short PDF file that discusses:

  1. How you would tune your steering behavior to make sure Junior can take such a turn,
  2. Alternatively, how you would tune your turning-in-place behavior to make sure Junior can take such a turn,
  3. For both turning options, how you would perform the transitions between successive phases of your plan,
  4. For both turning options, how you could incorporate the wall detector and the relative body pose estimates into those transitions and into online corrections to the plan.

Scenario 2

[Figure path2.png: The Path for Scenario 2]

Assume you are given the path shown in the corresponding figure, where the path width is 2 feet. Note that there is NO absolute position information (in other words, no localization) that you can rely on. Your goal is to have your robot traverse this path. Write a short PDF file that discusses:

  1. How you would tune your steering behavior to make sure Junior can take such a turn,
  2. Alternatively, how you would tune your turning-in-place behavior to make sure Junior can take such a turn,
  3. For both turning options, how you would perform the transitions between successive phases of your plan,
  4. For both turning options, how you could incorporate the wall detector and the relative body pose estimates into those transitions and into online corrections to the plan.

IMU and Body Pose

The inertial measurement unit the Junior platform is equipped with contains a three-axis accelerometer and a three-axis gyroscope in addition to a magnetometer. At present, to avoid the magnetic disturbances introduced by the motors themselves, the magnetometer is disabled. The problem with this setup is that, without a correction step involving the magnetometer, the estimate of the yaw angle (the rotation about the gravity vector) drifts over time, especially when there is no movement about this axis. In this set of tasks, we will try to regulate the heading of the robot using only the yaw readings, even though this drift may cause you trouble. You are free to pick your own control policy for this regulation.

NOTE: For network synchronization, instead of calling dy.network.pull within the for loop, we will use another function provided by the Dynamism interface. You may use either of the two functions:

dy.network.start_receive_every(timestep, '%s.data1' % hostname)
dy.network.start_receive_from_every(host, timestep, '%s.data1' % hostname)

The main idea is to set up a prescheduled synchronization and to run only dy.data.get_float calls within the loop. You may want to do the same for your pushes to the robot. For the syntax, please refer to the Dynamism Tutorial Page.
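As a minimal sketch of this pattern (assuming the Dynamism interface imports as dynamism, the robot's hostname is junior, the receive period is 0.02 s, and imu.yaw is the yaw channel named in the note below; all of these are placeholders for your own setup):

import time
import dynamism as dy  # assumed import name for the Dynamism interface
import matplotlib.pyplot as plt

hostname = 'junior'  # placeholder hostname
timestep = 0.02      # placeholder receive period in seconds

# Schedule the receives once, before the loop, instead of pulling every pass.
dy.network.start_receive_every(timestep, '%s.data1' % hostname)

yaw_log = []
for _ in range(1000):
    # Only cheap local reads remain inside the loop.
    yaw_log.append(dy.data.get_float('imu.yaw'))
    time.sleep(timestep)

# Plot the stored readings at the end of the run, as Task 1 asks.
plt.plot(yaw_log)
plt.xlabel('sample')
plt.ylabel('yaw')  # units depend on your IMU configuration
plt.show()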

Task 1:
Write a piece of code that uses the Dynamism interface to get yaw angle readings, stores them in an array, and plots the readings at the end of the process. Optionally, you can plot these readings online.

Task 2:
Write a piece of code that uses your previous script for the Buehler clock to run an alternating tripod gait, where the script also incorporates the yaw angle readings as feedback to maintain the initial heading. A hedged sketch of one possible structure follows.
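One way the yaw feedback might enter the gait, sketched under the same assumptions as above, is a proportional correction on the wrapped heading error; set_turn_offset is a purely hypothetical stand-in for whatever steering input your Buehler clock script exposes, and the gain, control period, and degree units are placeholders:

import time
import dynamism as dy  # assumed import name

K_P = 0.5        # proportional gain (placeholder; tune on the robot)
timestep = 0.02  # control period in seconds (placeholder)

def set_turn_offset(offset):
    """Hypothetical hook: bias the tripod gait toward one side by offset."""
    raise NotImplementedError  # wire this into your Buehler clock script

heading_ref = dy.data.get_float('imu.yaw')  # lock in the initial heading
while True:
    error = heading_ref - dy.data.get_float('imu.yaw')
    error = (error + 180.0) % 360.0 - 180.0  # wrap into [-180, 180) degrees
    set_turn_offset(K_P * error)             # steer back toward heading_ref
    time.sleep(timestep)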

Task 3:
Write a piece of code that uses your previous script for turning in place, but this time incorporates the yaw angle readings in order to turn by a desired angle. The resolution can be as good as your previous discretization allows. At worst, you should be able to turn half a revolution and a full revolution using this feedback; see the sketch after the note below.

NOTE: You need to read imu.yaw for the yaw angle readings.
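A hedged sketch of the stopping logic for such a turn follows. The unwrapping step matters because the yaw reading jumps at the half-revolution boundary (assuming degrees), so without accumulating the turned angle a full-revolution command would appear complete immediately; step_turn_in_place is a hypothetical stand-in for one discrete step of your turning behavior:

import dynamism as dy  # assumed import name

def step_turn_in_place(direction):
    """Hypothetical: execute one discrete step of your turning behavior."""
    raise NotImplementedError

def wrap_deg(a):
    """Wrap an angle difference into [-180, 180) degrees."""
    return (a + 180.0) % 360.0 - 180.0

def turn_by(delta_deg, tol_deg=5.0):
    """Turn in place through delta_deg, accumulating unwrapped yaw so that
    turns larger than half a revolution are tracked correctly."""
    prev = dy.data.get_float('imu.yaw')
    turned = 0.0
    while abs(delta_deg - turned) > tol_deg:
        step_turn_in_place(1 if delta_deg > turned else -1)
        yaw = dy.data.get_float('imu.yaw')
        turned += wrap_deg(yaw - prev)  # unwrap successive readings
        prev = yaw

turn_by(180.0)  # half a revolution
turn_by(360.0)  # a full revolution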

Wall Detector

The simple but very effective idea behind the leg observer developed in [1] is to build a state observer that models the physical system and provides an estimate of the internal state given observations of the input and output of the actual system. Under an external disturbance or an internal fault, the model can no longer agree with the actual system, and one can investigate the nature of this disagreement to identify its cause.
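For reference, the generic textbook (Luenberger) form of such an observer, which is not necessarily the exact formulation used in [1], copies the plant model and corrects it with the measured output; a persistently large residual is exactly the disagreement described above. In LaTeX notation:

% Observer for a plant \dot{x} = Ax + Bu with output y = Cx;
% \hat{x} is the state estimate and L is the observer gain.
\dot{\hat{x}} = A\hat{x} + Bu + L\,(y - C\hat{x})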

In this part of the experiment, due to time constraints, instead of using the full observer we will take a much simpler approach and use per-leg current measurements to detect walls. The principal idea is that, during its flight phase, a leg should not draw as much current as it does during the slow phase, so a leg that draws slow-phase-like current while in flight has most likely met an obstacle.
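A hedged sketch of that test follows; the leg count matches Junior's six legs, but the channel names, the flight-phase test, and the threshold are all assumptions you must replace with your own calibrated values:

import dynamism as dy  # assumed import name

CURRENT_THRESHOLD = 1.0  # placeholder threshold; calibrate on the bench
LEGS = range(6)

def in_flight_phase(leg):
    """Hypothetical: True if leg should be recirculating (flight phase)."""
    raise NotImplementedError  # derive this from your Buehler clock phase

def leg_current(leg):
    return dy.data.get_float('leg%d.current' % leg)  # assumed channel name

def wall_hit():
    """A leg drawing high current during flight suggests an obstacle."""
    return any(in_flight_phase(leg) and leg_current(leg) > CURRENT_THRESHOLD
               for leg in LEGS)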

NOTE: The same network-synchronization advice as in the IMU tasks applies here: set up a prescheduled synchronization with dy.network.start_receive_every or dy.network.start_receive_from_every, and run only dy.data.get_float calls within the loop. For the syntax, please refer to the Dynamism Tutorial Page.

Task 4:
Write a piece of code that uses the Dynamism interface to get motor current readings and leg positions, stores them in an array, logs them, and plots them online. A minimal logging sketch follows.
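For the storing and logging side, one minimal shape (again with assumed import and channel names) is to collect rows and dump them with Python's csv module; the online plotting is left to your preferred tool:

import csv
import time
import dynamism as dy  # assumed import name

timestep = 0.02  # placeholder receive period in seconds
dy.network.start_receive_every(timestep, '%s.data1' % 'junior')  # assumed

rows = []
for _ in range(500):
    row = [time.time()]
    for leg in range(6):
        row.append(dy.data.get_float('leg%d.current' % leg))  # assumed name
        row.append(dy.data.get_float('leg%d.pos' % leg))      # assumed name
    rows.append(row)
    time.sleep(timestep)

header = ['t']
for leg in range(6):
    header += ['leg%d_current' % leg, 'leg%d_pos' % leg]

with open('leg_log.csv', 'w') as f:
    writer = csv.writer(f)
    writer.writerow(header)
    writer.writerows(rows)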

Task 5:
Write a piece of code that uses your previous script for the Buehler clock to run an alternating tripod gait, where the script also incorporates your work from the previous task. Run this script on the robot while it is on the bench, and block one of the legs for a short period of time while it is in its flight phase. Plot online and log the desired and actual leg positions and the current dissipation.

Task 6:
Modify your work from the previous task to detect a wall/obstacle, stop the robot, and move back until the corresponding leg is free to move again.

IMU & Wall Detector

Task 7:
Combine your wall detector with your turning-in-place script from Task 3. Start your robot facing straight at the wall. Modify your script in such a way that the robot first detects the wall, then backs up a little, rotates to one side, tries to go forward again, and repeats this until it is moving parallel to the wall. A hedged sketch of one way to structure this loop follows.
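The sketch below organizes the task as a probe-and-retry loop; forward_trial and walk_backward are hypothetical stand-ins for behaviors you already have, turn_by refers to the Task 3 sketch, and the detector test inside forward_trial would be the wall_hit check from the Wall Detector section:

TURN_STEP_DEG = 20.0  # rotation per retry (placeholder; tune on the robot)
TRIAL_SECONDS = 2.0   # length of each forward probe (placeholder)

def forward_trial(duration):
    """Hypothetical: walk forward for up to duration seconds, stopping early
    and returning True if the wall detector fires; False otherwise."""
    raise NotImplementedError

def walk_backward(duration):
    """Hypothetical: back up long enough to free the blocked leg."""
    raise NotImplementedError

def align_parallel_to_wall():
    # Keep probing forward; on each hit, retreat and rotate away from the
    # wall, then retry. A clean probe means we are roughly parallel.
    while forward_trial(TRIAL_SECONDS):
        walk_backward(1.0)
        turn_by(TURN_STEP_DEG)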

For the following two tasks, you are free to use any behavior you have developed on the robot so far; you are not restricted to the behaviors developed in this experiment.

Task 8:
Write a script that can traverse the path given in scenario 1 of the prelab exercise without any intervention by the user.

Task 9:
Write a script that can traverse the path given in scenario 2 of the prelab exercise without any intervention by the user.

Deliverables

Prelab

To be completed and posted to the Blackboard by class on Thursday, 2/26.

Demonstration

  • Show the instructor that your group can use your scripts to have Junior accomplish all the required tasks.

Competition

  • There will be a competition among the groups, which will have two stages:
    • Traverse the path in scenario 1 of the prelab as fast as you can.
    • Traverse the path in scenario 2 of the prelab as fast as you can.
  • You can ask the TA for extra robot time to prepare for the competition.

Report

After completing Tasks 1 through 9, write a report summarizing your procedure and results. The goal of your report is to inform the reader of what you did and to convince them of the conclusions you have drawn. In this report, be sure to:

  • Write an introduction to bring the reader into the report.
  • Briefly explain your implementations of all the behaviors. You are encouraged to go through the blocks of your code and explain the ideas behind them.
  • Discuss your procedure on tuning the behaviors.
  • Discuss your conclusions:
    • How reliable were the yaw readings coming from the IMU? Can you come up with any evidence suggesting possible problems? If so, can you explain the causes? You are free to use any reference as long as you cite it properly.
    • If you were restricted to the yaw readings coming from the IMU as your only feedback, could you come up with a control scheme that not only regulates the heading but also guarantees a return to the straight path the robot was initially following, under some reasonable disturbance to the heading? You are allowed to be anecdotal in your answer to this question. Also, you are free to use any reference as long as you cite it properly.
    • For parts of this experiment, you used a very simple approach instead of the leg state observer presented in [1]. Can you come up with any potential uses of this capability other than the examples mentioned in their paper?
    • Which behaviors did you prefer to use to accomplish Tasks 8 and 9? In particular, explain your reasoning for not using the other behaviors you developed throughout the lab warmup experiments.
    • In Tasks 8 and 9, we provided you with the metric details of the path you needed to traverse and let you tune your script on the path. If you were asked to accomplish this task without much metric detail, but were allowed to ask for other clues to help you traverse the path successfully, what clues would you like to have, and, once these clues were provided, how would you solve the problem?
  • Write a conclusion to wrap up your ideas and present your results one last time.

Bibliography
1. Aaron M. Johnson, G. Clark Haynes, and Daniel E. Koditschek, "Disturbance Detection, Identification, and Recovery by Gait Transition in Legged Robots," Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2010.
2. H. Komsuoglu, D. McMordie, U. Saranli, N. Moore, M. Buehler, and D. E. Koditschek, "Proprioception based behavioral advances in a hexapod robot," Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2001.
3. Sachin Chitta, Paul Vernaza, Roman Geykhman, and Daniel D. Lee, "Proprioceptive Localization for a Quadrupedal Robot on Known Terrain," Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2007.
4. Paul Vernaza and Daniel D. Lee, "Rao-Blackwellized particle filtering for 6-DOF estimation of attitude and position via GPS and inertial sensors," Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2006.