Team Finland

Week of March 27, 2011


While in London with Clark Haynes, Matt discussed with him possible existing Nelder-Mead implementations used by the KodLab. Some of them will require edits, though those edits are certainly doable. In addition, Clark and Matt were able to eliminate a speed limit on the XRL robot by having the AMC drives completely ignore position following errors. This will be tested on X-RHex and may eliminate one artificial limit on speed. This is not explicitly part of our project, but it will certainly benefit us, as it removes an external constraint on the areas of gaitspace we may explore.

We hope to have our code finished by the middle of next week so that we can begin doing Nelder-Mead descent in the gaitspace.


In class, Dalton discussed possible lecture topics with Deniz. In the coming weeks, we may schedule a lecture with Prof. Koditschek on topology to better understand what the Nelder-Mead algorithm is truly doing, to avoid common pitfalls, and to hopefully foresee any issues that could take up time later. Again, while we are not writing our own implementation of Nelder-Mead, we want to understand it so that we are not "flying blind."

We both have spent time since our last journal entry looking for Nelder-Mead implementations. One common one is by C. T. Kelley, the author of "Iterative Methods for Optimization."


Today, we read about Nelder-Mead descent and found Python, MATLAB, and C implementations of it. We will likely use Python, as it is a language we both know well and, being interpreted, allows for rapid changes; though we both know C, each change would require recompilation, which would take up time.


We plan to have working code next week (4/4-4/8) and to begin the main body of our investigation, which will consist of repeated gait optimizations using Nelder-Mead descent on the X-RHex platform. Our (somewhat ambitious, though doable) goal is to have our experimental setup working this week and to begin optimizations.

Week of April 3, 2011


Today we began to write code. We have opted to use fmin from the scipy.optimize module in Python. This is an implementation of Nelder-Mead that we were referred to by Dr. Shai Revzen, a postdoctoral researcher for Professor Koditschek. We began to write code which would interface this with our calculations of specific resistance.
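As a sketch of how this interface might look, the snippet below wires scipy.optimize.fmin to a specific resistance calculation (S.R. = P / (m g v)). The run_gait function, the mass, and the gait parameter values are placeholders standing in for the robot interface, not our actual code.

```python
# Hypothetical sketch: scipy.optimize.fmin (a Nelder-Mead implementation)
# driving a gait search. run_gait stands in for a real robot trial.
import numpy as np
from scipy.optimize import fmin

MASS = 8.0   # robot mass in kg (placeholder value)
G = 9.81     # gravitational acceleration, m/s^2

def specific_resistance(power, velocity):
    """S.R. = P / (m * g * v): dimensionless cost of transport."""
    return power / (MASS * G * velocity)

def run_gait(params):
    """Placeholder for a robot trial: returns (power, velocity).
    Here we fake a bowl-shaped cost centred on a 'best' gait."""
    best = np.array([1.0, 0.5, 0.2, 0.3])
    velocity = 1.0
    power = MASS * G * velocity * (0.5 + np.sum((params - best) ** 2))
    return power, velocity

def cost(params):
    power, velocity = run_gait(np.asarray(params))
    return specific_resistance(power, velocity)

x0 = np.array([0.8, 0.6, 0.1, 0.4])   # starting gait parameters (illustrative)
best_gait = fmin(cost, x0, xtol=1e-3, ftol=1e-3, disp=False)
print(best_gait)
```

On the real robot, run_gait would command a gait over the GPS-measured course and return measured power and velocity rather than a synthetic bowl.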


Today we discussed the future scope of our project for the full class period with Dr. Koditschek. In writing our code and attempting to coordinate GPS data with fmin and have the outputs of fmin control the robot, we realized that implementing this architecture would not leave us with much time, if any, for actual experimentation.

We will move ahead while keeping in mind that we will not be able to do everything we initially intended with the depth we were hoping for; time constraints mean that we must triage the tasks within our project in a way that lets us substantively explore each of them without spending too much time on any single one. We will continue looking at Nelder-Mead until roughly the middle of next week, paying particular attention to how the shape of the space can affect the outcome of Nelder-Mead and how noise in cost function measurements affects its ability to explore the space.

We will next look at different methods of interpolation, especially multivariate splines. While the interpolation of our data will be done near the end, we want to have understood it before then so that we do not flail in the latter portions of this project.
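To start getting a feel for this step, a minimal sketch using SciPy's scattered-data interpolation is below; the sampled "gaits" and cost values are synthetic, and the cubic method shown is just one of the options we would evaluate.

```python
# Sketch: interpolating scattered measurements with scipy.interpolate.griddata.
# A fake 2-D slice of gaitspace with a known quadratic "specific resistance".
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(3)
pts = rng.uniform(-1, 1, size=(200, 2))   # scattered sample gait parameters
vals = np.sum(pts ** 2, axis=1)           # synthetic cost at each sample

query = np.array([[0.25, -0.4]])          # an unsampled gait to estimate
est = griddata(pts, vals, query, method='cubic')[0]
print(est)   # roughly 0.25**2 + 0.4**2 = 0.2225
```

Our real data will be 4-dimensional, so we would need a method that handles higher-dimensional scattered data; this 2-D example is only to exercise the workflow.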


Today we read online about how Nelder-Mead actually optimizes. We are realizing that our decision not to implement Nelder-Mead on our own was a good one, as writing and debugging this algorithm would be an ambitious two-week project in itself. However, we are still somewhat concerned about how much we have chosen to do for this project, especially the amount of code to write.


Today we briefly talked with Deniz about using existing Nelder-Mead code to do GPS tuning. He has told us that this code has been implemented several times in the KodLab and that recreating it would not be the best use of our time as we would simply be reinventing the proverbial wheel. As such, we have chosen to use the existing KodLab architecture for this part of our project. Though this is not our own code, we have reached our first milestone in that we now have the code necessary to begin optimizations.

We attempted to use an older GPS from the KodLab to begin our optimizations, as the new one could not be found. We were not able to gather any data; Dynamism appeared to be connecting to the GPS, but we were not able to get any readings out of it other than infinity. We will discuss this matter with Clark Haynes on Monday and attempt to resolve the issue so that we can gather data in the coming week.

Week of April 10, 2011


Today we discussed our upcoming plans with Professor Koditschek and Deniz Ilhan. In particular, we discussed ways of testing Nelder-Mead and understanding how it can fail. We will first look at how Nelder-Mead converges on a simple, convex function. Then, we will add noise to this function to see how it affects the algorithm's convergence and the number of trials required. We expect that adding noise will increase the number of trials needed to converge, though if the noise is sufficiently small, the algorithm will still converge to the global minimum. If the magnitude of the noise is great enough, the algorithm may reach stagnation. If time permits, we will explore this noise threshold.
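The convex-function test described above takes only a few lines; in this sketch (our own toy setup, not robot data) we run fmin on a 4-D bowl and then on the same bowl with additive Gaussian noise, and compare where each run stops.

```python
# Toy experiment: Nelder-Mead (scipy.optimize.fmin) on a convex bowl,
# clean versus with additive Gaussian noise on every evaluation.
import numpy as np
from scipy.optimize import fmin

rng = np.random.default_rng(0)

def bowl(x):
    return np.sum(np.asarray(x) ** 2)       # minimum at the origin

def noisy_bowl(sigma):
    return lambda x: bowl(x) + rng.normal(0.0, sigma)

x0 = np.ones(4)
clean = fmin(bowl, x0, disp=False)          # converges near the origin
noisy = fmin(noisy_bowl(0.05), x0, disp=False)

print(np.linalg.norm(clean))   # essentially zero
print(np.linalg.norm(noisy))   # often stops short of the true minimum
```

Sweeping sigma in a loop like this would give us the noise threshold the entry above mentions.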

Then, we will assess how Nelder-Mead behaves on a very rugged surface; we expect that in instances such as these, Nelder-Mead will easily become stuck in local minima and never reach the global minimum, and will test this hypothesis by running the Nelder-Mead algorithm with several different starting simplices. If the spread of points of convergence reached from a given starting simplex is comparable to the magnitude of the noise we add, then we can conclude that the noise we have added is responsible for the differences in final values. However, if the spread is much greater than the magnitude of the noise, it is likely that the different minima are a result of the ruggedness of the specific resistance space.
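The multi-start check can be prototyped on a standard multi-minimum test function before we touch the robot. Below is a sketch using the classic two-variable Himmelblau function (our choice of stand-in for a rugged surface, not robot data); the starting points are arbitrary.

```python
# Multi-start Nelder-Mead on the standard 2-D Himmelblau function,
# which has four separate minima, all with value zero.
import numpy as np
from scipy.optimize import fmin

def himmelblau(p):
    x, y = p
    return (x**2 + y - 11)**2 + (x + y**2 - 7)**2

starts = [(0.0, 0.0), (-2.0, 2.0), (3.5, -2.0), (-3.0, -3.0)]
minima = [fmin(himmelblau, s, disp=False) for s in starts]

# The spread of the converged points reveals that the surface has
# several basins; on the robot, a spread much larger than the
# measurement noise would point to genuine ruggedness rather than noise.
for m in minima:
    print(np.round(m, 3), round(float(himmelblau(m)), 6))
```

Each run finds a true minimum (function value near zero), but different starts land in different basins, which is exactly the signature we will look for in the gait parameter space.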


Today we implemented our modified Nelder-Mead code (which is based on the existing KodLab code) on the X-RHex robot. Our code was working fairly quickly, but the robot failed repeatedly (by losing communication with Leg 4 and then failing to reboot when commanded to do so) so we did not have enough runtime to capture meaningful data. In the coming days, we hope to successfully complete more trials.


Today we spent time exploring the Nelder-Mead algorithm using several test functions. First, we ran the Nelder-Mead algorithm on a 4-dimensional version of the Rosenbrock function. This function is frequently used to test the efficacy of optimization algorithms, and we chose a 4-dimensional version because we will be using Nelder-Mead in a 4-dimensional space. We found that while the Nelder-Mead algorithm was able to converge to the global minimum of the noiseless Rosenbrock function, adding very small amounts of noise led to stagnation.
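This test is easy to restate: scipy.optimize ships the n-dimensional Rosenbrock function as rosen, with its minimum at (1, 1, 1, 1) in four dimensions. The starting point and noise level below are illustrative, not the exact values we used.

```python
# Nelder-Mead on the 4-D Rosenbrock function, clean and with small noise.
import numpy as np
from scipy.optimize import fmin, rosen

rng = np.random.default_rng(1)
x0 = np.full(4, 1.3)   # start near, but not at, the minimum at (1, 1, 1, 1)

clean = fmin(rosen, x0, maxiter=5000, maxfun=5000, disp=False)

def noisy_rosen(x):
    # Very small additive noise on each evaluation.
    return rosen(x) + rng.normal(0.0, 0.01)

noisy = fmin(noisy_rosen, x0, maxiter=5000, maxfun=5000, disp=False)

print(clean)               # close to [1, 1, 1, 1]
print(rosen(noisy))        # the noisy run may stop well short of zero
```

Comparing rosen evaluated at the two answers makes the effect of the noise concrete.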

We also tested the Nelder-Mead algorithm in a 3-dimensional version of the Himmelblau function (as very little information could be found about the 4-dimensional version); this function is also frequently used to test optimization algorithms. Again, the Nelder-Mead implementation we are using was able to find the minimum, though the introduction of noise made the algorithm stagnate. In the coming days, we will explore Nelder-Mead's response to noise on "friendlier" functions so that we have a better sense of how noise affects Nelder-Mead; adding noise to a function which is difficult to optimize does not give us a good sense of what noise alone can do.


Today we ran our code on X-RHex. We first ran it near the Towne building to verify that it was working; we were communicating with the GPS and changing gaits properly, though our specific resistance values were not decreasing very quickly. We then moved to Hill field and ran our code again; we achieved low specific resistance values at minimum, but again with wide fluctuation. However, as on Thursday, the robot lost communication and failed to boot when restarted. Unfortunately, this means that we were not able to see the Nelder-Mead algorithm converge. In the coming week, provided the robot cooperates, we will run more experiments, both optimizing gaits and gathering specific resistance data.

Week of April 17, 2011


Today we repaired the robot. The onboard CPU was not getting enough inrush current at start-up, so the robot would not boot. To fix this, we connected the CPU to the +5V regulator on the other battery board. We also identified a replacement battery board, so if this new +5V connection fails we will still have options before having to assemble a new battery board; in that case, due to time constraints, we would most likely have to abort the experimental aspect of our project. We discussed with Dan the notion that this is not meant to be a project about robot maintenance, and that if repairs become too time-consuming we will simply focus more on theoretical work; he agreed that this would be an appropriate course of action. The robot is currently functional, so provided good weather we will be able to collect more experimental data.


Today we were able to successfully perform some Nelder-Mead optimizations on the robot. While we initially ran the robot on Hill field, we decided to move off of it for three reasons: there is high traffic due to pick-up sports, so having enough space to conduct experiments is often hit-or-miss; the robot tends to get dusty, which can be bad for the electronics and generally takes time to clean off; and the variations in the ruggedness of the terrain, as opposed to a flat, hard surface, undoubtedly produce more noise in the specific resistance, affecting the Nelder-Mead algorithm's convergence. We are instead using the main walkway that runs along Hill field, turning the robot around once it reaches the end.

We were able to get the Nelder-Mead algorithm to reach stagnation for two different runs with the same leg speed and starting simplex, one using S.R. as the objective function and one using S.R./v^2 as in the Weingarten paper. From our theoretical work exploring Nelder-Mead's convergence on known functions, we have seen that stagnation can result from the algorithm's tolerance being smaller than the magnitude of the noise in the measurements. Experimentally, we believe this means that the noise in our specific resistance measurements was of greater magnitude than the tolerance for convergence. Despite this, we believe it is better to be too strict on convergence (in which case the tolerance can be manually adjusted during data analysis) than to let the optimization finish prematurely, as a looser tolerance may cause the algorithm to stop despite (potentially) being able to find better gaits.
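This trade-off shows up directly in fmin's xtol and ftol parameters. In the toy run below (the bowl and noise level are stand-ins for our measurements, not our actual code), the noise dwarfs ftol, so the simplex never satisfies the convergence test and the run ends by exhausting its evaluation budget instead, which is what stagnation looks like in practice.

```python
# When the measurement noise exceeds ftol, fmin keeps shuffling the
# simplex until it hits maxfun rather than declaring convergence.
import numpy as np
from scipy.optimize import fmin

rng = np.random.default_rng(2)

def noisy_cost(x):
    # Convex bowl plus noise far larger than the tolerance below.
    return np.sum(np.asarray(x) ** 2) + rng.normal(0.0, 0.1)

x0 = np.ones(4)
xopt, fopt, n_iter, n_calls, warnflag = fmin(
    noisy_cost, x0,
    xtol=1e-6, ftol=1e-6,       # far stricter than the noise magnitude
    maxfun=500, full_output=True, disp=False)

print(warnflag)   # 1 => maximum function evaluations reached
```

A warnflag of 1 (budget exhausted) rather than 0 (converged) is the programmatic signature of the stagnation we saw on the robot.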


Today we performed more Nelder-Mead optimizations on the walkway through Hill field, one with the same starting simplex and leg speed as yesterday, and one with a different starting simplex. Ideally we will complete enough optimizations with a single starting simplex to begin analyzing the noise in the objective function at that particular leg speed, and enough optimizations with different starting simplices to get an idea of the ruggedness of the gait parameter space (i.e., do different starting simplices still lead to the same optimized gait, or are there many local minima?).