AlexanderXydesDiary


DuCTT Navigation

Goals: The goal of this thesis is to develop controllers for navigating a robot within multiple different types of ducts or pipes. These environments include:

  • a horizontal duct
  • a vertical duct
  • a horizontal plane (no duct, or duct with walls bigger than robot)

Github repo

DuCTT Robot

DuCTT Mechanical Test & Validation

Alex Xydes Master's Thesis (current version)

Spring 2015 Work

May 17th, 2015

  • Keep writing, finish learning algorithms section (and as many more as possible).
  • Do robustness testing:
    • start <x,y,z> position
    • duct sizes
    • change parameters slightly -> can it still climb/traverse?

What I actually got done since last time:

  • Finished first draft of thesis! (file updated!)
    • Realized that I wasn't using a neural network, but a genetic algorithm.
    • Waiting on final results of some learning runs for control strategy 1 (sine waves)
  • Did robustness testing:
    • start <x,y,z> position (not much change)
    • duct sizes (controller works in ducts that fit the geometry)
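Control strategy 1 drives each actuator with a sine wave. A minimal sketch of what one such command signal could look like (parameter names here are illustrative assumptions, not taken from the actual controller code):

```python
import math

def sine_command(t, amplitude, frequency, phase, offset):
    """Return the actuator command at time t for a sine-wave controller.

    amplitude, frequency, phase, and offset stand in for the learned
    parameters of control strategy 1 (names are hypothetical).
    """
    return offset + amplitude * math.sin(2.0 * math.pi * frequency * t + phase)
```

In this style of controller the learning algorithm tunes one (amplitude, frequency, phase, offset) tuple per actuator or per actuator group.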

This week:

  • Do robustness testing:
    • change parameters slightly -> can it still climb/traverse? It should be able to; will vary parameters with a normal distribution (0 mean, 0.5 std dev)
  • Add final results from learning control strategy 1 (sine waves)
  • Make changes to thesis as necessary
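The planned parameter-robustness test could be sketched as below; `perturb` is a hypothetical helper, assuming each controller parameter gets independent zero-mean Gaussian noise (0 mean, 0.5 std dev) before re-running the climb:

```python
import random

def perturb(params, std_dev=0.5):
    """Return a perturbed copy of a learned parameter vector,
    adding zero-mean Gaussian noise to each value, as planned
    for the robustness tests (hypothetical helper)."""
    return [p + random.gauss(0.0, std_dev) for p in params]
```

Each perturbed vector would then be run in the simulator to check whether the robot can still climb or traverse.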

May 11th, 2015

What I wanted to do last time:

  • Look at smoothing again. Get video, play side-by-side with old video. Smooth climbing motion Side-by-side comparison
  • Implement multiprocess learning.
  • Start robustness testing:
    • start <x,y,z> position
    • duct sizes
    • cable lengths
    • motor power? I don't think this is applicable, as I'm not sure I'm using the motor simulation code; look into this.
    • change parameters slightly -> can it still climb/traverse?

What I actually got done since last time:

  • Got side-by-side smooth vs old video. Smooth climbing motion Side-by-side comparison
  • Implemented multiprocess learning. Not going to use unless I get in a time crunch.
  • Touched up a lot of sections of the thesis (file above, updated).
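The multiprocess learning mentioned above could look roughly like this: score each candidate parameter set in a worker pool. This is a sketch; `fitness` here is a placeholder, since the real code would evaluate each candidate with a full NTRT simulator trial:

```python
from multiprocessing import Pool

def fitness(params):
    # Placeholder: the real evaluation runs one simulation trial and
    # returns the climbing speed achieved by this parameter set.
    return -sum((p - 0.5) ** 2 for p in params)

def evaluate_population(population, workers=4):
    """Score every candidate parameter set in parallel."""
    with Pool(workers) as pool:
        return pool.map(fitness, population)
```

Parallel evaluation mainly helps when each trial is slow, which is why it is worth keeping in reserve for a time crunch.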

This week:

  • Keep writing, finish learning algorithms section (and as many more as possible).
  • Do robustness testing:
    • start <x,y,z> position
    • duct sizes
    • change parameters slightly -> can it still climb/traverse?

May 4th, 2015

What I wanted to do last time:

  • Finish lit review.
  • Add velocity parameters to controller to try to help smooth out robot movement.
    • Maybe add acceleration as well?

What I actually got done since last time:

  • Finished 1st draft of lit review (thesis file updated).
  • I'm not sure if new cost function made the climbing movement smoother or not, will get another video to compare.

This week:

  • Look at smoothing again. Get video, play side-by-side with old video. Smooth climbing motion Side-by-side comparison
  • Implement multiprocess learning.
  • Start robustness testing:
    • start <x,y,z> position
    • duct sizes
    • cable lengths
    • motor power? I don't think this is applicable, as I'm not sure I'm using the motor simulation code; look into this.
    • change parameters slightly -> can it still climb/traverse?

April 28th, 2015

What I wanted to do last time:

  • Continue writing lit review.
  • Get video/results from using neural network for gen 2 of climbing.
  • Continue learning trials for traversing horizontal plane and duct.

What I actually got done since last time:

  • Continued writing lit review (thesis file updated).
  • Continued learning trials.
    • horizontal duct traversal is much improved (2.13 cm/s) by using the corners of the duct, just like in vertical climbing.
    • horizontal plane seems to max out at 0.75 cm/s even with better friction coefficients.
    • climbing still not smooth even with updated cost function.
      • I think that's because my controller doesn't allow for smooth movement; it uses the maximum velocity it can in every state.
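The smoothness issue above can be sketched as follows: without a per-state velocity parameter, every state commands the actuator's maximum velocity. All names and the velocity limit here are assumptions for illustration:

```python
MAX_VEL = 1.0  # hypothetical actuator velocity limit

def commanded_velocity(state, state_vels=None):
    """Velocity command for the current controller state.

    Without a per-state table (the current controller), every state
    commands MAX_VEL, which makes motion jerky. With one, each
    state's velocity becomes a tunable, learnable parameter.
    """
    if state_vels is None:
        return MAX_VEL
    return min(state_vels.get(state, MAX_VEL), MAX_VEL)
```

Adding the per-state velocities (and possibly accelerations) to the learned parameter vector would let the learner trade raw speed for smoothness.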

This week:

  • Finish lit review.
  • Add velocity parameters to controller to try to help smooth out robot movement.
    • Maybe add acceleration as well?

April 21st, 2015

What I wanted to do last time:

  • Start writing lit review.
  • Get video of 2.55 cm/s climbing. Video
  • 6.1 cm/s climbing
  • Start second generation of machine learning on state machine controller.

What I actually got done since last time:

  • Started writing lit review.
  • Got video of 2.55 cm/s climbing. Video
  • Started second generation of machine learning on state machine controller.
  • Experimented with using the neural network for gen 1 of climbing; it didn't work as well as the Monte Carlo.
  • Also experimented with the neural network for gen 2 of climbing, and it's working much better than Monte Carlo for gen 2.

This week:

  • Continue writing lit review.
  • Get video/results from using neural network for gen 2 of climbing.
  • Continue learning trials for traversing horizontal plane and duct.

April 13th, 2015

What I wanted to do last time:

  • Continue reading papers for lit review, start writing.
  • Meet with Yoav to discuss status of code.
  • Work on more feedback for controller to solve falling down issue?
    • Start by using 60 seconds instead of 30 during training.

What I actually got done since last time:

  • Continued reading papers for lit review.
  • Met with Yoav to discuss state of code.
    • Determined plan of attack:
      • use half-sphere touch sensors, rotated so they sit in the corners of the duct; this will keep them from sliding horizontally
      • put all actuators on one frequency
      • use the major points in time (like contact between touch sensor and wall) as parameters
    • Adapted my state-machine based controller so that the major points in time could be tuned by machine learning
      • after 1 generation of Monte Carlo learning, a climbing speed of about 2.3 cm/s was achieved.
      • after the 2nd generation of Monte Carlo learning, a climbing speed of 2.55 cm/s was achieved. Video
      • this compares to the 1.4 cm/s speed of the physical robot with its inverse kinematic controls
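The Monte Carlo learning over the state machine's timing parameters can be sketched as below: sample candidate parameter sets around the best-so-far and keep whichever scores highest. `score` stands in for a simulator run returning climbing speed, and all names and hyperparameters are illustrative assumptions:

```python
import random

def monte_carlo_search(score, initial, generations=2, samples=50, spread=0.1):
    """Sketch of Monte Carlo learning over controller parameters.

    Each generation samples Gaussian perturbations of the current
    best parameter vector and keeps the highest-scoring candidate.
    """
    best, best_score = list(initial), score(initial)
    for _ in range(generations):
        for _ in range(samples):
            cand = [p + random.gauss(0.0, spread) for p in best]
            s = score(cand)
            if s > best_score:
                best, best_score = cand, s
    return best, best_score
```

Running a second generation centered on the first generation's winner is what took the climbing speed from roughly 2.3 cm/s to 2.55 cm/s.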

This week:

  • Start writing lit review.
  • Get video of 2.55 cm/s climbing. Video
  • Start second generation of machine learning on state machine controller.

April 6th, 2015

What I wanted to do last time:

  • Write Winter 2015 Status Report
  • Continue working on vertical climbing.
    • Experiment with hysteresis effect.

What I actually got done since last time:

  • Wrote Winter 2015 Status Report
  • Continued reading papers for lit review.
  • Continued working on vertical climbing.
    • Experimented with the hysteresis effect, using machine learning to discover the best length of time.
    • Having an issue where the robot starts tilting and then either gets stuck or falls down.
      • Also switched to 4 controllers: 1 for vertical cables, 1 for saddle cables, 1 for each linear actuator
        • this was done because the falling seems to be caused by instability in the back-and-forth movement, and I thought I could minimize that by having all the saddle cables use the same controller.
      • Tried having a second set of control parameters for when the robot is tilting.
      • This helped a bit, but it's still falling down. Climb then fall
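The second-parameter-set idea above amounts to switching controller gains based on the tilt sensor reading. A minimal sketch, where the threshold value and all names are assumptions:

```python
def select_params(tilt_deg, nominal, recovery, tilt_threshold=10.0):
    """Pick the active control parameter set from body tilt.

    Climb with the nominal parameters while upright, and switch to a
    recovery set once the tilt exceeds a threshold (the threshold and
    parameter-set contents here are hypothetical).
    """
    return recovery if abs(tilt_deg) > tilt_threshold else nominal
```

The diary entry suggests this kind of switching helped but did not fully prevent the falls.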

This week:

  • Continue reading papers for lit review, start writing.
  • Meet with Yoav to discuss status of code.
  • Work on more feedback for controller to solve falling down issue?
    • Start by using 60 seconds instead of 30 during training.

Winter 2015 Work

Fall 2014 Work

Summer 2014 Work