Dec 20 2010

First Successful X-Bee Test w/ EasyStar

In order to keep the Ecosynth team updated on the EasyStar's progress, I'll continue to post blogs about any important milestones made over winter break. Today I set out to get the X-Bees working properly so that they could send and receive data to/from one another. I used X-CTU to bind the ground and air X-Bees to the same channel and set the baud rate to 38400 to match the uBlox GPS. I also changed the ArduPilot source code to enable live data transfer and edited my computer's port settings to accept incoming data from the X-Bee device. After ensuring that HappyKillmore's Ground Control Station was able to process the incoming X-Bee data, I decided to take the plane for a walk. I left my computer, along with the ground X-Bee, in my dorm room while I went outside with all of the ArduPilot components. The following picture shows both the recorded path (orange) and the actual path (yellow) that I walked during the test.

X-Bee Test Walking 2
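
For anyone setting up a similar link, the ground side really just amounts to opening the X-Bee's serial port at the same 38400 baud and reading raw bytes; HappyKillmore's GCS handles this for you, but a bare-bones sketch of the idea (assuming a Linux or Mac machine and a hypothetical /dev/ttyUSB0 device name, not my actual setup) would look roughly like this:

#include <fcntl.h>
#include <termios.h>
#include <unistd.h>
#include <cstdio>

int main()
{
    const char *port = "/dev/ttyUSB0";        // hypothetical X-Bee device name
    int fd = open(port, O_RDONLY | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);                          // raw bytes, no line processing
    cfsetispeed(&tio, B38400);                // match the telemetry baud rate
    cfsetospeed(&tio, B38400);
    tcsetattr(fd, TCSANOW, &tio);

    char buf[256];
    for (;;) {                                // dump incoming telemetry to stdout
        ssize_t n = read(fd, buf, sizeof(buf));
        if (n > 0) fwrite(buf, 1, n, stdout);
    }
}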

After looking at the recorded output it was clear that some of the data was a bit off. The altitude reading stayed well above 100 ft throughout the test, and the magnetometer readings put the plane through a few spirals that I don't recall making. This may have been caused by electromagnetic interference from the densely packed electrical wires. The pitch and roll data, on the other hand, appeared to be spot on. I plan on flying the EasyStar in manual mode within the next week or so to make sure that the CG is correct and that the servos respond properly when routed through the ArduPilot. If all goes well I'll begin testing in both Fly-By-Wire modes, eventually working my way up to the first autonomous flight with the EasyStar.

Dec 10 2010

Summary Statistics for Herbert Run Field Work 2010

DBH to Height

After last week's meeting it was proposed that some summary statistics be prepared to sum up the data collected over the past year. The first step was to finish entering data from Jonathan's past data collections to make sure the dataset is fully up to date.

The data largely speaks for itself: based on the trend line seen in the first photograph, the results are essentially what we expected to see. The numbers bolded in the second photo represent the largest values for the given variable. All in all, the data collection yielded a large amount of excellent data, and more is on the way with the coming spring and a fresh batch of eager interns. This will be my last blog post for EcoSynth, but I hope to stay involved with the flights and watch the project grow.

Data

Average height of species

Tree distribution

Dec 09 2010

Flight Simulation with ArduPilot

Due to the upcoming exam week I have been unable to spend much time working on the ArduPilot, so for my last blog post of the semester I've decided to look into the flight simulators available for the ArduPilot. As of now there are three different methods of simulating a flight, each of which is explained below.

     Built-in sim mode:

This mode is pre-configured in the ArduPilot code and enables you to simulate a 3-D waypoint flight of a mission. The output of this simulation can be seen on the ground station. The turn, climb, and descent rates of your plane can be edited by changing the respective values located at the bottom of the AP_Config.h tab of the ArduPilot code. The relevant lines appear as follows, shown in their original form.

/*****************/
/*Debugging Stuff for Sim Mode*/
/*****************/
//12-1
#define TURNRATE 90 // (degrees) how fast we turn per second in degrees at full bank
//12-2
#define CLIMBRATE_UP 1000 // (meters * 100) how fast we climb in simulator at 90°
//12-3
#define CLIMBRATE_DOWN 3000 // (meters * 100) how fast we descend in simulator at 90°
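
As a rough illustration only (this is my reading of the units, not the actual ArduPilot sim code), TURNRATE is in degrees per second and the climb rates are in meters * 100 (i.e., centimeters), presumably per second, so a simulated update step might integrate them something like this:

// Illustration: integrating the sim rates over one loop iteration.
// The 20 Hz loop rate and the per-second assumption are mine; check the source.
const float dt = 0.05f;          // seconds per update

float sim_heading = 0;           // degrees
float sim_alt     = 0;           // meters

void sim_step(bool banked, bool climbing)
{
    if (banked)
        sim_heading += TURNRATE * dt;                        // full-bank turn
    sim_alt += (climbing ?  CLIMBRATE_UP  / 100.0f
                         : -CLIMBRATE_DOWN / 100.0f) * dt;   // cm -> m
}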

     X-Plane:

This mode requires that you purchase the X-Plane flight simulator for around $30. You will also need a communication program, which sends the ArduPilot's outputs to the flight simulator through the FTDI cable. This simulator provides a more visually realistic simulation of the flight, which enables you to fly the plane with the transmitter as well as switch between test modes. DIY Drones has an image of the X-Plane settings required for it to work with the ArduPilot, as well as the communications program.

     FlightGear:

FlightGear is a free, open-source flight simulator that can be used much like the X-Plane software to simulate a flight. For this you need to download the items listed on the DIY Drones webpage. I plan on working with this simulator, as well as the built-in sim mode on the ArduPilot, throughout the testing process.

Dec 08 2010

Final Progress Report Fall 2010

Red = Unmeasurable

Green = Tree data collected

[progress-map-12-3]

The past week or so has been very busy around school, so only a little time could be dedicated to actual data collection, but some progress was made. At this point in the season I find it very difficult to identify many of the trees due to their lack of foliage, though some trees' bark has made this relatively easy. For this reason I have decided that I will most likely end my data collection (at least for tree species) for the remainder of the year to avoid botching any data.

Another important aspect of this week was data entry, which I hadn't caught up on in several weeks, as most of my time was spent outside. The data has now been entered and organized by plot number to allow for easy reading. I hope to begin compiling statistics on the tree data this week with Jonathan's help.

Dec 06 2010

Near-Infrared Structure from Motion?

[TTC00007]

Some time ago we purchased a calibrated digital camera for the purpose of capturing reflectance of near-infrared (NIR) light from vegetation for our computer vision remote sensing research. The goal was to make 3D structure from motion point clouds with images recording light in a part of the spectrum that is known to provide very useful information about vegetation.

We purchased a Tetracam ADC Lite for use with our small aerial photography equipment. This camera has a small image sensor similar to what might be found in the off-the-shelf digital cameras we use for our regular applications, but has a modified light filter that allows it to record light reflected in the near-infrared portion of the electromagnetic spectrum. Plants absorb red and blue light for photosynthesis and reflect green light, which is why we see most plants as green. Plants are also highly reflective of near-infrared light: light in the portion of the spectrum just beyond visible red. This light is reflected by the structure of plant cell walls, and that characteristic can be captured using a camera or sensor sensitive to that part of the spectrum. For example, in the image above the green shrubbery appears bright red because the Tetracam is displaying near-infrared reflectance as red. Below is a normal-looking (red-green-blue) photo of the same scene.

Capturing NIR reflectance can be useful for discriminating between types of vegetation cover or for interpreting vegetation health when combined with values of reflected light in other 'channels' (e.g., red, green, or blue). A goal would be to use NIR imagery in the computer vision workflow so that this additional information can be used for scene analysis.
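
As a concrete example of combining channels, the classic vegetation index is NDVI = (NIR - Red) / (NIR + Red). Here is a short OpenCV sketch, assuming (and this is an assumption worth checking against the Tetracam documentation) that the false-color export puts NIR in the red channel and visible red in the green channel:

#include <vector>
#include <opencv2/opencv.hpp>

// Compute NDVI from a false-color image where NIR is displayed as red
// and visible red is displayed as green (an assumption about the export).
cv::Mat ndvi_from_false_color(const cv::Mat& bgr)
{
    std::vector<cv::Mat> ch;
    cv::split(bgr, ch);                   // OpenCV channel order is B, G, R
    cv::Mat nir, red;
    ch[2].convertTo(nir, CV_32F);         // NIR (shown as red)
    ch[1].convertTo(red, CV_32F);         // visible red (shown as green)
    cv::Mat num = nir - red;
    cv::Mat den = nir + red + 1e-6f;      // avoid divide-by-zero
    cv::Mat ndvi;
    cv::divide(num, den, ndvi);           // element-wise (NIR - R) / (NIR + R)
    return ndvi;                          // roughly -1 to 1; dense vegetation is high
}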

We have just started to play around with this camera, but unfortunately all the leaves are now off the main trees in our study areas. The newest researcher on our team, Chris Leeney, took these photos recently as he was experimenting with how best to use the camera for our applications.

It was necessary to import the images in DCM format into the included proprietary software to be able to see the 'false-color' image seen above. I also ran a small set of images through Photosynth, with terrible results and few identified features, link here. I wonder whether the poor reconstruction quality is due to the grayscale transformation applied prior to SIFT? It is likely impossible to say what is being done within Photosynth, but I ran some initial two-image tests on my laptop with more promising results.

I am running OpenCV on my Mac and am working with an open-source implementation of the SIFT algorithm written in C by Rob Hess, blogged about previously (27 October 2010, "Identifying SIFT features in the forest"). Interestingly, Mr. Hess recently won 2nd place in an open source software competition for this implementation. Congratulations!

Initial tests showed about 50 or so correspondences between two adjacent images. When I ran the default RGB-to-grayscale conversion it was not readily apparent that much detail was lost, and a round of the SIFT feature detector turned up thousands of potential features. The next thing to do will be to get things running in Bundler and perhaps take more photos with the camera.
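
For reference, here is roughly what that two-image test looks like if you use the SIFT detector that ships with recent OpenCV releases rather than Rob Hess's C library (the file names are placeholders, and the 0.8 ratio-test threshold is the usual value from Lowe's paper):

#include <cstdio>
#include <vector>
#include <opencv2/opencv.hpp>

int main()
{
    // Load two adjacent frames; IMREAD_GRAYSCALE applies the standard luminance
    // weighting, which gives the red channel (NIR, in the false-color images) a
    // fairly low weight.
    cv::Mat img1 = cv::imread("frame_01.png", cv::IMREAD_GRAYSCALE);
    cv::Mat img2 = cv::imread("frame_02.png", cv::IMREAD_GRAYSCALE);

    cv::Ptr<cv::SIFT> sift = cv::SIFT::create();
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat desc1, desc2;
    sift->detectAndCompute(img1, cv::noArray(), kp1, desc1);
    sift->detectAndCompute(img2, cv::noArray(), kp2, desc2);

    // Match descriptors and keep only distinctive correspondences (ratio test).
    cv::BFMatcher matcher(cv::NORM_L2);
    std::vector<std::vector<cv::DMatch>> knn;
    matcher.knnMatch(desc1, desc2, knn, 2);
    int good = 0;
    for (const auto& m : knn)
        if (m.size() == 2 && m[0].distance < 0.8f * m[1].distance)
            ++good;

    std::printf("features: %zu and %zu, matches kept: %d\n",
                kp1.size(), kp2.size(), good);
    return 0;
}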

Sorry to scoop the story, Chris; I was playing with the camera software, got the false-color images out, and just had to test it out. I owe you one!

Dec 06 2010

Tech Update: Depth Cameras and Kinect

Lidar systems rely on time of flight, and stereoscopic sensors rely on differential feature detection across a baseline, but these aren't the only ways to get 3D information from a scene. Hizook covers several models of "depth camera".

The Kinect is based on the PrimeSense platform that MS licensed a year or two ago. As is often the case with MS, a competing technology based on time of flight, called ZCam, was acquired at around the same time.

Kinect Diagram from Wikimedia Commons, CC-Attribution

Despite the fact that there are three lenses on the front of the thing, this is not a stereoscopic solution. One is an infrared laser projector, the second is an infrared camera, and the third is an RGB camera. The projector puts out a grid pattern of dots, and the infrared camera appears to measure the density / apparent size and the spatial relationships of the dots in order to extract a fairly low-precision Z dimension. I'm not certain whether there is any high-speed processing going on with the timing of the dots versus the infrared sensor; if there is, then it's possible to substitute an active light source for the second camera in a stereoscopic solution. The RGB cam maps color values to each dot.
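
Whatever PrimeSense is doing internally, any scheme that exploits the projector-camera separation comes down to ordinary triangulation, the same geometry as stereo. A back-of-the-envelope sketch (not a description of their actual algorithm):

// Depth from the observed shift of a projected dot against a reference pattern.
// f_px: IR camera focal length in pixels; baseline_m: projector-to-camera
// separation in meters; disparity_px: the dot's shift in pixels.
double depth_from_dot_shift(double f_px, double baseline_m, double disparity_px)
{
    return f_px * baseline_m / disparity_px;   // depth in meters
}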

This might be quite useful for indoor mapping, and potentially even for outdoor mapping if they figure out how to manipulate the hardware a bit. Whether it's possible to integrate this with some high-resolution pictures and interpolate to get better coverage is an open question. We do know that SIFT + image bundling falls flat in areas of low surface texture and low dynamic range, where theoretically depth cameras and lidar work just fine. Here are two videos from Dieter Fox's Intel / University of Washington team working with this hardware:

[Embedded videos]

 


Kinect hacks are making progress *rapidly*, with new implementations appearing every day since the device was made usable with computers.


Edit: There is a great deal of confusion about how the Kinect works. The precise implementation isn't well understood, but the PrimeSense patents involve a "Light Encoded" matrix of dots. There doesn't seem to be high enough geometric or temporal resolution in the image sensor to account for all of these dots precisely enough to determine depth; there must be some importance to the stereo separation of the camera and the projector in their algorithm, but even then they've got some problems.

They could make a tradeoff there, though. Given the constraints, one way I might rig this to achieve 60 Hz depth imaging in agreement with their patent would be to group the dots into 16-dot 4x4 grids, and then select one dot from each grid to light up for the duration of a 1/960-second subframe. Each subframe is solved for depth based on stereo separation, and 16 subframes are composited together into one frame, then smoothed for intensity and written as the Z dimension to the RGB data. With less closely spaced dots, and the potential to arrange the selection of dots into discrete, detectable patterns (one for each subframe), determining relative position would be plausibly achievable with a low-resolution sensor.

Some discussion on the topic:

Wired

Ars Technica Forums

Mirror Image

 

Edit2:

I found the paper that gives a brief overview of their process. It's based on a SLAM algorithm which combines the sparse RGB feature recognition that our system uses with a point cloud generated by the depth layer, along with a number of techniques to simplify and align surfaces and close loops to reduce error.

Some additional links:

http://www.ros.org/wiki/kinect

http://liu-cv.blogspot.com/2009/04/open-source-slam-software.html

http://openslam.org/

Edit3:

While it's definitely not at the same usability level as Dieter Fox's work, here's the first Kinect SLAM demo with code available:

[Embedded video]

 

Dec 03 2010

Corrected MK GPS tracks

The GPS tracks for the Mikrokopter were somewhat mystifying. The altitude at the calibration point didn't seem to be zero, and the programmed flight altitude didn't seem to be right either. GPS is relatively weak in the Z dimension compared to X and Y, but this didn't look like random noise. We understand that Google Earth's DEM isn't perfect, but even the flight paths were distorted. Viewed in Google Earth with the default settings, they exhibited a strange curve that seemed to follow the terrain at height. When you tried to put the track relative to the ground (Above Ground Level) it started in midair, and when you tried to put it in an absolute frame (Above Mean Sea Level) it disappeared into the ground.

[Image of 0m relative to ground - distorted by DEM and starts in the air]

 

[Image of 0m absolute - begins belowground]



Initially I thought that things were simply measured relative to the GPS calibration site. The offset mode in Google Earth, the natural place to go to fix this, does not work. Assuming that you want a flat surface at a fixed elevation AGL or AMSL, Google strips the altitude information and replaces it with a fixed amount, which produces, respectively, a flat surface for AMSL or a mirror image of the digital elevation model for AGL. Examining the data in plain text, I saw that the first elevation recorded was 6 meters or so... not the alleged ground level of 49 meters where the Navi-Ctrl was calibrated, *or* sea level.

[100m offset absolute produces a flat surface above ground]

[100m relative to ground produces a mirror image of the DEM, with no vertical information]

 

The air-pressure-based altimeter only makes things more mysterious, because it's difficult to tie its numbers to elevation. According to a post on the MK website, the altimeter measures ticks, with an adjustable scale that defaults to 22.5 ticks per meter. Examination of the data suggests that, unlike the GPS track, it does zero out when the Navi-Ctrl is calibrated. Further investigation of its accuracy is pending.
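
Assuming our unit is still at the default scale, converting the logged altimeter values back to meters is just a division (a sketch; the ticks-per-meter value is configurable):

// MK barometric altitude: 22.5 ticks per meter by default.
double baro_ticks_to_meters(double ticks)
{
    return ticks / 22.5;
}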

It turns out the GPS unit we have in the MK records altitude against the WGS84 datum. This is an ellipsoid, which approximates the earth with a simple equation-derived curve stretching in a slightly-less-than-spherical fashion from the poles to the equator.

Since we entered the era of GPS and satellite determination of location, we've realized that an ellipsoid doesn't work very well for altitude in relation to sea level. The Earth's entire mass determines local gravity, which results in a sea surface that smooths out to fill a particular gravitational level. Unfortunately, that mass isn't homogeneously distributed, even if we can average it out to a very precise center of mass. There are significant regional variations in the density of the earth, and thus in the iso-gravity surface we call mean sea level. Correcting for these yields a gravity-based model of MSL called a geoid.

The standard international geoid for representing elevation relative to MSL seems to be WGS84 + EGM96. This is what Google Earth uses, and why none of the tracks fit well.

I eventually settled on GPX Editor, which allows you to select the track and shift its altitude, to fix this. But what offset to use? A website run by the NGA gives us the answer: http://earth-info.nga.mil/nga-bin/gandg-bin/intptW.cgi . At Herbert Run, latitude 43.257389 and longitude -76.705690 gives a geoid height of -34.73 meters.
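
The relationship being exploited here is h = H + N, where h is the GPS (ellipsoid) altitude, H is the elevation above mean sea level, and N is the geoid height from the NGA calculator. So the correction is a single constant shift per site; a sketch of what GPX Editor is doing for us:

// Convert a WGS84 ellipsoid altitude to elevation above mean sea level (EGM96)
// using the local geoid height N. With N = -34.73 m at Herbert Run, this adds
// about 34.73 m to every track point.
double msl_from_ellipsoid(double h_ellipsoid_m, double geoid_height_m)
{
    return h_ellipsoid_m - geoid_height_m;   // H = h - N
}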



Given the low resolution and accuracy of Google's DEM, this is most suitable for flat areas rather than the steep hills at Herbert Run, but it's a large improvement when viewed in Google Earth.

GPX Editor:

[GPX Editor allows you to offset the track]

The corrected result:

[The corrected track]

Google has an elevation profile tool that could be useful:

[An altitude profile of the corrected track]

Dec 02 2010

ArduPilot Fixed Once Again

Just prior to Thanksgiving break the ArduPilot board stopped powering on once again. I spent this past week troubleshooting the board and found that another one of the embedded wires had become disconnected. Unlike the last time this happened, the wire appeared to be charred, indicating that the board had shorted out while I was working on it. After double-checking the polarity of the cables and reviewing the diagrams on DIY Drones I was unable to find any connection errors. I was able to fix the board by using yet another jumper cable to complete the circuit. The image below shows the location of the two jumper wires that are currently attached to the board (indicated by white lines). Once again, the ArduPilot powered up and showed no signs of further damage. It wasn't until later in the week that I noticed that the desk I had been working on had collected a large number of solder globs and loose wires. This leads me to believe that while I was working on the ArduPilot I had placed it on top of a loose wire or a solder glob, causing the board to short out. Yet another reason to keep your soldering area clean.

ArduPilot Jumpers