Bee Dances Diary


Waggle Detection

As last discussed, we wanted to focus on the bee's orientation to detect a waggle. The initial results weren't promising, as the method was prone to small variations in the movement. So we changed our approach to computing the orientation of the bee: instead of tracking the direction of motion of the head, we now fit a rectangle around the bee and compute the angle the rectangle makes with the x-axis. Observing this orientation, we clearly see a pattern during the waggles. In the image below, the red line represents the orientation angle of the bee, and the waggles happen at the peaks and valleys of this plot. Note: a peak and a valley represent roughly the same angle (θ and θ + 360°); they are just better visualized this way.
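
A minimal sketch of this rectangle fit, assuming OpenCV 4 in Python and a binary mask of the bee obtained beforehand; the function and mask names, and the largest-blob assumption, are illustrative rather than our exact code:

 import cv2
 import numpy as np
 # Sketch: fit a rotated rectangle to the largest blob in a binary bee mask
 # and report the angle it makes with the x-axis.
 def bee_orientation(bee_mask):
     contours, _ = cv2.findContours(bee_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
     if not contours:
         return None
     bee = max(contours, key=cv2.contourArea)        # assume the largest blob is the bee
     (cx, cy), (w, h), angle = cv2.minAreaRect(bee)  # rotated rectangle around the bee
     if w < h:                                       # align the angle with the long axis
         angle += 90.0
     return angle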

Angle vs sharpness.png

In the plot, the blue line represents the sharpness of the bee, which decreases when a waggle happens (consistent with the red peaks and valleys). We use these two parameters, 1) a peak/valley of the red curve and 2) a decrease in sharpness (bottom 30%), to detect whether a waggle is happening. The above graph is a special case with a very clear pattern. In other videos the bees don't waggle with such high frequency, but these distinguishing parameters were still observed. The waggle detection results for this particular plot are shown in this video. [1] The text "waggle" appears in the top-right corner when our algorithm predicts a waggle.
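
A rough sketch of how the two cues could be combined per frame, assuming precomputed per-frame angle and sharpness arrays and SciPy's peak finder; the prominence value is a placeholder, not a tuned parameter:

 import numpy as np
 from scipy.signal import find_peaks
 # Flag a frame as a waggle candidate when the orientation curve has a peak
 # or a valley AND the sharpness is in its bottom 30%.
 def detect_waggle_frames(angles, sharpness, prominence=20.0):  # prominence is a guess
     angles = np.asarray(angles, dtype=float)
     sharpness = np.asarray(sharpness, dtype=float)
     peaks, _ = find_peaks(angles, prominence=prominence)       # red-curve peaks
     valleys, _ = find_peaks(-angles, prominence=prominence)    # red-curve valleys
     extrema = np.zeros(len(angles), dtype=bool)
     extrema[np.concatenate([peaks, valleys])] = True
     blurred = sharpness <= np.percentile(sharpness, 30)        # bottom 30% of sharpness
     return extrema & blurred                                   # both cues must agree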

There is still room for improvement and we are exploring other features that can make this method more robust.


--Tushar Bansal 20:17, 4 March 2017 (PST)

Discussed Approach

As we last discussed, the approach is to analyze and predict the waggle of the bee along a line segment and then predict the line itself (the latter is not done currently). This video depicts the analysis on a sample. https://drive.google.com/a/eng.ucsd.edu/file/d/0Bxl8rYlGsKW1SWQxUU5kOFlfNVE/view?usp=sharing

In the video, we see a predefined line where the bee performs the waggle dance. We also see the point closest to the bee on the line (green marker). We can analyze the position and the speed of the bee relative to the line to check if it’s waggling or not.
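
For reference, a small sketch of how the green marker (the point on the predefined line closest to the bee) can be computed with NumPy; the point names are illustrative:

 import numpy as np
 # Project the bee position onto the predefined line segment and clamp the
 # projection to the segment's endpoints.
 def closest_point_on_segment(bee, a, b):
     bee, a, b = (np.asarray(v, dtype=float) for v in (bee, a, b))
     ab = b - a
     t = np.dot(bee - a, ab) / np.dot(ab, ab)   # projection parameter along the line
     t = np.clip(t, 0.0, 1.0)                   # stay within the segment
     return a + t * ab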

However, we will face two issues using this approach:

  • Fitting the motion to a model like a Gaussian won't work very well: for a waggle we don't only need motion along the line, we also need to check that it is a waggling motion, since the bee often moves in a straight line without waggling.
  • Predicting the line is not as straightforward as we thought, because the bee does not necessarily follow the figure-eight structure. Sometimes it starts waggling without the eight shape, or even within the semicircles of the eight shape (this can be seen in the video).

Other Ideas

Zig-Zag motion

From the tracking results, we can observe a zig-zag motion of the bee when it waggles. A very naive predictor would be to capture these zig-zag patterns and predict waggles where the zig-zag motion happens continuously for some period of time. I ran an analysis to filter all the positions where the bee followed a zig-zag motion (i.e. two consecutive turning angles < 90 degrees). The following video shows green blips where this criterion is satisfied. If we filter out the singular blips and join blips separated by only a few frames, we can observe that the prediction is decent for a naive classifier. https://drive.google.com/a/eng.ucsd.edu/file/d/0Bxl8rYlGsKW1a3A0cHkxblNhXzQ/view?usp=sharing
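
A naive sketch of this zig-zag filter, assuming a per-frame array of bee positions; interpreting "two consecutive angles < 90 degrees" as two consecutive sharp turning angles is my reading of the criterion above:

 import numpy as np
 # The angle at point i is formed by the vectors to the previous and next
 # positions; a sharp turn has this angle below 90 degrees, and we require
 # two consecutive sharp turns before flagging a zig-zag.
 def zigzag_frames(positions, angle_thresh_deg=90.0):
     p = np.asarray(positions, dtype=float)
     sharp = np.zeros(len(p), dtype=bool)
     for i in range(1, len(p) - 1):
         u, v = p[i - 1] - p[i], p[i + 1] - p[i]
         denom = np.linalg.norm(u) * np.linalg.norm(v)
         if denom == 0:
             continue
         ang = np.degrees(np.arccos(np.clip(np.dot(u, v) / denom, -1.0, 1.0)))
         sharp[i] = ang < angle_thresh_deg
     zig = np.zeros_like(sharp)
     zig[1:] = sharp[1:] & sharp[:-1]           # two consecutive sharp turns
     return zig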

Harris Corner Detector

Following from the above idea, instead of using the zig-zag motion we use the Harris corner detector to find the sharp corners, and, as in the above approach, we filter out the singleton results and join nearby corners. The image below shows the predictions of sharp corners using the Harris corner detector. Green blips are the predictions by the Harris corner detector, and areas marked red are actual waggle positions.
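
A minimal sketch of the corner step with OpenCV's cornerHarris, assuming the bee trajectory has been drawn onto a grayscale image; the parameter values are common defaults, not tuned ones:

 import cv2
 import numpy as np
 # Run the Harris detector on a grayscale image of the drawn trajectory and
 # keep pixels whose response is a small fraction of the maximum.
 def harris_corner_points(path_img, block_size=2, ksize=3, k=0.04, rel_thresh=0.01):
     response = cv2.cornerHarris(np.float32(path_img), block_size, ksize, k)
     ys, xs = np.where(response > rel_thresh * response.max())
     return list(zip(xs, ys))                   # (x, y) locations of sharp corners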

Harris.png


Sharpness Quotient

This idea exploits the fact that when a bee does the waggle dance, its abdomen and most of its body become blurred. (This may not be true for videos with higher fps, but we can always reduce the fps for this particular analysis.) In this technique, we first retrieve the orientation of the bee by fitting a rectangle enclosing the bee. We then compute the sharpness of the image inside the rectangle. Sharpness is measured by first applying the Laplacian filter to the image and then taking the variance of the result. The technique is from http://ieeexplore.ieee.org/document/903548/. The plot below shows the sharpness measure for the corresponding frames; the red marked areas are the frames where an actual waggle happens.
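
A short sketch of the sharpness measure, assuming OpenCV, a grayscale frame, and the rotated rectangle already fitted to the bee (the names are illustrative):

 import cv2
 import numpy as np
 # Variance of the Laplacian inside the rotated rectangle fitted to the bee
 # ("rect" is a cv2.minAreaRect result); a low value indicates motion blur.
 def sharpness_in_rect(gray_frame, rect):
     box = cv2.boxPoints(rect).astype(np.int32)       # corners of the rotated rectangle
     mask = np.zeros(gray_frame.shape, dtype=np.uint8)
     cv2.fillConvexPoly(mask, box, 255)               # mask covering the bee rectangle
     lap = cv2.Laplacian(gray_frame, cv2.CV_64F)      # Laplacian filter
     return float(np.var(lap[mask == 255]))           # low variance = blurred bee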

Sharpness.png


We can clearly see a dip in the sharpness measure for waggle frames. We can construct our algorithm around this pattern to predict a waggle.

--Tushar Bansal 15:16, 5 January 2017 (PST)

Tracking using Marker (without optical flow)

This video shows tracking of the bee marker without using optical flow; the full path of the bee is drawn. [2]

This video shows the same tracking as above, but with a diminishing path (only the last ~4 seconds are shown). [3]

--Tushar Bansal 23:00, 14 December 2016 (PST)

Optical Flow

After talking to Prof. Freund, we decided to explore optical flow methods to track the bee's movements. To compute all the parameters of the bee, we only need to track the movement of the thorax and the abdomen. So we prefer a sparse implementation over a full-frame method.

--Yoavfreund 22:29, 10 December 2016 (PST) I think there was some misunderstanding. I thought you have good tracking using the color dots and that the role of the optical flow is to identify the times when the bee is performing a high-frequency wobble (maybe I got the name wrong here). In the experiments here it seems you are trying to track using optical flow. I am not sure why you are doing that and how it is supposed to work.

I implemented the Lucas-Kanade method to compute the optical flow of the thorax and the abdomen. The input on the first frame is given by the user. The method works well for the marked area of the thorax. The implementation can be seen in the video here. [4]
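
A minimal sketch of this tracking loop with OpenCV's pyramidal Lucas-Kanade; the video filename and the two seed points (thorax, abdomen) are placeholders standing in for the user's input on the first frame:

 import cv2
 import numpy as np
 cap = cv2.VideoCapture("bee_video.mp4")          # hypothetical input video
 ok, prev_frame = cap.read()
 prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
 pts = np.array([[[320.0, 240.0]], [[330.0, 260.0]]], dtype=np.float32)  # thorax, abdomen
 lk_params = dict(winSize=(21, 21), maxLevel=3,
                  criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
 while True:
     ok, frame = cap.read()
     if not ok or len(pts) == 0:
         break
     gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
     new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None, **lk_params)
     pts = new_pts[status.flatten() == 1].reshape(-1, 1, 2)   # keep only points still found
     prev_gray = gray
 cap.release()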

However, the method doesn't work very well for the abdomen, possibly for two reasons:

  • The Aperture Problem
  • Similar intensities of nearby bees

The implementation is shown here. [5]

To solve the abdomen problem, I am currently looking at some more recent optical flow algorithms that have performed well on the Middlebury evaluation [6], like DeepFlow and TV-L1 flow.

--Tushar Bansal 14:30, 9 December 2016 (PST)

Results

The blue line gives the direction of the bee.


Bee2.png


Bee1.png

Approach

We divide the whole problem into three subproblems of finding the parameters mentioned below:

Position of the Bee

  • Spot the marker and select the area around it.
  • Threshold the selected area and find the largest contour that contains the marked points.
  • We assume that the largest contour is the whole bee and take its centroid as the position of the bee (a sketch follows this list).
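
A rough sketch of these three steps, assuming OpenCV and that the marker position in the frame is already known; the window size and threshold value are placeholders, not tuned parameters:

 import cv2
 import numpy as np
 # Centroid of the largest contour in a thresholded window around the marker.
 def bee_position(frame, marker_xy, half_window=60, thresh=80):
     x, y = marker_xy
     x0, y0 = max(0, x - half_window), max(0, y - half_window)
     roi = frame[y0:y + half_window, x0:x + half_window]
     gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
     _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
     contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
     if not contours:
         return None
     bee = max(contours, key=cv2.contourArea)     # assume the largest contour is the bee
     m = cv2.moments(bee)
     if m["m00"] == 0:
         return None
     return (x0 + m["m10"] / m["m00"], y0 + m["m01"] / m["m00"])  # centroid in frame coords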

Direction of Bee

Join the marked point to the centroid of the bee. (This assumes the marker is on the thorax.)
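
A small sketch of this angle computation; the sign convention (centroid toward the thorax marker, with the image y-axis pointing down) is an assumption:

 import numpy as np
 # Direction of the bee as the angle, in degrees from the x-axis, of the
 # vector from the body centroid to the thorax marker.
 def bee_direction(marker_xy, centroid_xy):
     dx = marker_xy[0] - centroid_xy[0]
     dy = marker_xy[1] - centroid_xy[1]
     return float(np.degrees(np.arctan2(dy, dx)))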

Waggle Dance

There are two different ways to address this problem. The first is to check for any oscillatory movement of the bee's abdomen. However, the abdomen being hidden under the wings in some frames can lead to inaccuracies. To tackle this, we are working on a second approach: studying the wings to predict the dance. The wings in the images are much clearer, but it is not established whether bees make any pattern with their wings during the dance.

--Tushar Bansal 03:34, 2 November 2016 (PDT)

Data

The data we have is from multiple sources, and each set has different characteristics. An ideal video should have:

  • High Resolution
  • One marked bee per video, OR the bee of interest marked with a unique, easily distinguishable (preferably bright) color
  • Fixed camera
  • No obstruction
  • Not too wide a field of view

Currently, the videos used for analysis are selected manually so that they satisfy most of these criteria. A few of the videos being used are:

  1. https://drive.google.com/drive/folders/0B56KYy4jU4TkdUVBZGNndVN6Umc?usp=sharing
  2. https://drive.google.com/drive/folders/0BzlpJXJvUCqfRFNfVUdjSDlaNEE?usp=sharing

--Tushar Bansal 03:34, 2 November 2016 (PDT)

Problem Statement

We aim to capture useful information (like trajectory, waggle time, etc.) from the bee videos, where the bee of interest is marked. The problem essentially comes down to finding three features for every frame of the video:

  • Position of the bee.
  • Direction of the bee.
  • Whether the bee is performing the waggle dance.

All the other key parameters can be calculated if we can correctly extract the above data from the videos.

--Tushar Bansal 03:35, 2 November 2016 (PDT)