# IdansDiary

## Idan Izhaki's Diary - Spring 2015

--Yoavfreund 11:23, 19 June 2015 (PDT) Web site needs to be restarted.

--Yoavfreund 21:59, 31 May 2015 (PDT) Are these all pointing to the same server? I can't get past the "page loading" rotating logo.

--Yuncong Chen 22:25, 16 May 2015 (PDT) Please make a google spreadsheet to keep track of this list of mismatched pairs.

Current mismapping (<slice pair>: <segments>):

• 0-1: 7, 12, 14, 6
• 1-2: 14
• 2-3: 10, 11, 12, 16
• 3-4: 3, 5
• 4-5: 1, 5, 8
• 5-6: 10
• 6-7: 17, 11, 18, 20, 10
• 7-8: 2, 9, 11, 0, 13
• 8-9: 11, 3, 8, 0
• 9-10: 4, 14, 6
• 10-11: 12, 13, 11, 6, 3
• 11-12: 9, 10, 14, 12
• 12-13: 4, 7, 5, 6
• 13-14: 12, 8
• 14-15: 6, 1
• 15-16: 12, 4, 7, 5, 9, 8
• 16-17: 7, 3, 4, 0, 1
• 17-18: 9, 5, 11, 10
• 18-19: 7, 6, 2, 4, 11
• 19-20: 17, 9, 16, 14, 15, 11, 6, 13, 12
• 20-21: 11, 10, 6, 13, 2, 14
• 21-22: 8, 10, 15, 14
• 22-23: 12, 5, 4, 13
• 23-24: 14, 5, 9, 8, 7, 6, 11
• 24-25: 9, 5, 0, 6, 1, 10
• 25-26: 4, 14, 6, 11, 13, 12
• 26-27: 12, 10, 11, 6, 0, 5
• 27-28: 11, 8, 10, 12, 13, 10, 9
• 28-29: 5, 2, 7, 8
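Until the Google spreadsheet Yuncong asked for exists, the mismapped pairs above can be kept in a small dict and exported to CSV; a hedged sketch (only the first few pairs shown, the rest elided):

```python
# Hypothetical bookkeeping for the mismapped (slice pair -> segments) list
# above, exportable to CSV for a shared spreadsheet.
import csv
import io

mismapped = {
    (0, 1): [7, 12, 14, 6],
    (1, 2): [14],
    (2, 3): [10, 11, 12, 16],
    # ... remaining pairs from the list above
}

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["slice_pair", "segments"])
for (a, b), segs in sorted(mismapped.items()):
    writer.writerow([f"{a}-{b}", " ".join(map(str, segs))])

print(buf.getvalue())
```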

Week 6-9

1. Follow-up on feedback: http://seed.ucsd.edu/mediawiki/index.php/Feedback_to_Idan%27s_Human_Classification_Web_App

2. Many stability and cleanup updates.

3. Started writing the thesis paper. On page 15.

4. Meeting summary:

-- Yuncong will add missing sideBySide pairs from 12 to 30

-- Now supports regions across non-adjacent boundaries. Will give me a pickle file for that

-- -- Landmark 5 example: [(3, 7), (4, 8), (5, 1)...]

-- Option 1: Positive class - histogram of each one of the regions/landmarks, using binary classification.

-- -- Boosting - allows appending more weak learners; that is how you can update incrementally, which is useful for online learning. Input can be given one sample at a time.

-- -- SVM / Perceptron

-- Option 2: Generative model vs. Discriminative model

-- -- Describe how a generative model acts vs. specific examples

-- Take negative and positive examples for each landmark (boundaries)

-- Those that were wrongly matched from other sections are even more important.

-- Boundaries: instance of landmark in a certain section.

-- Landmark: real entity all over the sections.

-- -- ==> Landmark 1 appears in sections 4, 5, 6

-- -- ==> Landmark 1 appears in section 3 as boundary 5

-- -- ==> Landmark 5: [(3, 7), (4, 8), (5, 1)...] == (section 3, boundary 7), ....

-- -- ==> Probably 30 good and consistent textures (landmarks)

-- 9 bins should be enough to train the classifier (I think we have 14, my bad).

-- * For each new patch, run all classifiers and return the one that gives the closest match.

-- * Extracting textons and representing textures

-- * The choice of parameters in the Gabor filter: number of orientations, number of scales

-- * Followed by detecting the boundaries

-- * How the website is based on such data

-- Literature review: 1. Texture identification, specifically Gabor filters. 2. Comparison and how people have been using them. 3. Relate to our application. We have something different from theirs; our goal is not exactly the same.

-- * Find more papers that might be similar to what I'm doing right now. It is not that clear which to search for.

-- -- * Novel goal - filter out ambiguous example

-- Experiments section:

-- -- * When I have the data from the labelers - accuracy.

-- -- * How much improvement the interface has brought us.

-- -- -- We have supervision - how does it help? Better accuracy after and before.
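The landmark/boundary bookkeeping from the meeting notes above could live in a plain dict; a hypothetical sketch (names are mine, not the pipeline's):

```python
# Hypothetical sketch: a landmark is a real entity spanning sections; each
# (section, boundary) pair is one instance of it, as in the notes above.
landmarks = {
    5: [(3, 7), (4, 8), (5, 1)],  # landmark 5 appears in section 3 as boundary 7, etc.
}

def sections_of(landmark_id):
    """Sections in which a landmark appears."""
    return [sec for sec, _ in landmarks[landmark_id]]

def boundary_in(landmark_id, section):
    """Boundary index of a landmark in a given section, or None if absent."""
    for sec, b in landmarks[landmark_id]:
        if sec == section:
            return b
    return None

print(sections_of(5))      # [3, 4, 5]
print(boundary_in(5, 3))   # 7
```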

Week 5

--Yuncong Chen 22:29, 16 May 2015 (PDT) Please see my comments on the current web app at Feedback to Idan's Human Classification Web App

1. -DONE- Image redundancy bug.

2. -DONE- Introduction page with explanation how to use the app.

3. -DONE- Page numbers that keep updating.

4. -DONE- In global view show all images (not selected with black border).

5. -DONE- Some other interface changes to make it more user friendly and clean.

6. Collect all classifications across ALL layers (slices) and map (currently manually) all super-pixels labels to generate all pages.

Goals next: send it out, get results, use perceptron / boosting / SVM to train on classification mistakes.

Week 4

Change of heuristic: 1. Work on 2 layers ("slices") at a time. Select all super-pixels classified as the same between the 2 layers, but classified incorrectly.

2. Take a sample from each layer as "reference" images, and all other super-pixels in radius Rd and Rh (physical distance and histogram distance).

3. Use Adaboost / Perceptron classifier to identify the differences in histogram and fix current gabor-based algorithm accordingly.
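The perceptron option in step 3 can be sketched on texton-style histogram features; the data here is synthetic (Dirichlet samples standing in for real superpixel histograms), so this is an illustration, not the production pipeline:

```python
# Minimal perceptron sketch for separating two textures by their histograms.
# Synthetic data: texture A concentrates mass in the low bins, B in the high bins.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_bins = 100, 8
A = rng.dirichlet(np.r_[np.full(4, 5.0), np.full(4, 1.0)], n_samples)
B = rng.dirichlet(np.r_[np.full(4, 1.0), np.full(4, 5.0)], n_samples)
X = np.vstack([A, B])
y = np.r_[np.ones(n_samples), -np.ones(n_samples)]

# Plain perceptron: update weights on every misclassified sample.
w, b = np.zeros(n_bins), 0.0
for _ in range(20):
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0:
            w += yi * xi
            b += yi

acc = np.mean(np.sign(X @ w + b) == y)
print(f"training accuracy: {acc:.2f}")
```

The learned weight vector shows which histogram bins distinguish the two textures, which is the signal we would feed back into the Gabor-based algorithm.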

Code changes: 1. -DONE- Add support in code for 2 layers and show 2 layers at a time, with appropriate circles on "Global View".

2. -DONE- Many web-interface performance improvements and minor bug fixes.

3. -DONE- Click to move out of "global view". Checkbox is on by default.

4. -DONE- Log-related performance and bug enhancements. Code is more generic, with fewer hard-coded parameters.

5. Manually generated a couple of incorrectly classified sections, as shown in the following example (14, 6, 12, 7...):

Waiting for inputs from Yuncong:

1. Automated mapping of super-pixels that were incorrectly classified.

2. Rotations of super-pixels in ALL layers.

Week 2-3

1. -DONE- Selective transition page with last selections.

2. -DONE- New web interface: 2 references + 32 (8x4) images in the middle.

3. -DONE- Generate pairs with the help of Yuncong. Move to stack RS141

```
-- For Yuncong: missing directionality of patch; currently all set to 0.
```

4. -DONE- Random samples in the middle of the screen; still leave around ~50% of images correlated to red, ~50% to blue.

5. -DONE- New log file that looks as follows:

```
- No more triplets, but only clicks + static description of page
- Description of page: <photo ids to segments>
- Page #: <number>
- Timestamp, photo id, label (none/red/blue)
- Submit: <comment>
```

6. Tweak the parameters of physical and histogram distance to what we think gives the best results.

7. Skype meeting with Yuncong for updates and brainstorming, Friday at 6pm (PST).

Goals:

- The current algorithm makes some mistakes; use user feedback to tweak the algorithm.
- Find 10+ patches, distinguishable by eye, that the algorithm categorizes as the same.
- Create a classifier for every pair of textures, so we can distinguish them the same way a human can. Somehow tweak Gabor / other patterns to fine-tune to these cases?
- With boosting: given 2 textures, search using "projection pursuit" (Yuncong's) + add supervision on the interaction between the 2 textures; by adding many samples like that, significantly improve the algorithm / classifier.

Week 1

1. -DONE- Remove full canvas and add it on next page for previous selection (optional, as a feedback for selection, not helper)

2. -DONE- Change cover set algorithm so we choose 2 references that have a physical distance > T1, and their histogram distance < T2, meaning: dt(xi, xj) > T1 & ds(xi, xj) < T2. Maybe should use Blobs for that
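A minimal sketch of the selection rule in item 2, assuming Euclidean physical distance and an L1-style histogram distance (both assumptions; the real pipeline may use different metrics):

```python
# Sketch: select reference pairs with physical distance dt > T1 and
# histogram distance ds < T2, as described above.
import numpy as np

def reference_pairs(centers, hists, T1, T2):
    """Return index pairs (i, j) with dt(xi, xj) > T1 and ds(xi, xj) < T2."""
    pairs = []
    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            dt = np.linalg.norm(centers[i] - centers[j])    # physical distance
            ds = 0.5 * np.abs(hists[i] - hists[j]).sum()    # histogram distance (assumed L1/2)
            if dt > T1 and ds < T2:
                pairs.append((i, j))
    return pairs

centers = np.array([[0.0, 0.0], [100.0, 0.0], [1.0, 1.0]])
hists = np.array([[0.5, 0.5], [0.5, 0.5], [0.0, 1.0]])
print(reference_pairs(centers, hists, T1=50.0, T2=0.1))   # [(0, 1)]
```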

3. -DONE- No random screens - always predefined images to show

4. -DONE- Log changes in delta time

5. -DONE- Books: chapter 6 for thesis idea

## Idan Izhaki's Diary - Winter 2015

Week 9-10

1. -DONE- Change histogram to show 8 bins texton values rather than intensity

2. -DONE- Normalize the histograms + remove axes + shown normalized value rather than count (more compact representation)

3. -DONE- Random display between sets

4. -DONE- Show a map of original photo + circles of current selection! "Global View"

```
Use HTML5 for that (?) and adapt for different resolutions
```

5. -DONE- Change layout for map on the left, and items on the right. Can be resized to 3x2 rather than 4x2.

```
Maybe add a divider between the upper two references and the lower two references.
```

6. -DONE- Global View needs to show different colors by selection type.

7. Prefetcher sometimes gets stuck, not sure why. Disabled it for now.

Week 7-8

1. -DONE- Rotation performance improvement (improved by ~34x on average).

2. -DONE- Grow all photos in a set to same scale.

3. -DONE- Remove scrolling animation on website.

4. -DONE- Smaller & lower head frame of website.

5. -DONE- Move titles to bottom.

6. -QUESTION- Why do we need circular patches rather than rectangular ones? How does it contribute and is it really necessary?

7. -DONE- Recompute histograms for patches. -- Have them recomputed.

8. -DONE- Grow "BFS" way. Done using scipy.pdist of all points to all points.

9. -DONE- Rotate by main super pixel in a set.

10. -DONE- Filter out "super-super-pixels" with distances <= 0.01 to avoid clear textures mapping.

11. -DONE- Use more Oasis for performance.

12. -DONE- Voloom run on my computer with a new trial version. Test it, examine the results.
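Items 8 and 10 above (grow "BFS"-style, filter by pairwise distance <= 0.01) can be sketched with scipy; the features and threshold here are stand-ins, not the real texton distances:

```python
# Sketch: all-pairs distances via scipy's pdist, then a BFS grow that joins
# any superpixel within 0.01 of the current frontier.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from collections import deque

feats = np.array([[0.0], [0.005], [0.5], [0.503], [1.0]])   # toy 1-D features
D = squareform(pdist(feats))            # dense all-pairs distance matrix

def grow_bfs(seed, thresh=0.01):
    """BFS over superpixels, absorbing neighbors within thresh of the frontier."""
    seen, q = {seed}, deque([seed])
    while q:
        i = q.popleft()
        for j in np.flatnonzero(D[i] <= thresh):
            if j not in seen:
                seen.add(j)
                q.append(j)
    return sorted(int(j) for j in seen)

print(grow_bfs(0))   # [0, 1]
print(grow_bfs(2))   # [2, 3]
```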

Week 5-6

1. -ALREADY DONE- As a metric I would use the distance that Yuncong is using: Gabor followed by VQ to find different textons (not sure what to do with directionality).

2. -DONE- Create cover: consider some way to generate a stream of randomly selected patches. Select some threshold T on the distance, set S to be an empty set, and then do repeatedly:

```
- Take the next patch from the stream.
- Measure the distance of the patch from all of the patches in S.
- If the minimal distance is larger than T: add the patch to the set S.
- Choose T really large, like 0.3, at the beginning, and find a good T smaller than that (probably 0.05).
- Stop when 99% of the incoming patches are rejected.
```
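A runnable sketch of the cover loop above; the stream and distance are stand-ins (random 2-D points, Euclidean distance) for real patches and the texton-histogram distance:

```python
# Greedy cover construction per the steps above: keep a patch only if it is
# farther than T from everything kept so far; stop once ~99% of recent
# incoming patches are rejected.
import numpy as np

def build_cover(stream, dist, T, stop_reject_rate=0.99, window=200):
    S, recent = [], []
    for patch in stream:
        rejected = any(dist(patch, s) <= T for s in S)
        if not rejected:
            S.append(patch)
        recent.append(rejected)
        if len(recent) >= window and np.mean(recent[-window:]) >= stop_reject_rate:
            break
    return S

rng = np.random.default_rng(0)
stream = (rng.random(2) for _ in range(10000))   # random 2-D "patches"
cover = build_cover(stream, lambda a, b: np.linalg.norm(a - b), T=0.3)
print(len(cover))   # a handful of well-separated points covering the unit square
```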

3. -DONE- To generate the examples for a particular screen:

```
- Choose a random patch from the cover to be reference 1.
- Choose the closest patch in the cover to be reference 2.
- Choose the specified number of to-be-labeled patches at random, but accept only patches that are at a distance of at most T (or maybe 2T) from both of the reference patches.
- Rotate all patches so that their strongest directions are all pointing in the same direction.
```
• Add date & time into CSV and JSON files
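The screen-generation steps above can be sketched likewise, again with 2-D stand-in patches; the 2T acceptance radius follows the "or maybe 2T" note:

```python
# Sketch of screen generation: random reference 1, closest cover patch as
# reference 2, then rejection-sample to-be-labeled patches near both.
import numpy as np

def make_screen(cover, dist, T, n_labeled, rng):
    ref1 = cover[rng.integers(len(cover))]                  # random reference 1
    others = [c for c in cover if not np.array_equal(c, ref1)]
    ref2 = min(others, key=lambda c: dist(ref1, c))         # closest cover patch
    labeled = []
    while len(labeled) < n_labeled:                         # rejection sampling
        cand = rng.random(2)
        if dist(cand, ref1) <= 2 * T and dist(cand, ref2) <= 2 * T:
            labeled.append(cand)
    return ref1, ref2, labeled

rng = np.random.default_rng(1)
cover = [np.array([0.2, 0.2]), np.array([0.4, 0.2]), np.array([0.9, 0.9])]
d = lambda a, b: np.linalg.norm(a - b)
r1, r2, patches = make_screen(cover, d, T=0.3, n_labeled=5, rng=rng)
print(len(patches))   # 5
```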

Week 4

• New inputs:
• Raw Image /oasis/projects/nsf/csd181/yuncong/DavidData2014tif/RS140/x5/0000/RS140_x5_0000.tif
• Pipeline Results under /oasis/projects/nsf/csd181/yuncong/DavidData2014results/RS140/0000/
• n = # superpixels
• segmentation: RS140_x5_0000_segm-blueNisslRegular_segmentation.npy
• superpixels indexed from 0 to n-1; -1 means background
• neighbor list: RS140_x5_0000_segm-blueNisslRegular_neighbors.npy
• a list of n sets. The i'th set contains the neighbors of superpixel i
• superpixel properties: RS140_x5_0000_segm-blueNisslRegular_spProps.npy
• n x 8 matrix. The i'th row is (center_x, center_y, area, mean_intensity, ymin, xmin, ymax, xmax) of the i'th superpixel
• pairwise texton histogram distance: RS140_x5_0000_gabor-blueNisslWide-segm-blueNisslRegular-vq-blueNissl_texHistPairwiseDist.npy
• n x n matrix
• image with annotated segmentation: RS140_x5_0000_segm-blueNisslRegular_segmentationWithText.jpg
• dominant direction angle: RS140_x5_0000_gabor-blueNisslWide-segm-blueNisslRegular_spMaxDirAngle.npy
• n x 1 array. Each superpixel's dominant orientation, in degree, counter-clockwise starting from 12 o'clock.
• Texton map: RS140_x5_0000_gabor-blueNisslWide-vq-blueNissl_texMap.npy
• matrix, same dimension as image; integer values represent the texton index of each pixel
• Back-end changes:
• -DONE- In order to zoom-out of the super-pixel, take input of neighbors, and check if the distance is <= 0.01 (some constant). If it is, take union of superpixels as an image so we zoom-out to a region of similar texture rather than random ones.
• -DONE- Process rotation map (angles) as majority of rotations of pixels and apply on super-pixel output image before storing to disk.
• Show mixture of super-pixels correlated to references A and B rather than random ones.
• Idea: manually pick references that we want to compare as a subset. Then take all other super-pixels distance <= 0.01 (meaning similar).
• Maybe take average of super-pixels values rather than similarity to a specific super-pixel, and expand that way.
• -DONE- Modify JSON to store an actions log rather than selections. Meaning, selecting and deselecting should be logged as well, and so should the order of selection.
• -DONE- Use python as much as possible for image manipulation (rather than Javascript/CSS side)
• Website changes:
• Add an options box to choose between 10, 20 and 40 images (how will the user be able to map more than 10 at a time? Isn't it too confusing?)
• -DONE- Under "submit", add a text box with a description of why we chose these red-blue selections and store it in the JSON log.
• Add a floating balloon "mapped" whenever clicked to emphasize something has changed.
• Gallery changes:
• Finish the scroll-to-zoom into high-resolution image implementation (currently just a floating balloon; zoom-in conflicts between two libraries).
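The pipeline files listed above could be consumed roughly as follows; this sketch uses synthetic stand-ins for the `np.load(...)` calls (the real inputs are the `.npy` files under the paths above), and the sign convention for undoing the rotation is an assumption:

```python
# Sketch: crop a superpixel by its spProps bounding box and undo its
# dominant orientation from spMaxDirAngle. Arrays here are synthetic
# stand-ins for np.load() of the pipeline files listed above.
import numpy as np
from scipy.ndimage import rotate

n = 3
props = np.zeros((n, 8))   # rows: (cx, cy, area, mean_int, ymin, xmin, ymax, xmax)
props[0] = [5, 5, 25, 0.5, 2, 2, 8, 8]
angles = np.array([45.0, 0.0, 90.0])   # degrees, CCW from 12 o'clock
image = np.arange(100.0).reshape(10, 10)

def crop_rotated(img, i):
    """Crop superpixel i by its bbox and rotate back by its dominant angle."""
    ymin, xmin, ymax, xmax = props[i, 4:8].astype(int)
    patch = img[ymin:ymax + 1, xmin:xmax + 1]
    # rotate by -angle to normalize orientation (sign convention assumed)
    return rotate(patch, -float(angles[i]), reshape=True)

patch = crop_rotated(image, 0)
print(patch.shape)   # larger than 7x7 because reshape=True pads the rotation
```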

--Idan_Izhaki 9:25, 23 January 2015 (PST) Answers:

1. Yes, the main idea was to select all patches that look similar to the reference one, similarly to what they presented here: http://arxiv-web3.library.cornell.edu/pdf/1404.3291v1.pdf. The red-blue presentation is not hard to implement, but what are we supposed to do with the unclear ones? After all, we need triplets of (ref, similar, different) pictures, so if we select red and blue patches, do we need to create (ref_1, blue_i, red_j) and (ref_2, red_k, blue_j) triplets, and ignore the unlabeled ones?

--Yoavfreund 08:57, 24 January 2015 (PST) I realize now that what I had in my head is different from what is in the paper. I am not sure which approach will work the best for us, I elaborate here: Triplets Comparison experimental design

--Yoavfreund 08:47, 24 January 2015 (PST) : As this is information from the user, and user time is expensive, I would start by logging everything: each click on each patch, and a time-stamp to go with it. For example, a user might make a mistake in the labeling and click again to fix it; we want to catch that.

1. I am currently using the classifier's borders to crop the patches out. How much do we want to zoom out (e.g., minimum amount of pixels)?

--Yoavfreund 08:47, 24 January 2015 (PST) I guess by "classifier borders" you mean "super-pixel borders". It seems you are displaying those at a higher zoom but without using a higher-resolution image (20X). It seems to me that this size/resolution will not be comfortable for the biologists doing the labeling. Probably the best is to give several options and see what the biologists like. Talk with Yuncong.

1. Yes, right now they are chosen randomly. I will discuss that with Yuncong. Do we want the two references to be of the same family (and all patches as well), or can they be from different families (and some of patches of family 1, some of family 2)?
2. Right now I am logging everything into a JSON file on the server. Every session stores a timestamp that relates to a new file. I will add a "username" to that file too.

--Yoavfreund 08:47, 24 January 2015 (PST) I want a time-stamp on each click, not just on the whole session.

--Yoavfreund 17:56, 20 January 2015 (PST) Looks like a good start. A few comments:

1. What I see is one large patch and 6 small patches. I am not sure what I am supposed to do: identify the patches that are most similar to the large patch? What I expect to see are two reference patches and a larger number of patches that need to be labeled, something around 20 (it would be good to allow the user to choose between 10, 20, or 40 patches per screen). The task is then to mark each of the twenty patches according to which of the two reference patches it is more similar to, leaving patches for which it is unclear unlabeled. To make this classification visual I would put a red and a blue border around the reference patches, leave the 20 patches without a border, and then use mouse clicks to cycle through red-blue-noborder.
2. The patches are too zoomed in, at this resolution you can have much less zooming in and fit many more patches.
3. The choice of patches to show should be thought out: the reference patches should have significantly different textures, and the 20 should be patches that are relatively close to both so that they are "in-between". Distance can be measured using the distance between the texton histograms that Yuncong uses. The heuristic for choosing which patches to show should be documented here. You will know that you hit the right balance when different users classify most of the patches consistently. Right now it seems the patches are chosen randomly, and as a result they are very different.
4. Patches should be round and should be rotated so that they are all oriented in the same way (using the orientation measure that is computed from the Gabor filters). If there is no dominant direction, then the orientation should be left as is.
5. Are you logging everything? It is important to log the name of the user and the time stamp for each click, as well as the final labeling of each screenful.

Week 3

• Github: https://github.com/idan192/WebStem
• Website improvements:
• Different log file per session (save a session name in a cookie, valid only for the current open browser session).
• Red-blue implementation with 2 references (plus store triplets of red-red-blue, blue-blue-red).
• Get inputs from Yuncong for rotation, neighbors, features in same group.
• Understand how to reduce randomness.
• Gallery improvements:

Week 2

• New code to take the slices info from Brainstem project and output images to compare on website.
• Improve website, NodeJS based back-end:
• http://www.idanizhaki.me:8080
• Support 6 photos to compare and multiple select.
• Fewer visual effects, faster response time.
• Prefetch next set of images for better performance.
• All runs on SDSU node.
• Build a gallery website to display all original tif/png/jpeg images in a fast way.
• Creates a cache of smaller photos as thumbnails automatically.
• Allows focus on image.

Week 1

• Read: Wilber et al. (including Belongie), HCOMP '14
• http://homepage.tudelft.nl/19j49/Publications_files/PID2449611.pdf
• This paper presents a minimization function to solve the triplet similarity problem. It explains the distance function and shows existing techniques to learn data embeddings based on similarity (generalized non-metric multidimensional scaling, crowd kernel learning, and constraint gradients).

## Idan Izhaki's Diary - Spring 2014

System and environment advancements are as follows:

• All hot code ported to C++. Manipulating a full image takes less than an hour
• All post-processing is in Python. Latest diary runs on Hadoop server:
• RAW files communication. Support for PNG, BMP and JPEG2000 formats
• mpld3 - advanced matplotlib library integrated into iPython. Currently supports zoom-in/out and transformations.

Future work is needed for selecting reference.

• KMeans runs with rotated windows to reduce noise from directionality (does not seem to change result)
• KMeans results are compared via histograms in two ways (semi-supervised):
• Norm of pixels to a reference image, compared to each super pixel,
• Kullback-Leibler entropy of pixels to a reference image, compared to each super pixel

Github for C code: https://github.com/idan192/BrainStem

## Idan Izhaki's Diary - Winter 2014

--Yoavfreund 13:51, 17 March 2014 (PDT) Please make a github repository for your code.

### Compiling C code:

```
cd gabor_project
mkdir release
cd release
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local ../opencv-2.4.8
cmake -D CMAKE_BUILD_TYPE=RELEASE ../opencv-2.4.8
make
sudo make install
```
• Make sure the environment is pointing to the right GCC and OpenCV versions.

```
LD_LIBRARY_PATH=/oasis/projects/nsf/csd181/iizhaki/opencv_bin/lib/:/opt/gnu/gcc/lib64:/usr/lib/gcc/x86_64-redhat-linux/4.4.7/:${LD_LIBRARY_PATH}
```
• Run the following compilation command:
```
g++ main.cpp -O3 -I ../opencv_bin/include -std=c++11 -L../opencv_bin/lib/ -lopencv_core -lopencv_highgui -lopencv_imgproc
```

### Running C code

```
./a.out -file <file1> -file <file2> ... <-dump_conv> <-dump_kernel> <-dump_histogram> <-dump_kmean>
```

```
This outputs a folder with processed files.
-dump_conv: The image convolved with the kernel (Z value). Both grayscale and color outputs
-dump_kernel: The kernel as a resized image
-dump_histogram: C implementation for histograms. WIP
-dump_kmean: Output of kmeans
```

### Python code to generate SVD statistics:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.image import imread

img = imread(fName)

# Flatten the three color channels
X = [np.asarray(img[:, :, c], dtype=float).flatten() for c in range(3)]

# 3x3 matrix of channel cross-products, the input to the SVD/PCA statistics
Res = np.zeros((3, 3))
for x in range(3):
    for y in range(3):
        Res[x][y] = np.dot(X[x], X[y])

plt.imshow(Res)
plt.show()
```

### Python code to generate histograms for super-pixels

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.image import imread

def drange(start, stop, step):
    while start < stop:
        yield start
        start += step

Image = imread('KMean.png')
bin_size = 20

Sizes = [len(Image), len(Image[0])]

for h in drange(0, Sizes[0] - bin_size, bin_size):
    for w in drange(0, Sizes[1] - bin_size, bin_size):
        oName = 'Hist_' + str(h) + '_' + str(w) + '.png'

        CurrImg = Image[h:h + bin_size, w:w + bin_size].flatten()
        plt.clf()   # clear the previous histogram before drawing the next one
        plt.hist(CurrImg)
        plt.savefig(oName)
```

### Images and output files

The image sample shown below and its run results on Google Drive: https://drive.google.com/folderview?id=0B-toQYtnt0DwdzhORjFFXzF4UEU&usp=sharing

Code:

Week 6-8

Week 3-5

• Port all code to faster C++ and OpenCV
• SVD - did not change results. Merged with color image
• Z values for each picture ((X - mean) > 4 * STD)
• Z value for rotation sum of images
• Average of all Z value images
• Average > 2*STD of all Z value images
• OpenCV and Jpeg2000 environment and libraries
• Permissions
• Ramp up with Alican
• /oasis/scratch for SSD fast access
• Multi processing

Week 2

• Use higher definition images
• Original resolution jams Mac
• Half the size takes a day to run
• Fix Z-value
• Count number of "hits" and divide into sections
• 3 examples produced
• Split images into smaller sections
• Ramp-up Teja
• Port iPython cluster to hadoop
• Server runs and accessible from outside
• Javascript and all necessary already-installed references configured in ipython.config
• Still missing JPEG2000 library (glymur)
• Remote server: http://ion-21-14.sdsc.edu:1235/
• Could not install without basic admin permissions
• Use D3 to view using imshow
• Enables nice zoom and future node-js
• Imshow still resizes original image to a compact one in viewer
• Suggested to use saved images for diagnosis

Full code in iPython nbViewer:

Week 1

• Semi-supervised Gabor filter
• Compute images convolved with Gabor kernels for 18 angles, with overlapping frequencies spaced by a factor of $\sqrt{2}$
• Z-value image: (pixel value - mean value between all convoluted images) / (variance value between all convoluted images)
• Hadoop "hello world" test (word count)
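One reading of the Z-value formula above, as a runnable sketch; the Gabor kernel parameters (size, frequency, sigma) are assumptions, and a random image stands in for the real data:

```python
# Sketch: convolve an image with Gabor kernels at 18 angles, then per pixel
# compute (value - mean over angles) / (variance over angles), per the
# Z-value definition above.
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(0)
image = rng.random((32, 32))

def gabor_kernel(theta, size=9, freq=0.25, sigma=2.0):
    """Real part of a simple Gabor kernel (parameter choices assumed)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

responses = np.stack([convolve(image, gabor_kernel(t))
                      for t in np.linspace(0, np.pi, 18, endpoint=False)])
z = (responses - responses.mean(axis=0)) / responses.var(axis=0)
print(z.shape)   # (18, 32, 32)
```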

Full code in iPython nbViewer:

Week 1 - Experimenting with Z values:


Week 0

• Ramp up Kemal's code
• Environment setting
• Set necessary permissions
• iPython, Hadoop and intro books exploration