With only a few days left until the end of summer 2018, it’s prime time to do some reflecting!
All in all, the past 4 months were pretty great. I got a summer position working on an industrial project as a Research Assistant at the University Robotics/Computer Vision (CV) Lab. This has greatly widened my understanding of CV. Though I still have far to go and much to learn, having the opportunity to develop software within such a great team has been a wonderful experience. I’m truly thankful for this opportunity and for all the help that I have received.
The industrial project I worked on aims at solving the particle detection, classification, and tracking problem to improve oil sands process yield. In detection, the particles within an image should be detected, measured, and distinguished from the background. In classification, each particle should be assigned to 1 of 4 categories – sand, bitumen, bubbles, or unknown – based on the output of a CNN classifier. In tracking, the particles should be tracked across frames in a video. Since this project had been ongoing for 2 years before I joined, there was a lot of material to pick up on – the database, web UI tools, in-house libraries, folder setup, and software pipeline. It was quite overwhelming at the start, but I gradually became familiar with the project as I worked on it more.
I worked on resolving the incorrect particle size distribution (PSD) produced during detection. When a dataset with a known ground truth PSD is input into the gold standard – the current software pipeline for detection – an incorrect PSD histogram plot is generated (see the example plot below). Two observations can be made about the PSD plot:
- The PSD is shifted right (peak x is offset towards a higher value).
- The PSD has extended tails (larger span).
The Deblur Method:
The first method I tried, to adjust the PSD toward the ground truth, was deblurring: in the particle images, there appears to be a ring of blur surrounding each particle, which can result in an overestimated diameter measurement. The deblur is performed by deconvolution using the Richardson–Lucy (RL) algorithm. Briefly, this is an iterative algorithm that recomputes a correction factor for each pixel and multiplies each pixel by that factor; the correction factor approaches 1 as the desired, deconvolved image is obtained. Naturally, a Gaussian kernel is used. There are 3 parameters to define and tune in using this algorithm: kernel size, kernel standard deviation, and number of iterations. I found that different kernel sizes and standard deviations can generate ringing artifacts at the border, while varying the number of iterations doesn’t seem to have a significant effect on the deblur. Below are the deblur results after integrating the algorithm with the rest of the software pipeline. The deblur effect is more obvious in the particle image than in the frame image.
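For the curious, the iterative update described above can be sketched in a few lines of NumPy/SciPy. This is an illustrative implementation, not the pipeline’s actual code, and the kernel parameters are placeholders:

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_kernel(size=5, sigma=1.0):
    """Normalized 2-D Gaussian kernel used as the point-spread function (PSF)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def richardson_lucy(blurred, psf, num_iter=30):
    """Richardson-Lucy deconvolution: iteratively refine an estimate of the
    sharp image by multiplying each pixel by a correction factor."""
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(blurred, 0.5)  # flat initial guess
    for _ in range(num_iter):
        # Re-blur the current estimate and compare against the observation
        reblurred = convolve2d(estimate, psf, mode="same")
        ratio = blurred / np.maximum(reblurred, 1e-12)
        # The correction factor approaches 1 as the estimate converges
        correction = convolve2d(ratio, psf_mirror, mode="same")
        estimate = estimate * correction
    return estimate
```

The 3 tuning parameters from the text map directly to `size`, `sigma`, and `num_iter`.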
Though the particle is deblurred, the PSD histogram remains the same.
Also, to be extra sure that our algorithm was performing correctly, I conducted the following two experiments:
- Experiment 1: Generate a sharp synthetic particle > apply Gaussian blur > apply the RL deblur algorithm.
- Experiment 2: Take a real particle image > apply Gaussian blur > apply the RL deblur algorithm.
In both experiments, I wanted to make sure the end result matched the input image, which it did. Therefore, deblurring is not the cause of the incorrect PSD histogram.
The Particle Density and OOC Analysis:
In order to find another approach to tackle the PSD problem, I needed to investigate the problem more. The question we want to answer here is: “what is the relation between frame density (number of particles in frame) and the total number of out-of-control (OOC) particles?”
The database is used to query the particle count per frame for the x axis, and the number of OOC particles (particles in that frame with diameter outside the ground truth range) for the y axis (see example plots below). Overall, there exists a linear relation (y = x) between density and OOC count. This shows that OOC particles can be due to the fusing of glass bead particles (clumping), manufacturing defects, and partial occlusion of particles – not just clumping. If clumping were the only problem, then the plot should have a downward sloping curve as density increases.
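The per-frame query can be sketched with pandas; the column names and ground truth range below are illustrative stand-ins, not the actual database schema:

```python
import pandas as pd

GT_MIN, GT_MAX = 300.0, 335.0  # ground truth diameter range, in um (example)

# Stand-in for the detected-particles table: one row per particle
particles = pd.DataFrame({
    "frame_id": [1, 1, 1, 2, 2, 3],
    "diameter": [310.0, 290.0, 350.0, 320.0, 500.0, 330.0],
})

# Density (particles per frame) vs. number of OOC particles in that frame
per_frame = particles.groupby("frame_id").agg(
    density=("diameter", "size"),
    ooc=("diameter", lambda d: int(((d < GT_MIN) | (d > GT_MAX)).sum())),
)
# per_frame["density"] gives the x axis, per_frame["ooc"] the y axis of the scatter
```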
To further understand OOC particles, I created a dataset of 100 hand-labeled circular particles from a 300–335 um experiment. However, its histogram shows that even these well-formed circular particles fall outside the ground truth range of 300–335 um.
I also created a different dataset of 100 hand-labeled clumped particles from the same 300–335 um experiment. However, its histogram shows that these deformed particles also fall inside the ground truth range.
These histogram results re-emphasize that well-formed AND deformed particles can reside both inside and outside of the ground truth range. 😕
The takeaway from this analysis is that a “perfect” detection algorithm would correct the occlusion and clumping problems, but because of the underlying manufacturing defects it still would not produce the ground truth PSD.
The LDA and Neural Network Methods:
Following the above analysis, the second method I tried, to adjust and align with the ground truth PSD, was an LDA classifier. Because the PSD has 2 incorrect properties, as mentioned above – shifted right and extended tails – I wanted to see if training a classifier to discern well-formed from deformed particles would truncate the extended tails and possibly shift the histogram peak leftwards. Briefly, LDA is a supervised algorithm that can be used as a classifier, projecting data onto a linear decision boundary that maximizes the separation between classes. LDA was the first choice mainly because of its simplicity – no hyperparameters to tune, and fast computation time. It is used as a first-pass approach: if LDA can classify with high accuracy, then this might not be a hard problem. Also, traditional techniques should always be investigated before implementing any neural network (NN) techniques.
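A minimal version of such a classifier takes a few lines with scikit-learn. The features and labels below are synthetic stand-ins, not the real particle attributes:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Two illustrative attributes per particle (think: solidity, eccentricity),
# drawn from well-separated clusters for the valid and invalid classes
valid = rng.normal([0.95, 0.2], 0.05, size=(50, 2))    # well-formed
invalid = rng.normal([0.75, 0.7], 0.05, size=(50, 2))  # deformed
X = np.vstack([valid, invalid])
y = np.array([1] * 50 + [0] * 50)

lda = LinearDiscriminantAnalysis()  # no hyperparameters to tune
lda.fit(X, y)
acc = lda.score(X, y)
```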
I started by building a training dataset of 20 valid (circular) and 20 invalid (anything noncircular) particles for each experiment (covering all diameter ranges). A total of 200+ particle images were hand-labeled. Then I bootstrapped with the LDA to create a larger dataset of 4651 particles. After training, I investigated the effect of the number of attributes input into the classifier on PSD and accuracy: the performance of a 9-attribute classifier is on par with a 2-attribute one using solidity and eccentricity (the 2 most heavily weighted attributes). I also performed an ablation study to determine the input size, in which accuracy converged when the number of particles reached 3500.
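For context, attributes like solidity and eccentricity can be measured per particle with scikit-image’s `regionprops`. This sketch uses a synthetic elliptical mask; the in-house pipeline may compute these differently:

```python
import numpy as np
from skimage.measure import label, regionprops

# Binary mask with one elliptical "particle" (semi-axes 20 and 10 pixels)
yy, xx = np.mgrid[:64, :64]
mask = (((xx - 32) / 20) ** 2 + ((yy - 32) / 10) ** 2) < 1

props = regionprops(label(mask.astype(int)))[0]
solidity = props.solidity          # area / convex hull area: ~1 for convex shapes
eccentricity = props.eccentricity  # 0 for a circle, approaches 1 when elongated
```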
At first, the LDA accuracy was only 92.02%. I did a failure case analysis by gathering the 7.98% of images that the LDA labeled incorrectly. By visual inspection, almost half of those images had themselves been incorrectly labeled, which prompted a refinement of the training dataset. Instead of re-labeling the entire 4651 images, I passed the whole dataset in for testing and re-labeled only the 7.98%, or 372, failure case images. With this new dataset, the LDA achieved an accuracy of 95.39%. This can be further improved by improving the input dataset. See below for a comparison of the PSD histogram before and after the LDA classifier.
Next, I tried several other classification methods. I trained an SVM classifier using the same training dataset; SVM was selected for its nonlinear decision boundary. With a degree-6 polynomial kernel it achieved an accuracy of 92.7%. I also input the 9 attribute values into an MLP network with 4 hidden layers, SELU activation, alpha dropout of 0.2, the Adam optimizer, and 20 epochs, which achieved an accuracy of 93.21%. Then, I trained a CNN with 4 hidden layers, SELU activation, alpha dropout of 0.2, the Adam optimizer, a locally connected layer of 64 units, and 100 epochs, which achieved an accuracy of 94.29%. However, LDA still wins with 95.39% accuracy.
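A degree-6 polynomial SVM like the one above can be set up in a few lines with scikit-learn. The toy data below (a circular class boundary, which a linear model cannot fit) is illustrative, and `coef0=1` is my assumption rather than the setting used in the project:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Nonlinearly separable toy data: class determined by distance from the origin
X = rng.uniform(-1, 1, size=(200, 2))
y = (np.hypot(X[:, 0], X[:, 1]) < 0.6).astype(int)

# Polynomial kernel of degree 6; coef0=1 includes the lower-degree terms
svm = SVC(kernel="poly", degree=6, coef0=1)
svm.fit(X, y)
acc = svm.score(X, y)
```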
The takeaway from the LDA classifier is that it can perform this classification with high accuracy. The LDA truncates the extended tails and provides a calibration constant to shift the PSD peak closer to the ground truth PSD peak. Therefore, LDA is a feasible method for resolving the incorrect PSD problem.
All in all, I have learned a lot, and I’m excited for the new semester to come, where I will be learning more about software development and OS concepts. 😀