Figure 2: The deep-learning model training process: a) (left) segmentation; b) (right) classification.
The segmentation algorithm checks every pixel in the image and calculates a defect probability value. If the probability value is higher than a threshold, the pixel is marked as an NG (defect) pixel. In Figure 1a (top), pixels with probability values greater than 0.8 are segmented as defective. Micro-cracks can be accurately detected both on the chip and on the mold surface. Using our AI solution, we can correctly differentiate micro-cracks from overkill modes such as grinding marks on the chip (see Figure 1a, bottom).
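As an illustration of this thresholding step, the following Python sketch marks NG pixels in a per-pixel probability map; the function and variable names are our own, not taken from the inspection software.

import numpy as np

def segment_defects(prob_map: np.ndarray, threshold: float = 0.8) -> np.ndarray:
    """Mark every pixel whose defect probability exceeds the threshold as NG."""
    # prob_map holds one defect probability per pixel, each in [0, 1].
    return prob_map > threshold  # boolean mask of NG (defect) pixels

# Example: a 3x3 probability map in which only the center pixel exceeds 0.8.
prob_map = np.array([[0.10, 0.20, 0.10],
                     [0.30, 0.95, 0.40],
                     [0.20, 0.10, 0.05]])
print(segment_defects(prob_map))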
In contrast, the classification algorithm assigns each defect to one of the predefined classes. For each detected defect, the probability values of all classes are calculated, and the defect is assigned to the class with the highest probability value. Figure 1b shows the classification steps and a bump-area defect case study. In Figure 1b (top), the FM (particle) class has the highest probability value, so the defect is classified as FM mode. Figure 1b (bottom) depicts a bump damage reject and a metal particle defect, both of which are classified as true rejects, in contrast to acceptable modes such as fiber and stain.
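A minimal sketch of this highest-probability assignment follows; the class list and the score values are illustrative assumptions, not data from the case study.

import numpy as np

# Illustrative class list; the article names FM (particle), fiber, and stain modes.
CLASSES = ["FM", "bump_damage", "fiber", "stain"]

def classify_defect(class_probs: np.ndarray) -> str:
    """Assign the defect to the class with the highest probability value."""
    return CLASSES[int(np.argmax(class_probs))]

# Example: FM has the highest probability, so the defect is classified as FM mode.
print(classify_defect(np.array([0.70, 0.10, 0.15, 0.05])))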
Model training and validation. Deep-learning model training includes four steps: loading, annotation, learning, and validation. For segmentation training, we load images and annotate the defects on each image to obtain pixel-level ground truth. For classification training, pixel-level ground truth is not required; instead, each defect is cropped and labeled at the image level. Model learning is performed in the third step, followed by testing in the last step. Segmentation testing is done by comparing the annotated areas with the segmentation results; classification testing is carried out by comparing the labeled classes with the predicted classes.
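The article does not specify the metrics behind these comparisons; pixel-level intersection-over-union for segmentation and class accuracy for classification are common choices, sketched below with illustrative names.

import numpy as np

def segmentation_iou(annotated: np.ndarray, predicted: np.ndarray) -> float:
    """Compare annotated defect areas with segmentation results via pixel IoU."""
    intersection = np.logical_and(annotated, predicted).sum()
    union = np.logical_or(annotated, predicted).sum()
    return float(intersection / union) if union else 1.0

def classification_accuracy(labeled: list, predicted: list) -> float:
    """Compare labeled classes with predicted classes."""
    correct = sum(l == p for l, p in zip(labeled, predicted))
    return correct / len(labeled)

# Example usage with toy masks and labels.
gt = np.array([[1, 1], [0, 0]], dtype=bool)
pred = np.array([[1, 0], [0, 0]], dtype=bool)
print(segmentation_iou(gt, pred))                                 # 0.5
print(classification_accuracy(["FM", "fiber"], ["FM", "stain"]))  # 0.5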
We enhance the inspection performance of our AI solution by using multi-frame image capture. Six frames with different lighting conditions are captured for each defect; a minimum of three frames, including frames with poor defect visibility, is then selected for deep-learning model training. Increasing the number of frames reduces inspection speed.
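The article does not describe how the selected frames are combined for training; one plausible approach, shown purely as an assumption, is to stack them along the channel axis of the model input.

import numpy as np

def stack_frames(frames: list) -> np.ndarray:
    """Stack the selected lighting frames along the channel axis for model input."""
    # frames: H x W grayscale images of the same defect under different lighting.
    assert len(frames) >= 3, "select a minimum of three frames per defect"
    return np.stack(frames, axis=-1)  # H x W x num_frames input tensor

# Example: three 64x64 frames of one defect become a 64x64x3 input.
frames = [np.zeros((64, 64), dtype=np.float32) for _ in range(3)]
print(stack_frames(frames).shape)  # (64, 64, 3)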
As shown in Figure 2a, the segmentation model requires a minimum of 50 multi-frame full-size images (different units of the same product); as shown in Figure 2b, the classification model requires a minimum of 50 multi-frame crop-size images per defect type.
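These minimums can be enforced with a simple readiness check before training; the data layout below is an assumption for illustration.

# Minimum image counts stated for model training.
MIN_SEG_IMAGES = 50          # full-size multi-frame images (different units, same product)
MIN_CLS_CROPS_PER_TYPE = 50  # crop-size multi-frame images per defect type

def check_training_set(seg_images: list, cls_crops_by_type: dict) -> None:
    """Raise an error if the training set is below the stated minimums."""
    if len(seg_images) < MIN_SEG_IMAGES:
        raise ValueError(f"segmentation set has {len(seg_images)} images; "
                         f"need at least {MIN_SEG_IMAGES}")
    for defect_type, crops in cls_crops_by_type.items():
        if len(crops) < MIN_CLS_CROPS_PER_TYPE:
            raise ValueError(f"{defect_type}: {len(crops)} crops; "
                             f"need at least {MIN_CLS_CROPS_PER_TYPE}")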