Data Science
SECTION IIB - Identification of component parts or steps

SECTION II. TECHNICALLY COMPLETE AND EASILY UNDERSTANDABLE DESCRIPTION OF INNOVATION DEVELOPED TO SOLVE THE PROBLEM OR MEET THE OBJECTIVE

B. - Identification of component parts or steps, and explanation of mode of operation of innovation/software preferably referring to drawings, sketches, photographs, graphs, flow charts, and/or parts or ingredient lists illustrating the components.

The Region Labeling Tool is implemented under the Khoros Pro 2000 Software Developer's Kit. As such, this program runs on any workstation running Unix or many of its variants (i.e., the program will run on any system that Khoros Pro 2000 will run on). A previously released description of an earlier version of this program can be found at:

http://code935.gsfc.nasa.gov/code935/tilton/index.html#REGLBL_TOOL.

The current version of the Region Labeling Tool is described herein through a description of an example of its operation, including screen shots of graphical user interface (GUI) panels produced by the program.

The data set used in the example is a six-band section of a Landsat TM image taken over the Washington, DC / Baltimore, MD area on September 16, 1991 (WRS II path/row 15/33). The thermal band was not used. The section used for the example is a 2504-by-2504 pixel section with the southwest corner approximately 8 miles west and 9 miles south of the center of Washington, DC (the White House), and the northeast corner approximately 12 miles north and 16 miles east of the center of Baltimore, MD (the inner harbor). Figure 1 shows an RGB rendition of this image data, with spectral band 5 as red, band 4 as green, and band 2 as blue.
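As an illustration of this band-to-color assignment, the following sketch composes such an RGB rendition, assuming each band is already available as a 2-D array (the function name and data layout here are illustrative, not part of the Region Labeling Tool):

```python
import numpy as np

def tm_rgb_composite(bands):
    """Compose an RGB display array from a dict of Landsat TM bands.

    `bands` maps TM band number -> 2-D uint8 array.  Following the
    text, band 5 drives red, band 4 green, and band 2 blue.  (The
    band-number keys are an illustrative convention only.)
    """
    return np.dstack([bands[5], bands[4], bands[2]])

# Tiny 2x2 example with constant-valued bands
bands = {b: np.full((2, 2), b * 10, dtype=np.uint8) for b in (1, 2, 3, 4, 5, 6)}
rgb = tm_rgb_composite(bands)
print(rgb.shape)           # (2, 2, 3)
print(rgb[0, 0].tolist())  # [50, 40, 20]
```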

This section of Landsat TM image data was processed on the HIVE with the RHSEG (Recursive Hierarchical Segmentation) program (described in the companion Disclosures NASA GSFC Case No. 14,305-1 and NASA GSFC Case No. 14,328-1). This processing was performed in two stages. In the first stage, the RHSEG program was run on the Landsat TM image data with the non-homogeneous portions of the scene masked out. In the second stage, the RHSEG program was run on the entire Landsat TM image data set, using as an initial region label map the 591-region label map output from the first stage. The unlabeled pixels from the first stage were initialized as additional one-pixel regions.
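The initialization of the second-stage label map can be sketched as follows. This is a minimal illustration of the step described above; the function name and the use of 0 as the "unlabeled" value are assumptions, not RHSEG's actual conventions:

```python
import numpy as np

def init_second_stage_labels(label_map):
    """Give every unlabeled pixel (label 0) its own new region label.

    First-stage regions keep their labels; each previously masked-out
    pixel becomes a one-pixel region with a fresh label.
    """
    out = label_map.copy()
    unlabeled = out == 0
    next_label = out.max() + 1
    # Assign consecutive fresh labels to the unlabeled pixels
    out[unlabeled] = np.arange(next_label, next_label + unlabeled.sum())
    return out

lmap_stage1 = np.array([[1, 0, 2],
                        [0, 2, 2]])
result = init_second_stage_labels(lmap_stage1)
print(result)  # [[1 3 2]
               #  [4 2 2]]
```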

The edge mask for the first stage was produced using programs available in the Khoros Pro 2000 software package. The "vdrf" program was run on the Landsat TM image data with t1=6 and t2=9 on bands 1, 2 and 3 and with t1=9 and t2=12 on bands 4 and 6, and with t1=12 and t2=15 on band 5 (default values were used for the other parameters). The "kbitor" program was used to combine the edge maps, which were then grown out by one pixel with the "las_lowcal" program. For both the first and second stages, the RHSEG program was run with "normalization across the bands," with the "1-Norm" dissimilarity criterion with mean extraction, with "spectral clustering," with "eight nearest neighbor" connectivity, with "minregions" equal to 384, with "chkregions" equal to 1024, and with "convfact" equal to 1.01.
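The edge-mask combination and one-pixel growth steps can be sketched as follows. This only illustrates the operations performed by the "kbitor" and "las_lowcal" programs; it does not reproduce their actual interfaces:

```python
import numpy as np

def combine_and_grow(masks):
    """Bitwise-OR a list of binary edge masks, then grow the result
    out by one pixel (3x3 dilation) -- a rough stand-in for the
    "kbitor" and "las_lowcal" steps described in the text.
    """
    combined = np.zeros_like(masks[0])
    for m in masks:
        combined |= m
    # One-pixel growth: a pixel is set if any 3x3 neighbor is set.
    padded = np.pad(combined, 1)
    grown = np.zeros_like(combined)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            grown |= padded[1 + dy:padded.shape[0] - 1 + dy,
                            1 + dx:padded.shape[1] - 1 + dx]
    return grown

m1 = np.array([[0, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 0]])
m2 = np.array([[0, 0, 0, 1],
               [0, 0, 0, 0],
               [0, 0, 0, 0]])
grown = combine_and_grow([m1, m2])
print(grown)  # [[1 1 1 1]
              #  [1 1 1 1]
              #  [1 1 1 0]]
```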

The resulting 16-level hierarchical image segmentation is used as input in this example description of the Region Labeling Tool.

Input parameters can be supplied to the Region Labeling Tool through the command line, from a parameter file, or through a graphical user interface. The input parameters for the Region Labeling Tool are as follows:

Parameter name   Parameter description
image            Input RGB image file (required)
rlblmap          Input hierarchical region label map file (required)
npixlist         Input region number of pixels list file (required)
label_out        Resulting output class label map file (required)
mask             Input image mask object (optional) (no default)
regmerges        Input region merges list file (optional) (no default)
meanlist         Input region mean value list file (optional) (no default)
cvlist           Input region criterion value list file (optional) (no default)
ref1             First input reference file (optional) (no default)
ref2             Second input reference file (optional) (no default)
label_in         Input class label map (optional) (no default)
ascii_in         Input ASCII class label names (optional) (no default)
zoomfactor       Initial zoom factor (optional) (1e-05 <= zoomfactor <= 200, default = 4)
ascii_out        Output ASCII class label names (optional) (no default)

For the example case, the command line initiation is as follows:

region_label -image tm153393.rgb -rlblmap tm153393.1norm.mext.rlblmap -npixlist tm153393.1norm.mext.rnpixlist -regmerges tm153393.1norm.mext.regmerges -meanlist tm153393.1norm.mext.rmeanlist -cvlist tm153393.1norm.mext.rcvlist -label_out tm153393.label_out -ascii_out tm153393.ascii_out

The parameter file initiation is as follows:

region_label -a region_label.ans

where the file region_label.ans was created with the Khoros Pro 2000 "kdbmedit" program. The graphical user interface initiation is as shown in Figure 2.

Upon initiation, the Region Labeling Tool displays a GUI panel as shown in Figure 3. Notice that about a third of the way down from the top is a label "Display Options:". Under this label is a button labeled "RGB Image". When the analyst left mouse button clicks on this button, the RGB image is displayed as shown in Figure 4. Note the "Pan" display at the upper left, with which the analyst can roam around the entire image data set. In Figure 4 the image is panned so that it is centered to the southeast of Baltimore, MD, over the west central part of the Chesapeake Bay.

Looking back at the GUI panel in Figure 3, we see the message "Select the location for the next region to be labeled" in the "Region Label Tool Informational Output" panel. In this example, the analyst selects a location in the main portion of the Chesapeake Bay by clicking the left mouse button at a point there, and then clicking on the "Select Region" button in the upper left corner of the GUI panel. When the analyst clicks on the "Current Class Labels" button on the GUI panel (Figure 3), a display appears in which most of the Chesapeake Bay is highlighted in purple (see Figure 5). By default, this is the finest segmentation from the segmentation hierarchy containing the selected pixel. As an option, the analyst could have changed the "First Select" option to "Coarsest Segmentation" on the Region Labeling Tool GUI panel.

At this point, the analyst has two choices. The analyst can label the currently highlighted region or attempt to refine the region further by looking at other segmentations from the segmentation hierarchy. A good way to make this decision is to inspect the spatial extent of the highlighted region in comparison to the RGB rendition of the image. Noting that this inspection reveals that some small but significant areas of the Chesapeake Bay are not highlighted, the analyst chooses to look at the next coarsest segmentation from the segmentation hierarchy.

The simplest way to highlight the next coarsest segmentation from the segmentation hierarchy is to left mouse button click on the button labeled "Select Next Coarser Segmentation". In doing so, the analyst sees (through inspection) that the highlighting of the Chesapeake Bay is much more complete. Now the analyst must decide whether to label the currently highlighted region or to look at even coarser segmentations from the segmentation hierarchy. The analyst is aided in making this decision by the information displayed in the "Region Label Tool Informational Output" text output area in the bottom portion of the Region Labeling Tool GUI (see Figure 6). We are currently viewing level 1 of the segmentation hierarchy (level 0 is the finest and level 15 is the coarsest). The informational output shows that there are 972,833 pixels in the level 1 region. This increases only slightly to 974,377 pixels in the level 14 region. This being the case, the analyst decides to look at the level 14 segmentation from the segmentation hierarchy. The easiest way to do this is to enter the number 14 (followed by the "Enter" key) in the space after the label "Segmentation Level".
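The kind of per-level information the analyst consults here can be sketched as follows, assuming the segmentation hierarchy is represented as one region label map per level (an illustrative layout, not the tool's actual file format):

```python
import numpy as np

def region_sizes_at_levels(label_maps, pixel):
    """For a selected pixel, report the label and pixel count of the
    containing region at each level of a segmentation hierarchy.

    `label_maps` is a list of 2-D label arrays, finest (level 0) first.
    """
    y, x = pixel
    report = []
    for level, lmap in enumerate(label_maps):
        label = lmap[y, x]
        npix = int((lmap == label).sum())
        report.append((level, int(label), npix))
    return report

# Two-level toy hierarchy: level 1 merges regions 1 and 2 into region 1.
level0 = np.array([[1, 1, 2],
                   [3, 3, 2]])
level1 = np.array([[1, 1, 1],
                   [3, 3, 1]])
report = region_sizes_at_levels([level0, level1], (0, 0))
print(report)  # [(0, 1, 2), (1, 1, 4)]
```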

We pause here to make a brief note on the usefulness of the information provided in the "Region Label Tool Informational Output" text output area. Significant changes in the number of pixels, region criterion value (a measure of the difference of the region mean from the original image values), and/or the region mean vector are indications of significant changes in the entities that the region is representing. For example, the combining of two distinct ground cover types, such as concrete and forest, would cause significant changes in the criterion value and region mean vector (assuming roughly equal numbers of pixels from each class are combined).
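As an illustration of why such a merge changes these quantities, the following sketch computes a region mean vector together with one plausible reading of the "1-Norm" criterion with mean extraction (the mean absolute deviation of the region's pixels from the region mean); the exact formula used by RHSEG may differ:

```python
import numpy as np

def mean_and_1norm_criterion(pixels):
    """Region mean vector and a 1-norm criterion value: the mean
    absolute deviation of the region's pixel vectors from the region
    mean.  Illustrative only -- not RHSEG's actual implementation.
    """
    pixels = np.asarray(pixels, dtype=float)
    mean = pixels.mean(axis=0)
    crit = np.abs(pixels - mean).sum(axis=1).mean()
    return mean, crit

# Two spectrally distinct "cover types" in a toy 2-band space
concrete = [[200.0, 190.0]] * 4
forest = [[40.0, 90.0]] * 4

m1, c1 = mean_and_1norm_criterion(concrete)           # homogeneous: 0
m2, c2 = mean_and_1norm_criterion(forest)             # homogeneous: 0
m3, c3 = mean_and_1norm_criterion(concrete + forest)  # merged: large
print(c1, c2, c3)
```
Merging the two homogeneous regions drives the criterion value from 0 up sharply and pulls the mean vector to a point between the two cover types, which is exactly the kind of jump the analyst watches for in the informational output.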

Now back to the labeling problem at hand. An inspection of the highlighted region shows that the level 14 segmentation from the segmentation hierarchy not only highlights the Chesapeake Bay, but also most bodies of water throughout the whole image - including even small lakes and thin stretches of river. However, going to level 15 highlights nearly the whole image, mixing land and water.

The analyst now decides to label the highlighted area from the level 14 segmentation. This is done by clicking the left mouse button on the button labeled "Label Region". When this is done, the Label Region panel appears, as shown in Figure 7.

The Label Region panel contains a set of 39 preprogrammed colors for use in color labeling regions. Associated with each of these colors is a text window which is initialized as "(undefined)". The analyst can choose any of the preprogrammed colors to label a particular region and associate any alphanumeric phrase with the region color. In addition, if the YES/NO toggle to the right of the label "Change Colors When Labeling?" is left as YES, the analyst will be prompted with a panel that allows adjusting the region color to any color the analyst wishes. In this example, however, the analyst chooses to toggle the "Change Colors When Labeling?" toggle to NO and select the color blue for labeling the highlighted area. In addition, the analyst associates the phrase "Water" with this region. The area highlighted in purple is recolored to blue when the analyst clicks the left mouse button on the "Label" button to the left of the blue square. The resulting Label Region panel is shown in Figure 8. The resulting Current Class Labeling Display is shown in Figure 9.
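The bookkeeping behind the Label Region panel can be sketched as follows; the slot numbering and the use of 0 for "unlabeled" pixels are illustrative assumptions, not the tool's internals:

```python
import numpy as np

# Each class is a color slot with an (initially "(undefined)") name.
class_names = {slot: "(undefined)" for slot in range(1, 40)}  # 39 slots

def label_region(class_map, highlighted, slot, name, class_names):
    """Record a class name for a color slot and write that slot's
    class number into the output class label map at the highlighted
    pixels."""
    class_names[slot] = name
    out = class_map.copy()
    out[highlighted] = slot
    return out

class_map = np.zeros((2, 3), dtype=int)
highlighted = np.array([[True, True, False],
                        [True, False, False]])
class_map = label_region(class_map, highlighted, 1, "Water", class_names)
print(class_map)        # [[1 1 0]
                        #  [1 0 0]]
print(class_names[1])   # Water
```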

Now the analyst compares the RGB image and the Current Class Labeling, and notes that the Chesapeake Bay Bridge is visible in the RGB image, but has been largely lost in the Current Class Labeling. The analyst was assisted in this observation by the Zoom panels obtained by left mouse button clicking on the Zoom buttons under the RGB image and Current Class Labeling buttons. These Zoom panels are shown in Figures 10a and 10b. In these figures, the Zoom Factor was adjusted from the default "4" to "2" by entering the value "2" in the text input panel to the right of the label "Zoom Factor:".

If the hierarchical segmentation separates out the Chesapeake Bay Bridge from the bay, the analyst can distinguish the bridge from the bay in the labeling with the following approach. The analyst selects a pixel that is labeled as "Water" (blue) in the Current Class Labeling, but that appears to be on the bridge in the RGB Image display, and clicks on the "Select Region" button on the Region Label Tool GUI panel. The resulting Current Class Labeling Zoom panel is shown in Figure 11. In this figure the analyst sees that the Chesapeake Bay Bridge is nicely highlighted, along with a very small number of other "water edge" pixels (this is the case for a small number of additional pixels throughout the image). The analyst decides to label the highlighted pixels with a brown color and the alphanumeric phrase "Bridges/Water Edge".

Now the analyst notices that part of the Chesapeake Bay Bridge is still not labeled (i.e., still colored black in Figure 11). The analyst selects one of these pixels and clicks on the "Select Region" button on the Region Label Tool GUI panel. The analyst finds that not only are bridge pixels highlighted, but also industrial areas, dense urban areas and a number of water edge pixels. The analyst decides to label the highlighted pixels with an orange color and the alphanumeric phrase "Industrial/Dense Urban/Water Edge". A portion of this labeling is shown in Figure 12. Note the Sparrow Point industrial area in the center right of the image, the Key Memorial Bridge just to the left of the Sparrow Point industrial area, and the Baltimore Harbor and Downtown area to the upper left.

Now the analyst shifts gears and starts looking for vegetation. Panning to the west central part of the RGB Image display, the analyst selects a pixel that appears to be in the midst of a forested area, and clicks on the "Select Region" button on the Region Label Tool GUI panel. The analyst explores the segmentation hierarchy and decides the segmentation at the segmentation hierarchy level 5 does the best job of labeling the wooded areas in the scene. He/she labels the highlighted pixels with a green color and the alphanumeric phrase "Wooded".

The analyst now needs to temporarily suspend his/her work labeling this Landsat TM image. The analyst can save his/her work by simply exiting the program via the "Exit" button in the upper right corner of the Region Label Tool GUI panel. The latest labeling is stored in the file tm153393.label_out specified when the program was initiated (see Figure 2). The color and alphanumeric labels are stored in the file tm153393.ascii_out. Figure 13 shows the labeling of the entire scene to this point (this is tm153393.label_out with the colors specified in tm153393.ascii_out).

Later, when the analyst would like to restart the labeling process, he/she can copy the file tm153393.ascii_out to tm153393.ascii_in and copy the file tm153393.label_out to tm153393.label_in, and restart the program through the parameter input GUI as in Figure 14 (or with the command line or command file methods).

Upon restarting the labeling exercise, the analyst decides to look for urban development. To do so he/she selects an unlabeled pixel in the core of downtown Baltimore, MD, and clicks on the "Select Region" button on the Region Label Tool GUI panel. The analyst explores the segmentation hierarchy and decides the segmentation at segmentation hierarchy level 8 does the best job of labeling the dense urban areas in the scene. He/she labels the highlighted pixels with a yellow color and the alphanumeric phrase "Urban". The result for the whole scene is displayed in Figure 15.

The analyst now tries to label other developed areas. To do so he/she selects an unlabeled pixel on Interstate Highway 70 mid-way between Baltimore, MD and Frederick, MD, and clicks on the "Select Region" button on the Region Label Tool GUI panel. The analyst explores the segmentation hierarchy and decides the segmentation at segmentation hierarchy level 3 does the best job of labeling the roads and moderate urban areas in the scene. He/she labels the highlighted pixels with a light yellow color and the alphanumeric phrase "Roads/Moderate Urban". The result for the whole scene is displayed in Figure 16.

Now the analyst looks for parklands and other grassy areas. To do so, he/she selects an unlabeled pixel in the Haines Point park in Washington, DC, and clicks on the "Select Region" button on the Region Label Tool GUI panel. The analyst explores the segmentation hierarchy and decides he/she needs to stay at segmentation hierarchy level 0 to avoid excessive confusion with agricultural areas in the scene. Even so, significant areas of grass-like agricultural areas are included with the parklands. He/she labels the highlighted pixels with a very light green color and the alphanumeric phrase "Grasslands/Parks". The resulting labeling of park areas is incomplete, so the analyst selects a still unlabeled pixel in the Haines Point park and again clicks on the "Select Region" button. The analyst again explores the segmentation hierarchy and again decides to stay at segmentation hierarchy level 0 to avoid excessive confusion with agricultural areas. Even so, significant areas of grass-like agricultural areas are included with the parklands, as are the grassy areas surrounding the runways at the airports. He/she decides to combine these areas with the areas previously labeled very light green with the alphanumeric phrase "Grasslands/Parks". This is done by clicking on the "Label" button to the left of the very light green color square in the Label Region panel. Noticing that a significant parkland area in east central Baltimore, MD is still unlabeled, the analyst also selects a pixel in that parkland area and also labels it very light green and "Grasslands/Parks". More agricultural areas are included. The analyst then notices that a significant portion of the mall in downtown Washington, DC is still unlabeled, so the analyst selects a pixel in this area and similarly labels it. Some more agricultural areas are included.
Even though all these parkland labelings were done at the finest level of the segmentation hierarchy (level 0), the confusion between grassy parklands and grass-like agricultural areas cannot be avoided. The result for the whole scene is displayed in Figure 17.

The result shown in Figure 17 highlights a minor problem with the recursive implementation of the hierarchical segmentation algorithm. The recursive division of the image into quarters can result in artifacts in the result reflecting this quartering division. A more complete labeling of the scene would reduce this problem substantially.

The analyst now notices that a marshy island in the Potomac River in the area of the Wilson Bridge (the I-95 crossing) has been mistakenly labeled as "Water." To correct this, the analyst selects a pixel on the island and clicks on the "Select Region" button on the Region Label Tool GUI panel. As it turns out, the highlighted area at segmentation hierarchy level 0 covers all of the mistakenly labeled area - and even finds a few other mistakenly labeled areas that the analyst did not notice earlier. The analyst labels the highlighted pixels with a light turquoise color and the alphanumeric phrase "Marshlands".

The analyst continues to label other unlabeled areas in a manner similar to that described above. In this process he/she adds several classes: "Light Urban," "Agricultural," "Water Edge/Marsh/Shadow," and "Bare Soil." The class label/color correspondence is shown with a view of the Label Region panel given in Figure 18. The result for the whole scene is displayed in Figure 19.

We had remarked earlier that the incomplete labeling of Figure 17 highlighted an artifact problem from the recursive implementation of the hierarchical segmentation algorithm. However, this artifact problem has disappeared from the more complete labeling displayed in Figure 19.

We notice something else unusual about Figure 19. There are still some unlabeled areas to the north and west of Baltimore, MD. A closer inspection of the original image (Figure 1) shows that these unlabeled areas are under some thin hazy clouds! This is clearer in a blown-up 768x768 pixel section of the 2504x2504 pixel data set, as displayed in Figure 20. The corresponding class labeling is displayed in Figure 21. This was not readily apparent from a cursory inspection of the image, but the hierarchical segmentation algorithm clearly detected it. Since even thin hazy clouds distort the radiance values detected by the Earth orbiting sensor, we have to be extra careful about labeling the land cover classes for image pixels under the clouds. Fortunately, the Region Labeling Tool has some features that can be useful in such a labeling.

The analyst first selects a pixel in the hazy area that appears to be in the midst of a wooded area. Exploring the segmentation hierarchy, the analyst decides that segmentation hierarchy level 2 highlights the most appropriate area. This highlighting is shown in Figure 22. To make sure that only pixels in the hazy area get labeled, the analyst clicks on the "Extract ROI" button at the top center of the Current Class Labeling Display, and draws a loop around the hazy area. This causes only pixels in the hazy area to be highlighted. The analyst then labels the highlighted area as "Wooded."

Note that there is an "Extract ROI" button on all the image display panels, including the zoom panels. This button works the same way on all of the image display panels.
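The effect of drawing an "Extract ROI" loop can be illustrated with a standard point-in-polygon test; restricting a highlight to the ROI then amounts to keeping only highlighted pixels whose coordinates pass this test. (The tool's actual ROI representation is not documented here; this is an illustrative stand-in.)

```python
def point_in_polygon(x, y, poly):
    """Even-odd ray-casting test: is (x, y) inside the closed polygon
    given as a list of (px, py) vertices?  A hand-drawn ROI loop can
    be treated as such a polygon.
    """
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Count edges that a rightward ray from (x, y) crosses
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon(2, 2, square))  # True
print(point_in_polygon(5, 2, square))  # False
```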

The analyst selects a pixel in the hazy area that appears to be in the midst of a grassy area. Exploring the segmentation hierarchy, the analyst decides that segmentation hierarchy level 3 highlights the most appropriate area. This highlighting is shown in Figure 23. To make sure that only pixels in the hazy area get labeled, the analyst clicks on the "Extract ROI" button at the top center of the Current Class Labeling Display and draws a loop around the hazy area. This causes only pixels in the hazy area to be highlighted. The analyst then labels the highlighted area as "Grasslands/Parks."

The analyst now notices that some areas under the hazy clouds that were previously labeled "Bare Soil" appear to be actually grassy areas. The haze has brightened the apparent reflectance values from grassy areas to make them appear as if they were bare soil. To correct this, the analyst selects a grassy pixel under the hazy area that had been previously labeled "Bare Soil," and explores the associated segmentation hierarchy. Segmentation hierarchy level 4 highlights all the mislabeled areas. To again make sure that only pixels in the hazy area get relabeled, the analyst clicks on the "Extract ROI" button at the top center of the Current Class Labeling Display, and draws a loop around the hazy area. This causes only pixels in the hazy area to be highlighted. The analyst then labels the highlighted area as "Grasslands/Parks."

The analyst continues to label the regions under the thin hazy clouds in the same manner, labeling unlabeled areas and relabeling previously incorrectly labeled areas. In doing so, the analyst finds one region that cannot be differentiated appropriately into classes. He/she labels this region "Unclassifiable because of haze" and colors it dark red (the last color square in the second column of the Label Region panel - see Figure 18). The result for the 768x768 pixel subset is shown in Figure 24.

There is one key feature of the Region Labeling Tool that has yet to be demonstrated. This is the choice between "Label Spectrally Similar Regions Together," which is the default mode of operation that has been used so far in this demonstration, and "Label Spatially Disjoint Regions Separately." When "Label Spatially Disjoint Regions Separately" is selected from the drop-down menu under the label "Label Spectrally Similar Regions Together" (e.g., see Figure 6), connected component labeling is performed in the region starting from the selected pixel, and the region highlighting is restricted to only those pixels in the region that are spatially connected to the selected pixel. For example, our analyst initially selects a pixel from a stretch of Interstate 95 in the northeast part of Baltimore, MD between the junctions with Interstate 695 and Interstate 895 that was previously labeled "Water Edge/Marsh/Shadow." See Figure 25a for the RGB Image Zoom panel and Figure 25b for the resulting Current Class Labeling Zoom panel. After "Label Spatially Disjoint Regions Separately" is selected, the Current Class Labeling Zoom panel is modified to appear as in Figure 25c, where only the pixels in the region that are spatially connected to the selected pixel are highlighted. At this point the analyst can label the highlighted region in the normal fashion.
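The connected component labeling described above can be sketched with a standard flood fill, using the eight-nearest-neighbor connectivity that was used for the RHSEG runs (a sketch of the behavior, not the tool's code):

```python
from collections import deque

def connected_component(label_map, start, connectivity=8):
    """Return the set of pixels that share `start`'s region label AND
    are spatially connected to `start` -- the behavior described for
    "Label Spatially Disjoint Regions Separately."  `label_map` is a
    list of lists of region labels.
    """
    h, w = len(label_map), len(label_map[0])
    sy, sx = start
    target = label_map[sy][sx]
    if connectivity == 8:
        steps = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                 if (dy, dx) != (0, 0)]
    else:
        steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    seen = {(sy, sx)}
    queue = deque([(sy, sx)])
    while queue:
        y, x = queue.popleft()
        for dy, dx in steps:
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in seen
                    and label_map[ny][nx] == target):
                seen.add((ny, nx))
                queue.append((ny, nx))
    return seen

# Region 7 has two spatially disjoint pieces; only the piece
# containing the selected pixel (0, 0) is returned.
lmap = [[7, 7, 0, 7],
        [0, 0, 0, 7]]
component = sorted(connected_component(lmap, (0, 0)))
print(component)  # [(0, 0), (0, 1)]
```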

There is one more important option available in the Region Labeling Tool that has not been mentioned. This is the option to include one or two reference image files as inputs (see the fuzzy appearing lines "First Input Reference File" and "Second Input Reference File" in Figure 2). If supplied, these input reference image files can be displayed in exactly the same manner as the "Input RGB Image" and "Current Class Labeling" with corresponding Display and Zoom windows. All of these display and zoom windows are slaved together so that when the image is panned, the same areas appear in each of the displays. This makes possible useful intercomparisons between the data displayed in the windows, assisting the analyst in making photointerpretation decisions.