Image processing development in fuzzy logic
by barkkathulla, 2012-09-20 16:38:40
The process commences by transforming the CCD image of the two targets from the RGB color space into the corresponding hue and saturation spaces. For computational convenience, the hue and saturation images are then converted into equivalent binary images by applying threshold values of Hmin = 60 / Hmax = 85 and Smin = 60 / Smax = 180, respectively. Thereafter, the two images are multiplied to produce a composite H-S binary image. A 3 × 3 median filter is applied to remove any noise in the image, and a morphological closing operation is then performed to repair any resulting damage to the contours of the two targets. Finally, the two targets in the image are detected by performing a blob analysis with a minimum threshold of 20 pixels. The blob analysis block calculates the statistics associated with the images of the leading marks, such as the total number of pixels. Having acquired the two targets, their respective centers of gravity (CGs) are computed in order to construct the leading line, locate the leading marks, and determine the deviated heading angle.
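The thresholding, composite-mask, and center-of-gravity steps above can be sketched in NumPy as follows. The threshold constants come from the text; the array names and the synthetic test image are assumptions for illustration, and a production system would use an imaging library (e.g. OpenCV) for the median filter, closing, and blob analysis omitted here:

```python
import numpy as np

# Threshold values stated in the text
H_MIN, H_MAX = 60, 85
S_MIN, S_MAX = 60, 180
MIN_BLOB_PIXELS = 20  # minimum blob size for detection

def hs_composite_mask(hue, sat):
    """Binarize the hue and saturation images, then multiply them
    into the composite H-S binary image."""
    h_bin = ((hue >= H_MIN) & (hue <= H_MAX)).astype(np.uint8)
    s_bin = ((sat >= S_MIN) & (sat <= S_MAX)).astype(np.uint8)
    return h_bin * s_bin

def blob_center_of_gravity(mask):
    """Center of gravity (mean pixel coordinates) of the foreground,
    rejecting blobs below the minimum pixel threshold."""
    ys, xs = np.nonzero(mask)
    if xs.size < MIN_BLOB_PIXELS:
        return None
    return xs.mean(), ys.mean()

# Synthetic 100x100 image with one 10x10 target patch at rows/cols 40..49
hue = np.zeros((100, 100)); sat = np.zeros((100, 100))
hue[40:50, 40:50] = 70   # inside the hue window
sat[40:50, 40:50] = 100  # inside the saturation window
mask = hs_composite_mask(hue, sat)
cg = blob_center_of_gravity(mask)
print(cg)  # -> (44.5, 44.5), the patch center
```

In practice each of the two leading marks would be labeled as a separate blob and a CG computed per blob before constructing the leading line.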
2. Visual Guidance Strategy
As described in the Introduction, the autopilot system developed in this study is based on the leading line visual guidance strategy. Thus, as described in the sub-sections below, in mimicking the actions of a human pilot, the autopilot system requires image processing to extract three pieces of information: (1) the orientation of the ship relative to the front leading mark; (2) the position of the ship relative to the leading line; and (3) the distance between the ship and the berthing wall. This information is then used to instruct the course adjustments required to bring the ship toward the leading line and to determine the appropriate moment at which to switch from an approach maneuvering mode to a berthing control mode.
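Item (2) can be sketched as a point-to-line computation: the leading line passes through the two mark CGs, and the ship's lateral offset can be taken as the signed perpendicular distance from a reference point to that line. The function name, and the choice of the image center as the reference point, are illustrative assumptions, not the paper's exact formulation:

```python
import math

def deviation_from_leading_line(cg_front, cg_rear, ref=(360.0, 240.0)):
    """Signed perpendicular distance (pixels) from the reference point
    (assumed here: image center) to the line through the two mark CGs."""
    (x1, y1), (x2, y2) = cg_front, cg_rear
    dx, dy = x2 - x1, y2 - y1
    length = math.hypot(dx, dy)
    # 2D cross product of the line direction with (ref - line point),
    # normalized by the line length; the sign encodes which side.
    return (dx * (ref[1] - y1) - dy * (ref[0] - x1)) / length

# Example: a vertical leading line at x = 350; the image center (360, 240)
# lies 10 px off the line.
d = deviation_from_leading_line((350.0, 100.0), (350.0, 400.0))
print(d)  # magnitude 10.0 px
```

A zero deviation means the ship sits on the leading line; the controller would steer to drive this value toward zero.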
Figure 3 illustrates the image coordinate framework used by the autopilot system in computing the heading data of the ship and the distance of the ship from the berthing wall. The CCD camera used in the current trials had a resolution of 720 × 480 pixels, i.e. the maximum X-axis value in the image coordinate framework is 720, while that of the Y-axis is 480. As a result, the center of the CCD image is located at coordinates (360, 240). According to the manufacturer's specification, the CCD camera has a horizontal field of view (HFOV) of 47.31° and a vertical field of view (VFOV) of 36.32°. However, in the trials, the CCD camera was operated in a 2× zoom mode, and thus the HFOV and VFOV were reduced to 17.91° and 13.76°, respectively.
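With the image geometry above, a pixel column can be converted to a horizontal bearing relative to the camera axis. The sketch below assumes a simple pinhole model (the paper does not give its exact conversion), using the zoomed HFOV of 17.91° and the 720-pixel width from the text; the function name is illustrative:

```python
import math

WIDTH, HEIGHT = 720, 480        # CCD resolution from the text
CX, CY = 360.0, 240.0           # image center
HFOV_ZOOM_DEG = 17.91           # HFOV in the 2x zoom mode

def pixel_to_bearing(x, hfov_deg=HFOV_ZOOM_DEG, width=WIDTH, cx=CX):
    """Horizontal bearing (degrees) of image column x under an assumed
    pinhole model: focal length chosen so the image edge maps to HFOV/2."""
    f = (width / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)
    return math.degrees(math.atan((x - cx) / f))

print(pixel_to_bearing(360))  # image center -> 0 degrees
print(pixel_to_bearing(720))  # image edge -> half the HFOV (~8.955 degrees)
```

The deviated heading angle of the ship can then be read off from the column of the front mark's CG.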