Detect Corners Of Grid
Solution 1:
You're working in Python with OpenCV, but I'm going to give you an answer using MATLAB with DIPimage. I intend this answer to be about the concepts, not about the code. I'm sure there are ways to accomplish all of these things in Python with OpenCV.
My aim here is to find the four corners of the board. The grid itself can be guessed: since it's just an equidistant division of the board, there is no need to detect every line. The four corners provide all the information needed for the perspective transformation.
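For illustration, once the four corners are known, the perspective transformation can be estimated and applied in Python with OpenCV along these lines (a minimal sketch; the corner ordering and the 800-pixel output size are my assumptions, not part of the original answer):
import cv2
import numpy as np

def warp_board(img, corners, size=800):
    # corners: the four detected board corners as (x, y),
    # ordered top-left, top-right, bottom-right, bottom-left
    src = np.float32(corners)
    dst = np.float32([[0, 0], [size - 1, 0],
                      [size - 1, size - 1], [0, size - 1]])
    M = cv2.getPerspectiveTransform(src, dst)      # homography from the 4 point pairs
    return cv2.warpPerspective(img, M, (size, size))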
The simplest way to detect the board is to recognize that it is light-colored on a dark-colored background. Starting with a grey-value image, I apply a small closing (I used a circle with a diameter of 7 pixels, which is suitable for the down-sampled image I used as an example; increase the size appropriately for the full-size image). This gives the following result:
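In Python with OpenCV, this closing step could be sketched roughly as follows (assuming img is already a grey-value NumPy array; the 7-pixel elliptical kernel mirrors the circle used above):
import cv2

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))   # ~7 px diameter disc
closed = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)         # small grey-value closing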
Next, I binarize using Otsu threshold selection and remove holes (that part is not important; the rest would work even if there are holes). The connected components that we see now correspond to the board and the neighboring boards (or whatever the other white things around the board are).
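An OpenCV sketch of this binarization and hole-filling step (closed is the image from the closing sketch above; the flood-fill trick assumes the pixel at (0, 0) is background):
import cv2
import numpy as np

# Otsu threshold selection
_, binary = cv2.threshold(closed, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Fill holes: flood-fill the background from a corner, then OR the inverse back in
h, w = binary.shape
mask = np.zeros((h + 2, w + 2), np.uint8)
flood = binary.copy()
cv2.floodFill(flood, mask, (0, 0), 255)
filled = binary | cv2.bitwise_not(flood)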
Selecting the largest connected component is a fairly common procedure. In the code below I label the image (which identifies the connected components), count the number of pixels per component, and select the one with the most pixels.
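The same selection in OpenCV might look like this (a sketch; filled is the hole-filled binary image from the previous sketch, and label 0 is the background):
import cv2
import numpy as np

num, labels, stats, _ = cv2.connectedComponentsWithStats(filled, connectivity=8)
largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])   # skip label 0 (background)
board = ((labels == largest) * 255).astype(np.uint8)   # mask of the board only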
Finally, subtracting its erosion from this result leaves only the pixels at the edge of the board (here in blue, overlaid on the input image):
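In OpenCV this morphological edge extraction could be sketched as (board being the largest-component mask from the previous sketch):
import cv2

kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
edge = cv2.subtract(board, cv2.erode(board, kernel))   # 1-pixel-wide board outline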
The trick I'm using to find the corners is fairly simple, but it fails here because one of the corners is not in the image. Applying a Hough transform to these four edges would probably be a more robust approach. See this other answer for some ideas and code on how to go about that.
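A rough sketch of that Hough-based alternative in OpenCV (edge is the outline image from the previous sketch; the vote threshold of 100 is an assumption, and intersecting two lines in (rho, theta) form recovers a corner even when it lies outside the image):
import cv2
import numpy as np

lines = cv2.HoughLines(edge, 1, np.pi / 180, 100)   # (rho, theta) pairs
# Keep the four board edges (e.g. two near-horizontal, two near-vertical)
# and intersect neighbouring pairs to get the corners.

def intersect(l1, l2):
    # Solve x*cos(theta) + y*sin(theta) = rho for the two lines
    (r1, t1), (r2, t2) = l1[0], l2[0]
    A = np.array([[np.cos(t1), np.sin(t1)],
                  [np.cos(t2), np.sin(t2)]])
    return np.linalg.solve(A, np.array([r1, r2]))   # (x, y), may be outside the image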
In any case, as the top-left corner of the board I take the edge pixel that is closest to the top-left corner of the image, and likewise for the other three corners. These results are the red dots in the image above.
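The same trick in Python (a sketch, assuming edge is the binary outline image from the earlier sketch):
import numpy as np

ys, xs = np.nonzero(edge)
pts = np.column_stack((xs, ys)).astype(float)        # edge pixel coordinates as (x, y)
h, w = edge.shape
img_corners = np.float32([[0, 0], [w - 1, 0], [0, h - 1], [w - 1, h - 1]])
board_corners = [pts[np.argmin(((pts - c) ** 2).sum(axis=1))] for c in img_corners]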
A third option would be to convert the outline into a polygon, simplify it with the Douglas–Peucker algorithm, discard the polygon edges that run along the image border (that is where a corner falls outside the image), and extend the two edges on either side of the gap to find the vertex that lies outside the image.
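A sketch of that third option using OpenCV's built-in Douglas–Peucker implementation (approxPolyDP); the epsilon of 1% of the perimeter is an assumption:
import cv2

# [-2] picks the contour list under both the OpenCV 3.x and 4.x return conventions
contours = cv2.findContours(board, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
outline = max(contours, key=cv2.contourArea)
eps = 0.01 * cv2.arcLength(outline, True)
poly = cv2.approxPolyDP(outline, eps, True)   # simplified board polygon
# From here: drop the polygon edges that run along the image border and intersect
# the two neighbouring edges to estimate the corner that falls outside the image.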
The MATLAB (with DIPimage) code follows.
img = readim('https://i.stack.imgur.com/GYZGa.jpg');
img = colorspace(img,'gray');
% Smooth and downsample, makes computation and display easier
img = gaussf(img,2);
img = img(0:4:end,0:4:end);
% Simplify and binarize
sim = closing(img,7);
brd = threshold(sim); % uses Otsu threshold selection
% Fill the holes
brd = fillholes(brd);
% Keep only the largest connected component
brd = label(brd);
msr = measure(brd);
[~,I] = max(msr,'size');
brd = brd == msr(I).id;
% Extract edges
brd = brd - erosion(brd,3,'rectangular');
% Find corners
pts = findcoord(brd);
[~,top_left] = min(sum(pts.^2,2));
[~,top_right] = min(sum((pts-[imsize(brd,1),0]).^2,2));
[~,bottom_left] = min(sum((pts-[0,imsize(brd,2)]).^2,2));
[~,bottom_right] = min(sum((pts-[imsize(brd,1),imsize(brd,2)]).^2,2));
% Make an image with corner pixels set
cnr = newim(brd,'bin');
cnr(pts(top_left,1),pts(top_left,2)) = 1;
cnr(pts(top_right,1),pts(top_right,2)) = 1;
cnr(pts(bottom_left,1),pts(bottom_left,2)) = 1;
cnr(pts(bottom_right,1),pts(bottom_right,2)) = 1;
cnr = dilation(cnr,3);
% Save images
writeim(sim,'so1.png')
out = overlay(img,brd,[0,0,255]);
out = overlay(out,cnr,[255,0,0]);
writeim(out,'so2.png')
Solution 2:
I have somewhat of an answer for you; although not complete, it might just help you. I use the Ramer–Douglas–Peucker algorithm to simplify the contours found by cv2.findContours and then extract rectangular boxes from them. I then use the ratio of each box's area to the image area to remove the smaller boxes, which gets rid of most of the junk boxes.
Here is an example of what I did in Python:
Finding Contours:
def findcontours(self):
    logging.info("Inside findcontours Contours...")

    # Pre-process image
    imgGray = self.imgProcess.toGrey(self.img)
    logging.info("Success on converting image to greyscale")

    imgThresh = self.imgProcess.toBinary(imgGray)
    logging.info("Success on converting image to binary")

    logging.info("Finding contours...")
    image, contours, hierarchy = cv2.findContours(imgThresh.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    logging.info("Contours found: %d", len(contours))

    return contours
Using Contours to find boxes:
def getRectangles(self, contours):
    arrrect = []
    imgArea = self.getArea()
    logging.info("Image Area is: %d", imgArea)

    for cnt in contours:
        epsilon = 0.01*cv2.arcLength(cnt, True)
        approx = cv2.approxPolyDP(cnt, epsilon, False)
        area = cv2.contourArea(approx)
        rect = cv2.minAreaRect(approx)
        box = cv2.boxPoints(rect)
        box = np.int0(box)

        percentage = (area * 100) / imgArea
        if percentage > 0.3:
            arrrect.append(box)

    return arrrect
To combine these two methods:
def process(self):
    logging.info("Processing image...")
    self.shape_handler = ShapeHandler(self.img)

    contours = self.shape_handler.findcontours()

    logging.info("Finding Rectangles from contours...")
    rectangles = self.shape_handler.getRectangles(contours)

    img = self.imgDraw.draw(self.img, rectangles, "Green", 10)
    cv2.drawContours(img, rectangles, -1, (0, 255, 0), 10)  # outline the boxes directly as well
    self.display(img)

    logging.info("Amount of Rectangles Found: %d", len(rectangles))
Display Image:
def display(self, img):
    cv2.namedWindow('image', cv2.WINDOW_NORMAL)
    cv2.imshow("image", img)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
The final step would be to combine any intersecting boxes, since you're only interested in the edges/corners, and then keep only the box with the largest area. Look here to see how to combine boxes.
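A minimal sketch of that last selection step (picking the largest box from the getRectangles output; merging overlapping boxes first could be done by taking cv2.minAreaRect of their combined corner points):
import cv2
import numpy as np

def largest_box(boxes):
    # boxes: list of 4x2 integer corner arrays as returned by getRectangles
    return max(boxes, key=lambda b: cv2.contourArea(np.float32(b).reshape(-1, 1, 2)))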
My coding source: OpenCV 3.1 Documentation
Result on your images:
Normal:
Skew:
Hope this helps!