The Python source code below in Figures 1.5(a)-(d) shows our initial object detection algorithm for detecting a green ball. Each of the three video sources will use its own instance of this object detection algorithm.
Since the cameras will be placed in a triangle, each camera will use unique parameters to detect the green ball. This is explained further in the next section, Object Localization Triangulation Algorithm. In the final implementation of the algorithm, we plan to detect a wider array of objects; for this initial implementation, we designed our algorithm to detect only a green ball. Our detection algorithm supports movement of a green ball in the X, Y, and Z planes. Figure 1.5(a) shows how we defined parameters for one camera. Figure 1.5(b), Figure 1.5(c), and Figure 1.5(d) show how we used the defined parameters to detect a green ball. Multiple parameters need to be defined for each camera. Figure 1.5(a) shows these parameters.
The parameter KNOWN_DISTANCE defines the distance from the camera, in inches, at which the object will be detected. The parameter KNOWN_WIDTH defines the approximate width of the object, in inches. The parameter marker defines the detected object's region/area that will be bounded by a box. The parameter focalLength is then calculated to determine the optimal depth at which the algorithm will detect the object. The parameters greenLower and greenUpper define the range of green colors on the HSV spectrum to detect.
The variable counter keeps track of how many frames the algorithm has computed. The variables dX, dY, and dZ store the difference between the X-, Y-, and Z-coordinates of the object in the current frame and those of the object in a previously calculated frame. The variable direction stores the current direction in which the object is moving. In the next few lines of code, we define the video source for the algorithm. This video source is supplied by the code previously discussed in Video Source Data Collection.
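As a sketch of these definitions, the parameters might be set up as follows. All numeric values here are illustrative assumptions, not the report's actual calibration numbers; the focal length follows the standard triangle-similarity relation.

```python
# Illustrative parameter definitions; the actual values in Figure 1.5(a)
# are not reproduced here, so these numbers are assumptions.
KNOWN_DISTANCE = 24.0    # inches: calibration distance from the camera
KNOWN_WIDTH = 2.5        # inches: approximate width of the green ball
marker_width_px = 120.0  # pixels: perceived width of the ball at calibration

# Triangle similarity: focalLength = (perceivedWidth * knownDistance) / knownWidth
focalLength = (marker_width_px * KNOWN_DISTANCE) / KNOWN_WIDTH

# HSV bounds for "green"; the exact range is an assumption.
greenLower = (29, 86, 6)
greenUpper = (64, 255, 255)

counter = 0             # frames processed so far
dX, dY, dZ = 0, 0, 0    # coordinate deltas vs. an earlier frame
direction = ""          # current movement direction of the object
```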
After defining the initial parameters and the video source, we supply these parameters to OpenCV algorithms. Figure 1.5(b) below shows how we defined more parameters using OpenCV functions.
The first few lines of code make sure a video was supplied to the algorithm before continuing. We then use OpenCV functions to apply a Gaussian blur to the frame in order to smooth the image and reduce noise, and to convert it to the HSV color space. Then we use OpenCV functions to construct a "mask" for the color green and perform a series of dilations and erosions to remove any small discrepancies in the mask. Finally, we contour the mask's outline.

We then perform calculations based on the contours that were previously calculated. Figure 1.5(c) below shows how we perform these calculations. First, we make sure that at least one object was found in the contour. If the object (the green ball) was detected, we find the largest possible contour based on its area.
We then compute the minimum enclosing circle and the center of the object. We require that the object have at least a 5-pixel radius in order to track it. If it does, the minimum enclosing circle surrounds the object, marks the center, and updates the coordinates of the ball.

We then loop over the X, Y, and Z coordinates that have been calculated.
Figure 1.5(d) below shows how this is done. We compute the direction the green ball is moving by checking previous X, Y, and Z coordinates.
We compute dX, dY, and dZ between the current frame and a previously calculated frame. We use a previously calculated frame because using the frame immediately preceding the current frame would result in unwanted noise and inaccurate results. We then calculate the magnitudes of dX, dY, and dZ to determine the direction in which the object is moving. The rest of the code handles placing the calculated coordinates and direction onto the GUI.
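A minimal sketch of this lag-based direction computation is below. The buffer size, the lag of 10 frames, the 20-pixel significance threshold, and the direction labels are all assumptions for illustration.

```python
from collections import deque

def movement_direction(pts, lag=10, thresh=20):
    """Compare the newest (x, y, z) center with one `lag` frames back, rather
    than the immediately preceding frame, to suppress frame-to-frame jitter."""
    if len(pts) <= lag:
        return "", (0, 0, 0)
    dX = pts[-1][0] - pts[-1 - lag][0]
    dY = pts[-1][1] - pts[-1 - lag][1]
    dZ = pts[-1][2] - pts[-1 - lag][2]
    parts = []
    # Only report movement along an axis if its magnitude is significant.
    if abs(dX) > thresh:
        parts.append("East" if dX > 0 else "West")
    if abs(dY) > thresh:
        parts.append("South" if dY > 0 else "North")  # image y grows downward
    if abs(dZ) > thresh:
        parts.append("Away" if dZ > 0 else "Toward")
    return "-".join(parts), (dX, dY, dZ)
```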
After runtime, all of the values for dX, dY, and dZ are displayed on a graph.

Object Localization Triangulation Algorithm and Database:

The triangulation algorithm and database will be implemented in the Final Design. Here, we will be able to localize a moving object based on the data collected from each camera. Since each camera will be placed in a triangle around a room, the triangulation algorithm will give unique parameters to each camera in order to detect objects accordingly. Each camera will continuously send data to a central database.
The triangulation algorithm will use the data from the database to determine whether an object detected by one camera was correctly detected by the other cameras. If the object was detected by all three cameras, data from all three will be quantified to determine the final X, Y, and Z coordinates of the object within our system.
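Since the triangulation algorithm is planned rather than implemented, the cross-camera agreement check could be sketched as follows. The simple per-axis averaging used here is a placeholder assumption for whatever quantification the Final Design adopts, and the reading format is hypothetical.

```python
def fuse_detections(readings):
    """readings: one dict per camera, e.g. {"detected": True, "xyz": (x, y, z)}.
    Returns a fused (X, Y, Z) only when all three cameras saw the object."""
    if len(readings) != 3 or not all(r["detected"] for r in readings):
        return None  # require agreement from all three cameras
    xs, ys, zs = zip(*(r["xyz"] for r in readings))
    # Placeholder fusion: average each axis across the three cameras.
    return (sum(xs) / 3.0, sum(ys) / 3.0, sum(zs) / 3.0)
```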