A digital image can be represented at several resolutions. In the automatic identification of buildings, people, and other objects, resolution can determine the efficiency and efficacy of recognition algorithms. Reducing image dimensions helps minimize computational effort and avoids having to adapt object detection algorithms. However, the reduction process is itself costly and can consume almost as much time and effort as it saves. This paper proposes a parallel implementation of the Gaussian Pyramid multiresolution approach for reducing arbitrary images, with an experimental implementation in CUDA. The implementation was run on three different GPUs and compared with the traditional approach on a regular personal computer. Experiments show a significant reduction in processing time, indicating high efficiency and potential for adopting our parallel solution in an image processing chain.
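For readers unfamiliar with the Gaussian Pyramid reduction the abstract refers to, the sketch below shows one reduction level as a sequential CPU reference, assuming grayscale images stored as lists of lists of floats. This is an illustrative reconstruction, not the paper's code; the paper's CUDA kernels would parallelize the same per-pixel computation across GPU threads.

```python
# One level of Gaussian Pyramid reduction (sequential reference sketch):
# blur with a separable 5-tap binomial kernel, then drop every other
# row and column, halving each image dimension.

KERNEL = [1 / 16, 4 / 16, 6 / 16, 4 / 16, 1 / 16]  # sums to 1

def convolve_rows(img, k=KERNEL):
    """Convolve each row with kernel k, clamping indices at the borders."""
    h, w, r = len(img), len(img[0]), len(k) // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for i, kv in enumerate(k):
                xx = min(max(x + i - r, 0), w - 1)  # clamp at image edge
                acc += kv * img[y][xx]
            out[y][x] = acc
    return out

def transpose(img):
    return [list(row) for row in zip(*img)]

def pyramid_reduce(img):
    """Separable Gaussian blur (rows, then columns), then 2x downsample."""
    blurred = transpose(convolve_rows(transpose(convolve_rows(img))))
    return [row[::2] for row in blurred[::2]]
```

Because the kernel sums to 1 and borders are clamped, a constant image stays constant after reduction; each level halves the width and height, which is the size reduction the paper accelerates on the GPU.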
Text in Portuguese, with English title and abstract.