Abstract
Introduction
The NKS advises having one’s prosthetic checked every 12 months, which can be problematic and time-consuming for some patients, with the result that they do not follow that guidance. To make prosthetic health evaluation easier, everyday devices such as camera-equipped mobile phones can be used. The captured photographs can then be used to create a 3D model of the prosthetic. This is done by finding characteristic features, called keypoints, in each image. These keypoints are then matched between images, which establishes the spatial relationships between the camera positions from which the images were taken. 3D coordinates can then be assigned to the matched features, producing a structure called a point cloud. Connecting these points creates a mesh, which ultimately yields the 3D model. For the generated 3D model to be usable, it has to represent the physical object accurately, and this depends largely on the feature extraction stage of the 3D reconstruction pipeline. SIFT, currently the most popular algorithm, is not an optimal choice for prosthetic reconstruction because prosthetic surfaces are smooth, often reflective and covered in small patterns. The A-KAZE feature detection algorithm has been shown to give better results than other popular algorithms on such surfaces [1]. Additionally, A-KAZE is scale- and rotation-invariant, which makes it easier to collect useful image sets. In this project, an A-KAZE feature detector was developed in C++. The algorithm was then tested on a set of 13 images depicting squares and circles in binary, greyscale and RGB formats, in order to compare its performance with the widely used open-source A-KAZE implementation from the OpenCV library. An attempt was also made to integrate the proposed algorithm with COLMAP, an open-source program that produces 3D models from a set of photographs.
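As a minimal illustration of the feature detection and matching step described above, the sketch below uses the OpenCV A-KAZE implementation that the developed detector is compared against. It is a hedged example only: the file names, matcher choice and ratio-test threshold are placeholder assumptions rather than the project’s exact configuration. The resulting correspondences are what a structure-from-motion tool such as COLMAP triangulates into a point cloud.

```cpp
// Minimal sketch (hedged): detect and match A-KAZE keypoints between two
// views with OpenCV, the baseline implementation the developed detector is
// compared against. "view1.png"/"view2.png" and the 0.8 ratio are placeholders.
#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

int main() {
    cv::Mat img1 = cv::imread("view1.png", cv::IMREAD_GRAYSCALE);
    cv::Mat img2 = cv::imread("view2.png", cv::IMREAD_GRAYSCALE);
    if (img1.empty() || img2.empty()) return 1;

    // Detect keypoints and compute A-KAZE's binary (MLDB) descriptors.
    cv::Ptr<cv::AKAZE> akaze = cv::AKAZE::create();
    std::vector<cv::KeyPoint> kpts1, kpts2;
    cv::Mat desc1, desc2;
    akaze->detectAndCompute(img1, cv::noArray(), kpts1, desc1);
    akaze->detectAndCompute(img2, cv::noArray(), kpts2, desc2);

    // Match descriptors between the two views (Hamming distance for binary
    // descriptors); these correspondences feed the later reconstruction steps.
    cv::BFMatcher matcher(cv::NORM_HAMMING);
    std::vector<std::vector<cv::DMatch>> knn;
    matcher.knnMatch(desc1, desc2, knn, 2);

    // Ratio test to discard ambiguous matches.
    std::vector<cv::DMatch> good;
    for (const auto& m : knn)
        if (m.size() == 2 && m[0].distance < 0.8f * m[1].distance)
            good.push_back(m[0]);

    std::printf("keypoints: %zu / %zu, good matches: %zu\n",
                kpts1.size(), kpts2.size(), good.size());
    return 0;
}
```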
Methods
A-KAZE was implemented by processing each image to build a scale space, from which the determinant of the Hessian was computed and used to locate keypoints in each image independently. Because the algorithm was intended to be integrated with the open-source COLMAP software, the keypoints were described not with the native A-KAZE descriptor but according to the SIFT pipeline. The integration step was carried out by modifying the SQLite database that COLMAP uses during its pipeline. The developed A-KAZE was compared with the OpenCV implementation by measuring how much their clusters of keypoints overlap.
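The sketch below illustrates the determinant-of-Hessian response used to locate keypoints, computed here for a single scale-space level with OpenCV derivative filters. It is a simplified stand-in only: the developed detector builds a full scale space and searches for extrema across levels, and the Gaussian smoothing, sigma value and file name are placeholder assumptions rather than the project’s actual scale-space construction.

```cpp
// Hedged sketch: determinant-of-Hessian response for one scale-space level.
// The Gaussian blur stands in for a level of the detector's scale space and
// the file name and sigma are placeholder assumptions.
#include <opencv2/opencv.hpp>
#include <cstdio>

// Response map whose local extrema (across space and scale) mark keypoints.
static cv::Mat hessianResponse(const cv::Mat& level) {
    cv::Mat Lxx, Lyy, Lxy;
    // Second-order derivatives of the smoothed level.
    cv::Sobel(level, Lxx, CV_32F, 2, 0);
    cv::Sobel(level, Lyy, CV_32F, 0, 2);
    cv::Sobel(level, Lxy, CV_32F, 1, 1);
    // det(H) = Lxx * Lyy - Lxy^2
    return Lxx.mul(Lyy) - Lxy.mul(Lxy);
}

int main() {
    cv::Mat img = cv::imread("level.png", cv::IMREAD_GRAYSCALE);
    if (img.empty()) return 1;

    cv::Mat level;
    img.convertTo(level, CV_32F, 1.0 / 255.0);
    cv::GaussianBlur(level, level, cv::Size(0, 0), 1.6);  // stand-in for one scale-space level

    cv::Mat response = hessianResponse(level);
    double minV = 0.0, maxV = 0.0;
    cv::minMaxLoc(response, &minV, &maxV);
    std::printf("response range: [%g, %g]\n", minV, maxV);
    return 0;
}
```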
Results & Discussion
For the images depicting squares, the highest average cluster overlap was 84.29% with a standard deviation of 1.48. The developed algorithm struggled to detect the corners of lighter objects on a darker background, a case the OpenCV implementation handled well. However, the OpenCV implementation reported spurious features, which was not the case for the developed method. For the circle images, the highest cluster overlap was 99.71% and the lowest 93.30%. Visualising the results, however, showed that the OpenCV implementation struggled to detect keypoints on some shades of grey, which was not the case for the developed A-KAZE. An unexpected error was encountered during the A-KAZE and COLMAP integration stage, although the developed script itself ran without errors.
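The abstract does not define the cluster-overlap measure precisely, so the sketch below is only a hypothetical proxy for it: it scores two sets of keypoints by the percentage of points in one set that have a counterpart within a small pixel radius in the other. The function name, radius and toy data are assumptions, not the study’s actual metric.

```cpp
// Hypothetical proxy (assumption, not the study's metric): score the overlap
// of two keypoint sets as the percentage of points in one set that have a
// counterpart within a small pixel radius in the other.
#include <opencv2/opencv.hpp>
#include <cmath>
#include <cstdio>
#include <vector>

static double overlapPercent(const std::vector<cv::KeyPoint>& a,
                             const std::vector<cv::KeyPoint>& b,
                             double radius = 3.0) {
    if (a.empty()) return 0.0;
    int matched = 0;
    for (const auto& ka : a) {
        for (const auto& kb : b) {
            if (std::hypot(ka.pt.x - kb.pt.x, ka.pt.y - kb.pt.y) <= radius) {
                ++matched;
                break;
            }
        }
    }
    return 100.0 * matched / static_cast<double>(a.size());
}

int main() {
    // Toy data: two detections that nearly coincide and one that does not.
    std::vector<cv::KeyPoint> developed = { {10.f, 10.f, 1.f}, {50.f, 40.f, 1.f} };
    std::vector<cv::KeyPoint> reference = { {11.f, 10.f, 1.f}, {90.f, 90.f, 1.f} };
    std::printf("overlap: %.2f%%\n", overlapPercent(developed, reference));
    return 0;
}
```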
Conclusions
The developed A-KAZE can achieve results similar to those of the widely used OpenCV implementation and, additionally, performs better in certain applications. The integration of the developed A-KAZE with the COLMAP pipeline was not successful, even though the database was updated in accordance with the COLMAP documentation.
References
[1] Y. Sun et al., ‘Wood Product Tracking Using an Improved AKAZE Method in Wood Traceability System’, IEEE Access, vol. 9, pp. 88552–88563, 2021, doi: 10.1109/ACCESS.2021.3088236.
Acknowledgements
I would like to thank Crack Map and the Advanced Forming Research Centre of the University of Strathclyde for their support.
| Original language | English |
|---|---|
| Publication status | Published - 15 Sept 2023 |
| Event | BioMedEng2023 - Swansea University, Swansea, United Kingdom. Duration: 14 Sept 2023 → 15 Sept 2023. https://biomedeng.org/biomedeng23/ |
Conference
| Conference | BioMedEng2023 |
|---|---|
| Country/Territory | United Kingdom |
| City | Swansea |
| Period | 14/09/23 → 15/09/23 |
| Internet address | |
Keywords
- prosthetic
- image processing
- health monitoring