J. Info. Comput. Sci., 19 (2024), pp. 65-80. DOI: 10.4208/JICS-2024-005
[An open-access article; the PDF is free to any online user.]
Because traditional hand-crafted feature extraction methods cannot effectively capture the overall deep information of an image, a new scene classification method based on deep learning feature fusion is proposed for remote sensing images. First, the Grey Level Co-occurrence Matrix (GLCM) and Local Binary Patterns (LBP) are used to extract shallow texture features that describe spatial correlation and local texture; second, deep image features are extracted with an AlexNet transfer learning network, in which the last fully connected layer is removed and a 256-dimensional fully connected layer is added as the feature output; finally, the two kinds of features are adaptively fused, and the remote sensing images are classified by a Grid Search optimized Support Vector Machine (GS-SVM). Experiments on the 21-class public UC Merced dataset and the 7-class RSSCN7 dataset yield average accuracies of 94.77% and 93.79%, respectively, showing that the proposed method effectively improves the classification accuracy of remote sensing image scenes.
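The abstract describes a two-branch feature pipeline. The sketch below illustrates, under assumed settings not given in the abstract, how the shallow GLCM/LBP texture descriptors and the 256-dimensional AlexNet transfer features could be computed with scikit-image and torchvision. The function names (`shallow_features`, `build_alexnet_256`, `deep_features`), the GLCM distances and angles, and the LBP neighbourhood are illustrative assumptions, not the authors' exact configuration.

```python
# Illustrative sketch (not the authors' code): shallow GLCM/LBP texture features
# and 256-D deep features from a modified AlexNet, as outlined in the abstract.
import numpy as np
import torch
import torch.nn as nn
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
from torchvision import models, transforms

def shallow_features(gray_u8):
    """gray_u8: 2-D uint8 grey-scale image. Returns GLCM statistics + LBP histogram."""
    glcm = graycomatrix(gray_u8, distances=[1, 2],                 # assumed distances
                        angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],   # assumed angles
                        levels=256, symmetric=True, normed=True)
    glcm_feat = np.hstack([graycoprops(glcm, p).ravel()
                           for p in ("contrast", "homogeneity", "energy", "correlation")])
    lbp = local_binary_pattern(gray_u8, P=8, R=1, method="uniform")  # assumed P, R
    lbp_hist, _ = np.histogram(lbp, bins=np.arange(0, 11), density=True)
    return np.hstack([glcm_feat, lbp_hist]).astype(np.float32)

def build_alexnet_256():
    """AlexNet with its last fully connected layer replaced by a 256-D output."""
    net = models.alexnet(weights="IMAGENET1K_V1")   # ImageNet-pretrained weights
    net.classifier[6] = nn.Linear(4096, 256)        # 256-D feature output, per the abstract
    net.eval()
    return net

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def deep_features(pil_rgb, net):
    """pil_rgb: PIL RGB image. Returns the 256-D deep feature vector."""
    x = preprocess(pil_rgb).unsqueeze(0)
    return net(x).squeeze(0).numpy()
```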
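For the fusion and classification stage, the abstract specifies adaptive integration of the two feature types followed by a grid-search-optimized SVM but does not detail the fusion rule. A minimal sketch follows, assuming a simple standardize-and-concatenate fusion with a tunable weight `alpha` as a stand-in for the adaptive scheme, and using scikit-learn's GridSearchCV over the RBF-SVM parameters C and gamma; the grid values and `alpha` are illustrative assumptions.

```python
# Minimal sketch (assumptions noted): weighted concatenation as a stand-in for the
# paper's adaptive fusion, followed by a grid-search-optimized SVM (GS-SVM).
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def fuse(shallow, deep, alpha=0.5):
    """shallow, deep: (n_samples, d1) and (n_samples, d2) feature matrices.
    alpha weights the shallow branch; the paper's adaptive rule is not specified here.
    (In practice the scalers should be fit on training data only.)"""
    s = StandardScaler().fit_transform(shallow)
    d = StandardScaler().fit_transform(deep)
    return np.hstack([alpha * s, (1.0 - alpha) * d])

def train_gs_svm(X, y):
    """Grid-search-optimized RBF SVM over C and gamma (grid values assumed)."""
    grid = {"C": [0.1, 1, 10, 100], "gamma": ["scale", 0.01, 0.001]}
    gs = GridSearchCV(SVC(kernel="rbf"), grid, cv=5, n_jobs=-1)
    gs.fit(X, y)
    return gs.best_estimator_, gs.best_params_

# Usage outline:
# X = fuse(shallow_matrix, deep_matrix)
# X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y)
# clf, params = train_gs_svm(X_tr, y_tr)
# accuracy = clf.score(X_te, y_te)
```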