J. Info. Comput. Sci., 17 (2022), pp. 028-039.
[An open-access article; the PDF is free to any online user.]
Benefiting from Fully Convolutional Networks (FCNs), salient object detection methods have achieved prominent performance. However, the task still faces two challenges: 1) without effective feature representation and integration, the resulting saliency maps may miss parts of the salient object or include non-salient regions; 2) owing to repeated pooling and strided operations, the predicted maps lose important spatial detail, especially along object boundaries. To address these problems, we propose the Content-aware and Edge-aware Network (CENet), which consists of three sub-modules: 1) a content-aware feature extraction module that uses a transformer block and a channel-wise attention mechanism to capture distinctive content features and suppress non-salient regions; 2) an edge-aware feature extraction module that learns boundary features and predicts the complete edge of the salient object; and 3) a feature fusion module that integrates the features from the first two modules in a learnable way. We also design a hybrid loss function that outperforms the widely used binary cross-entropy loss. Experimental results show that our method detects the complete salient object without missing object regions or including non-salient regions, and also recovers precise boundaries. On several benchmark datasets, our method achieves state-of-the-art performance.
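The abstract does not specify the terms of the hybrid loss. As a rough illustration of the general idea only (a PyTorch sketch, with `hybrid_saliency_loss` a hypothetical name, not the authors' implementation), the snippet below combines pixel-level binary cross entropy with a region-level soft-IoU term, a common way to go beyond plain BCE in salient object detection.

```python
import torch
import torch.nn.functional as F

def hybrid_saliency_loss(pred, target):
    """Illustrative hybrid loss: pixel-level BCE plus a region-level soft IoU.

    pred:   raw logits of shape (B, 1, H, W)
    target: binary ground-truth mask of the same shape
    Note: the exact components of CENet's hybrid loss are not given in the
    abstract; this sketch only shows one common BCE + IoU combination.
    """
    # Pixel-level binary cross entropy computed directly on the logits.
    bce = F.binary_cross_entropy_with_logits(pred, target)

    # Region-level soft IoU computed on the sigmoid probabilities.
    prob = torch.sigmoid(pred)
    inter = (prob * target).sum(dim=(2, 3))
    union = (prob + target - prob * target).sum(dim=(2, 3))
    iou = 1.0 - (inter + 1.0) / (union + 1.0)

    return bce + iou.mean()
```

In such combinations the BCE term supervises every pixel independently, while the IoU term penalizes structural errors over whole regions, which tends to sharpen object boundaries in the predicted saliency maps.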