|Title:||Deep Textual Searching for Visual Semantics of Personal Photo Collections with a Hybrid Similarity Measure|
|Citation:||N. Chinpanthana and T. Phiasai, "Deep Textual Searching for Visual Semantics of Personal Photo Collections with a Hybrid Similarity Measure," 2017 International Symposium on Computer Science and Intelligent Controls (ISCSIC), 2017, pp. 124-128, doi: 10.1109/ISCSIC.2017.36.|
|Abstract:||In recent years, personal photos on the internet have become an important part of people's lives. Many automatic software applications exist for searching photos, but searching still suffers from poor semantic accuracy. Research has been conducted to satisfy user demand for a semantic model using feature sets or keyword-annotation techniques. Keywords attached to photos give the best evidence of what the photos are about; however, they do not always relate to the photos' actual meaning. For this reason, we propose a textual description with a hierarchical concept and a comparison of the feature set using a hybrid similarity measure. The experimental results indicate that our proposed approach offers significant performance improvements in interpreting semantic meaning, with a maximum success rate of 80.4%.|
|Description:||2017 International Symposium on Computer Science and Intelligent Controls|
|Appears in Collections:||Research|