
Cross-modal search has become a research hotspot in recent years. In contrast to traditional cross-modal search, cross-modal information search over social networks is constrained by data quality: the text is arbitrary and the visual features are low-resolution. In addition, the semantic sparseness of cross-modal data from social networks causes the text and visual modalities to mislead each other. In this paper, we propose a cross-modal search method for social network data that capitalizes on adversarial learning (cross-modal search with
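The abstract does not describe the method itself, so as a rough illustration only: a common ingredient of adversarial cross-modal search is training a modality discriminator to tell text embeddings from image embeddings while the encoders are updated to fool it, which pushes the two modalities into a shared space. The sketch below uses synthetic 2-D "embeddings" and per-modality shift vectors as stand-ins for real encoders; all names, data, and dimensions are hypothetical and not taken from the paper.

```python
import numpy as np

# Toy adversarial modality alignment (hypothetical illustration, not the
# paper's method): a logistic discriminator separates the two modalities,
# while learned per-modality shifts are updated to fool it.
rng = np.random.default_rng(0)
text = rng.normal(loc=2.0, size=(200, 2))    # synthetic "text" embeddings
image = rng.normal(loc=-2.0, size=(200, 2))  # synthetic "image" embeddings

t_shift = np.zeros(2)     # encoder surrogate for the text modality
i_shift = np.zeros(2)     # encoder surrogate for the image modality
w, b = np.zeros(2), 0.0   # linear modality discriminator

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gap():
    """Distance between the two modality centroids in the shared space."""
    return np.linalg.norm((text + t_shift).mean(0) - (image + i_shift).mean(0))

initial_gap = gap()
lr_d, lr_e = 0.1, 0.2
for _ in range(500):
    xt, xi = text + t_shift, image + i_shift
    x = np.vstack([xt, xi])
    y = np.concatenate([np.ones(len(xt)), np.zeros(len(xi))])  # 1 = text
    p = sigmoid(x @ w + b)
    # Discriminator step: ascend the log-likelihood of the modality labels.
    g = y - p
    w = w + lr_d * x.T @ g / len(x)
    b = b + lr_d * g.mean()
    # Adversarial (gradient-reversal-style) step: move each modality so the
    # discriminator's loss on it increases, pulling the clusters together.
    pt, pi = sigmoid(xt @ w + b), sigmoid(xi @ w + b)
    t_shift = t_shift - lr_e * (1.0 - pt).mean() * w
    i_shift = i_shift + lr_e * pi.mean() * w

final_gap = gap()
print(initial_gap, final_gap)  # the centroid gap shrinks as modalities align
```

In a real system the shift vectors would be deep encoders and the discriminator a small network, but the alternating update pattern is the same.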
