Recently, cross-subject emotion recognition has attracted widespread attention. Current emotion experiments mainly use video clips of different emotions as stimulus materials, but the videos watched by different subjects are the same, which may introduce a common noise pattern into the collected data. Moreover, the traditional experimental settings for cross-subject emotion recognition models cannot eliminate the impact of identical video clips on the recognition results, which may bias the classification. In this paper, we propose a novel