
Library Services Series

Published: 2020-07-08    Source: Library

Frontier Literature Helps You Decode Natural Language Processing — Library Frontier Literature Thematic Recommendation Service (5)

Posted: 2020-04-06


Natural Language Processing (NLP) is an important branch of artificial intelligence, and the scope of its applications keeps expanding: it plays a key role in information extraction, sentiment analysis, automatic question answering, machine translation, and big-data-based personalized recommendation. Common NLP tasks include sequence labelling (e.g., word segmentation and part-of-speech tagging), classification (e.g., text classification and sentiment computing), sentence-relation judgment (e.g., question answering systems), and generation (e.g., machine translation). Drawing on leading international academic journals, we recommend the following literature on natural language processing, in the hope of jointly advancing artificial intelligence technology.
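To make the sequence-labelling framing above concrete, here is a minimal Python sketch (not taken from any of the recommended papers) that casts Chinese word segmentation as per-character B/I tagging, where B marks the start of a word and I its continuation:

# Minimal illustration of sequence labelling: word segmentation cast as
# per-character B/I tagging. The sentence and gold tags are made-up examples.

def tags_to_words(chars, tags):
    """Recover words from a character sequence and its B/I tags."""
    words = []
    for ch, tag in zip(chars, tags):
        if tag == "B" or not words:
            words.append(ch)          # start a new word
        else:
            words[-1] += ch           # extend the current word
    return words

chars = list("自然语言处理")
tags  = ["B", "I", "B", "I", "B", "I"]   # gold tags: 自然 / 语言 / 处理
print(tags_to_words(chars, tags))        # ['自然', '语言', '处理']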

This issue brings you four papers that examine applications of natural language processing across different domains.


Language independent sequence labelling for Opinion Target Extraction

Agerri, Rodrigo, et al.

ARTIFICIAL INTELLIGENCE, 2019, 268:85-95

In this research note we present a language-independent system that models Opinion Target Extraction (OTE) as a sequence labelling task. The system consists of a combination of clustering features implemented on top of a simple set of shallow local features. Experiments on the well-known Aspect-Based Sentiment Analysis (ABSA) benchmarks show that our approach is very competitive across languages, obtaining the best results for six languages across seven different datasets. Furthermore, the results provide further insight into the behaviour of clustering features for sequence labelling tasks. The system and models generated in this work are publicly available to facilitate reproducibility of the results.
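As a hedged sketch of the paper's framing, the following Python function shows what shallow local features plus a word-cluster feature might look like for a B/I/O opinion-target tagger; the cluster table and feature names are illustrative stand-ins, not the authors' exact feature set:

# Hypothetical cluster IDs, e.g. learned offline with Brown clustering.
WORD_CLUSTERS = {"battery": "c17", "screen": "c17", "great": "c04"}

def token_features(tokens, i):
    """Shallow local features for token i, plus its word-cluster ID."""
    w = tokens[i]
    return {
        "word": w.lower(),
        "suffix3": w[-3:],
        "is_title": w.istitle(),
        "prev": tokens[i - 1].lower() if i > 0 else "<s>",
        "next": tokens[i + 1].lower() if i + 1 < len(tokens) else "</s>",
        "cluster": WORD_CLUSTERS.get(w.lower(), "c_unk"),
    }

tokens = "The battery life is great".split()
print(token_features(tokens, 1))   # features for "battery"

A feature dictionary like this would typically be fed to a linear sequence tagger (e.g., a CRF or structured perceptron), with the cluster feature supplying the cross-lingual generalization the paper relies on.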


Aspect-based sentiment analysis with alternating coattention networks

Yang, Chao, et al.

INFORMATION PROCESSING & MANAGEMENT, 2019, 56(3):463-478

Aspect-based sentiment analysis aims to predict the sentiment polarities of specific targets in a given text. Recent research shows great interest in modeling the target and context with attention networks to obtain more effective feature representations for the sentiment classification task. However, computing the attention scores for the context from an averaged target vector is unfair, and the interaction mechanism is simple and needs further improvement. To solve these problems, this paper first proposes a coattention mechanism that alternately models target-level and context-level attention, focusing on the key words of targets to learn a more effective context representation. On this basis, we implement a Coattention-LSTM network that learns nonlinear representations of context and target simultaneously and extracts more effective sentiment features from the coattention mechanism. Further, a Coattention-MemNet network adopting a multi-hop coattention mechanism is proposed to improve the sentiment classification results. Finally, we propose a new location-weighted function that uses location information to enhance the performance of the coattention mechanism. Extensive experiments on two public datasets demonstrate the effectiveness of all proposed methods, and our findings provide new insight for future work on attention mechanisms and deep neural networks for aspect-based sentiment analysis.
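The alternating coattention idea can be sketched in a few lines of NumPy: first attend over the target words conditioned on an averaged context, then attend over the context conditioned on the resulting target vector. The dot-product scoring and random vectors below are simplifications, not the paper's actual networks:

import numpy as np

def attend(query, keys):
    """Softmax dot-product attention: weight keys by similarity to query."""
    scores = keys @ query                       # (n,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ keys                       # weighted sum, shape (d,)

rng = np.random.default_rng(0)
d = 8
context = rng.normal(size=(5, d))   # hidden states of 5 context words
target  = rng.normal(size=(2, d))   # hidden states of a 2-word target

# Target-level attention: summarize the target w.r.t. the mean context,
# instead of a plain average that treats all target words equally.
target_vec  = attend(context.mean(axis=0), target)
# Context-level attention: re-read the context guided by that target vector.
context_vec = attend(target_vec, context)
print(context_vec.shape)            # (8,) -> feature for classification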


Hybrid query expansion using lexical resources and word embeddings for sentence retrieval in question answering

Esposito, Massimo, et al.

INFORMATION SCIENCES, 2020, 514:88-105

Question Answering (QA) systems based on Information Retrieval return precise answers to natural language questions by extracting relevant sentences from document collections. However, questions and sentences may not align terminologically, producing errors in sentence retrieval. To improve the effectiveness of retrieving relevant sentences from documents, this paper proposes a hybrid Query Expansion (QE) approach for QA systems, based on lexical resources and word embeddings. In detail, synonyms and hypernyms of relevant terms occurring in the question are first extracted from MultiWordNet and then contextualized to the document collection used in the QA system. Finally, the resulting set is ranked and filtered on the basis of the wording and sense of the question, employing a semantic similarity metric built on top of a Word2Vec model. The latter is locally trained on an extended corpus pertaining to the same topic as the documents used in the QA system. This QE approach is implemented in an existing QA system and experimentally evaluated, with respect to different possible configurations and selected baselines, for the Italian language in the Cultural Heritage domain, assessing its effectiveness in retrieving sentences containing proper answers to questions from four different categories.
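The expand-then-filter pipeline can be sketched as follows; the tiny synonym lexicon and hand-made vectors stand in for MultiWordNet and the locally trained Word2Vec model, so treat this as an assumption-laden illustration of the ranking-and-filtering step only:

import numpy as np

LEXICON = {  # hypothetical synonym/hypernym lists (stand-in for MultiWordNet)
    "painting": ["picture", "artwork", "canvas"],
}
VECTORS = {  # hypothetical word embeddings (stand-in for a local Word2Vec)
    "painting": np.array([0.9, 0.1, 0.2]),
    "picture":  np.array([0.8, 0.2, 0.1]),
    "artwork":  np.array([0.7, 0.3, 0.3]),
    "canvas":   np.array([0.1, 0.9, 0.4]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def expand(term, threshold=0.8):
    """Rank lexical candidates by embedding similarity; drop weak ones."""
    cands = LEXICON.get(term, [])
    scored = [(c, cosine(VECTORS[term], VECTORS[c])) for c in cands]
    scored.sort(key=lambda x: -x[1])
    return [c for c, s in scored if s >= threshold]

print(expand("painting"))   # 'canvas' is filtered out as off-sense

The design point the paper makes is exactly this division of labor: the lexical resource proposes candidates with high recall, and the corpus-trained embeddings prune those that do not fit the question's sense.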


Exploiting reverse target-side contexts for neural machine translation via asynchronous bidirectional decoding

Su, Jinsong, et al.

ARTIFICIAL INTELLIGENCE, 2019, 277

Based on a unified encoder-decoder framework with an attention mechanism, neural machine translation (NMT) models have attracted much attention and become the mainstream in the machine translation community. Generally, NMT decoders produce the translation in a left-to-right way. As a result, only left-to-right target-side contexts from the generated translation are exploited, while right-to-left target-side contexts remain completely unexploited. In this paper, we extend the conventional attentional encoder-decoder NMT framework by introducing a backward decoder, in order to explore asynchronous bidirectional decoding for NMT. In a first step after encoding, our backward decoder learns to generate the target-side hidden states in a right-to-left manner. Then, at each timestep of translation prediction, our forward decoder concurrently considers both the source-side and the reverse target-side hidden states via two attention models. Compared with previous models, this architecture enables our model to fully exploit contexts from both the source and target sides, which improves translation quality. We conducted experiments on NIST Chinese-English, WMT English-German and Finnish-English translation tasks to investigate the effectiveness of our model. Experimental results show that (1) our improved RNN-based NMT model achieves significant improvements over the conventional RNNSearch by 1.44/-3.02, 1.11/-1.01, and 1.23/-1.27 average BLEU and TER points, respectively; and (2) our enhanced Transformer outperforms the standard Transformer by 1.56/-1.49, 1.76/-2.49, and 1.29/-1.33 average BLEU and TER points, respectively.
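A toy NumPy sketch of the asynchronous bidirectional decoding loop: a backward pass is assumed to have already produced right-to-left target hidden states, and each forward decoding step attends to both the source states and those reverse states through two attention calls. Random states and a tanh update stand in for the trained RNN/Transformer components:

import numpy as np

def attend(query, states):
    """Softmax dot-product attention over a matrix of hidden states."""
    scores = states @ query
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ states

rng = np.random.default_rng(1)
d = 8
src_states = rng.normal(size=(6, d))   # encoder hidden states
bwd_states = rng.normal(size=(4, d))   # backward decoder states (right-to-left)

h = np.zeros(d)                        # forward decoder state
for t in range(4):                     # forward, left-to-right decoding
    src_ctx = attend(h, src_states)    # attention over the source side
    rev_ctx = attend(h, bwd_states)    # attention over reverse target side
    h = np.tanh(src_ctx + rev_ctx + h) # toy state update using both contexts
print(h.shape)                         # (8,)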
