Prof. Sang-goo Lee's Group Has Two Papers Accepted at AAAI 2019
Prof. Sang-goo Lee's research group (Intelligent Data Systems Laboratory, http://ids.snu.ac.kr ) will present two papers at AAAI, one of the top international conferences designated by the Department of Computer Science and Engineering. The group also published a paper at the same conference last year (AAAI 2018), making this the second consecutive year it has published at a premier conference in artificial intelligence.
1. Kang Min Yoo, Youhyun Shin, and Sang-goo Lee. "Data Augmentation for Spoken Language Understanding via Joint Variational Generation." (arXiv:1809.02305).
Abstract: Data scarcity is one of the main obstacles to domain adaptation in spoken language understanding (SLU) due to the high cost of creating manually tagged SLU datasets. Recent work on neural text generative models, particularly latent variable models such as the variational autoencoder (VAE), has shown promising results in generating plausible and natural sentences. In this paper, we propose a novel generative architecture which leverages the generative power of latent variable models to jointly synthesize fully annotated utterances. Our experiments show that existing SLU models trained on the additional synthetic examples achieve performance gains. Our approach not only helps alleviate the data scarcity issue in the SLU task for many datasets but also consistently improves language understanding performance for various SLU models, supported by extensive experiments and rigorous statistical testing.
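The core idea of the paper, sampling a latent code and decoding it into an utterance together with its slot annotations, can be sketched minimally as follows. This is an illustrative toy with randomly initialized numpy parameters standing in for a trained model; all dimensions and names (`W_enc`, `decode_jointly`, etc.) are assumptions for exposition, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper)
VOCAB, TAGS, HID, LAT, SEQ = 50, 8, 16, 4, 6

# Randomly initialized parameters standing in for a trained model
W_enc = rng.normal(size=(HID, 2 * LAT)) * 0.1   # encoder -> (mu, log_var)
W_word = rng.normal(size=(LAT, VOCAB)) * 0.1    # latent -> word logits
W_tag = rng.normal(size=(LAT, TAGS)) * 0.1      # latent -> slot-tag logits

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def encode(h):
    """Map an utterance encoding h to a Gaussian posterior (mu, log_var)."""
    params = h @ W_enc
    return params[:LAT], params[LAT:]

def reparameterize(mu, log_var):
    """Sample z = mu + sigma * eps (the VAE reparameterization trick)."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def decode_jointly(z):
    """Decode one latent z into aligned word and slot-tag distributions,
    one pair per time step, so every synthetic utterance comes out
    fully annotated. (A real decoder would be autoregressive, e.g. an
    RNN conditioning on previous tokens; here each step is tiled.)"""
    words = softmax(np.tile(z @ W_word, (SEQ, 1)))
    tags = softmax(np.tile(z @ W_tag, (SEQ, 1)))
    return words, tags

h = rng.normal(size=HID)          # stand-in utterance encoding
mu, log_var = encode(h)
z = reparameterize(mu, log_var)
words, tags = decode_jointly(z)
```

Because words and tags are decoded from the same latent z, each sampled utterance arrives already paired with its annotations, which is what makes the synthetic data directly usable for SLU training.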
2. Taeuk Kim, Jihun Choi, Daniel Edmiston, Sanghwan Bae, and Sang-goo Lee. "Dynamic Compositionality in Recursive Neural Networks with Structure-aware Tag Representations." (arXiv:1809.02286).
Abstract: Most existing recursive neural network (RvNN) architectures utilize only the structure of parse trees, ignoring syntactic tags which are provided as by-products of parsing. We present a novel RvNN architecture that can provide dynamic compositionality by considering comprehensive syntactic information derived from both the structure and linguistic tags. Specifically, we introduce a structure-aware tag representation constructed by a separate tag-level tree-LSTM. With this, we can control the composition function of the existing word-level tree-LSTM by augmenting the representation as a supplementary input to the gate functions of the tree-LSTM. We show that models built upon the proposed architecture obtain superior performance on several sentence-level tasks such as sentiment analysis and natural language inference when compared against previous tree-structured models and other sophisticated neural models. In particular, our models achieve new state-of-the-art results on Stanford Sentiment Treebank, Movie Review, and Text Retrieval Conference datasets.
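The gating mechanism described above, feeding a tag representation into the gates of a binary tree-LSTM composition, can be sketched as follows. This is a minimal numpy illustration with random parameters; the dimensions and the name `compose` are assumptions for exposition, and the tag vector here is a stand-in for the paper's structure-aware representation produced by a separate tag-level tree-LSTM.

```python
import numpy as np

rng = np.random.default_rng(1)
H, T = 8, 4  # hidden size, tag-representation size (illustrative)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One parameter matrix per gate; each gate reads [h_left; h_right; tag].
def make_gate():
    return rng.normal(size=(2 * H + T, H)) * 0.1

W_i, W_fl, W_fr, W_o, W_u = (make_gate() for _ in range(5))

def compose(left, right, tag_rep):
    """Binary tree-LSTM composition whose gates also see a tag
    representation, letting syntactic information modulate how the
    two children are combined (dynamic compositionality)."""
    hl, cl = left
    hr, cr = right
    x = np.concatenate([hl, hr, tag_rep])  # tag as supplementary gate input
    i = sigmoid(x @ W_i)        # input gate
    fl = sigmoid(x @ W_fl)      # forget gate for the left child
    fr = sigmoid(x @ W_fr)      # forget gate for the right child
    o = sigmoid(x @ W_o)        # output gate
    u = np.tanh(x @ W_u)        # candidate cell state
    c = i * u + fl * cl + fr * cr
    h = o * np.tanh(c)
    return h, c

leaf = lambda: (rng.normal(size=H), rng.normal(size=H))
tag_rep = rng.normal(size=T)    # e.g. a representation for an NP node
h, c = compose(leaf(), leaf(), tag_rep)
```

Since every gate conditions on the tag vector, two subtrees with identical child states but different syntactic labels can be composed differently, which is the effect the abstract describes.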