2024
TEII: Think, Explain, Interact and Iterate with Large Language Models to Solve Cross-lingual Emotion Detection
Long Cheng | Qihao Shao | Christine Zhao | Sheng Bi | Gina-Anne Levow
Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, & Social Media Analysis
Cross-lingual emotion detection allows us to analyze global trends, public opinion, and social phenomena at scale. We participated in the Explainability of Cross-lingual Emotion Detection (EXALT) shared task, achieving an F1-score of 0.6046 on the evaluation set for the emotion detection sub-task. Our system outperformed the baseline by more than 0.16 absolute F1-score and ranked second among competing systems. We conducted experiments with fine-tuning, zero-shot learning, and few-shot learning for Large Language Model (LLM)-based models, as well as embedding-based BiLSTM and KNN for non-LLM-based techniques. Additionally, we introduced two novel methods: the Multi-Iteration Agentic Workflow and the Multi-Binary-Classifier Agentic Workflow. We found that LLM-based approaches performed well on multilingual emotion detection. Furthermore, ensembles combining all of the models we experimented with yielded higher F1-scores than any single approach alone.
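As a loose illustration of the ensembling step described above, the sketch below combines per-model emotion labels by majority vote. The model names, labels, and tie-breaking rule are assumptions for the example, not the submitted system.

from collections import Counter

def majority_vote(predictions: list[list[str]]) -> list[str]:
    # predictions[m][i] is the emotion label model m assigns to text i.
    # Ties fall to the label seen first across the model list (Counter
    # preserves insertion order), an arbitrary but deterministic choice.
    ensembled = []
    for labels in zip(*predictions):  # labels for one text, across models
        ensembled.append(Counter(labels).most_common(1)[0][0])
    return ensembled

# Three hypothetical systems voting on two texts.
fine_tuned_llm = ["joy", "anger"]
few_shot_llm   = ["joy", "fear"]
bilstm         = ["sadness", "anger"]
print(majority_vote([fine_tuned_llm, few_shot_llm, bilstm]))  # ['joy', 'anger']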
PRIMO: Progressive Induction for Multi-hop Open Rule Generation
Jianyu Liu | Sheng Bi | Guilin Qi
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Open rules refer to implications from premise atoms to hypothesis atoms, capturing various relationships between instances in the real world. Injecting open-rule knowledge into models helps improve performance on downstream tasks such as dialogue and relation extraction. Existing approaches focus on single-hop open rule generation and ignore scenarios involving multiple hops, leading to logical inconsistencies between premise and hypothesis atoms, as well as semantic duplication of generated rule atoms. To address these issues, we propose a progressive multi-stage open rule generation method called PRIMO. We introduce ontology information during the rule generation stage to reduce ambiguity and improve rule accuracy. PRIMO constructs a multi-stage structure consisting of generation, extraction, and ranking modules to fully leverage the latent knowledge within the language model across multiple dimensions. Furthermore, we employ reinforcement learning from human feedback to further optimize the model, enhancing its understanding of commonsense knowledge. Experimental results demonstrate that, compared to baseline models, PRIMO significantly enhances rule quality and diversity while reducing the repetition rate of rule atoms.
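The toy skeleton below illustrates the kind of generate-extract-rank staging the abstract describes. The stage bodies are placeholders (canned generation, string parsing, a lexical-overlap ranker), not PRIMO's actual models.

def generate(premise: str) -> list[str]:
    # Stage 1 (generation): PRIMO prompts a language model with the premise
    # atom plus ontology type information; here, canned strings stand in.
    return [f"{premise} -> hypothesis_{i}" for i in range(3)]

def extract(raw: list[str]) -> list[tuple[str, str]]:
    # Stage 2 (extraction): parse raw model output into (premise, hypothesis) atoms.
    return [tuple(line.split(" -> ")) for line in raw]

def rank(rules: list[tuple[str, str]]) -> list[tuple[str, str]]:
    # Stage 3 (ranking): order candidates by a quality score; a dummy
    # lexical-overlap heuristic stands in for the learned ranker.
    return sorted(rules,
                  key=lambda r: len(set(r[0].split()) & set(r[1].split())),
                  reverse=True)

def pipeline(premise: str) -> list[tuple[str, str]]:
    return rank(extract(generate(premise)))

print(pipeline("PersonX is born in PlaceY"))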
2021
Adaptive Knowledge-Enhanced Bayesian Meta-Learning for Few-shot Event Detection
Shirong Shen | Tongtong Wu | Guilin Qi | Yuan-Fang Li | Gholamreza Haffari | Sheng Bi
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
Simple or Complex? Complexity-controllable Question Generation with Soft Templates and Deep Mixture of Experts Model
Sheng Bi | Xiya Cheng | Yuan-Fang Li | Lizhen Qu | Shirong Shen | Guilin Qi | Lu Pan | Yinlin Jiang
Findings of the Association for Computational Linguistics: EMNLP 2021
The ability to generate natural-language questions with controlled complexity levels is highly desirable, as it further expands the applicability of question generation. In this paper, we propose an end-to-end neural complexity-controllable question generation model, which incorporates a mixture of experts (MoE) as the selector of soft templates to improve the accuracy of complexity control and the quality of generated questions. The soft templates capture question similarity while avoiding the expensive construction of actual templates. Our method introduces a novel, cross-domain complexity estimator to assess the complexity of a question, taking into account the passage, the question, the answer, and their interactions. Experimental results on two benchmark QA datasets demonstrate that our QG model is superior to state-of-the-art methods in both automatic and manual evaluation. Moreover, our complexity estimator is significantly more accurate than the baselines in both in-domain and out-of-domain settings.
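A minimal PyTorch sketch of a mixture-of-experts selector over soft templates, in the spirit of the abstract: each template is a learned embedding, and a gating network mixes them conditioned on a pooled encoder state. All dimensions and wiring here are assumptions, not the paper's architecture.

import torch
import torch.nn as nn

class SoftTemplateMoE(nn.Module):
    def __init__(self, hidden_dim: int, num_templates: int):
        super().__init__()
        # Each "expert" is a learned soft-template embedding (assumed form).
        self.templates = nn.Parameter(torch.randn(num_templates, hidden_dim))
        self.gate = nn.Linear(hidden_dim, num_templates)

    def forward(self, enc_state: torch.Tensor) -> torch.Tensor:
        # enc_state: (batch, hidden_dim) pooled encoder representation.
        weights = torch.softmax(self.gate(enc_state), dim=-1)  # (batch, K)
        # Soft selection: a convex combination of template embeddings.
        return weights @ self.templates                        # (batch, hidden_dim)

moe = SoftTemplateMoE(hidden_dim=256, num_templates=8)
print(moe(torch.randn(4, 256)).shape)  # torch.Size([4, 256])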
2020
Hierarchical Chinese Legal event extraction via Pedal Attention Mechanism
Shirong Shen | Guilin Qi | Zhen Li | Sheng Bi | Lusheng Wang
Proceedings of the 28th International Conference on Computational Linguistics
Event extraction plays an important role in legal applications, including case push and auxiliary judgment. However, the traditional event structure cannot express the connections between arguments, which are extremely important in legal events. Therefore, this paper defines a dynamic event structure for Chinese legal events. To distinguish between similar events, we design hierarchical event features for event detection. Moreover, to address the problems of long-distance semantic dependence and anaphora resolution in argument classification, we propose a novel pedal attention mechanism that extracts the semantic relation between two words through their dependent adjacent words. We annotate a Chinese legal event dataset and evaluate our model on it. Experimental results demonstrate that our model surpasses other state-of-the-art models.
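One hedged reading of attention "through dependent adjacent words" is sketched below: the score for a word pair is pooled from attention between their dependency-tree neighbors. This is an assumption-laden illustration, not the paper's pedal attention definition.

import torch

def neighbor_mediated_score(h: torch.Tensor,
                            neighbors: dict[int, list[int]],
                            i: int, j: int) -> torch.Tensor:
    # h: (seq_len, dim) contextual word representations.
    # neighbors[k]: indices of words adjacent to word k in the dependency tree.
    # The pair score averages scaled dot-product attention between every
    # neighbor of i and every neighbor of j (a hypothetical stand-in).
    ni = h[neighbors[i]]                      # (|N(i)|, dim)
    nj = h[neighbors[j]]                      # (|N(j)|, dim)
    attn = ni @ nj.T / h.size(-1) ** 0.5      # pairwise scaled dot products
    return attn.mean()

h = torch.randn(6, 16)
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(neighbor_mediated_score(h, neighbors, 1, 4))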
Knowledge-enriched, Type-constrained and Grammar-guided Question Generation over Knowledge Bases
Sheng Bi | Xiya Cheng | Yuan-Fang Li | Yongzhen Wang | Guilin Qi
Proceedings of the 28th International Conference on Computational Linguistics
Question generation over knowledge bases (KBQG) aims at generating natural-language questions about a subgraph, i.e., a set of triples. Two main challenges still face the current crop of encoder-decoder-based methods, especially on small subgraphs: (1) low diversity and poor fluency due to the limited information contained in the subgraphs, and (2) semantic drift due to the decoder losing track of the semantics of the answer entity. We propose an innovative knowledge-enriched, type-constrained and grammar-guided KBQG model, named KTG, to address the above challenges. In our model, the encoder is equipped with auxiliary information from the KB, and the decoder is constrained with word types during QG. Specifically, entity domain and description, as well as relation hierarchy information, are considered to construct question contexts, while a conditional copy mechanism is incorporated to modulate question semantics according to current word types. Besides, a novel reward function featuring grammatical similarity is designed to improve both generative richness and syntactic correctness via reinforcement learning. Extensive experiments show that our proposed model outperforms existing methods by a significant margin on two widely-used benchmark datasets, SimpleQuestions and PathQuestion.
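For context on the conditional copy mechanism mentioned above, here is a generic pointer-generator-style mixing step. KTG modulates the generate-vs-copy decision by the current word type; the exact shapes and names below are assumptions for illustration, not the paper's formulation.

import torch

def mix_copy_and_generate(p_vocab: torch.Tensor,   # (batch, vocab) generation dist.
                          copy_attn: torch.Tensor, # (batch, src_len) source attention
                          src_ids: torch.Tensor,   # (batch, src_len) source token ids
                          p_gen: torch.Tensor      # (batch, 1) generate-vs-copy switch
                          ) -> torch.Tensor:
    p_copy = torch.zeros_like(p_vocab)
    p_copy.scatter_add_(1, src_ids, copy_attn)  # project attention onto the vocab
    # In the paper's spirit, p_gen would be modulated by the current word type.
    return p_gen * p_vocab + (1.0 - p_gen) * p_copy

# Toy usage with a 10-word vocabulary and a 4-token source.
p_vocab   = torch.softmax(torch.randn(2, 10), dim=-1)
copy_attn = torch.softmax(torch.randn(2, 4), dim=-1)
src_ids   = torch.randint(0, 10, (2, 4))
p_gen     = torch.sigmoid(torch.randn(2, 1))
print(mix_copy_and_generate(p_vocab, copy_attn, src_ids, p_gen).sum(dim=-1))  # rows sum to 1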