2024
Sign Language Translation with Gloss Pair Encoding
Taro Miyazaki, Sihan Tan, Tsubasa Uchida, Hiroyuki Kaneko
Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources
SEDA: Simple and Effective Data Augmentation for Sign Language Understanding
Sihan Tan, Taro Miyazaki, Katsutoshi Itoyama, Kazuhiro Nakadai
Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources
HamNoSys-based Motion Editing Method for Sign Language
Tsubasa Uchida, Taro Miyazaki, Hiroyuki Kaneko
Proceedings of the LREC-COLING 2024 11th Workshop on the Representation and Processing of Sign Languages: Evaluation of Sign Language Resources
Understanding How Positional Encodings Work in Transformer Model
Taro Miyazaki, Hideya Mino, Hiroyuki Kaneko
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Transformer models are used both in general-purpose settings such as pre-trained language models and in specific tasks such as machine translation. Such models rely mainly on positional encodings (PEs) to handle the sequential order of input vectors. PEs come in several variants, notably absolute and relative, and several studies have reported the superiority of relative PEs. In this paper, we analyze, through a series of experiments, where in a transformer model PEs take effect and how absolute and relative PEs differ in their characteristics. Experimental results indicate that PEs work in both the self- and cross-attention blocks of a transformer model, and that PEs should be added only to the query and key of the attention mechanism, not to the value. We also found that combining two PEs, a relative PE in the self-attention block and an absolute PE in the cross-attention block, can improve translation quality.
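The paper's central finding, adding PEs to the query and key but not to the value, can be illustrated with a minimal numpy sketch of a single attention head. The sinusoidal absolute PE, the absence of learned projections, and the single-head setup are simplifying assumptions for illustration, not details taken from the paper:

```python
import numpy as np

def sinusoidal_pe(length, d_model):
    """Standard absolute sinusoidal positional encodings."""
    pos = np.arange(length)[:, None]
    i = np.arange(d_model)[None, :]
    angle = pos / np.power(10000, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angle), np.cos(angle))

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention_pe_qk_only(x, d_model):
    """Self-attention where the PE is added to the query and key
    inputs but NOT to the value, following the paper's finding."""
    pe = sinusoidal_pe(x.shape[0], d_model)
    q = x + pe          # position-aware query
    k = x + pe          # position-aware key
    v = x               # value stays position-free
    scores = softmax(q @ k.T / np.sqrt(d_model))
    return scores @ v

x = np.random.randn(5, 8)  # 5 tokens, d_model = 8
out = attention_pe_qk_only(x, 8)
print(out.shape)  # (5, 8)
```

Keeping the value position-free means positional information only steers where attention looks, not what content is mixed into the output.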
2023
Leveraging the Fusion-in-Decoder for Label Classification
Azumi Okuda, Hideya Mino, Taro Miyazaki, Jun Goto
Proceedings of the Second Workshop on Information Extraction from Scientific Publications
2020
NHK_STRL at WNUT-2020 Task 2: GATs with Syntactic Dependencies as Edges and CTC-based Loss for Text Classification
Yuki Yasuda, Taichi Ishiwatari, Taro Miyazaki, Jun Goto
Proceedings of the Sixth Workshop on Noisy User-generated Text (W-NUT 2020)
The outbreak of COVID-19 has greatly impacted our daily lives. In these circumstances, it is important to grasp the latest information to avoid causing too much fear and panic, and extracting information from social networking sites is one effective way to do so. In this paper, we describe a method to identify whether a tweet related to COVID-19 is informative or not, which can help in grasping new information. The key features of our method are its use of graph attention networks to encode syntactic dependencies and word positions in the sentence, and a loss function based on connectionist temporal classification (CTC) that can learn a label for each token without reference data for each token. Experimental results show that the proposed method achieved an F1 score of 0.9175, outperforming baseline methods.
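The first key feature, graph attention restricted to syntactic-dependency edges, can be sketched in numpy. The toy adjacency matrix, single head, and tanh output activation below are illustrative assumptions, not details from the paper:

```python
import numpy as np

def softmax_masked(scores, mask):
    """Row-wise softmax restricted to positions where mask is True."""
    scores = np.where(mask, scores, -1e9)
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def gat_layer(h, adj, W, a):
    """One graph-attention layer whose attention is limited to the
    syntactic-dependency edges given by adj."""
    z = h @ W
    n = z.shape[0]
    # attention logits e_ij = LeakyReLU(a^T [z_i ; z_j])
    logits = np.array([[a @ np.concatenate([z[i], z[j]]) for j in range(n)]
                       for i in range(n)])
    logits = np.where(logits > 0, logits, 0.2 * logits)  # LeakyReLU
    alpha = softmax_masked(logits, adj)  # attend only along edges
    return np.tanh(alpha @ z)

rng = np.random.default_rng(0)
h = rng.normal(size=(4, 6))  # 4 tokens, 6-dim features
# dependency arcs of a toy 4-token sentence, self-loops on the diagonal
adj = np.array([[1, 1, 0, 0],
                [1, 1, 1, 0],
                [0, 1, 1, 1],
                [0, 0, 1, 1]], dtype=bool)
W = rng.normal(size=(6, 6))
a = rng.normal(size=12)
out = gat_layer(h, adj, W, a)
print(out.shape)  # (4, 6)
```

Masking the softmax to the dependency arcs is what turns a generic attention layer into one that encodes the sentence's syntactic structure.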
Machine Translation from Spoken Language to Sign Language using Pre-trained Language Model as Encoder
Taro Miyazaki, Yusuke Morita, Masanori Sano
Proceedings of the LREC2020 9th Workshop on the Representation and Processing of Sign Languages: Sign Language Resources in the Service of the Language Community, Technological Challenges and Application Perspectives
Sign language is the first language of those who were born deaf or lost their hearing in early childhood, so such individuals require services provided in sign language. Achieving flexible, open-domain services in sign language requires machine translation into sign language. Machine translation generally requires large-scale training corpora, but only small corpora exist for sign language. To overcome this data shortage, we developed a method that uses a pre-trained language model of spoken language as the initial model of the encoder of the machine translation model. We evaluated our method against baseline methods, including phrase-based machine translation, using only 130,000 phrase pairs of training data. Our method outperformed the baselines, and we found that one source of translation error is pointing, a feature specific to sign language. We also conducted trials to improve the translation quality for pointing. The results were somewhat disappointing, so we believe there is still room to improve translation quality, especially for pointing.
Relation-aware Graph Attention Networks with Relational Position Encodings for Emotion Recognition in Conversations
Taichi Ishiwatari, Yuki Yasuda, Taro Miyazaki, Jun Goto
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Interest in emotion recognition in conversations (ERC) has been increasing in various fields, because it can be used to analyze user behaviors and detect fake news. Many recent ERC methods use graph-based neural networks to take the relationships between the utterances of the speakers into account. In particular, the state-of-the-art method considers self- and inter-speaker dependencies in conversations by using relational graph attention networks (RGAT). However, graph-based neural networks do not take sequential information into account. In this paper, we propose relational position encodings that provide RGAT with sequential information reflecting the relational graph structure. Accordingly, our RGAT model can capture both the speaker dependency and the sequential information. Experiments on four ERC datasets show that our model is beneficial to recognizing emotions expressed in conversations. In addition, our approach empirically outperforms the state-of-the-art on all of the benchmark datasets.
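A simplified sketch of the relational position idea: suppose (hypothetically) that the position of utterance j relative to utterance i is encoded as a pair of the relation type (same-speaker vs. inter-speaker) and the clipped sequential offset, which would then index learned embeddings inside the RGAT layers. The clipping bound and the two-relation setup are illustrative assumptions:

```python
def relational_position(i, j, speakers, max_dist=3):
    """Encode the position of utterance j relative to utterance i as a
    (relation, clipped offset) pair: the relation distinguishes
    same-speaker from inter-speaker edges, and the offset is the
    sequential distance clipped to [-max_dist, max_dist]."""
    relation = "self" if speakers[i] == speakers[j] else "inter"
    offset = max(-max_dist, min(max_dist, j - i))
    return relation, offset

speakers = ["A", "B", "A", "B", "A"]
print(relational_position(0, 2, speakers))              # ('self', 2)
print(relational_position(0, 1, speakers))              # ('inter', 1)
print(relational_position(4, 0, speakers, max_dist=3))  # ('self', -3)
```

Because the offset is kept per relation type, the model can learn that "my own previous utterance" and "the other speaker's previous utterance" carry different emotional cues even at the same sequential distance.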
2019
Label Embedding using Hierarchical Structure of Labels for Twitter Classification
Taro Miyazaki, Kiminobu Makino, Yuka Takei, Hiroki Okamoto, Jun Goto
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)
Twitter is used for various applications such as disaster monitoring and news material gathering. In these applications, each Tweet is classified into pre-defined classes. These classes are semantically related to each other and can be organized into a hierarchical structure, which provides important information for classification. The label texts of the pre-defined classes themselves also contain important clues. We therefore propose a method that considers both the hierarchical structure of labels and the label texts themselves. We evaluated our method on the Text REtrieval Conference (TREC) 2018 Incident Streams (IS) track dataset and found that it outperformed the methods of the conference participants.
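One simple way to exploit a label hierarchy is sketched below, under the hypothetical assumption that a label is embedded as the mean of the embeddings along its path to the root, so sibling classes share their ancestors' representation. The toy hierarchy and label names are illustrative, not from the dataset:

```python
import numpy as np

# hypothetical label hierarchy: child -> parent (None marks the root)
parent = {"flood": "disaster", "fire": "disaster", "disaster": None}

def path_to_root(label, parent):
    """Collect a label and all of its ancestors up to the root."""
    path = []
    while label is not None:
        path.append(label)
        label = parent[label]
    return path

def hierarchical_embedding(label, parent, emb):
    """Embed a label as the mean of the embeddings on its path to the
    root, so sibling classes share ancestor information."""
    vecs = [emb[l] for l in path_to_root(label, parent)]
    return np.mean(vecs, axis=0)

rng = np.random.default_rng(0)
emb = {l: rng.normal(size=4) for l in parent}
v_flood = hierarchical_embedding("flood", parent, emb)
v_fire = hierarchical_embedding("fire", parent, emb)
print(v_flood.shape)  # (4,)
```

Sharing the "disaster" component pulls the embeddings of "flood" and "fire" toward each other, which mirrors the intuition that hierarchically close classes should be treated similarly by the classifier.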
Mining Tweets that refer to TV programs with Deep Neural Networks
Takeshi Kobayakawa, Taro Miyazaki, Hiroki Okamoto, Simon Clippingdale
Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019)
The automatic analysis of expressions of opinion has been well studied in the opinion-mining area, but robustness to user-generated texts remains a problem. Consumer-generated texts are valuable because they contain a great number and wide variety of user evaluations, but spelling inconsistency and the variety of expressions make analysis difficult. To tackle such situations, we applied a model reported to handle context well across many natural language processing areas to the problem of extracting references to the opinion target from text. Experiments on tweets that refer to television programs show that the model can extract such references with more than 90% accuracy.
2018
Twitter Geolocation using Knowledge-Based Methods
Taro Miyazaki, Afshin Rahimi, Trevor Cohn, Timothy Baldwin
Proceedings of the 2018 EMNLP Workshop W-NUT: The 4th Workshop on Noisy User-generated Text
Automatic geolocation of microblog posts from their text content is particularly difficult because many location-indicative terms are rare terms, notably entity names such as locations, people or local organisations. Their low frequency means that key terms observed in testing are often unseen in training, such that standard classifiers are unable to learn weights for them. We propose a method for reasoning over such terms using a knowledge base, through exploiting their relations with other entities. Our technique uses a graph embedding over the knowledge base, which we couple with a text representation to learn a geolocation classifier, trained end-to-end. We show that our method improves over purely text-based methods, which we ascribe to more robust treatment of low-count and out-of-vocabulary entities.
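A minimal sketch of coupling a text representation with knowledge-base graph embeddings of the mentioned entities. The bag-of-words features, the mean-pooling over entity vectors, and all names are illustrative assumptions; the paper learns the coupled representation end-to-end rather than with fixed features:

```python
import numpy as np

# hypothetical pre-trained knowledge-base entity embeddings
kb_emb = {"Melbourne_Cricket_Ground": np.array([0.9, 0.1]),
          "Flinders_Street": np.array([0.8, 0.2])}

def text_features(tokens, vocab):
    """Bag-of-words vector over a small vocabulary."""
    v = np.zeros(len(vocab))
    for t in tokens:
        if t in vocab:
            v[vocab[t]] += 1.0
    return v

def post_representation(tokens, entities, vocab):
    """Concatenate a text representation with the mean KB graph
    embedding of the entities mentioned in the post; a geolocation
    classifier would take this combined vector as input."""
    text = text_features(tokens, vocab)
    ents = [kb_emb[e] for e in entities if e in kb_emb]
    graph = np.mean(ents, axis=0) if ents else np.zeros(2)
    return np.concatenate([text, graph])

vocab = {"match": 0, "tonight": 1, "at": 2}
rep = post_representation(["match", "tonight", "at"],
                          ["Melbourne_Cricket_Ground"], vocab)
print(rep.shape)  # (5,)
```

The graph-embedding half is what lets a rare entity name contribute a location signal even when the classifier never saw that name in training, because the signal comes from the entity's KB neighborhood rather than from token counts.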
Classification of Tweets about Reported Events using Neural Networks
Kiminobu Makino, Yuka Takei, Taro Miyazaki, Jun Goto
Proceedings of the 2018 EMNLP Workshop W-NUT: The 4th Workshop on Noisy User-generated Text
We developed a system that automatically extracts “Event-describing Tweets,” which contain information on incidents or accidents, for creating news reports. Event-describing Tweets can be classified into “Reported-event Tweets” and “New-information Tweets”: Reported-event Tweets cite news agencies or user-generated content sites, while New-information Tweets are all other Event-describing Tweets. A system is needed to classify them so that creators of factual TV programs can use them in their productions. Proposing this Tweet classification task is one contribution of this paper, because no prior work has addressed it even though program creators and other event-information collectors must perform it to extract the required information from social networking sites. To classify Tweets in this task, this paper proposes a method that inputs and concatenates character and word sequences from Japanese Tweets using convolutional neural networks; this method is another contribution of the paper. For comparison, character-only and word-only input methods and other neural networks are also used. Results show that a system using the proposed method and architectures can classify Tweets with an F1 score of 88%.
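The character-and-word concatenation idea can be sketched in numpy with a standard text-CNN building block (convolution plus max-over-time pooling). The filter counts and widths, the random embeddings, and the pooling details below are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                        # embedding size
char_filters = rng.normal(size=(4, 3, d))    # 4 filters, width 3
word_filters = rng.normal(size=(4, 2, d))    # 4 filters, width 2

def cnn_features(seq_emb, filters):
    """Text-CNN: slide each filter over the embedding sequence and
    max-pool over time, yielding one feature per filter."""
    n = seq_emb.shape[0]
    out = []
    for f in filters:
        w = f.shape[0]
        out.append(max(np.sum(seq_emb[i:i + w] * f)
                       for i in range(n - w + 1)))
    return np.array(out)

# toy Tweet: 6 characters and 3 words, each embedded in d dimensions
char_emb = rng.normal(size=(6, d))
word_emb = rng.normal(size=(3, d))

# concatenate character- and word-level CNN features before the
# classification layer, mirroring the paper's input scheme
features = np.concatenate([cnn_features(char_emb, char_filters),
                           cnn_features(word_emb, word_filters)])
print(features.shape)  # (8,)
```

Feeding both views to one classifier is especially useful for Japanese, where word segmentation is noisy on social-media text and the character stream provides a segmentation-free fallback.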
2017
Extracting Important Tweets for News Writers using Recurrent Neural Network with Attention Mechanism and Multi-task Learning
Taro Miyazaki, Shin Toriumi, Yuka Takei, Ichiro Yamada, Jun Goto
Proceedings of the 31st Pacific Asia Conference on Language, Information and Computation
Tweet Extraction for News Production Considering Unreality
Yuka Takei, Taro Miyazaki, Ichiro Yamada, Jun Goto
Proceedings of the 31st Pacific Asia Conference on Language, Information and Computation
2014
Proper Name Machine Translation from Japanese to Japanese Sign Language
Taro Miyazaki, Naoto Kato, Seiki Inoue, Shuichi Umeda, Makiko Azuma, Nobuyuki Hiruma, Yuji Nagashima
Proceedings of the EMNLP’2014 Workshop on Language Technology for Closely Related Languages and Language Variants