Dialogue systems are an active area of research. However, deployed systems are typically narrowly focused and not capable of holding a complex conversation. One reason is that few methods allow such systems to be extended easily. Moreover, they usually require large amounts of well-annotated training data. This is especially problematic for dialogue systems, since annotating dialogues correctly demands considerable resources.
Therefore, we propose to develop new learning methods for dialogue systems that would improve their quality and widen their range of use cases. We plan to use unsupervised methods, which open up the possibility of exploiting much larger unannotated corpora and hence training statistical models more effectively. We further focus on exploiting weakly annotated data using methods such as transfer learning or meta-learning (learning from similar tasks). These techniques would enable the use of partially annotated data for dialogue system training.
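As a toy illustration of the transfer idea, the sketch below "pretrains" word representations from an unannotated corpus (here, simple co-occurrence counts) and then classifies dialogue intents from only a handful of labelled utterances. All data, intent labels, and helper names are hypothetical; real systems would use neural representations, but the principle of reusing unlabelled data is the same.

```python
# Hypothetical sketch: pretrain representations on unannotated text,
# then classify intents using only a few labelled examples.
from collections import Counter, defaultdict

# Unannotated corpus (cheap to collect, no labels needed).
UNLABELLED = [
    "book a table for two tonight",
    "reserve a table at the restaurant",
    "what is the weather like today",
    "will it rain today or tomorrow",
]
# Tiny annotated set (expensive to collect) with made-up intent labels.
LABELLED = [
    ("reserve a table", "booking"),
    ("is it going to rain", "weather"),
]

def pretrain(corpus, window=2):
    """Build co-occurrence count vectors for each word from unannotated text."""
    vecs = defaultdict(Counter)
    for sent in corpus:
        toks = sent.split()
        for i, w in enumerate(toks):
            for c in toks[max(0, i - window): i + window + 1]:
                if c != w:
                    vecs[w][c] += 1
    return vecs

def embed(sent, vecs):
    """Represent a sentence as the sum of its pretrained word vectors."""
    total = Counter()
    for w in sent.split():
        total.update(vecs.get(w, Counter()))
    return total

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    na = sum(v * v for v in a.values()) ** 0.5
    nb = sum(v * v for v in b.values()) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def classify(sent, vecs, labelled):
    """Nearest labelled example in the pretrained representation space."""
    query = embed(sent, vecs)
    return max(labelled, key=lambda ex: cosine(query, embed(ex[0], vecs)))[1]

vecs = pretrain(UNLABELLED)
print(classify("book me a table", vecs, LABELLED))  # prints "booking"
```

The unannotated corpus does the heavy lifting: it places "book" and "reserve" in similar contexts, so a single labelled example per intent suffices to classify an unseen paraphrase.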
These methods have not yet been explored much in the area of dialogue systems.
We will propose novel applications of these methods to data in the dialogue domain and explore the potential of large unannotated corpora, which are crucial for building robust statistical models that are applicable in practice.