Unbiased Math Word Problems Benchmark for Mitigating Solving Bias

Zhicheng Yang, Jinghui Qin, Jiaqi Chen, Xiaodan Liang


Abstract
In this paper, we revisit the solving bias that arises when evaluating models on current Math Word Problem (MWP) benchmarks. Current solvers suffer from solving bias, which consists of data bias and learning bias caused by biased datasets and improper training strategies. Our experiments verify that MWP solvers are easily biased by training datasets that do not cover diverse questions for each problem narrative, so a solver learns shallow heuristics rather than the deep semantics needed to understand problems. Moreover, an MWP can naturally be solved by multiple equivalent equations, while current datasets take only one of them as ground truth, forcing the model to match the labeled ground truth and ignore the other equivalent equations. We first introduce a novel MWP dataset named UnbiasedMWP, constructed by varying the grounded expressions in our collected data and manually annotating them with corresponding new questions. Then, to further mitigate learning bias, we propose a Dynamic Target Selection (DTS) strategy that dynamically selects a more suitable target expression during training, according to the longest prefix match between the current model output and candidate equivalent equations obtained by applying the commutative law. The results show that UnbiasedMWP has significantly fewer biases than its original data and other datasets, posing a promising benchmark for fairly evaluating solvers’ reasoning skills rather than their ability to match nearest neighbors. Solvers trained with our DTS achieve higher accuracies on multiple MWP benchmarks. The source code is available at https://github.com/yangzhch6/UnbiasedMWP.
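The DTS idea in the abstract can be sketched in a few lines: enumerate equivalent target equations by applying the commutative law to the labeled prefix-notation expression, then pick as the training target the variant sharing the longest prefix with the model's partial output. This is a minimal illustrative sketch; the function names, token representation, and operator set are assumptions, not the paper's released implementation.

```python
# Hedged sketch of Dynamic Target Selection (DTS). Assumes equations are
# token lists in prefix notation; only + and * are treated as commutative.

def commutative_variants(prefix_tokens):
    """Enumerate equivalent prefix expressions by swapping the operands
    of commutative operators (+ and *)."""
    def helper(i):
        # Returns a list of (variant_tokens, next_index) for the subtree at i.
        tok = prefix_tokens[i]
        if tok not in ("+", "-", "*", "/"):
            return [([tok], i + 1)]  # operand leaf
        results = []
        for left, j in helper(i + 1):
            for right, k in helper(j):
                results.append(([tok] + left + right, k))
                if tok in ("+", "*"):
                    # Commutative law: also emit the operand-swapped form.
                    results.append(([tok] + right + left, k))
        return results
    return [tokens for tokens, _ in helper(0)]

def select_target(output_tokens, candidates):
    """Pick the candidate equation with the longest common prefix
    against the model's current partial output."""
    def prefix_len(a, b):
        n = 0
        for x, y in zip(a, b):
            if x != y:
                break
            n += 1
        return n
    return max(candidates, key=lambda c: prefix_len(output_tokens, c))

# Example: for ground truth "+ a * b c", if the model has emitted "+ *" so
# far, DTS switches the target to a variant whose prefix matches.
variants = commutative_variants(["+", "a", "*", "b", "c"])
target = select_target(["+", "*"], variants)
```

Selecting the target this way rewards any of the mathematically equivalent equations instead of penalizing the model for diverging from the single labeled ground truth.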
Anthology ID:
2022.findings-naacl.104
Volume:
Findings of the Association for Computational Linguistics: NAACL 2022
Month:
July
Year:
2022
Address:
Seattle, United States
Editors:
Marine Carpuat, Marie-Catherine de Marneffe, Ivan Vladimir Meza Ruiz
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1401–1408
URL:
https://aclanthology.org/2022.findings-naacl.104
DOI:
10.18653/v1/2022.findings-naacl.104
Cite (ACL):
Zhicheng Yang, Jinghui Qin, Jiaqi Chen, and Xiaodan Liang. 2022. Unbiased Math Word Problems Benchmark for Mitigating Solving Bias. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 1401–1408, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
Unbiased Math Word Problems Benchmark for Mitigating Solving Bias (Yang et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-naacl.104.pdf
Video:
https://aclanthology.org/2022.findings-naacl.104.mp4
Code
yangzhch6/unbiasedmwp + additional community code
Data
Math23K