Jun Chen 0024
Person information
- affiliation: Tsinghua University, Shenzhen International Graduate School, Shenzhen, China
- affiliation: Tencent Ethereal Audio Lab, Shenzhen, China
Other persons with the same name
- Jun Chen — disambiguation page
- Jun Chen 0001 — Wuhan University, National Engineering Research Center for Multimedia Software, China
- Jun Chen 0002 — Oakland University, Rochester, MI, USA (and 2 more)
- Jun Chen 0003 — Peking University, Beijing, China
- Jun Chen 0004 — Tsinghua University, School of Software, Beijing, China
- Jun Chen 0005 — McMaster University, Department of Electrical and Computer Engineering, Hamilton, ON, Canada (and 4 more)
- Jun Chen 0006 — Ningbo University of Technology, Faculty of Science, China
- Jun Chen 0007 — National University of Defense Technology, College of Electronic Science and Engineering, Changsha, China
- Jun Chen 0008 — University of California, Los Angeles, Department of Electrical Engineering, CA, USA (and 1 more)
- Jun Chen 0009 — Queen Mary University of London, School of Engineering and Materials Science, United Kingdom (and 2 more)
- Jun Chen 0010 — Northwest A&F University, College of Animal Science and Technology, Yangling, China
- Jun Chen 0011 — State Key Laboratory of Mathematical Engineering and Advanced Computing, Zhengzhou, China (and 1 more)
- Jun Chen 0012 — Xiamen University, China (and 1 more)
- Jun Chen 0013 — Sun Yat-Sen University, School of Electronics and Information Technology, Guangzhou, China
- Jun Chen 0014 — University of Essex, Centre for Computational Finance and Economic Agents, Colchester, UK
- Jun Chen 0015 — Nanjing University of Information Science and Technology, School of Electronic and Information Engineering, China (and 1 more)
- Jun Chen 0016 — Nanjing University of Science and Technology, School of Automation, China (and 3 more)
- Jun Chen 0017 — KDDI Research Inc., Ultra-realistic Communication Group, Fujimino, Japan
- Jun Chen 0018 — University of Waterloo, Ontario, Canada
- Jun Chen 0019 — Hubei Key Laboratory of Advanced Control and Intelligent Automation for Complex Systems, Wuhan, China (and 1 more)
- Jun Chen 0020 — Wuhan University, School of Information Management, China
- Jun Chen 0021 — King Abdullah University of Science and Technology, Thuwal, Saudi Arabia
- Jun Cheng 0005 (aka: Jun Chen 0022) — Southeast University, Jiangning Development Zone, School of Transportation, Nanjing, China
- Jun Chen 0023 — Zhejiang University, Institute of Cyber-Systems and Control, China (and 1 more)
2020 – today

2024
- [c18] Zilin Wang, Haolin Zhuang, Lu Li, Yinmin Zhang, Junjie Zhong, Jun Chen, Yu Yang, Boshi Tang, Zhiyong Wu: Explore 3D Dance Generation via Reward Model from Automatically-Ranked Demonstrations. AAAI 2024: 301-309
- [c17] Boshi Tang, Zhiyong Wu, Xixin Wu, Qiaochu Huang, Jun Chen, Shun Lei, Helen Meng: SimCalib: Graph Neural Network Calibration Based on Similarity between Nodes. AAAI 2024: 15267-15275
- [c16] Zhiwei Lin, Jun Chen, Boshi Tang, Binzhu Sha, Jing Yang, Yaolong Ju, Fan Fan, Shiyin Kang, Zhiyong Wu, Helen Meng: Multi-View MidiVAE: Fusing Track- and Bar-View Representations for Long Multi-Track Symbolic Music Generation. ICASSP 2024: 941-945
- [c15] Weinan Tong, Jiaxu Zhu, Jun Chen, Shiyin Kang, Tao Jiang, Yang Li, Zhiyong Wu, Helen Meng: SCNet: Sparse Compression Network for Music Source Separation. ICASSP 2024: 1276-1280
- [c14] Xingda Li, Fan Zhuo, Dan Luo, Jun Chen, Shiyin Kang, Zhiyong Wu, Tao Jiang, Yang Li, Han Fang, Yahui Zhou: Generating Stereophonic Music with Single-Stage Language Models. ICASSP 2024: 1471-1475
- [i13] Zhiwei Lin, Jun Chen, Boshi Tang, Binzhu Sha, Jing Yang, Yaolong Ju, Fan Fan, Shiyin Kang, Zhiyong Wu, Helen Meng: Multi-view MidiVAE: Fusing Track- and Bar-view Representations for Long Multi-track Symbolic Music Generation. CoRR abs/2401.07532 (2024)
- [i12] Wei Chen, Xintao Zhao, Jun Chen, Binzhu Sha, Zhiwei Lin, Zhiyong Wu: RobustSVC: HuBERT-based Melody Extractor and Adversarial Learning for Robust Singing Voice Conversion. CoRR abs/2409.06237 (2024)

2023
- [c13] Jun Chen, Wei Rao, Zilin Wang, Jiuxin Lin, Zhiyong Wu, Yannan Wang, Shidong Shang, Helen Meng: Inter-SubNet: Speech Enhancement with Subband Interaction. ICASSP 2023: 1-5
- [c12] Jun Chen, Yupeng Shi, Wenzhe Liu, Wei Rao, Shulin He, Andong Li, Yannan Wang, Zhiyong Wu, Shidong Shang, Chengshi Zheng: Gesper: A Unified Framework for General Speech Restoration. ICASSP 2023: 1-2
- [c11] Shulin He, Wei Rao, Jinjiang Liu, Jun Chen, Yukai Ju, Xueliang Zhang, Yannan Wang, Shidong Shang: Speech Enhancement with Intelligent Neural Homomorphic Synthesis. ICASSP 2023: 1-5
- [c10] Yukai Ju, Jun Chen, Shimin Zhang, Shulin He, Wei Rao, Weixin Zhu, Yannan Wang, Tao Yu, Shidong Shang: TEA-PSE 3.0: Tencent-Ethereal-Audio-Lab Personalized Speech Enhancement System for ICASSP 2023 DNS Challenge. ICASSP 2023: 1-2
- [c9] Jiuxin Lin, Xinyu Cai, Heinrich Dinkel, Jun Chen, Zhiyong Yan, Yongqing Wang, Junbo Zhang, Zhiyong Wu, Yujun Wang, Helen Meng: AV-SepFormer: Cross-Attention SepFormer for Audio-Visual Target Speaker Extraction. ICASSP 2023: 1-5
- [c8] Weinan Tong, Jiaxu Zhu, Jun Chen, Zhiyong Wu, Shiyin Kang, Helen Meng: TFCnet: Time-Frequency Domain Corrector for Speech Separation. ICASSP 2023: 1-5
- [c7] Zilin Wang, Peng Liu, Jun Chen, Sipan Li, Jinfeng Bai, Gang He, Zhiyong Wu, Helen Meng: A Synthetic Corpus Generation Method for Neural Vocoder Training. ICASSP 2023: 1-5
- [c6] Jiuxin Lin, Peng Wang, Heinrich Dinkel, Jun Chen, Zhiyong Wu, Zhiyong Yan, Yongqing Wang, Junbo Zhang, Yujun Wang: Focus on the Sound around You: Monaural Target Speaker Extraction via Distance and Speaker Information. INTERSPEECH 2023: 2488-2492
- [c5] Jun Chen, Wei Rao, Zilin Wang, Jiuxin Lin, Yukai Ju, Shulin He, Yannan Wang, Zhiyong Wu: MC-SpEx: Towards Effective Speaker Extraction with Multi-Scale Interfusion and Conditional Speaker Modulation. INTERSPEECH 2023: 4034-4038
- [c4] Wenzhe Liu, Yupeng Shi, Jun Chen, Wei Rao, Shulin He, Andong Li, Yannan Wang, Zhiyong Wu: Gesper: A Restoration-Enhancement Framework for General Speech Reconstruction. INTERSPEECH 2023: 4044-4048
- [c3] Yunrui Cai, Jingran Xie, Boshi Tang, Yuanyuan Wang, Jun Chen, Haiwei Xue, Zhiyong Wu: First-order Multi-label Learning with Cross-modal Interactions for Multimodal Emotion Recognition. MRAC@MM 2023: 13-20
- [i11] Yukai Ju, Jun Chen, Shimin Zhang, Shulin He, Wei Rao, Weixin Zhu, Yannan Wang, Tao Yu, Shidong Shang: TEA-PSE 3.0: Tencent-Ethereal-Audio-Lab Personalized Speech Enhancement System For ICASSP 2023 DNS Challenge. CoRR abs/2303.07704 (2023)
- [i10] Jun Chen, Wei Rao, Zilin Wang, Jiuxin Lin, Zhiyong Wu, Yannan Wang, Shidong Shang, Helen Meng: Inter-SubNet: Speech Enhancement with Subband Interaction. CoRR abs/2305.05599 (2023)
- [i9] Wenzhe Liu, Yupeng Shi, Jun Chen, Wei Rao, Shulin He, Andong Li, Yannan Wang, Zhiyong Wu: Gesper: A Restoration-Enhancement Framework for General Speech Reconstruction. CoRR abs/2306.08454 (2023)
- [i8] Jiuxin Lin, Xinyu Cai, Heinrich Dinkel, Jun Chen, Zhiyong Yan, Yongqing Wang, Junbo Zhang, Zhiyong Wu, Yujun Wang, Helen Meng: AV-SepFormer: Cross-Attention SepFormer for Audio-Visual Target Speaker Extraction. CoRR abs/2306.14170 (2023)
- [i7] Jiuxin Lin, Peng Wang, Heinrich Dinkel, Jun Chen, Zhiyong Wu, Zhiyong Yan, Yongqing Wang, Junbo Zhang, Yujun Wang: Focus on the Sound around You: Monaural Target Speaker Extraction via Distance and Speaker Information. CoRR abs/2306.16241 (2023)
- [i6] Jun Chen, Wei Rao, Zilin Wang, Jiuxin Lin, Yukai Ju, Shulin He, Yannan Wang, Zhiyong Wu: MC-SpEx: Towards Effective Speaker Extraction with Multi-Scale Interfusion and Conditional Speaker Modulation. CoRR abs/2306.16250 (2023)
- [i5] Zilin Wang, Haolin Zhuang, Lu Li, Yinmin Zhang, Junjie Zhong, Jun Chen, Yu Yang, Boshi Tang, Zhiyong Wu: Explore 3D Dance Generation via Reward Model from Automatically-Ranked Demonstrations. CoRR abs/2312.11442 (2023)
- [i4] Boshi Tang, Zhiyong Wu, Xixin Wu, Qiaochu Huang, Jun Chen, Shun Lei, Helen Meng: SimCalib: Graph Neural Network Calibration based on Similarity between Nodes. CoRR abs/2312.11858 (2023)

2022
- [c2] Jun Chen, Zilin Wang, Deyi Tuo, Zhiyong Wu, Shiyin Kang, Helen Meng: FullSubNet+: Channel Attention FullSubNet with Complex Spectrograms for Speech Enhancement. ICASSP 2022: 7857-7861
- [c1] Jun Chen, Wei Rao, Zilin Wang, Zhiyong Wu, Yannan Wang, Tao Yu, Shidong Shang, Helen Meng: Speech Enhancement with Fullband-Subband Cross-Attention Network. INTERSPEECH 2022: 976-980
- [i3] Jun Chen, Zilin Wang, Deyi Tuo, Zhiyong Wu, Shiyin Kang, Helen Meng: FullSubNet+: Channel Attention FullSubNet with Complex Spectrograms for Speech Enhancement. CoRR abs/2203.12188 (2022)
- [i2] Shulin He, Wei Rao, Jinjiang Liu, Jun Chen, Yukai Ju, Xueliang Zhang, Yannan Wang, Shidong Shang: Speech Enhancement with Intelligent Neural Homomorphic Synthesis. CoRR abs/2210.15853 (2022)
- [i1] Jun Chen, Wei Rao, Zilin Wang, Zhiyong Wu, Yannan Wang, Tao Yu, Shidong Shang, Helen Meng: Speech Enhancement with Fullband-Subband Cross-Attention Network. CoRR abs/2211.05432 (2022)
last updated on 2024-12-13 19:13 CET by the dblp team
all metadata released as open data under CC0 1.0 license