Printed from https://ideas.repec.org/a/eee/appene/v346y2023ics0306261923007225.html

Training-efficient and cost-optimal energy management for fuel cell hybrid electric bus based on a novel distributed deep reinforcement learning framework

Author

Listed:
  • Huang, Ruchen
  • He, Hongwen
  • Gao, Miaojue
Abstract
Deep reinforcement learning (DRL) has become the mainstream method for designing intelligent energy management strategies (EMSs) for fuel cell hybrid electric vehicles amid the rapid progress of artificial intelligence in recent years. Conventional DRL algorithms suffer from low sampling efficiency and poor utilization of computing resources. Combined with a distributed architecture and parallel computation, DRL algorithms can be made more efficient. Given this, this paper proposes a novel distributed DRL-based energy management framework for a fuel cell hybrid electric bus (FCHEB) that shortens the development cycle of DRL-based EMSs while reducing the total operation cost of the FCHEB. First, to make full use of limited computing resources, a novel asynchronous advantage actor-critic (A3C)-based energy management framework is designed by integrating the multi-process parallel computation technique. Then, an EMS that accounts for the extra operation cost caused by fuel cell degradation and battery aging is designed on top of this framework. Furthermore, EMSs based on a conventional DRL algorithm, advantage actor-critic (A2C), and on a conventional distributed DRL framework, multi-thread A3C, are employed as baselines, and the performance of the proposed EMS is evaluated by training and testing on different driving cycles. Simulation results indicate that, compared with the A2C- and multi-thread A3C-based EMSs, the proposed EMS accelerates convergence by 87.46% and 88.92%, respectively, and reduces the total operation cost by 44.83% and 41.19%, respectively. The main contribution of this article is to explore the integration of multi-process parallel computation into a distributed DRL-based EMS for a fuel cell vehicle, enabling more efficient utilization of hydrogen energy in the transportation sector.
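
The framework the abstract describes follows the standard A3C pattern: one global actor-critic network shared across several worker processes, each of which rolls out its own copy of the driving-cycle environment and pushes gradients back to the global network asynchronously. Below is a minimal sketch of that pattern in PyTorch; the network sizes, the placeholder environment and reward (standing in for hydrogen cost plus fuel cell degradation and battery aging penalties), and all hyperparameters are illustrative assumptions, not the paper's implementation.

import torch
import torch.nn as nn
import torch.multiprocessing as mp

class ActorCritic(nn.Module):
    """Shared-body actor-critic network (layer sizes are illustrative)."""
    def __init__(self, obs_dim, n_actions):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh())
        self.pi = nn.Linear(64, n_actions)  # actor head: action logits
        self.v = nn.Linear(64, 1)           # critic head: state value

    def forward(self, x):
        h = self.body(x)
        return self.pi(h), self.v(h)

def worker(global_net, optimizer, obs_dim, n_actions, n_updates):
    # Each process keeps a local copy of the network and its own environment
    # instance; torch.randn below is a placeholder for the FCHEB powertrain
    # simulation and its driving cycle.
    local_net = ActorCritic(obs_dim, n_actions)
    for _ in range(n_updates):
        local_net.load_state_dict(global_net.state_dict())  # sync with global
        obs = torch.randn(1, obs_dim)                       # placeholder state
        logits, value = local_net(obs)
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        # Placeholder reward; the paper's EMS would use the negative of the
        # hydrogen cost plus fuel cell degradation and battery aging costs.
        reward = torch.randn(1)
        advantage = reward - value.squeeze()
        loss = (-dist.log_prob(action) * advantage.detach()).sum() \
               + advantage.pow(2).sum()
        local_net.zero_grad()
        loss.backward()
        # Push local gradients into the shared global network, then step.
        # (A full A3C would share optimizer statistics across processes too.)
        for lp, gp in zip(local_net.parameters(), global_net.parameters()):
            gp._grad = lp.grad
        optimizer.step()

if __name__ == "__main__":
    obs_dim, n_actions = 4, 3       # illustrative state/action sizes
    global_net = ActorCritic(obs_dim, n_actions)
    global_net.share_memory()       # weights live in shared memory
    optimizer = torch.optim.Adam(global_net.parameters(), lr=1e-3)
    workers = [mp.Process(target=worker,
                          args=(global_net, optimizer, obs_dim, n_actions, 100))
               for _ in range(4)]   # e.g., one worker per CPU core
    for p in workers:
        p.start()
    for p in workers:
        p.join()

Multi-threaded A3C in Python is constrained by the global interpreter lock, so separate processes, each running on its own core, are the usual way to obtain truly parallel rollouts; this is consistent with the training speedup the abstract reports for the multi-process variant over multi-thread A3C.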

Suggested Citation

  • Huang, Ruchen & He, Hongwen & Gao, Miaojue, 2023. "Training-efficient and cost-optimal energy management for fuel cell hybrid electric bus based on a novel distributed deep reinforcement learning framework," Applied Energy, Elsevier, vol. 346(C).
  • Handle: RePEc:eee:appene:v:346:y:2023:i:c:s0306261923007225
    DOI: 10.1016/j.apenergy.2023.121358

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0306261923007225
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2023.121358?utm_source=ideas
    LibKey link: If access is restricted and your library uses this service, LibKey will redirect you to a source where you can use your library subscription to access this item.

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Dong, Peng & Zhao, Junwei & Liu, Xuewu & Wu, Jian & Xu, Xiangyang & Liu, Yanfang & Wang, Shuhan & Guo, Wei, 2022. "Practical application of energy management strategy for hybrid electric vehicles based on intelligent and connected technologies: Development stages, challenges, and future trends," Renewable and Sustainable Energy Reviews, Elsevier, vol. 170(C).
    2. Jonas Degrave & Federico Felici & Jonas Buchli & Michael Neunert & Brendan Tracey & Francesco Carpanese & Timo Ewalds & Roland Hafner & Abbas Abdolmaleki & Diego de las Casas & Craig Donner & Leslie F, 2022. "Magnetic control of tokamak plasmas through deep reinforcement learning," Nature, Nature, vol. 602(7897), pages 414-419, February.
    3. Chen, Huicui & Pei, Pucheng & Song, Mancun, 2015. "Lifetime prediction and the economic lifetime of Proton Exchange Membrane fuel cells," Applied Energy, Elsevier, vol. 142(C), pages 154-163.
    4. Julian Schrittwieser & Ioannis Antonoglou & Thomas Hubert & Karen Simonyan & Laurent Sifre & Simon Schmitt & Arthur Guez & Edward Lockhart & Demis Hassabis & Thore Graepel & Timothy Lillicrap & David , 2020. "Mastering Atari, Go, chess and shogi by planning with a learned model," Nature, Nature, vol. 588(7839), pages 604-609, December.
    5. Di Giorgio, Paolo & Di Ilio, Giovanni & Jannelli, Elio & Conte, Fiorentino Valerio, 2022. "Innovative battery thermal management system based on hydrogen storage in metal hydrides for fuel cell hybrid electric vehicles," Applied Energy, Elsevier, vol. 315(C).
    6. Quan, Shengwei & Wang, Ya-Xiong & Xiao, Xuelian & He, Hongwen & Sun, Fengchun, 2021. "Real-time energy management for fuel cell electric vehicle using speed prediction-based model predictive control considering performance degradation," Applied Energy, Elsevier, vol. 304(C).
    7. Shuo Feng & Haowei Sun & Xintao Yan & Haojie Zhu & Zhengxia Zou & Shengyin Shen & Henry X. Liu, 2023. "Dense reinforcement learning for safety validation of autonomous vehicles," Nature, Nature, vol. 615(7953), pages 620-627, March.
    8. Ganesh, Akhil Hannegudda & Xu, Bin, 2022. "A review of reinforcement learning based energy management systems for electrified powertrains: Progress, challenge, and potential solution," Renewable and Sustainable Energy Reviews, Elsevier, vol. 154(C).
    9. Tang, Xiaolin & Zhou, Haitao & Wang, Feng & Wang, Weida & Lin, Xianke, 2022. "Longevity-conscious energy management strategy of fuel cell hybrid electric vehicle based on deep reinforcement learning," Energy, Elsevier, vol. 238(PA).
    10. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charle, 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    11. Zhou, Jianhao & Liu, Jun & Xue, Yuan & Liao, Yuhui, 2022. "Total travel costs minimization strategy of a dual-stack fuel cell logistics truck enhanced with artificial potential field and deep reinforcement learning," Energy, Elsevier, vol. 239(PA).
    12. Wang, Hao & He, Hongwen & Bai, Yunfei & Yue, Hongwei, 2022. "Parameterized deep Q-network based energy management with balanced energy economy and battery life for hybrid electric vehicles," Applied Energy, Elsevier, vol. 320(C).
    13. Suri, Girish & Onori, Simona, 2016. "A control-oriented cycle-life model for hybrid electric vehicle lithium-ion batteries," Energy, Elsevier, vol. 96(C), pages 644-653.
    14. Lee, Heeyun & Kim, Kyunghyun & Kim, Namwook & Cha, Suk Won, 2022. "Energy efficient speed planning of electric vehicles for car-following scenario using model-based reinforcement learning," Applied Energy, Elsevier, vol. 313(C).
    15. Peter R. Wurman & Samuel Barrett & Kenta Kawamoto & James MacGlashan & Kaushik Subramanian & Thomas J. Walsh & Roberto Capobianco & Alisa Devlic & Franziska Eckert & Florian Fuchs & Leilani Gilpin & P, 2022. "Outracing champion Gran Turismo drivers with deep reinforcement learning," Nature, Nature, vol. 602(7896), pages 223-228, February.
    16. David Silver & Aja Huang & Chris J. Maddison & Arthur Guez & Laurent Sifre & George van den Driessche & Julian Schrittwieser & Ioannis Antonoglou & Veda Panneershelvam & Marc Lanctot & Sander Dieleman, 2016. "Mastering the game of Go with deep neural networks and tree search," Nature, Nature, vol. 529(7587), pages 484-489, January.
    17. Pu, Yuchen & Li, Qi & Zou, Xueli & Li, Ruirui & Li, Luoyi & Chen, Weirong & Liu, Hong, 2021. "Optimal sizing for an integrated energy system considering degradation and seasonal hydrogen storage," Applied Energy, Elsevier, vol. 302(C).
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.

    Cited by:

    1. Peng, Jiankun & Shen, Yang & Wu, ChangCheng & Wang, Chunhai & Yi, Fengyan & Ma, Chunye, 2023. "Research on energy-saving driving control of hydrogen fuel bus based on deep reinforcement learning in freeway ramp weaving area," Energy, Elsevier, vol. 285(C).
    2. Hussain, Shahid & Irshad, Reyazur Rashid & Pallonetto, Fabiano & Hussain, Ihtisham & Hussain, Zakir & Tahir, Muhammad & Abimannan, Satheesh & Shukla, Saurabh & Yousif, Adil & Kim, Yun-Su & El-Sayed, H, 2023. "Hybrid coordination scheme based on fuzzy inference mechanism for residential charging of electric vehicles," Applied Energy, Elsevier, vol. 352(C).
    3. He, Hongwen & Su, Qicong & Huang, Ruchen & Niu, Zegong, 2024. "Enabling intelligent transferable energy management of series hybrid electric tracked vehicle across motion dimensions via soft actor-critic algorithm," Energy, Elsevier, vol. 294(C).
    4. Huang, Ruchen & He, Hongwen & Su, Qicong, 2024. "Towards a fossil-free urban transport system: An intelligent cross-type transferable energy management framework based on deep transfer reinforcement learning," Applied Energy, Elsevier, vol. 363(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. He, Hongwen & Meng, Xiangfei & Wang, Yong & Khajepour, Amir & An, Xiaowen & Wang, Renguang & Sun, Fengchun, 2024. "Deep reinforcement learning based energy management strategies for electrified vehicles: Recent advances and perspectives," Renewable and Sustainable Energy Reviews, Elsevier, vol. 192(C).
    2. Huang, Ruchen & He, Hongwen & Su, Qicong, 2024. "Towards a fossil-free urban transport system: An intelligent cross-type transferable energy management framework based on deep transfer reinforcement learning," Applied Energy, Elsevier, vol. 363(C).
    3. Weifan Long & Taixian Hou & Xiaoyi Wei & Shichao Yan & Peng Zhai & Lihua Zhang, 2023. "A Survey on Population-Based Deep Reinforcement Learning," Mathematics, MDPI, vol. 11(10), pages 1-17, May.
    4. Christoph Graf & Viktor Zobernig & Johannes Schmidt & Claude Klockl, 2021. "Computational Performance of Deep Reinforcement Learning to find Nash Equilibria," Papers 2104.12895, arXiv.org.
    5. Li, Wenqing & Ni, Shaoquan, 2022. "Train timetabling with the general learning environment and multi-agent deep reinforcement learning," Transportation Research Part B: Methodological, Elsevier, vol. 157(C), pages 230-251.
    6. Ren, Xiaoxia & Ye, Jinze & Xie, Liping & Lin, Xinyou, 2024. "Battery longevity-conscious energy management predictive control strategy optimized by using deep reinforcement learning algorithm for a fuel cell hybrid electric vehicle," Energy, Elsevier, vol. 286(C).
    7. Wu, Jie & Li, Dong, 2023. "Modeling and maximizing information diffusion over hypergraphs based on deep reinforcement learning," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 629(C).
    8. De Moor, Bram J. & Gijsbrechts, Joren & Boute, Robert N., 2022. "Reward shaping to improve the performance of deep reinforcement learning in perishable inventory management," European Journal of Operational Research, Elsevier, vol. 301(2), pages 535-545.
    9. Christopher R. Madan, 2020. "Considerations for Comparing Video Game AI Agents with Humans," Challenges, MDPI, vol. 11(2), pages 1-12, August.
    10. Christoph Graf & Viktor Zobernig & Johannes Schmidt & Claude Klöckl, 2024. "Computational Performance of Deep Reinforcement Learning to Find Nash Equilibria," Computational Economics, Springer;Society for Computational Economics, vol. 63(2), pages 529-576, February.
    11. Wang, Yong & Wu, Yuankai & Tang, Yingjuan & Li, Qin & He, Hongwen, 2023. "Cooperative energy management and eco-driving of plug-in hybrid electric vehicle via multi-agent reinforcement learning," Applied Energy, Elsevier, vol. 332(C).
    12. Yassine Chemingui & Adel Gastli & Omar Ellabban, 2020. "Reinforcement Learning-Based School Energy Management System," Energies, MDPI, vol. 13(23), pages 1-21, December.
    13. Sumitkumar, Rathor & Al-Sumaiti, Ameena Saad, 2024. "Shared autonomous electric vehicle: Towards social economy of energy and mobility from power-transportation nexus perspective," Renewable and Sustainable Energy Reviews, Elsevier, vol. 197(C).
    14. Yuhong Wang & Lei Chen & Hong Zhou & Xu Zhou & Zongsheng Zheng & Qi Zeng & Li Jiang & Liang Lu, 2021. "Flexible Transmission Network Expansion Planning Based on DQN Algorithm," Energies, MDPI, vol. 14(7), pages 1-21, April.
    15. Gokhale, Gargya & Claessens, Bert & Develder, Chris, 2022. "Physics informed neural networks for control oriented thermal modeling of buildings," Applied Energy, Elsevier, vol. 314(C).
    16. Neha Soni & Enakshi Khular Sharma & Narotam Singh & Amita Kapoor, 2019. "Impact of Artificial Intelligence on Businesses: from Research, Innovation, Market Deployment to Future Shifts in Business Models," Papers 1905.02092, arXiv.org.
    17. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    18. Zhang, Tianhao & Dong, Zhe & Huang, Xiaojin, 2024. "Multi-objective optimization of thermal power and outlet steam temperature for a nuclear steam supply system with deep reinforcement learning," Energy, Elsevier, vol. 286(C).
    19. Taejong Joo & Hyunyoung Jun & Dongmin Shin, 2022. "Task Allocation in Human–Machine Manufacturing Systems Using Deep Reinforcement Learning," Sustainability, MDPI, vol. 14(4), pages 1-18, February.
    20. Boute, Robert N. & Gijsbrechts, Joren & van Jaarsveld, Willem & Vanvuchelen, Nathalie, 2022. "Deep reinforcement learning for inventory control: A roadmap," European Journal of Operational Research, Elsevier, vol. 298(2), pages 401-412.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:346:y:2023:i:c:s0306261923007225. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.