
nep-cmp New Economics Papers
on Computational Economics
Issue of 2023‒06‒26
seventeen papers chosen by



  1. Machine Learning and Deep Learning Forecasts of Electricity Imbalance Prices By Sinan Deng; John Inekwe; Vladimir Smirnov; Andrew Wait; Chao Wang
  2. Gated Deeper Models are Effective Factor Learners By Jingjing Guo
  3. Backward Hedging for American Options with Transaction Costs By Ludovic Goudenège; Andrea Molent; Antonino Zanette
  4. Contingent valuation machine learning (CVML): A novel method for estimating citizens’ willingness-to-pay for safer and cleaner environment By Khuc, Quy Van; Tran, Duc-Trung
  5. Efficient Learning of Nested Deep Hedging using Multiple Options By Masanori Hirano; Kentaro Imajo; Kentaro Minami; Takuya Shimada
  6. Deep Learning for Solving and Estimating Dynamic Macro-Finance Models By Benjamin Fan; Edward Qiao; Anran Jiao; Zhouzhou Gu; Wenhao Li; Lu Lu
  7. Machine learning and physician prescribing: a path to reduced antibiotic use By Michael Allan Ribers; Hannes Ullrich
  8. From risk mitigation to employee action along the machine learning pipeline: A paradigm shift in European regulatory perspectives on automated decision-making systems in the workplace By Mollen, Anne; Hondrich, Lukas
  9. Reinforcement Learning and Portfolio Allocation: Challenging Traditional Allocation Methods By Lavko, Matus; Klein, Tony; Walther, Thomas
  10. From Alchemy to Analytics: Unleashing the Potential of Technical Analysis in Predicting Noble Metal Price Movement By Marcin Chlebus; Artur Nowak
  11. Practical and Ethical Perspectives on AI-Based Employee Performance Evaluation By Pletcher, Scott Nicholas
  12. “Density forecasts of inflation using Gaussian process regression models” By Petar Soric; Enric Monte; Salvador Torra; Oscar Claveria
  13. Non-adversarial training of Neural SDEs with signature kernel scores By Zacharia Issa; Blanka Horvath; Maud Lemercier; Cristopher Salvi
  14. Mind Your Language: Market Responses to Central Bank Speeches By Maximilian Ahrens; Deniz Erdemlioglu; Michael McMahon; Christopher J. Neely; Xiye Yang
  15. How many inner simulations to compute conditional expectations with least-square Monte Carlo? By Aurélien Alfonsi; Bernard Lapeyre; Jérôme Lelong
  16. Evolutionary multi-objective optimisation for large-scale portfolio selection with both random and uncertain returns By Liu, Weilong; Zhang, Yong; Liu, Kailong; Quinn, Barry; Yang, Xingyu; Peng, Qiao
  17. A Simulation Package in VBA to Support Finance Students for Constructing Optimal Portfolios By Abdulnasser Hatemi-J; Alan Mustafa

  1. By: Sinan Deng; John Inekwe; Vladimir Smirnov; Andrew Wait; Chao Wang
    Abstract: In this paper, we propose a seasonal attention mechanism, the effectiveness of which is evaluated via the Bidirectional Long Short-Term Memory (BiLSTM) model. We compare its performance with alternative deep learning and machine learning models in forecasting the balancing settlement prices in the electricity market of Great Britain. Critically, the Seasonal Attention-Based BiLSTM framework provides a superior forecast of extreme prices, with an out-of-sample gain in predictability of 25-37% compared with models in the literature. Our forecasting techniques could aid both market participants, in better managing their risk and allocating their assets, and policy makers, in operating the system at lower cost.
    Keywords: forecasting; electricity; balance settlement prices; Long Short-Term Memory; machine learning.
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:syd:wpaper:2023-03&r=cmp
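    A minimal Python sketch of the modelling idea above, assuming a generic additive attention layer on top of a BiLSTM; the paper's specific seasonal attention mechanism and feature set are not reproduced here.
      import torch
      import torch.nn as nn

      class AttentionBiLSTM(nn.Module):
          """BiLSTM whose hidden states are pooled by learned attention weights."""
          def __init__(self, n_features, hidden=64):
              super().__init__()
              self.lstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
              self.score = nn.Linear(2 * hidden, 1)   # additive attention score per time step
              self.head = nn.Linear(2 * hidden, 1)    # one-step-ahead price forecast

          def forward(self, x):                        # x: (batch, seq_len, n_features)
              h, _ = self.lstm(x)                      # (batch, seq_len, 2 * hidden)
              w = torch.softmax(self.score(h), dim=1)  # attention weights over time steps
              context = (w * h).sum(dim=1)             # weighted summary of the sequence
              return self.head(context).squeeze(-1)

      # Toy usage: 48 past settlement periods, price plus calendar dummies as features.
      model = AttentionBiLSTM(n_features=6)
      y_hat = model(torch.randn(32, 48, 6))            # (32,) predicted imbalance prices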
  2. By: Jingjing Guo
    Abstract: Precisely forecasting the excess returns of an asset (e.g., Tesla stock) is beneficial to all investors. However, the unpredictability of market dynamics, influenced by human behaviors, makes this a challenging task. In prior research, researchers have manually crafted a range of factors as signals to guide their investing decisions. In contrast, this paper takes a different perspective and uses a deep learning model to combine those human-designed factors to predict the trend of excess returns. To this end, we present a 5-layer deep neural network that generates more meaningful factors in a 2048-dimensional space. Modern network design techniques are utilized to enhance training robustness and reduce overfitting. Additionally, we propose a gated network that dynamically filters out noisy learned features, resulting in improved performance. We evaluate our model on over 2,000 stocks from the Chinese market using their records from the most recent three years. The experimental results show that the proposed gated activation layer and the deep neural network contribute to the superior performance of the model. In summary, the proposed model exhibits promising results and could benefit investors seeking to optimize their investment strategies.
    Date: 2023–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2305.10693&r=cmp
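    A hypothetical sketch of the gated factor learner described above: a deep MLP lifts hand-crafted factors into a 2048-dimensional space and a sigmoid gate suppresses noisy learned features before the return head; layer sizes and the exact gating form are assumptions.
      import torch
      import torch.nn as nn

      class GatedFactorNet(nn.Module):
          def __init__(self, n_factors, width=2048, depth=5):
              super().__init__()
              layers, d = [], n_factors
              for _ in range(depth):
                  layers += [nn.Linear(d, width), nn.BatchNorm1d(width), nn.ReLU()]
                  d = width
              self.backbone = nn.Sequential(*layers)
              self.gate = nn.Linear(width, width)      # per-dimension gate in [0, 1]
              self.head = nn.Linear(width, 1)          # predicted excess-return trend

          def forward(self, x):
              z = self.backbone(x)
              z = torch.sigmoid(self.gate(z)) * z      # filter out noisy learned factors
              return self.head(z).squeeze(-1)

      model = GatedFactorNet(n_factors=158)            # 158 hand-crafted factors (assumed)
      scores = model(torch.randn(64, 158))             # one trend score per stock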
  3. By: Ludovic Goudenège; Andrea Molent; Antonino Zanette
    Abstract: In this article, we introduce an algorithm called Backward Hedging, designed for hedging European and American options while considering transaction costs. The optimal strategy is determined by minimizing an appropriate loss function, which is based on either a risk measure or the mean squared error of the hedging strategy at maturity. By appropriately reformulating this loss function, we can address its minimization by moving backward in time. The approach avoids machine learning and instead relies on traditional optimization techniques, Monte Carlo simulations, and interpolations on a grid. Comparisons with the Deep Hedging algorithm in various numerical experiments showcase the efficiency and accuracy of the proposed method.
    Date: 2023–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2305.06805&r=cmp
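    A simplified numerical sketch in the spirit of the grid-based approach above: at each date and grid point it selects the hedge ratio minimising a one-period quadratic hedging error against the Black-Scholes value change, plus a proportional trading cost, using Monte Carlo draws of the next price. This local-hedging stand-in is not the paper's full backward recursion on the terminal loss; the dynamics, grids and cost level are illustrative assumptions.
      import numpy as np
      from scipy.stats import norm

      def bs_call(S, K, T, r, sigma):
          if T <= 0:
              return np.maximum(S - K, 0.0)
          d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
          d2 = d1 - sigma * np.sqrt(T)
          return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

      K, T, r, sigma, cost = 100.0, 0.25, 0.0, 0.2, 0.002
      n_steps, n_mc = 12, 2000
      dt = T / n_steps
      s_grid = np.linspace(60.0, 140.0, 41)
      d_grid = np.linspace(0.0, 1.0, 51)
      rng = np.random.default_rng(0)

      # policy[t, i, j]: optimal new hedge ratio at date t, spot s_grid[i], held ratio d_grid[j]
      policy = np.zeros((n_steps, s_grid.size, d_grid.size))
      for t in range(n_steps - 1, -1, -1):              # move backward in time
          tau = T - t * dt
          z = rng.standard_normal(n_mc)
          for i, s in enumerate(s_grid):
              s_next = s * np.exp((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
              ds = s_next - s
              dC = bs_call(s_next, K, tau - dt, r, sigma) - bs_call(s, K, tau, r, sigma)
              m_ss, m_sc, m_cc = np.mean(ds * ds), np.mean(ds * dC), np.mean(dC * dC)
              mse = d_grid**2 * m_ss - 2 * d_grid * m_sc + m_cc   # hedging MSE per candidate ratio
              for j, d_prev in enumerate(d_grid):
                  policy[t, i, j] = d_grid[np.argmin(mse + cost * np.abs(d_grid - d_prev) * s)]

      # Hedge ratio to take at t=0, spot 100, starting from no position.
      print(policy[0, np.searchsorted(s_grid, 100.0), 0])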
  4. By: Khuc, Quy Van; Tran, Duc-Trung
    Abstract: This paper introduces an advanced method that integrates contingent valuation and machine learning (CVML) to estimate residents’ demand for mitigating environmental pollution and climate change. To be precise, CVML is an innovative hybrid machine-learning model that can leverage a limited amount of survey data for prediction and data enrichment purposes. The model comprises two interconnected modules: Module I, an unsupervised learning algorithm, and Module II, a supervised learning algorithm. Module I is responsible for clustering the data (x^sur) into groups based on common characteristics, thereby grouping the corresponding dependent variable (y^sur) values as well. Taking a survey on air pollution in Hanoi in 2019 as an example, we find that CVML can predict households’ willingness-to-pay for air pollution mitigation with a high degree of accuracy (i.e., over 90%). This finding suggests that CVML is a powerful and practical method that could be widely applied in the fields of environmental economics and sustainability science in the years to come.
    Date: 2023–05–17
    URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:r35bz&r=cmp
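    A hypothetical sketch of the two-module structure: an unsupervised module clusters respondents and a supervised module predicts willingness-to-pay from the covariates plus the cluster label. The exact coupling of the two modules in CVML is not spelled out in the abstract, so this wiring (and the simulated data) is an assumption.
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 6))                                   # stand-in for survey covariates x^sur
      wtp = 10 + 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(size=500)     # stand-in for y^sur

      # Module I: group respondents with shared characteristics.
      clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

      # Module II: supervised prediction of WTP from covariates plus cluster membership.
      X_aug = np.column_stack([X, clusters])
      X_tr, X_te, y_tr, y_te = train_test_split(X_aug, wtp, test_size=0.3, random_state=0)
      model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
      print("held-out R^2:", round(model.score(X_te, y_te), 3))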
  5. By: Masanori Hirano; Kentaro Imajo; Kentaro Minami; Takuya Shimada
    Abstract: Deep hedging is a framework for hedging derivatives in the presence of market frictions. In this study, we focus on the problem of hedging a given target option by using multiple options. To extend the deep hedging framework to this setting, the options used as hedging instruments also have to be priced during training. While one might use classical pricing models such as the Black-Scholes formula, ignoring frictions can create arbitrage opportunities, which are undesirable for deep hedging training. The goal of this study is to develop a nested deep hedging method. That is, we develop a fully-deep approach to deep hedging in which the hedging instruments are also priced by deep neural networks that are aware of frictions. However, since the prices of hedging instruments have to be calculated under many different conditions, the entire learning process can be computationally intractable. To overcome this problem, we propose an efficient learning method for nested deep hedging. Our method consists of three techniques to circumvent computational intractability, each of which reduces redundant computations during training. We show through experiments that the Black-Scholes pricing of hedge instruments can admit significant arbitrage opportunities, which are not observed when the pricing is performed by deep hedging. We also demonstrate that our proposed method successfully reduces the hedging risks compared to a baseline method that does not use options as hedging instruments.
    Date: 2023–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2305.12264&r=cmp
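    A minimal sketch of the base deep-hedging framework that the paper extends: a network hedges a short call on a single frictional underlying under a quadratic loss. The nested part, pricing the hedging options themselves with friction-aware networks, is not reproduced; dynamics, costs and network sizes are illustrative assumptions.
      import torch
      import torch.nn as nn

      torch.manual_seed(0)
      n_paths, n_steps, dt = 4096, 30, 1 / 365
      sigma, cost, K = 0.2, 0.001, 1.0

      net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))   # (S_t, t) -> hedge ratio
      opt = torch.optim.Adam(net.parameters(), lr=1e-3)

      for epoch in range(200):
          z = torch.randn(n_paths, n_steps)
          S = torch.cumprod(1 + sigma * dt**0.5 * z, dim=1)                 # driftless paths, S_0 = 1
          S = torch.cat([torch.ones(n_paths, 1), S], dim=1)
          pnl = torch.zeros(n_paths)
          prev_delta = torch.zeros(n_paths)
          for t in range(n_steps):
              state = torch.stack([S[:, t], torch.full((n_paths,), t * dt)], dim=1)
              delta = net(state).squeeze(-1)
              pnl = pnl + delta * (S[:, t + 1] - S[:, t]) \
                        - cost * (delta - prev_delta).abs() * S[:, t]       # proportional friction
              prev_delta = delta
          payoff = torch.clamp(S[:, -1] - K, min=0.0)                       # short call being hedged
          loss = ((pnl - payoff) ** 2).mean()                               # quadratic hedging loss
          opt.zero_grad()
          loss.backward()
          opt.step()

      print("final mean-squared hedging error:", float(loss))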
  6. By: Benjamin Fan; Edward Qiao; Anran Jiao; Zhouzhou Gu; Wenhao Li; Lu Lu
    Abstract: We develop a methodology that utilizes deep learning to simultaneously solve and estimate canonical continuous-time general equilibrium models in financial economics. We illustrate our method in two examples: (1) industrial dynamics of firms and (2) macroeconomic models with financial frictions. Through these applications, we illustrate the advantages of our method: generality, simultaneous solution and estimation, leveraging state-of-the-art machine-learning techniques, and handling large state spaces. The method is versatile and can be applied to a wide variety of problems.
    Date: 2023–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2305.09783&r=cmp
  7. By: Michael Allan Ribers; Hannes Ullrich
    Abstract: Inefficient human decisions are driven by biases and limited information. Health care is one leading example where machine learning is hoped to deliver efficiency gains. Antibiotic resistance constitutes a major challenge to health care systems due to human antibiotic overuse. We investigate how a policy leveraging the strengths of a machine learning algorithm and physicians can provide new opportunities to reduce antibiotic use. We focus on urinary tract infections in primary care, a leading cause of antibiotic use, where physicians often prescribe prior to attaining diagnostic certainty. Symptom assessment and rapid testing provide diagnostic information with limited accuracy, while laboratory testing can diagnose bacterial infections with considerable delay. Linking Danish administrative and laboratory data, we optimize policy rules which base initial prescription decisions on machine learning predictions and delegate decisions to physicians where they benefit most from private information at the point of care. The policy shows a potential to reduce antibiotic prescribing by 8.1 percent and overprescribing by 20.3 percent without assigning fewer prescriptions to patients with bacterial infections. We find that human-algorithm complementarity is essential to achieve efficiency gains.
    Date: 2023–06–05
    URL: http://d.repec.org/n?u=RePEc:bdp:dpaper:0019&r=cmp
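    A hedged sketch of a prescription policy of the kind described above: the algorithm makes the initial call when its predicted infection risk is decisive and defers to the physician in the ambiguous middle range, where point-of-care information matters most. The thresholds, model and data are illustrative, not the paper's optimised rule.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(1)
      X = rng.normal(size=(2000, 5))                                  # pre-visit patient characteristics
      bacterial = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=2000) > 0).astype(int)

      clf = LogisticRegression(max_iter=1000).fit(X[:1500], bacterial[:1500])
      p = clf.predict_proba(X[1500:])[:, 1]                           # predicted bacterial-infection risk

      LOW, HIGH = 0.2, 0.8                                            # hypothetical policy thresholds
      decision = np.where(p >= HIGH, "prescribe",
                          np.where(p <= LOW, "no prescription", "defer to physician"))
      for label in ("prescribe", "no prescription", "defer to physician"):
          print(label, round(float((decision == label).mean()), 3))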
  8. By: Mollen, Anne; Hondrich, Lukas
    Abstract: Automated decision-making (ADM) systems in the workplace aggravate the power imbalance between employees and employers by making potentially crucial decisions about employees. Current approaches focus on risk mitigation to safeguard employee interests. While limiting risks remains important, employee representatives should be able to include their interests in the decision-making of ADM systems. This paper introduces the concept of the Machine Learning Pipeline to demonstrate how these interests can be implemented in practice and point to necessary structural transformations.
    Keywords: Artificial Intelligence, EU regulation, workplace, democracy, employee representatives
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:zbw:hbsfof:278&r=cmp
  9. By: Lavko, Matus; Klein, Tony; Walther, Thomas
    Abstract: We test the out-of-sample trading performance of model-free reinforcement learning (RL) agents and compare them with the performance of equally-weighted portfolios and traditional mean-variance (MV) optimization benchmarks. By dividing European and U.S. index constituents into factor datasets, the RL-generated portfolios face different scenarios defined by these factor environments. The RL approach is empirically evaluated based on a selection of measures and probabilistic assessments. Trained only on price data and features constructed from these prices, the RL approach yields better risk-adjusted returns as well as probabilistic Sharpe ratios compared to MV specifications. However, this performance varies across factor environments. RL models partially uncover the nonlinear structure of the stochastic discount factor. It is further demonstrated that RL models are successful at reducing left-tail risks in out-of-sample settings. These results indicate that these models are indeed useful in portfolio management applications.
    Keywords: Asset Allocation, Reinforcement Learning, Machine Learning, Portfolio Theory, Diversification
    JEL: G11 C44 C55 C58
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:zbw:qmsrps:202301&r=cmp
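    A minimal sketch of one common flavour of RL-style allocation: a policy network maps a window of recent returns to long-only portfolio weights and is trained by backpropagating a Sharpe-type reward, then compared with the equal-weight benchmark. The paper's agents, factor datasets and evaluation are considerably richer; everything here (assets, dynamics, reward) is an illustrative assumption.
      import torch
      import torch.nn as nn

      torch.manual_seed(0)
      n_assets, window, n_periods = 5, 20, 750
      returns = 0.0003 + 0.01 * torch.randn(n_periods, n_assets)      # simulated daily returns

      policy = nn.Sequential(nn.Flatten(), nn.Linear(window * n_assets, 32), nn.ReLU(),
                             nn.Linear(32, n_assets))
      opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

      for epoch in range(50):
          port_rets = []
          for t in range(window, n_periods - 1):
              state = returns[t - window:t].unsqueeze(0)              # (1, window, n_assets)
              w = torch.softmax(policy(state), dim=-1).squeeze(0)     # long-only weights
              port_rets.append((w * returns[t + 1]).sum())            # next-period portfolio return
          r = torch.stack(port_rets)
          sharpe = r.mean() / (r.std() + 1e-8)
          loss = -sharpe                                              # maximise risk-adjusted reward
          opt.zero_grad()
          loss.backward()
          opt.step()

      ew = returns[window + 1:].mean(dim=1)                           # equal-weight benchmark
      print("policy Sharpe:", float(sharpe), "equal-weight Sharpe:", float(ew.mean() / ew.std()))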
  10. By: Marcin Chlebus (University of Warsaw, Faculty of Economic Sciences); Artur Nowak (University of Warsaw, Faculty of Economic Sciences)
    Abstract: Algorithmic trading has been a central theme in numerous research papers, combining knowledge from the fields of Finance and Mathematics. This thesis applied basic Technical Analysis indicators to predicting the price movements of three noble metals (Gold, Silver, and Platinum) in the form of multi-class classification. The task was performed using four algorithms: Logistic Regression, k-Nearest Neighbors, Random Forest and XGBoost. The study incorporated feature filtering methods such as Kendall-tau filtering and PCA, as well as five different data frequencies: 1, 5, 10, 15 and 20 trading days. From a total of 40 potential models for each metal, the best one was selected and evaluated using data from the period 2018-2022. The results revealed that models utilizing only Technical Analysis indicators were able to predict price movements to a significant extent, leading to investment strategies that outperformed the market in two out of three cases.
    Keywords: precious metals, algotrading, machine learning, multiclass classification, logistic regression, nearest neighbors, random forest, xgboost
    JEL: C38 C51 C52 C58 G17
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:war:wpaper:2023-13&r=cmp
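    A hedged sketch of the workflow described above: compute a few basic technical indicators, filter them by Kendall-tau correlation with the target, label future price moves into three classes, and fit one of the studied classifiers (a random forest here). Indicator choices, thresholds and the synthetic price series are illustrative assumptions.
      import numpy as np
      import pandas as pd
      from scipy.stats import kendalltau
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(0)
      price = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 1500))))   # synthetic metal price

      feats = pd.DataFrame({
          "mom_10": price.pct_change(10),                                     # 10-day momentum
          "ma_ratio": price / price.rolling(20).mean() - 1,                   # distance from moving average
          "volatility": price.pct_change().rolling(10).std(),
      })
      horizon = 5
      fwd = price.pct_change(horizon).shift(-horizon)                         # forward 5-day return
      label = pd.cut(fwd, bins=[-np.inf, -0.01, 0.01, np.inf], labels=["down", "flat", "up"])

      data = pd.concat([feats, label.rename("y")], axis=1).dropna()
      # Kendall-tau filtering: keep the indicators most associated with the label.
      taus = {c: abs(kendalltau(data[c], data["y"].cat.codes)[0]) for c in feats.columns}
      keep = sorted(taus, key=taus.get, reverse=True)[:2]

      split = int(0.8 * len(data))
      clf = RandomForestClassifier(n_estimators=300, random_state=0)
      clf.fit(data[keep].iloc[:split], data["y"].iloc[:split])
      print("out-of-sample accuracy:", clf.score(data[keep].iloc[split:], data["y"].iloc[split:]))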
  11. By: Pletcher, Scott Nicholas
    Abstract: For most, job performance evaluations are often just another expected part of the employee experience. While these evaluations take on different forms depending on the occupation, the usual objective is to align the employee’s activities with the values and objectives of the greater organization. Of course, pursuing this objective involves a whole host of complex skills and abilities which sometimes pose challenges to leaders and organizations. Automation has long been a favored tool of businesses to help bring consistency, efficiency, and accuracy to various processes, including many human capital management processes. Recent improvements in artificial intelligence (AI) approaches have enabled new options for its use in the HCM space. One such use case is assisting leaders in evaluating their employees’ performance. While using technology to measure and evaluate worker production is not novel, the potential now exists through AI algorithms to delve beyond just piece-meal work and make inferences about an employee’s economic impact, emotional state, aptitude for leadership and the likelihood of leaving. Many organizations are eager to use these tools, potentially saving time and money, and are keen on removing bias or inconsistency humans can introduce in the employee evaluation process. However, these AI models often consist of large, complex neural networks where transparency and explainability are not easily achieved. These black-box systems might do a reasonable job, but what are the implications of faceless algorithms making life-changing decisions for employees?
    Date: 2023–04–28
    URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:29yej&r=cmp
  12. By: Petar Soric (University of Zagreb); Enric Monte (Polytechnic University of Catalunya); Salvador Torra (Riskcenter-IREA, University of Barcelona); Oscar Claveria (AQR-IREA, University of Barcelona)
    Abstract: The present study uses Gaussian Process regression models for generating density forecasts of inflation within the New Keynesian Phillips curve (NKPC) framework. The NKPC is a structural model of inflation dynamics in which we include the output gap, inflation expectations, fuel world prices and money market interest rates as predictors. We estimate country-specific time series models for the 19 Euro Area (EA) countries. As opposed to other machine learning models, Gaussian Process regression allows estimating confidence intervals for the predictions. The performance of the proposed model is assessed in a one-step-ahead forecasting exercise. The results obtained point out the recent inflationary pressures and show the potential of Gaussian Process regression for forecasting purposes.
    Keywords: Machine learning, Gaussian process regression, Time-series analysis, Economic forecasting, Inflation, New Keynesian Phillips curve
    JEL: C45 C51 C53 E31
    Date: 2022–07
    URL: http://d.repec.org/n?u=RePEc:aqr:wpaper:202207&r=cmp
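    A sketch of the core mechanism: a Gaussian process regression of inflation on NKPC-type predictors, whose predictive mean and standard deviation yield a density (interval) forecast rather than a point forecast. The data are simulated and the kernel choice is an assumption.
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(0)
      n = 120                                                   # quarterly sample for one country
      X = np.column_stack([
          rng.normal(0, 1, n),                                  # output gap
          rng.normal(2, 0.5, n),                                # inflation expectations
          rng.normal(0, 1, n),                                  # fuel price growth
          rng.normal(1, 0.5, n),                                # money market rate
      ])
      y = 0.5 * X[:, 0] + 0.8 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 0.3, n)   # inflation

      kernel = 1.0 * RBF(length_scale=np.ones(4)) + WhiteKernel()
      gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X[:-1], y[:-1])

      mean, std = gpr.predict(X[-1:], return_std=True)          # one-step-ahead density forecast
      print(f"forecast {mean[0]:.2f}, 95% band [{mean[0] - 1.96 * std[0]:.2f}, {mean[0] + 1.96 * std[0]:.2f}]")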
  13. By: Zacharia Issa; Blanka Horvath; Maud Lemercier; Cristopher Salvi
    Abstract: Neural SDEs are continuous-time generative models for sequential data. State-of-the-art performance for irregular time series generation has been previously obtained by training these models adversarially as GANs. However, as typical for GAN architectures, training is notoriously unstable, often suffers from mode collapse, and requires specialised techniques such as weight clipping and gradient penalty to mitigate these issues. In this paper, we introduce a novel class of scoring rules on pathspace based on signature kernels and use them as an objective for training Neural SDEs non-adversarially. By showing strict properness of such kernel scores and consistency of the corresponding estimators, we provide existence and uniqueness guarantees for the minimiser. With this formulation, evaluating the generator-discriminator pair amounts to solving a system of linear path-dependent PDEs which allows for memory-efficient adjoint-based backpropagation. Moreover, because the proposed kernel scores are well-defined for paths with values in infinite dimensional spaces of functions, our framework can be easily extended to generate spatiotemporal data. Our procedure permits conditioning on a rich variety of market conditions and significantly outperforms alternative ways of training Neural SDEs on a variety of tasks including the simulation of rough volatility models, the conditional probabilistic forecasts of real-world forex pairs where the conditioning variable is an observed past trajectory, and the mesh-free generation of limit order book dynamics.
    Date: 2023–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2305.16274&r=cmp
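    A hedged sketch of non-adversarial Neural SDE training: paths are generated with an Euler-Maruyama scheme through drift and diffusion networks, and the networks are trained by minimising a kernel score (an MMD) between generated and target paths. A plain RBF kernel on flattened paths stands in for the signature kernel used in the paper; the target process, architectures and sizes are illustrative assumptions.
      import torch
      import torch.nn as nn

      torch.manual_seed(0)
      n_steps, dt, batch = 32, 1 / 32, 128
      drift = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
      diffusion = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1), nn.Softplus())
      opt = torch.optim.Adam(list(drift.parameters()) + list(diffusion.parameters()), lr=1e-3)

      def generate(n):                                      # Euler-Maruyama through the networks
          x, path = torch.zeros(n, 1), [torch.zeros(n, 1)]
          for _ in range(n_steps):
              dw = dt**0.5 * torch.randn(n, 1)
              x = x + drift(x) * dt + diffusion(x) * dw
              path.append(x)
          return torch.cat(path, dim=1)                     # (n, n_steps + 1)

      def rbf_mmd(a, b, bandwidth=1.0):                     # kernel score stand-in
          def k(u, v):
              return torch.exp(-torch.cdist(u, v) ** 2 / (2 * bandwidth**2))
          return k(a, a).mean() + k(b, b).mean() - 2 * k(a, b).mean()

      # Target: AR(1)/Ornstein-Uhlenbeck-like paths as a stand-in "real" dataset.
      real = torch.zeros(512, n_steps + 1)
      for t in range(n_steps):
          real[:, t + 1] = 0.9 * real[:, t] + 0.1 * torch.randn(512)

      for step in range(300):
          fake = generate(batch)
          loss = rbf_mmd(fake, real[torch.randint(0, 512, (batch,))])
          opt.zero_grad()
          loss.backward()
          opt.step()
      print("final kernel score (MMD):", float(loss))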
  14. By: Maximilian Ahrens; Deniz Erdemlioglu; Michael McMahon; Christopher J. Neely; Xiye Yang
    Abstract: Researchers have carefully studied post-meeting central bank communication and have found that it often moves markets, but they have paid less attention to the more frequent central bankers’ speeches. We create a novel dataset of US Federal Reserve speeches and use supervised multimodal natural language processing methods to identify how monetary policy news affects financial volatility and tail risk through implied changes in forecasts of GDP, inflation, and unemployment. We find that news in central bankers’ speeches can help explain volatility and tail risk in both equity and bond markets. We also find that markets attend to these signals more closely during abnormal GDP and inflation regimes. Our results challenge the conventional view that central bank communication primarily resolves uncertainty.
    Keywords: central bank communication; multimodal machine learning; natural language processing; speech analysis; high-frequency data; volatility; tail risk
    JEL: E50 E52 C45 C53 G10 G12 G14
    Date: 2023–05–31
    URL: http://d.repec.org/n?u=RePEc:fip:fedlwp:96270&r=cmp
  15. By: Aurélien Alfonsi (MATHRISK - Mathematical Risk Handling - UPEM - Université Paris-Est Marne-la-Vallée - ENPC - École des Ponts ParisTech - Inria de Paris - Inria - Institut National de Recherche en Informatique et en Automatique, CERMICS - Centre d'Enseignement et de Recherche en Mathématiques et Calcul Scientifique - ENPC - École des Ponts ParisTech); Bernard Lapeyre (MATHRISK - Mathematical Risk Handling - UPEM - Université Paris-Est Marne-la-Vallée - ENPC - École des Ponts ParisTech - Inria de Paris - Inria - Institut National de Recherche en Informatique et en Automatique, CERMICS - Centre d'Enseignement et de Recherche en Mathématiques et Calcul Scientifique - ENPC - École des Ponts ParisTech); Jérôme Lelong (DAO - Données, Apprentissage et Optimisation - LJK - Laboratoire Jean Kuntzmann - Inria - Institut National de Recherche en Informatique et en Automatique - CNRS - Centre National de la Recherche Scientifique - UGA - Université Grenoble Alpes - Grenoble INP - Institut polytechnique de Grenoble - Grenoble Institute of Technology - UGA - Université Grenoble Alpes)
    Abstract: The problem of computing the conditional expectation E[f (Y)|X] with least-square Monte-Carlo is of general importance and has been widely studied. To solve this problem, it is usually assumed that one has as many samples of Y as of X. However, when samples are generated by computer simulation and the conditional law of Y given X can be simulated, it may be relevant to sample K ∈ N values of Y for each sample of X. The present work determines the optimal value of K for a given computational budget, as well as a way to estimate it. The main takeaway is that the computational gain can be all the more important when the computational cost of sampling Y given X is small relative to the computational cost of sampling X. Numerical illustrations of the optimal choice of K and of the computational gain are given for different examples, including one inspired by risk management.
    Keywords: Least-square Monte-Carlo, Conditional expectation estimators, Variance reduction; AMS 2020: 65C05, 91G60
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-03770051&r=cmp
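    An illustrative sketch of the setting studied above: E[f(Y)|X] is estimated by least-squares regression on a polynomial basis, drawing K inner samples of Y for each of N outer samples of X under a fixed budget N*K, and several (N, K) splits are compared. The model, basis and budget are toy assumptions, not the paper's optimal rule.
      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(0)

      def f(y):                                             # functional of interest
          return np.maximum(y, 0.0)

      def lsmc_mse(n_outer, k_inner):
          x = rng.standard_normal(n_outer)                  # outer samples of X
          y = x[:, None] + rng.standard_normal((n_outer, k_inner))   # Y | X ~ N(X, 1), K inner draws
          target = f(y).mean(axis=1)                        # inner-sample average of f(Y)
          basis = np.vander(x, 4)                           # cubic polynomial basis in X
          coef, *_ = np.linalg.lstsq(basis, target, rcond=None)
          xg = np.linspace(-2, 2, 201)                      # compare with the closed form
          truth = xg * norm.cdf(xg) + norm.pdf(xg)          # E[max(Y, 0) | X = x] when Y ~ N(x, 1)
          return np.mean((np.vander(xg, 4) @ coef - truth) ** 2)

      budget = 100_000                                      # fixed total number of Y draws (N * K)
      for k in (1, 10, 100):
          errs = [lsmc_mse(budget // k, k) for _ in range(20)]
          print(f"K={k:3d}, N={budget // k:6d}: mean squared error {np.mean(errs):.5f}")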
  16. By: Liu, Weilong; Zhang, Yong; Liu, Kailong; Quinn, Barry; Yang, Xingyu; Peng, Qiao
    Abstract: With the advent of Big Data, managing large-scale portfolios of thousands of securities is one of the most challenging tasks in the asset management industry. This study uses an evolutionary multi-objective technique to solve large-scale portfolio optimisation problems with both long-term listed and newly listed securities. The future returns of long-term listed securities are defined as random variables whose probability distributions are estimated based on sufficient historical data, while the returns of newly listed securities are defined as uncertain variables whose uncertainty distributions are estimated based on experts' knowledge. Our approach defines security returns as theoretically uncertain random variables and proposes a three-moment optimisation model with practical trading constraints. In this study, a framework for applying arbitrary multi-objective evolutionary algorithms to portfolio optimisation is established, and a novel evolutionary algorithm based on large-scale optimisation techniques is developed to solve the proposed model. The experimental results show that the proposed algorithm outperforms state-of-the-art evolutionary algorithms in large-scale portfolio optimisation.
    Keywords: Evolutionary computations, Portfolio optimisation, Large-scale investment, Uncertain random variable, Multi-objective optimisation
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:zbw:qmsrps:202302&r=cmp
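    A toy sketch of an evolutionary multi-objective portfolio search: a population of long-only weight vectors is mutated and the non-dominated set under the two objectives (maximise expected return, minimise variance) is carried forward. This bare-bones Pareto loop is far simpler than the paper's large-scale algorithm and its uncertain-variable returns; the data and settings are illustrative.
      import numpy as np

      rng = np.random.default_rng(0)
      n_assets, pop_size = 20, 60
      mu = rng.normal(0.05, 0.03, n_assets)                           # expected returns
      A = rng.normal(size=(n_assets, n_assets))
      cov = 0.01 * (A @ A.T / n_assets + 0.1 * np.eye(n_assets))      # covariance matrix

      def objectives(w):                                              # (expected return, variance)
          return w @ mu, w @ cov @ w

      def pareto_front(pop):
          objs = [objectives(w) for w in pop]
          return [w for w, o in zip(pop, objs)
                  if not any(o2[0] >= o[0] and o2[1] <= o[1] and o2 != o for o2 in objs)]

      pop = [rng.dirichlet(np.ones(n_assets)) for _ in range(pop_size)]
      for gen in range(200):
          children = []
          for w in pop:
              c = np.clip(w + rng.normal(0, 0.02, n_assets), 0, None)  # mutate, keep long-only
              children.append(c / c.sum())
          front = pareto_front(pop + children)[:pop_size]              # keep the non-dominated set
          pop = front + [rng.dirichlet(np.ones(n_assets)) for _ in range(pop_size - len(front))]

      for w in pareto_front(pop)[:5]:                                  # a few points on the frontier
          r, v = objectives(w)
          print(f"return {r:.4f}  volatility {np.sqrt(v):.4f}")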
  17. By: Abdulnasser Hatemi-J; Alan Mustafa
    Abstract: This paper introduces a software component created in Visual Basic for Applications (VBA) that can be applied to construct an optimal portfolio using two different methods. The first method is the seminal approach of Markowitz, which finds budget shares by minimizing the variance of the underlying portfolio. The second method is developed by El-Khatib and Hatemi-J; it combines risk and return directly in the optimization problem and yields budget shares that maximize the risk-adjusted return of the portfolio. This approach is consistent with the expectations of rational investors, since these investors consider both risk and return as the fundamental basis for selecting investment assets. Our package also addresses an issue that is usually neglected in the literature: the number of assets that should be included in the portfolio. The common practice is to assume that the number of assets is given exogenously when the portfolio is constructed. However, the current software component constructs all possible combinations, so the investor can determine empirically which portfolio is best among all those considered. The software is user friendly and operated via a graphical user interface. An application is also provided to demonstrate how the software can be used with real time-series data for several assets.
    Date: 2023–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2305.12826&r=cmp
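    A Python (rather than VBA) sketch of the two portfolio rules the package implements: Markowitz minimum-variance weights and weights maximising a risk-adjusted return. Maximising the return-to-volatility ratio is used here as a stand-in for the El-Khatib and Hatemi-J criterion, whose exact form is not given in the abstract; the data are simulated.
      import numpy as np

      rng = np.random.default_rng(0)
      rets = rng.multivariate_normal(mean=[0.08, 0.05, 0.03, 0.06],
                                     cov=np.diag([0.04, 0.02, 0.01, 0.03]), size=500)
      mu, cov = rets.mean(axis=0), np.cov(rets, rowvar=False)
      ones = np.ones(len(mu))

      w_minvar = np.linalg.solve(cov, ones)
      w_minvar /= w_minvar.sum()                      # minimum-variance budget shares

      w_ratio = np.linalg.solve(cov, mu)
      w_ratio /= w_ratio.sum()                        # maximises return per unit of volatility

      for name, w in [("min-variance", w_minvar), ("risk-adjusted", w_ratio)]:
          print(name, np.round(w, 3),
                "return", round(float(mu @ w), 4),
                "volatility", round(float(np.sqrt(w @ cov @ w)), 4))

      # With n candidate assets there are 2**n - 1 non-empty subsets; the package evaluates
      # each subset's optimal portfolio so the best number of assets emerges empirically.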

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.