
nep-cmp New Economics Papers
on Computational Economics
Issue of 2023‒10‒30
twenty-two papers chosen by



  1. PAMS: Platform for Artificial Market Simulations By Masanori Hirano; Ryosuke Takata; Kiyoshi Izumi
  2. The Productivity Effects of Regional Anchors on Local Firms in Swedish Regions between 2007 and 2019 – Evidence from an Expert-informed Machine-Learning Approach By Nilsson, Magnus; Schubert, Torben; Miörner, Johan
  3. Leveraging Deep Learning and Online Source Sentiment for Financial Portfolio Management By Paraskevi Nousi; Loukia Avramelou; Georgios Rodinos; Maria Tzelepi; Theodoros Manousis; Konstantinos Tsampazis; Kyriakos Stefanidis; Dimitris Spanos; Emmanouil Kirtas; Pavlos Tosidis; Avraam Tsantekidis; Nikolaos Passalis; Anastasios Tefas
  4. Long-term effects of early adverse labour market conditions: A Causal Machine Learning approach By Petru Crudu
  5. AI-generated lemons: a sour outlook for content producers? By Howell, Bronwyn E.; Potgieter, Petrus H.
  6. Artificial intelligence for science – adoption trends and future development pathways By Hajkowicz, Stefan; Naughtin, Claire; Sanderson, Conrad; Schleiger, Emma; Karimi, Sarvnaz; Bratanova, Alexandra; Bednarz, Tomasz
  7. The Role of Disability Insurance on the Labour Market Trajectories of Europeans By Agar Brugiavini; Petru Crudu
  8. Artificial intelligence, complementary assets and productivity: evidence from French firms By Flavio Calvino; Luca Fontanelli
  9. Evaluation of Reinforcement Learning Techniques for Trading on a Diverse Portfolio By Ishan S. Khare; Tarun K. Martheswaran; Akshana Dassanaike-Perera; Jonah B. Ezekiel
  10. How will the State think with the assistance of ChatGPT? The case of customs as an example of generative artificial intelligence in public administrations By Thomas Cantens
  11. What do telecommunications policy academics have to fear from GPT-3? By Howell, Bronwyn E.; Potgieter, Petrus H.
  12. Cite-seeing and reviewing: A study on citation bias in peer review. By Stelmakh, Ivan; Rastogi, Charvi; Liu, Ryan; Chawla, Shuchi; Shah, Nihar; Echenique, Federico
  13. Artificial Intelligence and Workers' Well-Being By Giuntella, Osea; König, Johannes; Stella, Luca
  14. Assessing Look-Ahead Bias in Stock Return Predictions Generated By GPT Sentiment Analysis By Paul Glasserman; Caden Lin
  15. A Perturbational Approach for Approximating Heterogeneous Agent Models By Anmol Bhandari; Thomas Bourany; David Evans; Mikhail Golosov
  16. An algorithm for quickly finding long-term equilibria in models of overlapping generations By Zaytsev, Aleksey (Зайцев, Алексей)
  17. Using Large Language Models for Qualitative Analysis can Introduce Serious Bias By Julian Ashwin; Aditya Chhabra; Vijayendra Rao
  18. Sizing Strategies for Algorithmic Trading in Volatile Markets: A Study of Backtesting and Risk Mitigation Analysis By S. M. Masrur Ahmed
  19. They Are Among Us: Pricing Behavior of Algorithms in the Field By Fourberg, Niklas; Marques Magalhaes, Katrin; Wiewiorra, Lukas
  20. Algorithmic Recommendations and Human Discretion By Victoria Angelova; Will S. Dobbie; Crystal Yang
  21. Künstliche Intelligenz, Large Language Models, ChatGPT und die Arbeitswelt der Zukunft By Seemann, Michael
  22. Advancing algorithmic bias management capabilities in AI-driven marketing analytics research By Shahriar Akter; Saida Sultana; Marcello Mariani; Samuel Fosso Wamba; Konstantina Spanaki; Yogesh Dwivedi

  1. By: Masanori Hirano; Ryosuke Takata; Kiyoshi Izumi
    Abstract: This paper presents a new artificial market simulation platform, PAMS: Platform for Artificial Market Simulations. PAMS is developed as a Python-based simulator that integrates easily with deep learning and enables various simulations that users can modify with little effort. In this paper, we demonstrate the effectiveness of PAMS through a study using agents that predict future prices by deep learning.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.10729&r=cmp
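    The following is a minimal, generic sketch of the kind of artificial-market loop such a platform runs. It is illustrative Python only, not the PAMS API; the Agent and Market classes and all parameters are hypothetical.

      import random

      class Agent:
          """Hypothetical zero-intelligence trader: submits a random order near the last price."""
          def __init__(self, agent_id):
              self.agent_id = agent_id

          def decide_order(self, last_price):
              side = random.choice(["buy", "sell"])
              price = last_price * (1 + random.gauss(0, 0.01))
              return side, price

      class Market:
          """Hypothetical single-asset market that trades at the midpoint of crossing quotes."""
          def __init__(self, initial_price=100.0):
              self.price = initial_price

          def step(self, agents):
              orders = [a.decide_order(self.price) for a in agents]
              buys = sorted((p for s, p in orders if s == "buy"), reverse=True)
              sells = sorted(p for s, p in orders if s == "sell")
              if buys and sells and buys[0] >= sells[0]:
                  self.price = 0.5 * (buys[0] + sells[0])
              return self.price

      agents = [Agent(i) for i in range(100)]
      market = Market()
      prices = [market.step(agents) for _ in range(250)]  # one simulated trading year

    In a deep-learning variant, the decide_order rule would be replaced by a trained price-prediction model, which is the kind of experiment the paper describes.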
  2. By: Nilsson, Magnus (CIRCLE, Lund University); Schubert, Torben (CIRCLE, Lund University); Miörner, Johan (CIRCLE, Lund University)
    Abstract: This paper analyses the impact of regional anchors on local firms in Swedish regions. Departing from previous idiographic research, we adopt a nomothetic research design relying on a stepwise expert-informed supervised machine learning approach to identify the population of anchor firms in the Swedish economy between 2007 and 2019. We find support for positive anchor effects on the productivity of other firms in the region. These effects are moderated by regional and anchor conditions. We find that the effects are greater when there are multiple anchors within the same industry and that the effects are larger in economically weaker regions.
    Keywords: anchor-tenant; productivity; machine learning; anchor firms; Sweden
    JEL: D24 O30 R11 R12
    Date: 2023–10–10
    URL: http://d.repec.org/n?u=RePEc:hhs:lucirc:2023_008&r=cmp
  3. By: Paraskevi Nousi; Loukia Avramelou; Georgios Rodinos; Maria Tzelepi; Theodoros Manousis; Konstantinos Tsampazis; Kyriakos Stefanidis; Dimitris Spanos; Emmanouil Kirtas; Pavlos Tosidis; Avraam Tsantekidis; Nikolaos Passalis; Anastasios Tefas
    Abstract: Financial portfolio management describes the task of distributing funds and conducting trading operations on a set of financial assets, such as stocks, index funds, foreign exchange or cryptocurrencies, aiming to maximize the profit while minimizing the loss incurred by said operations. Deep Learning (DL) methods have been consistently excelling at various tasks, and automated financial trading is one of the most complex among them. This paper aims to provide insight into various DL methods for financial trading, under both the supervised and reinforcement learning schemes. We also take into consideration sentiment information regarding the traded assets and discuss and demonstrate its usefulness through corresponding research studies. Finally, we discuss commonly found problems in training such financial agents and equip the reader with the necessary knowledge to avoid these problems and apply the discussed methods in practice.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.16679&r=cmp
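    As a rough illustration of how sentiment can enter such a pipeline, the sketch below blends a simple momentum signal with per-asset sentiment scores and maps the result to long-only portfolio weights. This is generic Python, not any of the surveyed models; portfolio_weights and its parameters are hypothetical.

      import numpy as np

      def portfolio_weights(returns, sentiment, alpha=0.5, temperature=10.0):
          """Blend a momentum signal with sentiment scores and map to long-only weights via softmax.

          returns   : (T, N) array of past asset returns
          sentiment : (N,) array of sentiment scores in [-1, 1], e.g. from a news model
          """
          momentum = returns[-20:].mean(axis=0)      # simple 20-period trend signal
          signal = alpha * momentum + (1 - alpha) * sentiment
          z = np.exp(temperature * (signal - signal.max()))
          return z / z.sum()                         # weights are positive and sum to 1

      rng = np.random.default_rng(0)
      weights = portfolio_weights(rng.normal(0, 0.01, (100, 5)),
                                  np.array([0.2, -0.1, 0.0, 0.5, -0.3]))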
  4. By: Petru Crudu (Department of Economics, University of Venice Ca' Foscari)
    Abstract: This study estimates the long-term causal effects of completing education during adverse labour market conditions, measuring outcomes 35 years post-education. To achieve this, the study combines historical regional unemployment rates with detailed SHARE microdata for European cohorts completing education between 1960 and 1990 in a novel database. A systematic heterogeneity analysis is conducted by leveraging the Causal Forest, a causal machine learning estimator that allows estimates at various aggregation levels. Furthermore, the causal link is validated using an instrumental variable approach. The main findings reveal that a one-percentage-point increase in the unemployment rate at the time of completing education leads to a significant decline in earnings (-5.2%) and self-perceived health (-2.23%) after 35 years. The heterogeneity analysis uncovers that the results are primarily driven by less educated individuals and highlights a permanent disadvantage for women in labour market participation. This study also provides evidence that systematic divergence in life trajectories can be explained by search theory and human capital models. Overall, the research suggests that the consequences of limited post-education opportunities can be permanent, underscoring the importance of identifying vulnerable groups for effective policy interventions.
    Keywords: Long-term Effects, Unemployment, Heterogeneous Effects, GRF
    JEL: J31 I1 J24 I24 E24
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:ven:wpaper:2023:21&r=cmp
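    For readers unfamiliar with the estimator, the sketch below fits a Causal Forest on simulated data using the econml package as a stand-in for the paper's GRF-based implementation; the data-generating process and variable names are hypothetical.

      import numpy as np
      from econml.dml import CausalForestDML

      rng = np.random.default_rng(0)
      n = 2000
      X = rng.normal(size=(n, 3))        # covariates, e.g. education, gender, region
      T = rng.normal(size=n)             # treatment: unemployment rate at labour-market entry
      Y = -0.05 * T * (1 + 0.5 * X[:, 0]) + rng.normal(scale=0.1, size=n)  # outcome, e.g. log earnings

      cf = CausalForestDML(discrete_treatment=False, random_state=0)
      cf.fit(Y, T, X=X)
      cate = cf.effect(X)                # heterogeneous effects, can be averaged over subgroups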
  5. By: Howell, Bronwyn E.; Potgieter, Petrus H.
    Abstract: Artificial intelligence (AI) techniques for natural language processing have made dramatic advances in the past few years (Lin 2023). Thunström & Steingrimsson (2022) demonstrated that present-generation AI text engines are even able to write low-level scientific pieces about themselves, with relatively minimal prompting, whereas Goyal et al. (2022) show how good general-purpose AI language engines are at summarizing news articles. There is, however, a downside to all of this progress. Bontridder & Poullet (2021) point out how inexpensive it has become to generate deepfake videos and synthetic voice recordings. Kreps et al. (2022) look at AI-generated text and find that "individuals are largely incapable of distinguishing between AI- and human-generated text". Illia et al. (2023) point to three ethical challenges raised by automated text generation that is difficult to distinguish from human writing: 1. facilitation of mass manipulation and disinformation; 2. a lowest-denominator problem, where a mass of low-quality but incredibly cheap text crowds out higher-quality discourse; and 3. the suppression of direct communication between stakeholders and an attendant drop in levels of trust. Our focus is mainly on (2), and we examine the institutional consequences that may arise in two specific sectors already facing challenges from AI-generated text: scientific journals and social media platforms. Drawing on the body of learning from institutional economics regarding responses to uncertainties in the veracity of information, the paper also proposes some elementary remedies that may prove helpful in navigating the anticipated challenges. Distinguishing genuinely human-authored content from machine-generated text will likely be more easily done using a credible signal of the authenticity of the content creator. This is a variation of Akerlof's (1970) famous "market for lemons" problem. The paper uses an inductive approach to examine sections of the content industry that are likely to be particularly relevant to "market for lemons" substitution, referring to the framework of Giannakas & Fulton (2020).
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:zbw:itse23:277971&r=cmp
  6. By: Hajkowicz, Stefan; Naughtin, Claire; Sanderson, Conrad; Schleiger, Emma; Karimi, Sarvnaz; Bratanova, Alexandra; Bednarz, Tomasz
    Abstract: This paper aims to inform researchers and research organisations within the spheres of government, industry, community and academia seeking to develop improved AI capabilities. The paper is focused on the use of AI for science, and it describes AI adoption trends in the physical, natural and social science fields. Using a bibliometric analysis of peer-reviewed publishing trends over 63 years (1960–2022), the paper demonstrates a surge in AI adoption across all fields over the past several years. The paper examines future development pathways and explores implications for science organisations.
    Keywords: Artificial intelligence; machine learning; science; AI capabilities; bibliometric analysis; Australia
    JEL: O32 O33 O38
    Date: 2022–11
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:115464&r=cmp
  7. By: Agar Brugiavini (Department of Economics, University of Venice Ca' Foscari; Institute for Fiscal Studies); Petru Crudu (Department of Economics, University of Venice Ca' Foscari)
    Abstract: This work documents the role played by disability insurance, typically part of a wider public pension provision package, in labour market trajectories and retirement decisions. We first employ a machine learning approach to estimate a Transition Probability Model able to uncover the most likely labour market histories and then evaluate the effects of policy reforms, including reforms to the eligibility for disability insurance benefits. The main contribution is the introduction of disability insurance programs within a framework that models the entire life course of older Europeans. This requires the detailed administrative eligibility criteria prevailing in each of the 11 countries considered from 1970 to 2017. Results show that the disability route and early retirement are substitutes. In addition, tightening the eligibility rules of disability programs crowds out disabled workers whose reductions in working capacity are correctly assessed towards other compensatory schemes (e.g., unemployment benefits or early pensions) in which working is not expected. By contrast, individuals with over-assessed reductions in working capacity are the most responsive to disability policy restrictions. In conclusion, efficient disability assessment procedures are crucial for incentivising labour market participation without hurting the individuals most in need.
    Keywords: Retirement, Disability, Path Dependence, Simulation
    JEL: J14 J26 I38 H55
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:ven:wpaper:2023:20&r=cmp
  8. By: Flavio Calvino; Luca Fontanelli
    Abstract: In this work we characterise French firms using artificial intelligence (AI) and explore the link between AI use and productivity. We distinguish AI users that source AI from external providers (AI buyers) from those developing their own AI systems (AI developers). AI buyers tend to be larger than other firms, while AI developers are also younger. The share of firms using AI is highest in the ICT sector, which exhibits a particularly high share of developers. Complementary assets, including skills, digital capabilities and infrastructure, play a key role in AI use, with AI buyers and developers leveraging different types of human capital. Overall, AI users tend to be more productive; however, this appears largely related to the self-selection of more productive and digital-intensive firms into AI use. This is not the case for AI developers, for which the positive link between AI use and productivity remains evident beyond selection, suggesting a positive effect of AI on their productivity.
    Keywords: Technology Diffusion; Artificial Intelligence; Digitalisation; Productivity.
    Date: 2023–10–13
    URL: http://d.repec.org/n?u=RePEc:ssa:lemwps:2023/35&r=cmp
  9. By: Ishan S. Khare; Tarun K. Martheswaran; Akshana Dassanaike-Perera; Jonah B. Ezekiel
    Abstract: This work seeks to answer key research questions regarding the viability of reinforcement learning over the S&P 500 index. The on-policy techniques of Value Iteration (VI) and State-action-reward-state-action (SARSA) are implemented along with the off-policy technique of Q-Learning. The models are trained and tested on a dataset comprising multiple years of stock market data from 2000-2023. The analysis presents the results and findings from training and testing the models using two different time periods: one including the COVID-19 pandemic years and one excluding them. The results indicate that including market data from the COVID-19 period in the training dataset leads to superior performance compared to the baseline strategies. During testing, the on-policy approaches (VI and SARSA) outperform Q-learning, highlighting the influence of bias-variance tradeoff and the generalization capabilities of simpler policies. However, it is noted that the performance of Q-learning may vary depending on the stability of future market conditions. Future work is suggested, including experiments with updated Q-learning policies during testing and trading diverse individual stocks. Additionally, the exploration of alternative economic indicators for training the models is proposed.
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.03202&r=cmp
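    The contrast between the on-policy and off-policy families comes down to the update rule. A minimal tabular sketch (generic Python, not the authors' implementation; the state and action discretization is hypothetical):

      import numpy as np

      n_states, n_actions = 10, 3            # e.g. discretized return regimes x {long, flat, short}
      alpha, gamma, eps = 0.1, 0.99, 0.1
      Q = np.zeros((n_states, n_actions))

      def eps_greedy(state, rng):
          # behaviour policy shared by both methods during training
          return int(rng.integers(n_actions)) if rng.random() < eps else int(Q[state].argmax())

      def sarsa_update(s, a, r, s_next, a_next):
          # on-policy: bootstrap with the action actually taken in the next state
          Q[s, a] += alpha * (r + gamma * Q[s_next, a_next] - Q[s, a])

      def q_learning_update(s, a, r, s_next):
          # off-policy: bootstrap with the greedy action in the next state
          Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])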
  10. By: Thomas Cantens (WCO - World Customs Organization, CERDI - Centre d'Études et de Recherches sur le Développement International - IRD - Institut de Recherche pour le Développement - CNRS - Centre National de la Recherche Scientifique - UCA - Université Clermont Auvergne)
    Abstract: The paper discusses the implications of generative artificial intelligence (GAI) in public administrations and the specific questions it raises compared to specialized and "numerical" AI, based on the example of Customs and the experience of the World Customs Organization in the field of AI and data strategy implementation in Member countries. At the organizational level, the advantages of GAI include cost reduction through internalization of tasks, uniformity and correctness of administrative language, access to broad knowledge, and potential paradigm shifts in fraud detection. At this level, the paper highlights three facts that distinguish GAI from specialized AI: i) so far, GAI is less associated with decision-making processes in public administrations than specialized AI; ii) the risks usually associated with GAI are often similar to those previously associated with specialized AI, but, while certain risks remain pertinent, others lose significance due to the constraints imposed by the inherent limitations of GAI technology itself when implemented in public administrations; iii) the training data corpus for GAI becomes a strategic asset for public administrations, perhaps more than the algorithms themselves, which was not the case for specialized AI. At the individual level, the paper emphasizes the "language-centric" nature of GAI in contrast to the "number-centric" AI systems implemented within public administrations up until now. It discusses the risks that civil servants are replaced by, or become subservient to, machines by exploring the transformative impact of GAI on the intellectual production of the State. The paper pleads for the development of critical vigilance and critical thinking as specific skills for civil servants, who are highly specialized and will have to think with the assistance of a machine that is eclectic by nature.
    Keywords: Generative artificial intelligence, Language, Critical thinking, Customs, Public administrations
    Date: 2023–09–29
    URL: http://d.repec.org/n?u=RePEc:hal:cdiwps:hal-04233370&r=cmp
  11. By: Howell, Bronwyn E.; Potgieter, Petrus H.
    Abstract: Artificial intelligence (AI) tools such as ChatGPT and GPT-3 have shot to prominence recently (Lin 2023), as dramatic advances have shown them to be capable of writing plausible output that is difficult to distinguish from human-authored content. Unsurprisingly, this has led to concerns about their use by students in tertiary education contexts (Swiecki et al. 2022) and to their being banned in some school districts in the United States (e.g. Rosenblatt 2023; Clarridge 2023) and from at least one top-ranking international university (e.g. Reuters 2023). There are legitimate reasons for such fears, as it is difficult to differentiate students' own written work presented for assessment from that produced by the AI tools. Successfully embedding these tools into educational contexts requires an understanding of what they are and what they can and cannot do. Despite their powerful modelling and description capabilities, these tools have (at least currently) significant issues and limitations (Zhang & Li 2021). As telecommunications policy academics charged with research-led teaching and with supervising both undergraduate and research students, we need to be certain that our graduates are capable of understanding the complexities of current issues in this incredibly dynamic field and of applying their learning appropriately in industry and policy environments. We must be reasonably certain that the grades we assign are based on the students' own work and understanding. To this end, we engaged in an experiment with the current (Q1 2023) version of the AI tool to assess how well it coped with questions on a core and current topic in telecommunications policy education: the effects of access regulation (local loop unbundling) on broadband investment and uptake. We found that while the outputs were well-written and appeared plausible, there were significant systematic errors which, once academics are aware of them, can be exploited to avoid the risk of AI use severely undermining the credibility of the assessments we make of students' written work, at least for the time being and in respect of the version of chatbot software we used.
    Keywords: Artificial Intelligence (AI), ChatGPT, GPT-3, Academia
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:zbw:itse23:277972&r=cmp
  12. By: Stelmakh, Ivan; Rastogi, Charvi; Liu, Ryan; Chawla, Shuchi; Shah, Nihar; Echenique, Federico
    Abstract: Citations play an important role in researchers' careers as a key factor in the evaluation of scientific impact. Many anecdotes advise authors to exploit this fact and cite prospective reviewers to try to obtain a more positive evaluation for their submission. In this work, we investigate whether such a citation bias actually exists: Does the citation of a reviewer's own work in a submission cause them to be positively biased towards the submission? In conjunction with the review process of two flagship conferences in machine learning and algorithmic economics, we execute an observational study to test for citation bias in peer review. In our analysis, we carefully account for various confounding factors such as paper quality and reviewer expertise, and apply different modeling techniques to alleviate concerns regarding model mismatch. Overall, our analysis involves 1,314 papers and 1,717 reviewers and detects citation bias in both venues we consider. In terms of the effect size, by citing a reviewer's work, a submission has a non-trivial chance of getting a higher score from the reviewer: the expected increase in the score is approximately 0.23 on a 5-point Likert item. For reference, a one-point increase of a score by a single reviewer improves the position of a submission by 11% on average.
    Keywords: Humans, Prospective Studies, Peer Review, Bias, Research Personnel, Machine Learning, Research
    Date: 2023–01–01
    URL: http://d.repec.org/n?u=RePEc:cdl:econwp:qt3883h8j1&r=cmp
  13. By: Giuntella, Osea (University of Pittsburgh); König, Johannes (DIW Berlin); Stella, Luca (Free University of Berlin)
    Abstract: This study explores the relationship between artificial intelligence (AI) and workers' well-being and mental health using longitudinal survey data from Germany (2000-2020). We construct a measure of individual exposure to AI technology based on the occupation in which workers in our sample were first employed and explore an event study design and a difference-in-differences approach to compare AI-exposed and non-exposed workers. Before AI became widely available, there is no evidence of differential pre-trends in workers' well-being and concerns about their economic futures. Since 2015, however, with the increasing adoption of AI in firms across Germany, we find that AI-exposed workers have become less satisfied with their life and job and more concerned about job security and their personal economic situation. However, we find no evidence of a significant impact of AI on workers' mental health, anxiety, or depression.
    Keywords: artificial intelligence, future of work, well-being, mental health
    JEL: I10 J28 O30
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp16485&r=cmp
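    A stripped-down version of the comparison described above can be written as a difference-in-differences regression. The sketch below uses statsmodels with toy data and omits the worker fixed effects and controls that a full specification would include.

      import pandas as pd
      import statsmodels.formula.api as smf

      # Toy worker-year panel: 'exposed' marks AI-exposed first occupations,
      # 'post' marks survey years from 2015 onward.
      df = pd.DataFrame({
          "satisfaction": [7, 7, 6, 7, 8, 7, 8, 8],
          "exposed":      [1, 1, 1, 1, 0, 0, 0, 0],
          "post":         [0, 1, 0, 1, 0, 1, 0, 1],
      })
      did = smf.ols("satisfaction ~ exposed * post", data=df).fit()
      print(did.params["exposed:post"])   # difference-in-differences estimate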
  14. By: Paul Glasserman; Caden Lin
    Abstract: Large language models (LLMs), including ChatGPT, can extract profitable trading signals from the sentiment in news text. However, backtesting such strategies poses a challenge because LLMs are trained on many years of data, and backtesting produces biased results if the training and backtesting periods overlap. This bias can take two forms: a look-ahead bias, in which the LLM may have specific knowledge of the stock returns that followed a news article, and a distraction effect, in which general knowledge of the companies named interferes with the measurement of a text's sentiment. We investigate these sources of bias through trading strategies driven by the sentiment of financial news headlines. We compare trading performance based on the original headlines with de-biased strategies in which we remove the relevant company's identifiers from the text. In-sample (within the LLM training window), we find, surprisingly, that the anonymized headlines outperform, indicating that the distraction effect has a greater impact than look-ahead bias. This tendency is particularly strong for larger companies, about which we expect an LLM to have greater general knowledge. Out-of-sample, look-ahead bias is not a concern but distraction remains possible. Our proposed anonymization procedure is therefore potentially useful in out-of-sample implementation, as well as for de-biased backtesting.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.17322&r=cmp
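    The anonymization step itself amounts to simple string substitution. The sketch below is a generic illustration, not the authors' code; score_sentiment() stands in for a hypothetical LLM-based sentiment call.

      import re

      def anonymize_headline(headline, company_names, ticker):
          """Replace company identifiers with a neutral token before sentiment scoring."""
          text = headline
          for name in company_names:
              text = re.sub(re.escape(name), "the company", text, flags=re.IGNORECASE)
          return re.sub(r"\b" + re.escape(ticker) + r"\b", "the company", text)

      headline = "Acme Corp (ACME) beats earnings expectations, shares surge"
      clean = anonymize_headline(headline, ["Acme Corp", "Acme"], "ACME")
      # signal = score_sentiment(clean)   # hypothetical LLM call returning -1, 0 or +1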
  15. By: Anmol Bhandari; Thomas Bourany; David Evans; Mikhail Golosov
    Abstract: We develop a perturbational technique to approximate equilibria of a wide class of discrete-time dynamic stochastic general equilibrium heterogeneous-agent models with complex state spaces, including multi-dimensional distributions of endogenous variables. We show that approximating the policy functions and the stochastic process that governs the distributional state to any order is equivalent to solving small systems of linear equations that characterize values of certain directional derivatives. We analytically derive the coefficients of these linear systems and show that they satisfy simple recursive relations, making their numerical implementation quick and efficient. Compared to existing state-of-the-art techniques, our method is faster in constructing first-order approximations and extends to higher orders, capturing the effects of risk that are ignored by many current methods. We illustrate how to apply our method to a broad set of questions such as the impacts of first- and second-moment shocks, the welfare effects of macroeconomic risk and stabilization policies, endogenous household portfolio formation, and transition dynamics in heterogeneous-agent general equilibrium settings.
    JEL: E3
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:31744&r=cmp
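    As standard background (not the paper's own derivation), a first-order perturbation of a generic policy function g around the deterministic steady state takes the form

      g(x, \sigma) \approx g(\bar{x}, 0) + g_x(\bar{x}, 0)\,(x - \bar{x}) + g_\sigma(\bar{x}, 0)\,\sigma, \qquad g_\sigma(\bar{x}, 0) = 0,

    where x is the state and \sigma scales the shocks; the vanishing first-order risk term (certainty equivalence) is why higher-order terms are needed to capture the effects of risk the abstract refers to.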
  16. By: Zaytsev, Aleksey (Зайцев, Алексей) (The Russian Presidential Academy of National Economy and Public Administration)
    Abstract: In this paper, we consider the problem of finding long-term equilibria in models of overlapping generations with a large number of periods. It is often possible to reduce the solution of a model to finding the roots of a system of equations. Some OLG models, after the introduction of additional variables, can be reduced to the form of a system of polynomials. Thus, one can represent the set of long-term equilibria as an algebraic variety. This makes it possible to use computational methods from algebraic geometry in economic problems. In particular, the method using Groebner bases has become popular. However, this approach can be applied effectively only when there are few variables. We propose an algorithm for finding solutions to the system and use it to investigate the presence of multiple solutions in realistically calibrated models with long-lived agents.
    Keywords: OLG models, plurality of equilibria, Groebner bases, system of polynomials
    JEL: C02 C32 D11 D58
    Date: 2023–05–07
    URL: http://d.repec.org/n?u=RePEc:rnp:wpaper:w20220230&r=cmp
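    The computational tool can be illustrated on a toy polynomial system using sympy; the system below merely stands in for a much larger set of OLG equilibrium conditions.

      from sympy import symbols, groebner, solve

      c1, c2 = symbols("c1 c2", positive=True)
      eqs = [c1**2 + c2 - 3, c1 * c2 - 1]       # toy stand-in for equilibrium conditions

      G = groebner(eqs, c1, c2, order="lex")    # lexicographic basis triangularizes the system
      roots = solve(list(G), [c1, c2])          # back-substitution on the triangular system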
  17. By: Julian Ashwin; Aditya Chhabra; Vijayendra Rao
    Abstract: Large Language Models (LLMs) are quickly becoming ubiquitous, but the implications for social science research are not yet well understood. This paper asks whether LLMs can help us analyse large-N qualitative data from open-ended interviews, with an application to transcripts of interviews with Rohingya refugees in Cox's Bazaar, Bangladesh. We find that a great deal of caution is needed in using LLMs to annotate text, as there is a risk of introducing biases that can lead to misleading inferences. We mean bias here in the technical sense: the errors that LLMs make in annotating interview transcripts are not random with respect to the characteristics of the interview subjects. Training simpler supervised models on high-quality human annotations with flexible coding leads to less measurement error and bias than LLM annotations. Therefore, given that some high-quality annotations are necessary in order to assess whether an LLM introduces bias, we argue that it is probably preferable to train a bespoke model on these annotations than to use an LLM for annotation.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.17147&r=cmp
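    A minimal stand-in for the "simpler supervised model" route is a TF-IDF plus logistic-regression classifier trained on human-coded excerpts; the sketch below uses toy placeholder data, not the study's transcripts.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      texts = ["we could not find work", "the clinic helped my children",
               "we had no income for months", "the doctor finally saw us"]
      labels = ["livelihood", "health", "livelihood", "health"]   # human-coded themes

      model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
      model.fit(texts, labels)          # in practice: fit on a held-out, human-annotated subset
      print(model.predict(["there was no work in the camp"]))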
  18. By: S. M. Masrur Ahmed
    Abstract: Backtesting is a form of financial risk evaluation that helps analyze how a trading algorithm would have performed on historical market data. High-volatility regimes have always been critical situations that create challenges for algorithmic traders. The paper investigates different position-sizing models in financial trading and backtests them in high-volatility situations to understand how sizing models can lower Value at Risk (VaR) during crisis events. It thereby aims to show how exposure during high-volatility crisis events can be controlled using short and long position sizes. The paper also investigates AR, ARIMA, LSTM and GARCH models on stock and ETF data.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.09094&r=cmp
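    One common building block for such an exercise is a GARCH(1, 1) volatility forecast feeding a volatility-targeted position size. The sketch below uses the arch package on toy data and is a generic illustration, not the paper's sizing models.

      import numpy as np
      from arch import arch_model

      rng = np.random.default_rng(0)
      returns = 100 * rng.normal(0, 0.01, 1000)    # toy % returns; real use: stock or ETF series

      res = arch_model(returns, vol="GARCH", p=1, q=1).fit(disp="off")
      sigma = np.sqrt(res.forecast(horizon=1).variance.values[-1, 0])   # next-day volatility (%)
      var_95 = 1.65 * sigma                        # one-sided 95% parametric VaR, in %

      capital, risk_budget = 100_000, 0.01         # risk at most 1% of capital per day
      position = capital * risk_budget / (var_95 / 100)   # notional sized to the VaR budget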
  19. By: Fourberg, Niklas; Marques Magalhaes, Katrin; Wiewiorra, Lukas
    Abstract: We analyze pricing patterns and price level effects of algorithms in the market segments for OTC antiallergics and painkillers in Germany. Based on a novel hourly dataset which spans over four months and contains over 10 million single observations, we produce the following results. First, price levels are substantially higher for antiallergics compared to the painkiller segment, which seems to reflect a lower price elasticity for antiallergics. Second, we find evidence that this exploitation of demand characteristics is heterogeneous with respect to the pricing technology: retailers with a more advanced pricing technology establish even higher price premiums for antiallergics than retailers with a less advanced technology. Third, retailers with more advanced pricing technology post lower prices, which contradicts previous findings from simulations but is in line with empirical findings when many firms compete in a market. Lastly, our data suggest that pricing algorithms take the web traffic of retailers' online shops into account as demand-side feedback when choosing prices. Our results stress the importance of a careful policy approach towards pricing algorithms and highlight new areas of risk when multiple players employ the same pricing technology.
    Keywords: Algorithmic pricing, Collusion, Artificial intelligence
    JEL: C13 D83 L13 L41
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:zbw:itse23:277958&r=cmp
  20. By: Victoria Angelova; Will S. Dobbie; Crystal Yang
    Abstract: Human decision-makers frequently override the recommendations generated by predictive algorithms, but it is unclear whether these discretionary overrides add valuable private information or reintroduce human biases and mistakes. We develop new quasi-experimental tools to measure the impact of human discretion over an algorithm on the accuracy of decisions, even when the outcome of interest is only selectively observed, in the context of bail decisions. We find that 90% of the judges in our setting underperform the algorithm when they make a discretionary override, with most making override decisions that are no better than random. Yet the remaining 10% of judges outperform the algorithm in terms of both accuracy and fairness when they make a discretionary override. We provide suggestive evidence on the behavior underlying these differences in judge performance, showing that the high-performing judges are more likely to use relevant private information and are less likely to overreact to highly salient events compared to the low-performing judges.
    JEL: C01 D8 K40
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:31747&r=cmp
  21. By: Seemann, Michael
    Abstract: The rapid development of artificial intelligence systems such as ChatGPT, which can generate texts that are convincing in both content and language, has triggered an intense debate. The question is what effects such systems will have on processes and ways of working, for example in knowledge and creative professions. This literature review assesses the current state of the debate. It introduces the technical foundation, the so-called "Large Language Models", and finally examines what effects on the world of work are to be expected.
    Keywords: AI, ChatGPT, knowledge work, creative professions
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:zbw:hbsfof:304&r=cmp
  22. By: Shahriar Akter (University of Wollongong [Australia]); Saida Sultana (University of Wollongong [Australia]); Marcello Mariani (Henley Business School [University of Reading] - UOR - University of Reading; University of Bologna); Samuel Fosso Wamba (TBS - Toulouse Business School); Konstantina Spanaki (Audencia Business School); Yogesh Dwivedi (School of Management [Swansea] - Swansea University, SIBM - Symbiosis Institute of Business Management Pune)
    Date: 2023–10
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-04194438&r=cmp

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.