
nep-cmp New Economics Papers
on Computational Economics
Issue of 2022‒10‒31
28 papers chosen by



  1. Asset Pricing and Deep Learning By Chen Zhang
  2. Chaotic Hedging with Iterated Integrals and Neural Networks By Ariel Neufeld; Philipp Schmocker
  3. Universal Quantum Speedup for Branch-and-Bound, Branch-and-Cut, and Tree-Search Algorithms By Shouvanik Chakrabarti; Pierre Minssen; Romina Yalovetzky; Marco Pistoia
  4. Finding Needles in Haystacks: Multiple-Imputation Record Linkage Using Machine Learning By John M. Abowd; Joelle Hillary Abramowitz; Margaret Catherine Levenstein; Kristin McCue; Dhiren Patki; Trivellore Raghunathan; Ann Michelle Rodgers; Matthew D. Shapiro; Nada Wasi; Dawn Zinsser
  5. How communication makes the difference between a cartel and tacit collusion: a machine learning approach By Maximilian Andres; Lisa Bruttel; Jana Friedrichsen
  6. Optimal consumption-investment choices under wealth-driven risk aversion By Ruoxin Xiao
  7. Using Knowledge Distillation to improve interpretable models in a retail banking context By Maxime Biehler; Mohamed Guermazi; Célim Starck
  8. Portfolio optimization with discrete simulated annealing By Álvaro Rubio-García; Juan José García-Ripoll; Diego Porras
  9. AI-Assisted Discovery of Quantitative and Formal Models in Social Science By Julia Balla; Sihao Huang; Owen Dugan; Rumen Dangovski; Marin Soljacic
  10. Proxying Economic Activity with Daytime Satellite Imagery: Filling Data Gaps across Time and Space By Lehnert, Patrick; Niederberger, Michael; Backes-Gellner, Uschi; Bettinger, Eric
  11. Detecting asset price bubbles using deep learning By Francesca Biagini; Lukas Gonon; Andrea Mazzon; Thilo Meyer-Brandis
  12. The Community Explorer: Bringing Populations' Diversity into Policy Discussions, One County at a Time By Lopez, Claude; Roh, Hyeongyul; Switek, Maggie
  13. Computing Longitudinal Moments for Heterogeneous Agent Models By Sergio Ocampo; Baxter Robinson
  14. Small Area Estimation of Monetary Poverty in Mexico Using Satellite Imagery and Machine Learning By Newhouse, David Locke; Merfeld, Joshua David; Ramakrishnan, Anusha Pudugramam; Swartz, Tom; Lahiri, Partha
  15. Counterfactual Reconciliation: Incorporating Aggregation Constraints For More Accurate Causal Effect Estimates By Cengiz, Doruk; Tekgüç, Hasan
  16. Automatic Identification and Classification of Share Buybacks and their Effect on Short-, Mid- and Long-Term Returns By Thilo Reintjes
  17. The effectiveness of Minimum Income schemes in the EU By Vanda Almeida; Silvia De Poli; Adrián Hernández
  18. Inefficient at Any Level: A Comparative Efficiency Argument for Complete Elimination of Property Transfer Duties and Insurance Taxes By Jason Nassios; James Giesecke
  19. The Distributional Impact of Taxes and Social Spending in Bhutan: An Application with Limited Income Data By Baquero, Juan Pablo; Gao, Jia; Kim, Yeon Soo
  20. A Customizable Microsimulation Tool to Analyze Distributional Effects of Country Fiscal Policies By Jia Gao; Gabriela Inchauste
  21. Carbon Tax and its Impact on South African Households By Jessika A. Bohlmann; Roula Inglesi-Lotz; Heinrich R. Bohlmann
  22. Intensity-Based Rebating of Emission Pricing Revenues By Böhringer, Christoph; Fischer, Carolyn; Rivers, Nicholas
  23. Revealing Unobservables by Deep Learning: Generative Element Extraction Networks (GEEN) By Yingyao Hu; Yang Liu; Jiaxiong Yao
  24. Taxing Households' Energy Consumption in the EU: the Tax Burden and its Redistributive Effect By AMORES Antonio F; MAIER Sofia; RICCI Mattia
  25. Fiscal policy, macroeconomic performance and industry structure in a small open economy By Pål Boug; Thomas von Brasch; Ådne Cappelen; Roger Hammersland; Håvard Hungnes; Dag Kolsrud; Julia Skretting; Birger Strøm; Trond C. Vigtel
  26. Private Exploitation of the North-Western Sahara Aquifer System By Amine Chekireb; Julio Goncalves; Hubert Stahn; Agnes Tomini
  27. Economic and Food Security Impacts of Agricultural Input Reduction Under the European Union Green Deal’s Farm to Fork and Biodiversity Strategies By Beckman, Jayson; Ivanic, Maros; Jelliffe, Jeremy L; Baquedano, Felix G; Scott, Sara G
  28. When Is There Enough Data to Create a Global Statistic? By Mahler, Daniel Gerszon; Serajuddin, Umar; Maeda, Hiroko

  1. By: Chen Zhang (SenseTime Research)
    Abstract: Traditional machine learning methods have been widely studied in financial innovation. My study focuses on the application of deep learning methods to asset pricing, especially to risk premia measurement. I investigate various deep learning architectures, all of which take the same set of predictive signals (firm characteristics, systematic risks and macroeconomic variables). I demonstrate the strong performance of several state-of-the-art (SOTA) deep learning methods and find that RNNs with memory mechanisms and attention deliver the best predictive performance. Furthermore, I demonstrate large economic gains to investors who use deep learning forecasts. The results of my comparative experiments highlight the importance of domain knowledge and financial theory when designing deep learning models. Return prediction also poses new challenges for deep learning: the time-varying distribution of returns creates a distribution-shift problem that is central to financial time-series prediction. I demonstrate that deep learning methods can improve the measurement of asset risk premia and, as deep learning research advances, can continue to inform the study of the financial mechanisms underlying asset pricing. I also propose a promising research approach: learning from data and uncovering the underlying economic mechanisms through explainable artificial intelligence (AI) methods. My findings not only justify the value of deep learning in the growing fintech sector, but also highlight its prospects and advantages over traditional machine learning methods.
    Date: 2022–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2209.12014&r=
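    A minimal, hypothetical sketch of the kind of recurrent predictor discussed in the entry above (not the paper's architecture; the data, the PremiumLSTM name and all dimensions are placeholders): an LSTM maps a short history of firm-level, risk and macro signals to a one-period-ahead risk premium forecast.

      import torch
      import torch.nn as nn

      torch.manual_seed(0)
      n_firms, seq_len, n_signals = 256, 12, 20            # hypothetical panel dimensions
      signals = torch.randn(n_firms, seq_len, n_signals)   # firm characteristics, risk and macro signals
      excess_ret = torch.randn(n_firms, 1)                 # next-period excess returns (targets)

      class PremiumLSTM(nn.Module):
          def __init__(self, n_signals, hidden=32):
              super().__init__()
              self.lstm = nn.LSTM(n_signals, hidden, batch_first=True)
              self.head = nn.Linear(hidden, 1)

          def forward(self, x):
              out, _ = self.lstm(x)               # out: (batch, seq_len, hidden)
              return self.head(out[:, -1, :])     # forecast from the last hidden state

      model = PremiumLSTM(n_signals)
      opt = torch.optim.Adam(model.parameters(), lr=1e-3)
      loss_fn = nn.MSELoss()
      for epoch in range(50):                     # toy in-sample training loop
          opt.zero_grad()
          loss = loss_fn(model(signals), excess_ret)
          loss.backward()
          opt.step()
      print(f"final in-sample MSE: {loss.item():.4f}")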
  2. By: Ariel Neufeld; Philipp Schmocker
    Abstract: In this paper, we extend the Wiener-Ito chaos decomposition to the class of diffusion processes whose drift and diffusion coefficients are of linear growth. By omitting the orthogonality in the chaos expansion, we are able to show that every $p$-integrable functional, for $p \in [1,\infty)$, can be represented as a sum of iterated integrals of the underlying process. Using a truncated sum of this expansion and (possibly random) neural networks for the integrands, whose parameters are learned in a machine learning setting, we show that every financial derivative can be approximated arbitrarily well in the $L^p$-sense. Moreover, the hedging strategy of the approximating financial derivative can be computed in closed form.
    Date: 2022–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2209.10166&r=
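    As a schematic illustration in hypothetical notation (not the paper's statement): the truncated expansion with neural-network integrands takes the form $F \approx c_0 + \sum_{n=1}^{N} \int_0^T \int_0^{t_n} \cdots \int_0^{t_2} \varphi^{(n)}_{\theta}(t_1,\dots,t_n)\, dX_{t_1} \cdots dX_{t_n}$, where $c_0$ is a constant, each integrand $\varphi^{(n)}_{\theta}$ is a (possibly random) neural network, and the parameters $\theta$ are trained so that the truncated sum approximates the payoff $F$ in the $L^p$-sense as $N$ and the network sizes grow.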
  3. By: Shouvanik Chakrabarti; Pierre Minssen; Romina Yalovetzky; Marco Pistoia
    Abstract: Mixed Integer Programs (MIPs) model many optimization problems of interest in Computer Science, Operations Research, and Financial Engineering. Solving MIPs is NP-Hard in general, but several solvers have found success in obtaining near-optimal solutions for problems of intermediate size. Branch-and-Cut algorithms, which combine Branch-and-Bound logic with cutting-plane routines, are at the core of modern MIP solvers. Montanaro proposed a quantum algorithm with a near-quadratic speedup compared to classical Branch-and-Bound algorithms in the worst case, when every optimal solution is desired. In practice, however, a near-optimal solution is satisfactory, and by leveraging tree-search heuristics to search only a portion of the solution tree, classical algorithms can perform much better than the worst-case guarantee. In this paper, we propose a quantum algorithm, Incremental-Quantum-Branch-and-Bound, with universal near-quadratic speedup over classical Branch-and-Bound algorithms for every input, i.e., if classical Branch-and-Bound has complexity $Q$ on an instance that leads to solution depth $d$, Incremental-Quantum-Branch-and-Bound offers the same guarantees with a complexity of $\tilde{O}(\sqrt{Q}d)$. Our results are valid for a wide variety of search heuristics, including depth-based, cost-based, and $A^{\ast}$ heuristics. Universal speedups are also obtained for Branch-and-Cut as well as heuristic tree search. Our algorithms are directly comparable to commercial MIP solvers, and guarantee near quadratic speedup whenever $Q \gg d$. We use numerical simulation to verify that $Q \gg d$ for typical instances of the Sherrington-Kirkpatrick model, Maximum Independent Set, and Portfolio Optimization; as well as to extrapolate the dependence of $Q$ on input size parameters. This allows us to project the typical performance of our quantum algorithms for these important problems.
    Date: 2022–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2210.03210&r=
  4. By: John M. Abowd; Joelle Hillary Abramowitz; Margaret Catherine Levenstein; Kristin McCue; Dhiren Patki; Trivellore Raghunathan; Ann Michelle Rodgers; Matthew D. Shapiro; Nada Wasi; Dawn Zinsser
    Abstract: This paper considers the problem of record linkage between a household-level survey and an establishment-level frame in the absence of unique identifiers. Linkage between frames in this setting is challenging because the distribution of employment across establishments is highly skewed. To address these difficulties, this paper develops a probabilistic record linkage methodology that combines machine learning (ML) with multiple imputation (MI). This ML-MI methodology is applied to link survey respondents in the Health and Retirement Study to their workplaces in the Census Business Register. The linked data reveal new evidence that non-sampling errors in household survey data are correlated with respondents’ workplace characteristics.
    Keywords: administrative data; machine learning; multiple imputation; probabilistic record linkage; survey data
    JEL: C13 C18 C81
    Date: 2021–10–01
    URL: http://d.repec.org/n?u=RePEc:fip:fedbwp:94891&r=
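    A minimal sketch of the general ML-MI idea described above (not the authors' implementation; the features, classifier choice and all names are hypothetical): a classifier scores candidate (survey, register) pairs, and multiple imputed links are drawn from the predicted match probabilities.

      import numpy as np
      from sklearn.ensemble import GradientBoostingClassifier

      rng = np.random.default_rng(0)

      # Hypothetical training data: comparison features for labeled candidate pairs
      # and an indicator of whether each pair is a true match.
      X_train = rng.normal(size=(500, 4))
      y_train = (X_train.sum(axis=1) + rng.normal(size=500) > 0).astype(int)
      clf = GradientBoostingClassifier().fit(X_train, y_train)

      # Candidate establishments for one survey respondent: score, then draw M imputed links.
      X_candidates = rng.normal(size=(10, 4))
      p_match = clf.predict_proba(X_candidates)[:, 1]
      p_match = p_match / p_match.sum()          # normalize over this respondent's candidates

      M = 5                                      # number of imputations
      imputed_links = rng.choice(len(p_match), size=M, p=p_match)
      print("imputed establishment indices:", imputed_links)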
  5. By: Maximilian Andres (University of Potsdam); Lisa Bruttel (University of Potsdam); Jana Friedrichsen (Humboldt-Universität zu Berlin, WZB Berlin Social Science Center, DIW Berlin)
    Abstract: This paper sheds new light on the role of communication for cartel formation. Using machine learning to evaluate free-form chat communication among firms in a laboratory experiment, we identify typical communication patterns for both explicit cartel formation and indirect attempts to collude tacitly. We document that firms are less likely to communicate explicitly about price fixing and more likely to use indirect messages when sanctioning institutions are present. This effect of sanctions on communication reinforces the direct cartel-deterring effect of sanctions as collusion is more difficult to reach and sustain without an explicit agreement. Indirect messages have no, or even a negative, effect on prices.
    Keywords: cartel, collusion, communication, machine learning, experiment
    JEL: C92 D43 L41
    Date: 2022–10
    URL: http://d.repec.org/n?u=RePEc:pot:cepadp:53&r=
  6. By: Ruoxin Xiao
    Abstract: CRRA utility, in which the risk aversion coefficient is a constant, is commonly used in economic models, but wealth-driven risk aversion rarely appears in investors' portfolio problems. This paper focuses on numerical solutions, obtained with neural networks, to the optimal consumption-investment choice under wealth-driven risk aversion. A jump-diffusion model is used to simulate the artificial data needed for training the neural network. The WDRA model describes the investment problem and has two controls to be optimized: the share of wealth invested in the risky assets and consumption over the investment horizon. Under this model, an LSTM neural network with a single objective function is implemented and shows promising results.
    Date: 2022–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2210.00950&r=
  7. By: Maxime Biehler; Mohamed Guermazi; Célim Starck
    Abstract: This article sets forth a review of knowledge distillation techniques with a focus on their applicability to retail banking contexts. Predictive machine learning algorithms used in banking environments, especially in risk and control functions, are generally subject to regulatory and technical constraints limiting their complexity. Knowledge distillation offers the opportunity to improve the performance of simple models without burdening their application, using the results of other, generally more complex and better-performing, models. Parsing recent advances in this field, we highlight three main approaches: Soft Targets, Sample Selection and Data Augmentation. We assess the relevance of a subset of such techniques by applying them to open source datasets, before putting them to the test on the use cases of BPCE, a major French institution in the retail banking sector. As such, we demonstrate the potential of knowledge distillation to improve the performance of these models without altering their form and simplicity.
    Date: 2022–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2209.15496&r=
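    A minimal sketch of the Soft Targets approach named above (synthetic data, not BPCE's models; the teacher and student choices are assumptions): a complex teacher produces class probabilities, and a simple, interpretable student is fit to those soft targets instead of the hard labels.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.tree import DecisionTreeRegressor

      X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

      teacher = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
      soft_targets = teacher.predict_proba(X)[:, 1]      # teacher's predicted probability

      # Interpretable student: a shallow tree regressed on the teacher's soft targets.
      student = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, soft_targets)

      teacher_acc = (teacher.predict(X) == y).mean()
      student_acc = ((student.predict(X) > 0.5).astype(int) == y).mean()
      print(f"teacher accuracy {teacher_acc:.3f}, distilled student accuracy {student_acc:.3f}")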
  8. By: Álvaro Rubio-García; Juan José García-Ripoll; Diego Porras
    Abstract: Portfolio optimization is an important process in finance that consists in finding the optimal asset allocation that maximizes expected returns while minimizing risk. When assets are allocated in discrete units, this is a combinatorial optimization problem that can be addressed by quantum and quantum-inspired algorithms. In this work we present an integer simulated annealing method to find optimal portfolios in the presence of discretized convex and non-convex cost functions. Our algorithm can deal with large portfolios with hundreds of assets. We introduce a performance metric, the time to target, based on a lower bound to the cost function obtained with the continuous relaxation of the combinatorial optimization problem. This metric allows us to quantify the time required to achieve a solution with a given quality. We carry out numerical experiments and we benchmark the algorithm in two situations: (i) Monte Carlo instances are started at random, and (ii) the algorithm is warm-started with an initial instance close to the continuous relaxation of the problem. We find that in the case of warm-starting with convex cost functions, the time to target does not grow with the size of the optimization problem, so discretized versions of convex portfolio optimization problems are not hard to solve using classical resources. We have applied our method to the problem of re-balancing in the presence of non-convex transaction costs, and we have found that our algorithm can efficiently minimize those terms.
    Date: 2022–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2210.00807&r=
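    A minimal sketch of simulated annealing over integer holdings (toy data and cooling schedule, not the authors' code): propose moving one unit of budget between two assets and accept with the Metropolis rule on a discretized mean-variance cost.

      import numpy as np

      rng = np.random.default_rng(1)
      n_assets, budget = 20, 100                  # hypothetical problem size
      mu = rng.normal(0.05, 0.02, n_assets)       # expected returns
      A = rng.normal(size=(n_assets, n_assets))
      Sigma = A @ A.T / n_assets                  # covariance matrix
      gamma = 5.0                                 # risk-aversion weight

      def cost(n):                                # n: integer units per asset, fixed total budget
          w = n / budget
          return gamma * w @ Sigma @ w - mu @ w

      n = rng.multinomial(budget, np.ones(n_assets) / n_assets)   # random feasible start
      best, best_cost = n.copy(), cost(n)
      n_steps = 20000
      for step in range(n_steps):
          T = 1e-2 * (1 - step / n_steps) + 1e-6                  # linear cooling schedule
          i, j = rng.integers(n_assets, size=2)
          if i == j or n[i] == 0:
              continue
          trial = n.copy()
          trial[i] -= 1                                           # move one unit from asset i to j
          trial[j] += 1
          delta = cost(trial) - cost(n)
          if delta < 0 or rng.random() < np.exp(-delta / T):
              n = trial
              if cost(n) < best_cost:
                  best, best_cost = n.copy(), cost(n)
      print("best discrete allocation cost:", best_cost)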
  9. By: Julia Balla; Sihao Huang; Owen Dugan; Rumen Dangovski; Marin Soljacic
    Abstract: In social science, formal and quantitative models, such as ones describing economic growth and collective action, are used to formulate mechanistic explanations, provide predictions, and uncover questions about observed phenomena. Here, we demonstrate the use of a machine learning system to aid the discovery of symbolic models that capture nonlinear and dynamical relationships in social science datasets. By extending neuro-symbolic methods to find compact functions and differential equations in noisy and longitudinal data, we show that our system can be used to discover interpretable models from real-world data in economics and sociology. Augmenting existing workflows with symbolic regression can help uncover novel relationships and explore counterfactual models during the scientific process. We propose that this AI-assisted framework can bridge parametric and non-parametric models commonly employed in social science research by systematically exploring the space of nonlinear models and enabling fine-grained control over expressivity and interpretability.
    Date: 2022–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2210.00563&r=
  10. By: Lehnert, Patrick (University of Zurich); Niederberger, Michael (University of Zurich); Backes-Gellner, Uschi (University of Zurich); Bettinger, Eric (Stanford University)
    Abstract: This paper develops a novel procedure for proxying economic activity with daytime satellite imagery across time periods and spatial units, for which reliable data on economic activity are otherwise not available. In developing this unique proxy, we apply machine-learning techniques to a historical time series of daytime satellite imagery dating back to 1984. Compared to satellite data on night light intensity, another common economic proxy, our proxy more precisely predicts economic activity at smaller regional levels and over longer time horizons. We demonstrate our measure's usefulness for the example of Germany, where East German data on economic activity are unavailable for detailed regional levels and historical time series. Our procedure is generalizable to any region in the world, and it has great potential for analyzing historical economic developments, evaluating local policy reforms, and controlling for economic activity at highly disaggregated regional levels in econometric applications.
    Keywords: daytime satellite imagery, Landsat, machine learning, economic activity, land cover
    JEL: E01 E23 O18 R11 R14
    Date: 2022–09
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp15555&r=
  11. By: Francesca Biagini; Lukas Gonon; Andrea Mazzon; Thilo Meyer-Brandis
    Abstract: In this paper we employ deep learning techniques to detect financial asset bubbles by using observed call option prices. The proposed algorithm is widely applicable and model-independent. We test the accuracy of our methodology in numerical experiments within a wide range of models and apply it to market data of tech stocks in order to assess if asset price bubbles are present. In addition, we provide a theoretical foundation of our approach in the framework of local volatility models. To this purpose, we give a new necessary and sufficient condition for a process with time-dependent local volatility function to be a strict local martingale.
    Date: 2022–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2210.01726&r=
  12. By: Lopez, Claude (Milken Institute); Roh, Hyeongyul (Milken Institute); Switek, Maggie (Milken Institute)
    Abstract: The Community Explorer provides new insights and data on the characteristics and diversity of the US population. Using machine learning methods, it synthesizes the information of 751 variables across 3,142 counties from the US Census Bureau's American Community Survey into 17 communities. Each one of these communities has a distinctive profile that combines demographic, socio-economic, and cultural behavioral determinants while not being geographically bounded. We encourage policy makers and researchers to make use of the results of our analysis. The Community Explorer dashboard provides the location of these profiles, allowing for targeted deployment of community interventions and, more broadly, increasing the understanding of socioeconomic gaps within the US.
    Keywords: diversity, communities, development, economic well-being
    JEL: D31 J08 J10 R10
    Date: 2022–09
    URL: http://d.repec.org/n?u=RePEc:iza:izapps:pp190&r=
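    A minimal sketch of the kind of unsupervised grouping described above (random placeholder data; the Milken Institute's actual method may differ): standardize county-level ACS variables and cluster the 3,142 counties into 17 community profiles.

      import numpy as np
      from sklearn.preprocessing import StandardScaler
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      n_counties, n_vars = 3142, 751                 # dimensions quoted in the abstract
      acs = rng.normal(size=(n_counties, n_vars))    # placeholder for ACS county variables

      X = StandardScaler().fit_transform(acs)
      labels = KMeans(n_clusters=17, n_init=10, random_state=0).fit_predict(X)

      # Each community's profile: the mean of the standardized variables over its counties.
      profiles = np.array([X[labels == k].mean(axis=0) for k in range(17)])
      print("counties per community:", np.bincount(labels))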
  13. By: Sergio Ocampo (University of Western Ontario); Baxter Robinson (University of Western Ontario)
    Abstract: Computing population moments for heterogeneous agent models is a necessary step for their estimation and evaluation. Computation based on Monte Carlo methods is usually time- and resource-consuming because it involves simulating a large sample of agents and potentially tracking them over time. We argue in favor of an alternative method for computing both cross-sectional and longitudinal moments that exploits the endogenous Markov transition function that defines the stationary distribution of agents in the model. The method relies on following the distribution of populations of interest by iterating forward the Markov transition function rather than focusing on a simulated sample of agents. Approximations of this function are readily available from standard solution methods of dynamic programming problems. The method provides precise estimates of moments like top-wealth shares, auto-correlations, transition rates, age-profiles, or coefficients of population regressions at lower time- and resource-costs compared to Monte Carlo based methods.
    Keywords: Computational Methods, Heterogeneous Agents, Simulation.
    JEL: C6 E2
    Date: 2022
    URL: http://d.repec.org/n?u=RePEc:uwo:uwowop:202210&r=
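    A minimal sketch of the approach described above (a toy three-state transition matrix, not the authors' model): iterate the Markov transition function forward for the population of interest instead of simulating a panel of agents.

      import numpy as np

      # Hypothetical wealth process on 3 states: P[i, j] = Pr(next state = j | current state = i)
      P = np.array([[0.90, 0.09, 0.01],
                    [0.10, 0.80, 0.10],
                    [0.01, 0.09, 0.90]])
      wealth = np.array([1.0, 5.0, 25.0])            # wealth level in each state

      # Cross-sectional moment: stationary distribution (left eigenvector for eigenvalue 1).
      vals, vecs = np.linalg.eig(P.T)
      pi = np.real(vecs[:, np.argmax(np.real(vals))])
      pi = pi / pi.sum()
      top_share = pi[2] * wealth[2] / (pi @ wealth)  # share of wealth held in the top state

      # Longitudinal moment: follow today's top-state population forward 5 periods.
      dist = np.array([0.0, 0.0, 1.0])
      for _ in range(5):
          dist = dist @ P
      print(f"top-state wealth share {top_share:.3f}, "
            f"P(top today still top after 5 periods) {dist[2]:.3f}")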
  14. By: Newhouse,David Locke; Merfeld,Joshua David; Ramakrishnan,Anusha Pudugramam; Swartz,Tom; Lahiri,Partha
    Abstract: Estimates of poverty are an important input into policy formulation in developing countries. Theaccurate measurement of poverty rates is therefore a first-order problem for development policy. This paper showsthat combining satellite imagery with household surveys can improve the precision and accuracy of estimated povertyrates in Mexican municipalities, a level at which the survey is not considered representative. It also shows that ahousehold-level model outperforms other common small area estimation methods. However, poverty estimates in 2015derived from geospatial data remain less accurate than 2010 estimates derived from household census data. These resultsindicate that the incorporation of household survey data and widely available satellite imagery can improve on existingpoverty estimates in developing countries when census data are old or when patterns of poverty are changing rapidly,even for small subgroups.
    Date: 2022–09–14
    URL: http://d.repec.org/n?u=RePEc:wbk:wbrwps:10175&r=
  15. By: Cengiz, Doruk; Tekgüç, Hasan
    Abstract: We extend the scope of the forecast reconciliation literature and use its tools in the context of causal inference. Researchers are interested in both the average treatment effect on the treated and treatment effect heterogeneity. We show that ex post correction of the counterfactual estimates using the aggregation constraints that stem from the hierarchical or grouped structure of the data is likely to yield more accurate estimates. Building on the geometric interpretation of forecast reconciliation, we provide additional insights into the exact factors determining the size of the accuracy improvement due to the reconciliation. We experiment with U.S. GDP and employment data. We find that the reconciled treatment effect estimates tend to be closer to the truth than the original (base) counterfactual estimates even in cases where the aggregation constraints are non-linear. Consistent with our theoretical expectations, improvement is greater when machine learning methods are used.
    Keywords: Forecast Reconciliation; Non-linear Constraints; Causal Machine Learning Methods; Counterfactual Estimation; Difference-in-Differences
    JEL: C53
    Date: 2022–06
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:114478&r=
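    A minimal sketch of ex post reconciliation with a linear aggregation constraint (hypothetical numbers; the paper also handles non-linear constraints): project the base counterfactual estimates onto the subspace where state-level effects sum to the national effect, using the standard least-squares reconciliation projection $\tilde{y} = S(S'S)^{-1}S'\hat{y}$.

      import numpy as np

      S = np.array([[1, 1, 1],     # aggregation matrix: national = sum of three states
                    [1, 0, 0],
                    [0, 1, 0],
                    [0, 0, 1]], dtype=float)

      # Base (unreconciled) counterfactual estimates: [national, state1, state2, state3].
      y_hat = np.array([10.0, 2.5, 3.0, 5.5])     # note 2.5 + 3.0 + 5.5 = 11.0 != 10.0

      P = S @ np.linalg.inv(S.T @ S) @ S.T        # projection onto the coherent subspace
      y_tilde = P @ y_hat

      print("reconciled estimates:", np.round(y_tilde, 3))
      print("coherent:", bool(np.isclose(y_tilde[0], y_tilde[1:].sum())))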
  16. By: Thilo Reintjes
    Abstract: This thesis investigates share buybacks, specifically share buyback announcements. It addresses how to recognize such announcements, the excess return of share buybacks, and the prediction of returns after a share buyback announcement. We illustrate two NLP approaches for the automated detection of share buyback announcements. Even with very small amounts of training data, we can achieve an accuracy of up to 90%. This thesis uses these NLP methods to generate a large dataset of 57,155 share buyback announcements. Analyzing this dataset, the thesis shows that most companies that announce a share buyback underperform the MSCI World. A minority of companies, however, significantly outperform the MSCI World, and this outperformance leads to a net gain when averaging across all companies. If the benchmark index is adjusted for the respective size of the companies, the average outperformance disappears and the majority underperforms even more. However, companies that announce a share buyback with a volume of at least 1% of their market cap deliver, on average, a significant outperformance, even against an adjusted benchmark. Companies that announce share buybacks in times of crisis also emerge better than the overall market. Additionally, the generated dataset was used to train 72 machine learning models. Many of the resulting strategies achieve an accuracy of up to 77% and generate substantial excess returns. A variety of performance indicators improve across six different time frames, and a significant outperformance was identified. This was achieved by training several models for different tasks and time frames and by combining them, fusing weak learners into one strong learner.
    Date: 2022–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2209.12863&r=
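    A minimal sketch of an NLP announcement detector in the spirit of the thesis (toy snippets, not its data or its two exact approaches): TF-IDF features and a linear classifier separating buyback announcements from other corporate news.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      texts = [
          "The board has authorized the repurchase of up to 5% of outstanding shares",
          "The company announces a new share buyback program of USD 200 million",
          "Quarterly dividend declared payable to shareholders of record",
          "The company reports third quarter earnings above analyst expectations",
      ]
      labels = [1, 1, 0, 0]            # 1 = buyback announcement, 0 = other news

      clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
      clf.fit(texts, labels)

      # Likely classified as a buyback announcement given the shared vocabulary.
      print(clf.predict(["The board approved a buyback of common shares"]))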
  17. By: Vanda Almeida (European Commission - JRC); Silvia De Poli (European Commission – JRC); Adrián Hernández (European Commission – JRC)
    Abstract: Minimum Income (MI) schemes are essential to alleviate poverty and guarantee a last-resort safety net to households with insufficient resources. Assessing the effectiveness of MI schemes in poverty reduction is challenging. Studies based on survey microdata are usually subject to a bias because households with very low incomes tend to underreport benefit receipts. Studies based on microsimulation models tend to overestimate these benefits, mainly due to a lack of data on take-up and non-income eligibility conditions. In this paper, we attempt to tackle these challenges to provide an integrated and consistent evaluation of the effectiveness of MI schemes in the European Union (EU). We develop a simple method that calibrates the simulation of MI schemes in the microsimulation model EUROMOD to obtain a new ‘closer to reality’ baseline simulation of each EU Member State’s scheme. We then use this corrected baseline to evaluate existing MI schemes, investigating their degree of coverage and adequacy, their poverty-alleviating effects and their overall cost. Finally, we explore the effects of possible (theoretical) reforms, implementing sequential changes to the levels of coverage and adequacy, towards eradicating extreme poverty. The main takeaways are that the contribution of MI support to poverty elimination is still rather limited in some EU countries and that action could be taken to increase coverage and adequacy at a relatively low financial cost.
    Keywords: minimum income, coverage, adequacy, poverty, microsimulation, EUROMOD
    JEL: H53 I32 I38
    Date: 2022–09
    URL: http://d.repec.org/n?u=RePEc:ipt:taxref:202209&r=
  18. By: Jason Nassios; James Giesecke
    Abstract: Harberger (1962) coined the term excess burden to emphasise that taxes impose costs in addition to the revenue they collect. Reviews of Australia's tax system have used point estimates of the excess burden for a series of Australian taxes, among other measures, to motivate and prioritise the nation's reform agenda. In this paper we commence the work needed to elucidate what the optimal tax mix in Australia might look like under alternative revenue raising efforts, by studying how the excess burden of four Australian taxes changes as we alter their tax-specific revenue-to-GDP ratios. This is achieved via simulation with a large-scale CGE model with high levels of tax-specific detail. We show that property transfer duties and insurance taxes are highly inefficient even at low levels, strengthening the case for their complete replacement with more efficient taxes.
    Keywords: CGE modelling, Immovable property tax, Recurrent property tax, Insurance tax, Value added tax, Personal income tax, Excess burden
    JEL: C68 E62 H2 H71 R38
    Date: 2022–10
    URL: http://d.repec.org/n?u=RePEc:cop:wpaper:g-337&r=
  19. By: Baquero, Juan Pablo; Gao, Jia; Kim, Yeon Soo
    Abstract: This paper analyzes the distributional impact of the tax-benefit system in Bhutan. It makes two main contributions: first, this is the first substantive study of this kind in Bhutan, and second, due to limited information on incomes in the household survey, a consumption-based model is combined with Mincer-type earnings to derive estimates of incomes. The results show that the combined impact of government taxes and social spending is to reduce inequality and slightly increase poverty as of 2017. The increase in poverty is mainly due to the burden of indirect taxes and social contributions that are not offset by other transfers. Households in the bottom 80 percent are net receivers of fiscal interventions, with fiscal benefits primarily occurring through education and health benefits, which are both progressive. Most households did not pay much into the system as of 2017, as personal income taxes have a high exemption threshold and sales taxes only apply to a selected number of goods that are mainly consumed by richer households. Due to the lack of direct transfers, the net cash position is negative for poor households, although the magnitude is very small. Simulations suggest that the recent personal income tax reduction leads direct taxes to be slightly more progressive; however, the inequality-reducing impact is dampened. The goods and services tax is expected to increase indirect taxes for households across the distribution and is less progressive than the sales tax. This could lead to a temporary increase in poverty, which could be offset through direct transfers financed by the additional revenues.
    Date: 2022–09–22
    URL: http://d.repec.org/n?u=RePEc:wbk:wbrwps:10190&r=
  20. By: Jia Gao; Gabriela Inchauste
    Keywords: Public Sector Development - Public Sector Economics; Macroeconomics and Economic Growth - Taxation & Subsidies; Poverty Reduction - Inequality
    Date: 2020–12
    URL: http://d.repec.org/n?u=RePEc:wbk:wboper:35189&r=
  21. By: Jessika A. Bohlmann (Department of Economics, University of Pretoria, Private Bag X20, Hatfield 0028, South Africa); Roula Inglesi-Lotz (Department of Economics, University of Pretoria, Private Bag X20, Hatfield 0028, South Africa); Heinrich R. Bohlmann (Department of Economics, University of Pretoria, Private Bag X20, Hatfield 0028, South Africa)
    Abstract: This paper focuses on evaluating the economy-wide impact of a carbon tax as a policy mechanism designed to reduce GHG emissions in South Africa, with a particular focus on households. Impacts of the carbon tax are evaluated across different households, including low-income households, who are often said to be the least responsible for climate change. A dynamic CGE model of the South African economy that includes detailed tax information allowing for accurate measurement of the effects of imposing a carbon tax is used to conduct the modelling simulations. Results show that the effects of the carbon tax on economic growth are minimised when the revenue collected is recycled back into the economy. Additionally, low-income households are shown to be more affected by the carbon tax implementation compared to high-income households. The results from this study confirm that policymakers need to be careful in introducing new taxes on goods that form a large part of the consumption bundle of vulnerable households, such as energy, and have mitigation policies ready to support such households.
    Keywords: CGE Modelling, Carbon Tax, Households, South Africa
    Date: 2022–10
    URL: http://d.repec.org/n?u=RePEc:pre:wpaper:202248&r=
  22. By: Böhringer, Christoph; Fischer, Carolyn; Rivers, Nicholas
    Abstract: Carbon pricing policies worldwide are increasingly coupled with direct or indirect subsidies where emissions pricing revenues are rebated to the regulated entities. This paper analyzes the incentives created by two novel forms of rebating that reward additional emission intensity reductions: one given in proportion to output (intensity-based output rebating) and another that rebates a share of emission payments (intensity-based emission rebating). These forms are contrasted with output-based rebating, abatement-based rebating, and lump-sum rebating. Given the same emission price, intensity-based output rebating incentivizes the most intensity reductions, while abatement-based rebating incentivizes the most output reductions, and output-based rebating puts the least pressure on output (and emissions); intensity-based emission rebating lies in between these, by implicitly subsidizing emissions while incentivizing intensity reductions. The paper supplements partial equilibrium theoretical analysis with numerical simulations to assess the performance of different mechanisms in a multisector general equilibrium model that accounts for economywide market interactions.
    Date: 2022–05–31
    URL: http://d.repec.org/n?u=RePEc:wbk:wbrwps:10069&r=
  23. By: Yingyao Hu; Yang Liu; Jiaxiong Yao
    Abstract: Latent variable models are crucial in scientific research, where a key variable, such as effort, ability, or belief, is unobserved in the sample but needs to be identified. This paper proposes a novel method for estimating realizations of a latent variable $X^*$ in a random sample that contains its multiple measurements. With the key assumption that the measurements are independent conditional on $X^*$, we provide sufficient conditions under which realizations of $X^*$ in the sample are locally unique in a class of deviations, which allows us to identify realizations of $X^*$. To the best of our knowledge, this paper is the first to provide such observation-level identification. We then use the Kullback-Leibler distance between the two probability densities with and without the conditional independence as the loss function to train a Generative Element Extraction Network (GEEN) that maps from the observed measurements to realizations of $X^*$ in the sample. The simulation results show that the proposed estimator works well and that the estimated values are highly correlated with realizations of $X^*$. Our estimator can be applied to a large class of latent variable models, and we expect it to change how researchers deal with latent variables.
    Date: 2022–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2210.01300&r=
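    A schematic rendering of the loss described above, in hypothetical notation (the paper's exact formulation may differ): writing $\hat{X}^* = G_\theta(X_1,\dots,X_k)$ for the network's extracted element, the objective is the Kullback-Leibler divergence between the joint density of the measurements and the density implied by conditional independence given $\hat{X}^*$, $\mathcal{L}(\theta) = D_{\mathrm{KL}}\big(f(X_1,\dots,X_k,\hat{X}^*)\,\|\,f(\hat{X}^*)\prod_{j=1}^{k} f(X_j \mid \hat{X}^*)\big)$, so that minimizing $\mathcal{L}$ pushes $G_\theta$ toward an element that renders the measurements conditionally independent.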
  24. By: AMORES Antonio F (European Commission - JRC); MAIER Sofia (European Commission - JRC); RICCI Mattia (European Commission - JRC)
    Abstract: The taxation of energy consumption is a central topic in the current policy debate of the European Union. While raising energy taxation is part of the European Commission's strategy for achieving its 2030/50 climate targets, the ongoing dramatic increases in the price of energy products are raising calls for reducing their taxation. Therefore, a close consideration of the incidence and redistributive effects of energy taxation is crucial to design compensatory measures and to ensure support for the Green transition. In this paper, we employ the EUROMOD microsimulation model to estimate the burden and the redistributive impact of energy consumption taxation on households across Member States. In doing so, we break down the role played by differences in consumption patterns, rates of taxation and their regressivity. We find that countries where energy taxation is the highest are often not the ones where its incidence on household income is the strongest. At the same time, the highest inequality impact is not always taking place in countries with the most regressive energy taxation. We therefore stress the importance of considering, not only the level of energy consumption taxation, but also its regressivity and its incidence over household income when assessing its inequality cost.
    Keywords: Energy consumption taxation, regressivity, redistributive effects, EUROMOD, Europe
    Date: 2022–09
    URL: http://d.repec.org/n?u=RePEc:ipt:taxref:202206&r=
  25. By: Pål Boug; Thomas von Brasch; Ådne Cappelen; Roger Hammersland; Håvard Hungnes; Dag Kolsrud; Julia Skretting; Birger Strøm; Trond C. Vigtel (Statistics Norway)
    Abstract: We analyse how fiscal policy affects both the macroeconomy and the industry structure, using a multi-sector macroeconomic model of the Norwegian economy with an inflation targeting monetary policy. Our simulations show that the government spending multiplier in the case of a permanent expansionary fiscal policy coupled with a Taylor-type interest rate rule is around 1 over a ten-year horizon. The corresponding labour tax multiplier is about 0.5. These multipliers are somewhat larger in the case of a transitory fiscal stimulus. The government spending multiplier, in the case of either a permanent or a transitory fiscal stimulus, is considerably larger than 1 when monetary policy is made accommodative by keeping the interest rate fixed. Our simulations also show that the industry structure is substantially affected by an expansionary fiscal policy, as value added in the non-traded goods sector increases at the expense of value added in the traded goods sector. The contraction of activity in the traded goods sector increases when monetary tightening accompanies the fiscal stimulus. Hence, we find that such a policy mix is likely to produce significant de-industrialisation in a small open economy with inflation targeting.
    Keywords: Fiscal Policy; Macroeconomy; Industry Structure; Model Simulations
    JEL: E17 E52 E62
    Date: 2022–07
    URL: http://d.repec.org/n?u=RePEc:ssb:dispap:984&r=
  26. By: Amine Chekireb (AMSE - Aix-Marseille Sciences Economiques - EHESS - École des hautes études en sciences sociales - AMU - Aix Marseille Université - ECM - École Centrale de Marseille - CNRS - Centre National de la Recherche Scientifique, CEREGE - Centre européen de recherche et d'enseignement des géosciences de l'environnement - IRD - Institut de Recherche pour le Développement - AMU - Aix Marseille Université - CdF (institution) - Collège de France - INSU - CNRS - Institut national des sciences de l'Univers - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement); Julio Goncalves (CEREGE - Centre européen de recherche et d'enseignement des géosciences de l'environnement - IRD - Institut de Recherche pour le Développement - AMU - Aix Marseille Université - CdF (institution) - Collège de France - INSU - CNRS - Institut national des sciences de l'Univers - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement); Hubert Stahn (AMSE - Aix-Marseille Sciences Economiques - EHESS - École des hautes études en sciences sociales - AMU - Aix Marseille Université - ECM - École Centrale de Marseille - CNRS - Centre National de la Recherche Scientifique); Agnes Tomini (AMSE - Aix-Marseille Sciences Economiques - EHESS - École des hautes études en sciences sociales - AMU - Aix Marseille Université - ECM - École Centrale de Marseille - CNRS - Centre National de la Recherche Scientifique)
    Abstract: We formulate a hydro-economic model of the North-Western Sahara Aquifer System (NWSAS) to assess the effects of intensive pumping on the groundwater stock and examine the subsequent consequences of aquifer depletion. This large system comprises multi-layer reservoirs with vertical exchanges, all exploited under open access. We first develop a theoretical model to account for relevant features of the NWSAS by introducing, in the standard Gisser-Sanchez model, a non-stationary demand and quadratic stock-dependent cost functions. Second, we calibrate parameter values using data from the NWSAS over 1955–2000. We finally simulate the time evolution of the aquifer system under an open-access regime. We specifically examine the time trajectories of the piezometric levels in the two reservoirs, the natural outlets, and the modification of water balances. We find that the natural outlets of the two reservoirs might run completely dry before 2050.
    Keywords: Hydro-economic model, Private pumping, Multi-aquifer system, Groundwater-dependent ecosystems, Semi-arid region, Simulation
    Date: 2022
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-03779321&r=
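    As a schematic single-cell illustration of the ingredients named above (my notation; the paper's multi-layer NWSAS specification is richer): water-table dynamics $AS\,\dot{H}_t = R - (1-\alpha)W_t$, with area-storativity $AS$, recharge $R$, return-flow coefficient $\alpha$ and pumping $W_t$; a quadratic stock-dependent extraction cost $C(H_t,W_t) = (c_0 + c_1 H_t + c_2 H_t^2)\,W_t$; and a non-stationary demand whose scale grows over time, $W_t = g(t)\,p_t^{-\varepsilon}$. Under open access, pumpers behave myopically, so extraction in each period is set where the water price equals the current marginal extraction cost rather than along the optimal-control path.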
  27. By: Beckman, Jayson; Ivanic, Maros; Jelliffe, Jeremy L; Baquedano, Felix G; Scott, Sara G
    Abstract: The European Commission has proposed strategies that would impose restrictions on EU agriculture through targeted reductions in the use of land, antimicrobials, fertilizers, and pesticides. We perform a range of policy simulations to examine the economic implications of several of the proposed targets, finding reductions in global agricultural production, higher prices, less trade, and greater global food insecurity.
    Keywords: Food Security and Poverty, International Development
    Date: 2020–11
    URL: http://d.repec.org/n?u=RePEc:ags:uerseb:327231&r=
  28. By: Mahler, Daniel Gerszon; Serajuddin, Umar; Maeda, Hiroko
    Abstract: To monitor progress toward global goals such as the Sustainable Development Goals, global statistics are needed. Yet cross-country data sets are rarely truly global, creating a trade-off for producers of global statistics: the lower is the data coverage threshold for disseminating global statistics, the more statistics can be made available, but the lower is the accuracy of these statistics. This paper quantifies the availability-accuracy trade-off by running more than 10 million simulations on the World Development Indicators. It shows that if the fraction of the world’s population for which data are lacking is x, then the global value will on expectation be off by 0.37*x standard deviations, and it could be off by as much as x standard deviations. The paper shows the robustness of this result to various assumptions and provides recommendations on when there is enough data to create global statistics. Although the decision will be context specific, in a baseline scenario it is suggested not to create global statistics when there are data for less than half of the world’s population.
    Date: 2022–05–05
    URL: http://d.repec.org/n?u=RePEc:wbk:wbrwps:10034&r=
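    As a worked example of the rule of thumb above (illustrative numbers only): if data are missing for a quarter of the world's population, $x = 0.25$, the global aggregate is expected to be off by roughly $0.37 \times 0.25 \approx 0.09$ standard deviations, and could be off by as much as $0.25$ standard deviations in the worst case.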

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.