
Paper 2024/1953

Truncation Untangled: Scaling Fixed-Point Arithmetic for Privacy-Preserving Machine Learning to Large Models and Datasets

Christopher Harth-Kitzerow, Technical University of Munich, BMW Group
Georg Carle, Technical University of Munich
Abstract

Fixed-point arithmetic (FPA) is essential to enable practical Privacy-Preserving Machine Learning (PPML). When multiplying two fixed-point numbers, truncation is required to ensure that the product maintains correct precision. While multiple truncation schemes based on Secure Multiparty Computation (MPC) have been proposed, it has remained underexplored which of these schemes offers the best trade-off between accuracy and efficiency on common PPML datasets and models. In this work, we study several stochastic and exact truncation approaches found in the MPC literature that require different slack sizes, i.e., additional bits required by each secret share to ensure correctness. We provide novel, improved constructions for each truncation approach in the semi-honest 3-PC and malicious 4-PC settings, which reduce communication and round complexity by up to a factor of three. Moreover, we propose a truncation scheme that introduces no communication overhead in the online phase and exactly matches the accuracy of plaintext floating-point PyTorch inference of VGG-16 on the ImageNet dataset, achieving over 80% accuracy with shares of a bitlength of only 32. This is the first time such high PPML accuracy has been demonstrated on ImageNet.
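
To make the truncation step concrete, the following is a minimal plaintext sketch in Python (not the paper's MPC protocols) of why a fixed-point product must be truncated, contrasting exact with stochastic truncation. The fractional bitlength F = 16 and all function names are illustrative assumptions, not constructions from the paper.

    import random

    F = 16  # fractional bits (illustrative split of a 32-bit representation)

    def encode(x: float) -> int:
        # Represent a real number as an integer with F fractional bits.
        return round(x * (1 << F))

    def decode(v: int) -> float:
        return v / (1 << F)

    def mul_exact_trunc(a: int, b: int) -> int:
        # The raw product of two fixed-point values carries 2F fractional
        # bits; shifting right by F restores the original precision.
        return (a * b) >> F

    def mul_stochastic_trunc(a: int, b: int) -> int:
        # Stochastic truncation rounds up with probability proportional
        # to the discarded low bits, so it is unbiased in expectation.
        prod = a * b
        low = prod & ((1 << F) - 1)
        return (prod >> F) + (random.randrange(1 << F) < low)

    x, y = encode(1.5), encode(2.25)
    print(decode(mul_exact_trunc(x, y)))       # 3.375
    print(decode(mul_stochastic_trunc(x, y)))  # 3.375 here; unbiased on average

In the MPC setting, the parties hold only secret shares of the product, so the shift cannot simply be applied to each share without risking a wrap-around error; the slack bits defined above reserve headroom in each share so that truncation on shares remains correct (or correct with high probability), which is where the studied schemes differ.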

Metadata
Available format(s)
PDF
Category
Cryptographic protocols
Publication info
Preprint.
Keywords
Fixed-point arithmetic, MPC, PPML, Truncation, Secure Inference
Contact author(s)
christopher.harth-kitzerow@tum.de
carle@net.in.tum.de
History
2024-12-06: approved
2024-12-02: received
Short URL
https://ia.cr/2024/1953
License
Creative Commons Attribution
CC BY

BibTeX

@misc{cryptoeprint:2024/1953,
      author = {Christopher Harth-Kitzerow and Georg Carle},
      title = {Truncation Untangled: Scaling Fixed-Point Arithmetic for Privacy-Preserving Machine Learning to Large Models and Datasets},
      howpublished = {Cryptology {ePrint} Archive, Paper 2024/1953},
      year = {2024},
      url = {https://eprint.iacr.org/2024/1953}
}