
Linking Representations with Multimodal Contrastive Learning

Abhishek Arora, Xinmei Yang, Shao-Yu Jheng and Melissa Dell

Papers from arXiv.org

Abstract: Many applications require linking individuals, firms, or locations across datasets. Most widely used methods, especially in social science, do not employ deep learning; record linkage is commonly approached with string matching techniques. Moreover, existing methods do not exploit the inherently multimodal nature of documents. In historical record linkage applications, documents are typically transcribed noisily by optical character recognition (OCR). Linkage with OCR'ed texts alone may fail due to noise, whereas linkage with image crops alone may also fail because vision models lack language understanding (e.g., of abbreviations or other ways of writing firm names). To leverage multimodal learning, this study develops CLIPPINGS (Contrastively LInking Pooled Pre-trained Embeddings). CLIPPINGS aligns symmetric vision and language bi-encoders through contrastive language-image pre-training on document images and their corresponding OCR'ed texts. It then contrastively learns a metric space in which the pooled image-text embedding for a given instance is close to embeddings in the same class (e.g., the same firm or location) and distant from embeddings of a different class. Data are linked by treating linkage as a nearest neighbor retrieval problem over the multimodal embeddings. CLIPPINGS outperforms widely used string matching methods by a wide margin in linking mid-20th century Japanese firms across financial documents. A purely self-supervised model, trained only by aligning the embeddings for the image crop of a firm name and its corresponding OCR'ed text, also outperforms popular string matching methods. Notably, a multimodally pre-trained vision-only encoder outperforms a unimodally pre-trained vision-only encoder, illustrating the power of multimodal pre-training even when only one modality is available for linking at inference time.
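
The pooling-and-metric-learning step can be made concrete with a short sketch. The Python/PyTorch fragment below is illustrative only, not the authors' released code: it assumes the image and text embeddings already come out of the aligned bi-encoders, mean-pools each pair into a single multimodal vector per record, and applies a standard supervised contrastive loss so that records sharing a class label (the same firm or location) are pulled together and others pushed apart. The function names, dimensions, and temperature are assumptions.

    import torch
    import torch.nn.functional as F

    def pooled_embedding(img_emb, txt_emb):
        """Mean-pool L2-normalized image and text embeddings into one
        multimodal vector per record, then re-normalize."""
        img_emb = F.normalize(img_emb, dim=-1)
        txt_emb = F.normalize(txt_emb, dim=-1)
        return F.normalize((img_emb + txt_emb) / 2, dim=-1)

    def supervised_contrastive_loss(emb, labels, temperature=0.07):
        """Pull together pooled embeddings that share a label (e.g., the same
        firm); push apart the rest. A generic supervised contrastive
        objective, used here as a stand-in for the paper's loss."""
        n = emb.size(0)
        sim = emb @ emb.T / temperature
        self_mask = torch.eye(n, dtype=torch.bool, device=emb.device)
        sim = sim.masked_fill(self_mask, float('-inf'))  # drop self-pairs
        pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
        log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
        pos_counts = pos_mask.sum(1).clamp(min=1)
        per_anchor = -(log_prob.masked_fill(~pos_mask, 0.0).sum(1) / pos_counts)
        return per_anchor[pos_mask.any(1)].mean()  # anchors with positives only

    # Toy usage with random tensors standing in for encoder outputs:
    img, txt = torch.randn(6, 512), torch.randn(6, 512)
    labels = torch.tensor([0, 0, 1, 1, 2, 2])
    loss = supervised_contrastive_loss(pooled_embedding(img, txt), labels)

Mean-pooling is one simple way to combine the two modalities into a single representation; it also suggests why the pipeline can degrade gracefully when only one modality is usable at inference time.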

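Once the metric space is trained, linkage itself reduces to nearest neighbor retrieval, as the abstract describes. A minimal sketch under the same assumptions, with a hypothetical cosine-similarity threshold below which a query record is left unlinked:

    import torch
    import torch.nn.functional as F

    def link_records(query_emb, candidate_emb, threshold=0.8):
        """For each query record, return (index of nearest candidate, score),
        or (None, score) when the best cosine similarity falls below the
        threshold. The 0.8 cutoff is an arbitrary illustration."""
        q = F.normalize(query_emb, dim=-1)
        c = F.normalize(candidate_emb, dim=-1)
        sim = q @ c.T                 # cosine similarity, queries x candidates
        scores, idx = sim.max(dim=1)  # best candidate per query
        return [(int(i) if float(s) >= threshold else None, float(s))
                for i, s in zip(idx, scores)]

In practice an approximate nearest neighbor index (e.g., FAISS) would replace the dense similarity matrix for large candidate sets; the logic is otherwise the same.
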
Date: 2023-04, Revised 2024-06
New Economics Papers: this item is included in nep-big
References: reference list available in EconPapers and from CitEc
Citations: 3 (tracked in EconPapers)

Downloads: http://arxiv.org/pdf/2304.03464 (latest version, application/pdf)


Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2304.03464

More papers in Papers from arXiv.org
Bibliographic data for this series is maintained by arXiv administrators.

 
Page updated 2024-12-04
Handle: RePEc:arx:papers:2304.03464