Knowledge-Embedded Latent Projection for Robust Representation Learning
Latent space models are widely used for analyzing high-dimensional discrete data matrices, such as patient-feature matrices in electronic health records (EHRs), by capturing complex dependence structures through low-dimensional embeddings. However, estimation becomes challenging in the imbalanced regime, where one matrix dimension is much larger than the other. In EHR applications, cohort sizes are often limited by disease prevalence or data availability, whereas the feature space remains extremely large due to the breadth of medical coding systems. Motivated by the increasing availability of external semantic embeddings, such as pre-trained embeddings of clinical concepts in EHRs, we propose a knowledge-embedded latent projection model that leverages semantic side information to regularize representation learning. Specifically, we model column embeddings as smooth functions of semantic embeddings via a mapping in a reproducing kernel Hilbert space. We develop a computationally efficient two-step estimation procedure that combines semantically guided subspace construction via kernel principal component analysis with scalable projected gradient descent. We establish estimation error bounds that characterize the trade-off between statistical error and approximation error induced by the kernel projection. Furthermore, we provide local convergence guarantees for our non-convex optimization procedure. Extensive simulation studies and a real-world EHR application demonstrate the effectiveness of the proposed method.
- The paper addresses the challenges of analyzing high-dimensional discrete data matrices, particularly in EHRs, where cohort sizes are often limited.
- Imbalanced data regimes are a significant concern, with one dimension (e.g., patients) being much smaller than the other (e.g., features).
- The proposed model integrates external semantic embeddings to regularize representation learning, enhancing the quality of data analysis.
- Column embeddings are modeled as smooth functions of semantic embeddings using a mapping in a reproducing kernel Hilbert space.
- A two-step estimation procedure is developed, combining semantically guided subspace construction via kernel principal component analysis with scalable projected gradient descent.
- Estimation error bounds are established, characterizing the trade-off between statistical error and approximation error from kernel projection.
- Local convergence guarantees for the non-convex optimization procedure are provided, ensuring reliable estimation in practical applications.
- Extensive simulation studies validate the effectiveness of the proposed method in capturing complex dependence structures in EHR data.
- The research contributes to the field of healthcare data analysis by offering a robust framework for leveraging semantic information in high-dimensional data.
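The two-step procedure described above can be sketched in simplified form. The snippet below is a minimal illustration, not the authors' exact algorithm: it assumes a binary patient-feature matrix with a logistic link, an RBF kernel (the kernel choice, bandwidth `gamma`, subspace dimension `m`, and step size are all illustrative assumptions), and it parametrizes the column embeddings as `V = B @ C` so that they stay in the kernel-PCA subspace by construction, making the projection step of the gradient descent implicit.

```python
import numpy as np

def rbf_kernel(S, gamma=1.0):
    """RBF (Gaussian) kernel matrix of the semantic embeddings S (p x d)."""
    sq = np.sum(S**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * S @ S.T
    return np.exp(-gamma * d2)

def kernel_pca_basis(S, m, gamma=1.0):
    """Step 1: semantically guided subspace via kernel PCA.
    Returns an orthonormal p x m basis spanned by the top-m
    kernel principal components of the semantic embeddings."""
    K = rbf_kernel(S, gamma)
    p = K.shape[0]
    H = np.eye(p) - np.ones((p, p)) / p          # double-centering matrix
    Kc = H @ K @ H
    vals, vecs = np.linalg.eigh(Kc)              # ascending eigenvalues
    top = np.argsort(vals)[::-1][:m]
    B, _ = np.linalg.qr(vecs[:, top])            # orthonormalize for safety
    return B

def fit_latent_projection(Y, B, r, step=0.1, iters=300, seed=0):
    """Step 2: gradient descent for a logistic low-rank model
    logits = U @ V.T with V = B @ C constrained to span(B)."""
    rng = np.random.default_rng(seed)
    n, p = Y.shape
    m = B.shape[1]
    U = 0.1 * rng.standard_normal((n, r))        # row (patient) embeddings
    C = 0.1 * rng.standard_normal((m, r))        # subspace coefficients
    for _ in range(iters):
        V = B @ C                                # p x r column embeddings
        P = 1.0 / (1.0 + np.exp(-(U @ V.T)))     # Bernoulli means
        G = (P - Y) / (n * p)                    # grad of mean log-loss in logits
        U -= step * (G @ V)
        C -= step * (B.T @ (G.T @ U))            # chain rule through V = B @ C
    return U, B @ C
```

For example, one can simulate a small binary matrix whose true column embeddings lie in the kernel subspace, fit the model, and recover row and column embeddings of the expected shapes; the implicit-projection design is a common trick for linear-subspace constraints, since optimizing over `C` is equivalent to projecting each gradient step of `V` onto `span(B)`.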