Learning Large Causal Structures from Inverse Covariance Matrix via Matrix Decomposition
Preprint, Working Paper, 2023


Abstract

Learning causal structures from observational data is a fundamental yet highly complex problem when the number of variables is large. In this paper, we start from linear structural equation models (SEMs) and investigate ways of learning causal structures from the inverse covariance matrix. The proposed method, called O-ICID (for Independence-preserving Decomposition from Oracle Inverse Covariance matrix), is based on continuous optimization of a type of matrix decomposition that preserves the nonzero pattern of the inverse covariance matrix. We show that O-ICID provides an efficient way to identify the true directed acyclic graph (DAG) when the noise variances are known. With weaker prior information, the proposed method yields directed graph solutions that are useful for more refined causal discovery. The method enjoys low complexity when the true DAG has bounded node degrees, as reflected by its time efficiency in experiments compared with state-of-the-art algorithms.
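For readers unfamiliar with the setting, the link between linear SEMs and the inverse covariance matrix that the abstract relies on can be sketched with a standard identity; the notation here (B for the weighted adjacency matrix of the DAG, Ω for the diagonal noise covariance, Θ for the precision matrix) is assumed for illustration and may differ from the paper's:

\[
X = B^{\top} X + N,\quad N \sim \mathcal{N}(0,\Omega)
\quad\Longrightarrow\quad
\Theta \;=\; \mathrm{Cov}(X)^{-1} \;=\; (I - B)\,\Omega^{-1}\,(I - B)^{\top}.
\]

Under this identity (and barring cancellations), the nonzero pattern of Θ corresponds to the moral graph of the DAG, which is why a decomposition of Θ that preserves this pattern can serve as a starting point for recovering B.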
Main file: 2211.14221.pdf (928.96 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03885791, version 1 (23-10-2023)


Cite

Shuyu Dong, Kento Uemura, Akito Fujii, Shuang Chang, Yusuke Koyanagi, et al. Learning Large Causal Structures from Inverse Covariance Matrix via Matrix Decomposition. 2023. ⟨hal-03885791⟩