Matrix factorization with neural networks
Mézard, Marc
2023
Abstract
Matrix factorization is an important mathematical problem encountered in the context of dictionary learning, recommendation systems, and machine learning. We introduce a decimation scheme that maps it to neural network models of associative memory and provide a detailed theoretical analysis of its performance, showing that decimation is able to factorize extensive-rank matrices and to denoise them efficiently. In the case of binary prior on the signal components, we introduce a decimation algorithm based on a ground-state search of the neural network, which shows performances that match the theoretical prediction.
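The abstract mentions a decimation algorithm based on a ground-state search of the neural network for binary priors. The sketch below is a minimal illustration of that general idea, not the paper's implementation: it assumes a symmetric data matrix built from P hidden patterns x^a ∈ {±1}^N (rank-one contributions scaled by 1/√N), treats the data matrix as Hopfield-like couplings, finds a low-energy configuration by greedy zero-temperature spin flips, and subtracts the retrieved rank-one term before repeating. All function names and parameters are hypothetical.

```python
import numpy as np

def ground_state_search(J, n_sweeps=200, rng=None):
    """Greedy (zero-temperature) single-spin-flip search for a low-energy
    configuration of the Hopfield-like energy E(s) = -1/2 s^T J s."""
    rng = np.random.default_rng() if rng is None else rng
    N = J.shape[0]
    s = rng.choice([-1.0, 1.0], size=N)
    h = J @ s                                  # local fields h_i = sum_j J_ij s_j
    for _ in range(n_sweeps):
        flipped = False
        for i in rng.permutation(N):
            # Flipping spin i changes the energy by dE = 2 s_i (h_i - J_ii s_i).
            dE = 2.0 * s[i] * (h[i] - J[i, i] * s[i])
            if dE < 0:
                s[i] = -s[i]
                h += 2.0 * s[i] * J[:, i]      # update local fields after the flip
                flipped = True
        if not flipped:                        # local minimum reached
            break
    return s

def decimation_factorization(Y, P, n_sweeps=200, seed=0):
    """Toy decimation: estimate P binary patterns from Y ~ sum_a x^a x^a^T / sqrt(N)
    by repeatedly retrieving a ground state and removing its rank-one contribution."""
    rng = np.random.default_rng(seed)
    N = Y.shape[0]
    J = Y.copy()
    patterns = []
    for _ in range(P):
        s = ground_state_search(J, n_sweeps=n_sweeps, rng=rng)
        patterns.append(s)
        J = J - np.outer(s, s) / np.sqrt(N)    # decimation step: subtract retrieved pattern
    return np.array(patterns)
```

As a usage example, one can generate P random ±1 patterns, form Y as the sum of their scaled outer products plus Gaussian noise, and check the overlap |x^a · ŝ|/N between true and retrieved patterns; how well this greedy search performs compared with the theoretical analysis is exactly what the paper studies.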
Files in this record:

File | Description | Type | License | Size | Format
---|---|---|---|---|---
2212.02105.pdf (open access) | article | Pre-print document | Not specified | 719.74 kB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.