nmf#
- diffsptk.NMF#
Alias of NonnegativeMatrixFactorization.
- class diffsptk.NonnegativeMatrixFactorization(n_data: int, order: int, n_comp: int, *, beta: float = 0, n_iter: int = 100, eps: float = 1e-05, act_norm: bool = False, batch_size: int | None = None, seed: int | None = None, verbose: bool | int = False)[source]#
Nonnegative matrix factorization module. Note that the forward method is not differentiable.
- Parameters:
- n_data : int >= 1
The number of input vectors, \(T\).
- order : int >= 0
The order of each vector, \(M\).
- n_comp : int >= 1
The number of basis components, \(K\).
- beta : float
A control parameter of the beta-divergence, \(\beta\). 0: Itakura-Saito divergence, 1: generalized Kullback-Leibler divergence, 2: Euclidean distance.
- n_iter : int >= 1
The number of iterations.
- eps : float >= 0
The convergence threshold.
- act_norm : bool
If True, normalizes the activation so that it sums to one.
- batch_size : int >= 1 or None
The batch size.
- seed : int or None
The random seed.
- verbose : bool or int
If 1, shows the divergence at each iteration; if 2, also shows progress bars.
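The three special cases of the beta-divergence listed above can be written down directly. The following NumPy sketch is illustrative only (it is not part of diffsptk) and shows how the choice of beta selects the divergence:

```python
import numpy as np

def beta_divergence(x: np.ndarray, y: np.ndarray, beta: float) -> float:
    """Beta-divergence d_beta(x, y), summed over all entries.

    beta = 0: Itakura-Saito, beta = 1: generalized KL, beta = 2: Euclidean.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    if beta == 0:  # Itakura-Saito divergence
        d = x / y - np.log(x / y) - 1
    elif beta == 1:  # generalized Kullback-Leibler divergence
        d = x * np.log(x / y) - x + y
    else:  # general case; beta = 2 gives half the squared Euclidean distance
        d = (x**beta + (beta - 1) * y**beta - beta * x * y ** (beta - 1)) / (
            beta * (beta - 1)
        )
    return float(d.sum())

x = np.array([1.0, 2.0, 3.0])
y = np.array([1.5, 2.0, 2.5])
for b in (0, 1, 2):
    assert beta_divergence(x, x, b) == 0.0  # divergence vanishes when x == y
    assert beta_divergence(x, y, b) > 0.0
```

For any beta, the divergence is zero exactly when the two arguments coincide, which is why it can serve as the reconstruction objective here.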
References
[1]M. Nakano et al., “Convergence-guaranteed multiplicative algorithms for nonnegative matrix factorization with beta-divergence,” IEEE International Workshop on Machine Learning for Signal Processing, pp. 283-288, 2010.
- forward(x: Tensor | DataLoader) → tuple[tuple[Tensor, Tensor], Tensor] [source]#
Estimate the coefficient matrix and dictionary matrix.
- Parameters:
- x : Tensor [shape=(T, M+1)] or DataLoader
The input vectors or a DataLoader that yields the input vectors.
- Returns:
- params : tuple of Tensors [shape=((T, K), (K, M+1))]
The estimated coefficient matrix and dictionary matrix.
- divergence : Tensor [scalar]
The divergence between the input and reconstructed vectors.
Examples
>>> x = diffsptk.nrand(10, 3) ** 2
>>> nmf = diffsptk.NMF(10, 3, 2)
>>> (U, H), _ = nmf(x)
>>> U.shape
torch.Size([10, 2])
>>> H.shape
torch.Size([2, 4])
>>> y = U @ H
>>> y.shape
torch.Size([10, 4])
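For the Euclidean case (beta = 2), the multiplicative updates of [1] reduce to the classic Lee-Seung rules. The following self-contained NumPy sketch is an illustration of that update scheme, not the diffsptk implementation; shapes mirror the example above (T = 10 vectors of length M+1 = 4, K = 2 components):

```python
import numpy as np

rng = np.random.default_rng(0)

T, L, K = 10, 4, 2           # T vectors of length M+1 = 4, K components
X = rng.random((T, L))       # nonnegative data matrix
U = rng.random((T, K))       # coefficient (activation) matrix
H = rng.random((K, L))       # dictionary matrix

eps = 1e-12                  # guards against division by zero
initial_error = np.linalg.norm(X - U @ H) ** 2 / 2

for _ in range(100):
    # Lee-Seung multiplicative updates (the beta = 2, Euclidean case).
    # Each factor is multiplied by a nonnegative ratio, so U and H
    # stay nonnegative throughout.
    U *= (X @ H.T) / (U @ H @ H.T + eps)
    H *= (U.T @ X) / (U.T @ U @ H + eps)

final_error = np.linalg.norm(X - U @ H) ** 2 / 2
assert final_error < initial_error  # the objective is non-increasing
```

Because every update multiplies by a nonnegative factor, no projection step is needed to enforce the nonnegativity constraints.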