nmf#

diffsptk.NMF#

alias of NonnegativeMatrixFactorization

class diffsptk.NonnegativeMatrixFactorization(n_data, order, n_comp, *, beta=0, n_iter=100, eps=1e-05, act_norm=False, batch_size=None, seed=None, verbose=False)[source]#

Nonnegative matrix factorization module. Note that the forward method is not differentiable.

Parameters:
n_data : int >= 1

Number of vectors, \(T\).

order : int >= 0

Order of vector, \(M\).

n_comp : int >= 1

Number of basis vectors, \(K\).

beta : float

The control parameter of the beta-divergence, \(\beta\): 0 gives the Itakura-Saito divergence, 1 the generalized Kullback-Leibler divergence, and 2 the Euclidean distance.

n_iter : int >= 1

Number of iterations.

eps : float >= 0

Convergence threshold.

act_norm : bool

If True, normalize the activation so that each activation vector sums to one.

batch_size : int >= 1 or None

Batch size.

seed : int or None

Random seed.

verbose : bool or int

If 1, show distance at each iteration; if 2, show progress bar.

References

[1]

M. Nakano et al., “Convergence-guaranteed multiplicative algorithms for nonnegative matrix factorization with beta-divergence,” IEEE International Workshop on Machine Learning for Signal Processing, pp. 283-288, 2010.
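The multiplicative-update algorithm of [1] can be sketched in plain NumPy. This is an illustrative re-implementation, not diffsptk's code: the function name, the fixed uniform initialization, and the absence of a convergence check (`eps`) are simplifications for clarity.

```python
import numpy as np

def nmf_multiplicative(X, K, beta=2, n_iter=100, seed=None):
    """Sketch of NMF with multiplicative updates for the beta-divergence.

    X : (T, M+1) nonnegative data, factorized as X ~= U @ H, where
    U : (T, K) is the coefficient (activation) matrix and
    H : (K, M+1) is the dictionary matrix.
    """
    rng = np.random.default_rng(seed)
    T, D = X.shape
    U = rng.random((T, K)) + 1e-6  # strictly positive initialization
    H = rng.random((K, D)) + 1e-6
    for _ in range(n_iter):
        # Standard multiplicative updates for the beta-divergence;
        # beta=2 reduces to the classical Euclidean (Lee-Seung) updates.
        V = U @ H
        U *= ((X * V ** (beta - 2)) @ H.T) / (V ** (beta - 1) @ H.T)
        V = U @ H
        H *= (U.T @ (X * V ** (beta - 2))) / (U.T @ V ** (beta - 1))
    return U, H
```

Because every update multiplies by a nonnegative ratio, `U` and `H` stay nonnegative throughout, which is the defining property of this family of algorithms.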

forward(x)[source]#

Estimate coefficient matrix and dictionary matrix.

Parameters:
x : Tensor [shape=(T, M+1)] or DataLoader

Input vectors or a DataLoader yielding input vectors.

Returns:
params : tuple of Tensors [shape=((T, K), (K, M+1))]

Estimated coefficient matrix and dictionary matrix.

divergence : Tensor [scalar]

Divergence between input and reconstructed vectors.

Examples

>>> x = diffsptk.nrand(10, 3) ** 2
>>> nmf = diffsptk.NMF(10, 3, 2)
>>> (U, H), _ = nmf(x)
>>> U.shape
torch.Size([10, 2])
>>> H.shape
torch.Size([2, 4])
>>> y = U @ H
>>> y.shape
torch.Size([10, 4])
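The returned divergence is the beta-divergence between the input and its reconstruction. The three named special cases can be sketched as follows; this is an illustrative NumPy function, and diffsptk may use a different normalization or reduction.

```python
import numpy as np

def beta_divergence(x, y, beta):
    """Beta-divergence d_beta(x | y), summed over all entries.

    beta=0 -> Itakura-Saito, beta=1 -> generalized Kullback-Leibler,
    beta=2 -> squared Euclidean distance (with the conventional 1/2 factor).
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    if beta == 0:  # Itakura-Saito divergence
        d = x / y - np.log(x / y) - 1
    elif beta == 1:  # generalized Kullback-Leibler divergence
        d = x * np.log(x / y) - x + y
    else:  # general case; the limits beta -> 0, 1 recover the two above
        d = (x ** beta + (beta - 1) * y ** beta
             - beta * x * y ** (beta - 1)) / (beta * (beta - 1))
    return d.sum()
```

For every beta, the divergence is zero exactly when `x == y` elementwise, which is why it serves as the reconstruction objective minimized by the multiplicative updates.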

warmup(x, **lbg_params)[source]#

Initialize dictionary matrix by K-means clustering.

Parameters:
x : Tensor [shape=(T, M+1)] or DataLoader

Training data.

lbg_params : additional keyword arguments

Parameters for Linde-Buzo-Gray algorithm.
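The idea behind this warm start can be sketched with plain Lloyd iterations standing in for the Linde-Buzo-Gray algorithm: the K cluster centroids become the rows of the initial dictionary. The function below is a hypothetical NumPy illustration, not diffsptk's implementation.

```python
import numpy as np

def kmeans_warmup(X, K, n_iter=20, seed=None):
    """Initialize a (K, M+1) dictionary from (T, M+1) data by K-means."""
    rng = np.random.default_rng(seed)
    # Seed the centroids with K distinct data points.
    H = X[rng.choice(len(X), K, replace=False)]
    for _ in range(n_iter):
        # Assign each vector to its nearest centroid.
        dist = ((X[:, None, :] - H[None, :, :]) ** 2).sum(-1)
        label = dist.argmin(1)
        # Move each centroid to the mean of its assigned vectors.
        for k in range(K):
            if (label == k).any():
                H[k] = X[label == k].mean(0)
    return H
```

Starting the multiplicative updates from cluster centroids rather than random noise typically reduces the number of NMF iterations needed and makes the factorization less sensitive to the random seed.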

See also

pca, lbg