vq
- class diffsptk.VectorQuantization(order: int, codebook_size: int, device: device | None = None, **kwargs)
See this page for details.
- Parameters:
- order : int >= 0
The order of the input vector, \(M\).
- codebook_size : int >= 1
The codebook size, \(K\).
- device : torch.device or None
The device of this module.
- **kwargs : additional keyword arguments
See this page for details.
References
[1] A. van den Oord et al., “Neural discrete representation learning,” Advances in Neural Information Processing Systems, pp. 6309-6318, 2017.
- forward(x: Tensor, codebook: Tensor | None = None, **kwargs) → tuple[Tensor, Tensor, Tensor]
Perform vector quantization.
- Parameters:
- x : Tensor [shape=(…, M+1)]
The input vectors.
- codebook : Tensor [shape=(K, M+1)]
The external codebook. If None, use the internal codebook.
- **kwargs : additional keyword arguments
See this page for details.
- Returns:
- xq : Tensor [shape=(…, M+1)]
The quantized vectors.
- indices : Tensor [shape=(…,)]
The codebook indices.
- loss : Tensor [scalar]
The commitment loss.
Examples
>>> import diffsptk
>>> vq = diffsptk.VectorQuantization(4, 2).eval()
>>> x = diffsptk.nrand(4)
>>> x.shape
torch.Size([5])
>>> xq, _, _ = vq(x)
>>> xq.shape
torch.Size([5])
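The example above runs in eval mode on a single vector. Below is a rough sketch of how the returned commitment loss might be used during training; the toy Linear encoder, batch size, optimizer, and step count are illustrative assumptions, not part of diffsptk. Only the vq(...) call and its documented (xq, indices, loss) return follow this page. The final call demonstrates the external codebook argument of shape (K, M+1).

>>> import torch
>>> import diffsptk
>>> M, K = 4, 2
>>> vq = diffsptk.VectorQuantization(M, K)  # left in train mode so the codebook can update
>>> encoder = torch.nn.Linear(8, M + 1)  # toy stand-in for a real encoder (assumption)
>>> optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)
>>> for _ in range(10):
...     z = encoder(torch.randn(32, 8))  # batch of latent vectors, shape (32, M+1)
...     xq, indices, loss = vq(z)  # quantize and obtain the commitment loss
...     optimizer.zero_grad()
...     loss.backward()  # pulls encoder outputs toward their assigned codewords
...     optimizer.step()
>>> codebook = torch.randn(K, M + 1)  # external codebook, shape (K, M+1)
>>> xq, indices, loss = vq(z.detach(), codebook=codebook)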