quantize

class diffsptk.UniformQuantization(abs_max=1, n_bit=8, quantizer='mid-rise')

See this page for details. The gradient is copied from the subsequent module, i.e., it is passed straight through the non-differentiable quantization step.
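Because the quantization mapping is piecewise constant, this straight-through style gradient is what keeps the module usable inside a differentiable pipeline. A minimal sketch of checking that gradients reach the input (the exact gradient values depend on the implementation and are omitted here):

>>> x = diffsptk.ramp(-4, 4).requires_grad_()
>>> quantize = diffsptk.UniformQuantization(4, 2)
>>> quantize(x).sum().backward()
>>> x.grad is not None
True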

Parameters:
abs_max : float > 0

Absolute maximum value of input.

n_bit : int >= 1

Number of quantization bits.

quantizer : ['mid-rise', 'mid-tread']

Quantizer.

forward(x)

Quantize input.

Parameters:
x : Tensor [shape=(…,)]

Input.

Returns:
out : Tensor [shape=(…,)]

Quantized input.

Examples

>>> x = diffsptk.ramp(-4, 4)
>>> quantize = diffsptk.UniformQuantization(4, 2)
>>> y = quantize(x).int()
>>> y
tensor([0, 0, 1, 1, 2, 2, 3, 3, 3], dtype=torch.int32)
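For reference, the mid-rise mapping in this example corresponds roughly to the plain-PyTorch sketch below. It illustrates the conventional mid-rise formula (2^n_bit levels over [-abs_max, abs_max], with zero on a decision boundary) and is not the library's actual implementation; the mid-tread variant instead places a reconstruction level at zero.

>>> import torch
>>> def midrise_sketch(x, abs_max, n_bit):
...     level = 1 << n_bit            # number of quantization levels
...     step = 2 * abs_max / level    # width of one quantization cell
...     return torch.clip(torch.floor(x / step) + level // 2, 0, level - 1)
...
>>> midrise_sketch(diffsptk.ramp(-4, 4), 4, 2)
tensor([0., 0., 1., 1., 2., 2., 3., 3., 3.])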
diffsptk.functional.quantize(x, abs_max=1, n_bit=8, quantizer='mid-rise')

Quantize input.

Parameters:
x : Tensor [shape=(…,)]

Input.

abs_max : float > 0

Absolute maximum value of input.

n_bit : int >= 1

Number of quantization bits.

quantizer : ['mid-rise', 'mid-tread']

Quantizer.

Returns:
out : Tensor [shape=(…,)]

Quantized input.
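The functional form takes the same parameters as the module and should produce the same result; for instance, mirroring the module example above:

>>> x = diffsptk.ramp(-4, 4)
>>> y = diffsptk.functional.quantize(x, 4, 2)
>>> y.int()
tensor([0, 0, 1, 1, 2, 2, 3, 3, 3], dtype=torch.int32)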

See also

ulaw, dequantize