meslas.external_dependencies package

Submodules

meslas.external_dependencies.numpytorch module

class meslas.external_dependencies.numpytorch.WrapTorch

Bases: object

Uses torch as the backend; allows numpy-style syntax

Attributes:
newaxis

Methods

arange([start, step, out, dtype, layout, ...])

Returns a 1-D tensor of size \(\left\lceil \frac{\text{end} - \text{start}}{\text{step}} \right\rceil\) with values from the interval [start, end) taken with common difference step beginning from start.

array

tensor(data, *, dtype=None, device=None, requires_grad=False, pin_memory=False) -> Tensor

cat(tensors[, dim, out])

Concatenates the given sequence seq of tensors in the given dimension.

exp(input, *[, out])

Returns a new tensor with the exponential of the elements of the input tensor.

log(input, *[, out])

Returns a new tensor with the natural logarithm of the elements of input.

abs

argmax

argmin

max

min

ones

sum

sumto1

zeros

abs(*args, **kwargs)
arange(start=0, end, step=1, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

Returns a 1-D tensor of size \(\left\lceil \frac{\text{end} - \text{start}}{\text{step}} \right\rceil\) with values from the interval [start, end) taken with common difference step beginning from start.

Note that non-integer step is subject to floating point rounding errors when comparing against end; to avoid inconsistency, we advise adding a small epsilon to end in such cases.

\[\text{out}_{i+1} = \text{out}_{i} + \text{step}\]
Args:

start (Number): the starting value for the set of points. Default: 0.

end (Number): the ending value for the set of points.

step (Number): the gap between each pair of adjacent points. Default: 1.

Keyword args:

out (Tensor, optional): the output tensor.

dtype (torch.dtype, optional): the desired data type of returned tensor.

Default: if None, uses a global default (see torch.set_default_tensor_type()). If dtype is not given, infer the data type from the other input arguments. If any of start, end, or step are floating-point, the dtype is inferred to be the default dtype, see get_default_dtype(). Otherwise, the dtype is inferred to be torch.int64.

layout (torch.layout, optional): the desired layout of returned Tensor.

Default: torch.strided.

device (torch.device, optional): the desired device of returned tensor.

Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.

requires_grad (bool, optional): If autograd should record operations on the returned tensor. Default: False.

Example:

>>> torch.arange(5)
tensor([ 0,  1,  2,  3,  4])
>>> torch.arange(1, 4)
tensor([ 1,  2,  3])
>>> torch.arange(1, 2.5, 0.5)
tensor([ 1.0000,  1.5000,  2.0000])
argmax(v, *args, **kwargs)
argmin(v, *args, **kwargs)
array()

tensor(data, *, dtype=None, device=None, requires_grad=False, pin_memory=False) -> Tensor

Constructs a tensor with no autograd history (also known as a “leaf tensor”; see the autograd notes in the PyTorch documentation) by copying data.

Warning

When working with tensors prefer using torch.Tensor.clone(), torch.Tensor.detach(), and torch.Tensor.requires_grad_() for readability. Letting t be a tensor, torch.tensor(t) is equivalent to t.clone().detach(), and torch.tensor(t, requires_grad=True) is equivalent to t.clone().detach().requires_grad_(True).

See also

torch.as_tensor() preserves autograd history and avoids copies where possible. torch.from_numpy() creates a tensor that shares storage with a NumPy array.

Args:
data (array_like): Initial data for the tensor. Can be a list, tuple, NumPy ndarray, scalar, and other types.

Keyword args:
dtype (torch.dtype, optional): the desired data type of returned tensor. Default: if None, infers data type from data.

device (torch.device, optional): the device of the constructed tensor. If None and data is a tensor then the device of data is used. If None and data is not a tensor then the result tensor is constructed on the CPU.

requires_grad (bool, optional): If autograd should record operations on the returned tensor. Default: False.

pin_memory (bool, optional): If set, returned tensor would be allocated in the pinned memory. Works only for CPU tensors. Default: False.

Example:

>>> torch.tensor([[0.1, 1.2], [2.2, 3.1], [4.9, 5.2]])
tensor([[ 0.1000,  1.2000],
        [ 2.2000,  3.1000],
        [ 4.9000,  5.2000]])

>>> torch.tensor([0, 1])  # Type inference on data
tensor([ 0,  1])

>>> torch.tensor([[0.11111, 0.222222, 0.3333333]],
...              dtype=torch.float64,
...              device=torch.device('cuda:0'))  # creates a double tensor on a CUDA device
tensor([[ 0.1111,  0.2222,  0.3333]], dtype=torch.float64, device='cuda:0')

>>> torch.tensor(3.14159)  # Create a zero-dimensional (scalar) tensor
tensor(3.1416)

>>> torch.tensor([])  # Create an empty tensor (of size (0,))
tensor([])
backend = <module 'torch'>
cat(tensors, dim=0, *, out=None) → Tensor

Concatenates the given sequence seq of tensors in the given dimension. All tensors must either have the same shape (except in the concatenating dimension) or be empty.

torch.cat() can be seen as an inverse operation for torch.split() and torch.chunk().

torch.cat() can be best understood via examples.

Args:
tensors (sequence of Tensors): any python sequence of tensors of the same type. Non-empty tensors provided must have the same shape, except in the cat dimension.

dim (int, optional): the dimension over which the tensors are concatenated

Keyword args:

out (Tensor, optional): the output tensor.

Example:

>>> x = torch.randn(2, 3)
>>> x
tensor([[ 0.6580, -1.0969, -0.4614],
        [-0.1034, -0.5790,  0.1497]])
>>> torch.cat((x, x, x), 0)
tensor([[ 0.6580, -1.0969, -0.4614],
        [-0.1034, -0.5790,  0.1497],
        [ 0.6580, -1.0969, -0.4614],
        [-0.1034, -0.5790,  0.1497],
        [ 0.6580, -1.0969, -0.4614],
        [-0.1034, -0.5790,  0.1497]])
>>> torch.cat((x, x, x), 1)
tensor([[ 0.6580, -1.0969, -0.4614,  0.6580, -1.0969, -0.4614,  0.6580,
         -1.0969, -0.4614],
        [-0.1034, -0.5790,  0.1497, -0.1034, -0.5790,  0.1497, -0.1034,
         -0.5790,  0.1497]])
exp(input, *, out=None) → Tensor

Returns a new tensor with the exponential of the elements of the input tensor.

\[y_{i} = e^{x_{i}}\]
Args:

input (Tensor): the input tensor.

Keyword args:

out (Tensor, optional): the output tensor.

Example:

>>> torch.exp(torch.tensor([0, math.log(2.)]))
tensor([ 1.,  2.])
log(input, *, out=None) → Tensor

Returns a new tensor with the natural logarithm of the elements of input.

\[y_{i} = \log_{e} (x_{i})\]
Args:

input (Tensor): the input tensor.

Keyword args:

out (Tensor, optional): the output tensor.

Example:

>>> a = torch.rand(5) * 5
>>> a
tensor([4.7767, 4.3234, 1.2156, 0.2411, 4.5739])
>>> torch.log(a)
tensor([ 1.5637,  1.4640,  0.1952, -1.4226,  1.5204])
max(v, *args, **kwargs)
min(v, *args, **kwargs)
newaxis = None
ones(*args, **kwargs)
sum(v, *args, **kwargs)
sumto1(v, dim=None, axis=None)
zeros(*args, **kwargs)
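
Example (a minimal usage sketch; it assumes the wrapper is instantiated directly and that each method simply forwards to the torch function of the same name):

>>> from meslas.external_dependencies.numpytorch import WrapTorch
>>> nt = WrapTorch()
>>> v = nt.ones(3)                  # torch.ones under the hood
>>> nt.sum(v)
tensor(3.)
>>> v[:, nt.newaxis].shape          # newaxis = None, as in numpy
torch.Size([3, 1])
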
meslas.external_dependencies.numpytorch.aggregate(subs, val=1.0, *args, **kwargs)
Parameters:

subs (torch.LongTensor, (*torch.LongTensor)) – [dim, element]

meslas.external_dependencies.numpytorch.append_dim(v, n_dim_to_append=1)
meslas.external_dependencies.numpytorch.append_to_ndim(v, n_dim_desired)
meslas.external_dependencies.numpytorch.attach_dim(v, n_dim_to_prepend=0, n_dim_to_append=0)
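
Example (a minimal sketch of the shape manipulations these helpers are assumed to perform, i.e., adding singleton dimensions as their names suggest):

>>> import torch
>>> from meslas.external_dependencies import numpytorch as npt
>>> v = torch.zeros(3)
>>> npt.append_dim(v, 2).shape       # two singleton dims appended
torch.Size([3, 1, 1])
>>> npt.attach_dim(v, 1, 2).shape    # prepend one, append two singleton dims
torch.Size([1, 3, 1, 1])
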
meslas.external_dependencies.numpytorch.block_diag(m)

Make a block diagonal matrix along dim=-3.

EXAMPLE: block_diag(torch.ones(4,3,2)) should give a 12 x 8 matrix with blocks of 3 x 2 ones. Prepend batch dimensions if needed. You can also give a list of matrices.

Parameters:

m (torch.Tensor, list)

Return type:

torch.Tensor
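
Example (following the case described above):

>>> import torch
>>> from meslas.external_dependencies import numpytorch as npt
>>> npt.block_diag(torch.ones(4, 3, 2)).shape   # 4 diagonal blocks of 3 x 2 ones
torch.Size([12, 8])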

meslas.external_dependencies.numpytorch.block_diag_irregular(matrices)
meslas.external_dependencies.numpytorch.bootstrap(fun, samp, n_boot=100)
meslas.external_dependencies.numpytorch.categrnd(probs=None, logits=None, sample_shape=())
meslas.external_dependencies.numpytorch.circdiff(angle1, angle2, maxangle=tensor(6.2832))
Parameters:
  • angle1 – angle scaled to be between 0 and maxangle

  • angle2 – angle scaled to be between 0 and maxangle

  • maxangle – max angle. defaults to 2 * pi.

Returns:

angular difference, between -0.5 * maxangle and +0.5 * maxangle

meslas.external_dependencies.numpytorch.conv_t(p, kernel, **kwargs)

1D convolution with the starting time of the signal and kernel anchored.

EXAMPLE:

p_cond_rt = npt.conv_t(
    p_cond_td[None],                               # [1, cond, fr]
    p_tnd[None, None, :].expand([n_cond, 1, nt]),  # [cond, 1, fr]
    groups=n_cond,
)

Parameters:
  • p – [batch, time] or [batch, channel_in, time]

  • kernel – [time] or [channel_out, channel_in, time]

  • kwargs – fed to F.conv1d

Returns:

p[batch, time] or [batch, channel_out, time]

meslas.external_dependencies.numpytorch.crossvalincl(n_tr, i_fold, n_fold=10, mode='consec')
Parameters:
  • n_tr – Number of trials

  • i_fold – Index of fold

  • n_fold – Number of folds. If 1, training set = test set.

  • mode – ‘consec’: consecutive trials; ‘mod’: interleaved

Returns:

boolean (Byte) tensor

meslas.external_dependencies.numpytorch.deg2rad(deg)
meslas.external_dependencies.numpytorch.delta(levels, v, dlevel=None)

Parameters:
  • levels (torch.Tensor)

  • v (torch.Tensor)

  • dlevel (torch.Tensor)

Return type:

torch.Tensor

meslas.external_dependencies.numpytorch.enforce_tensor(v: float | ndarray | Tensor, min_ndim=1, **kwargs)

Construct a tensor if the input is not one; otherwise return the input as is. Same as tensor().

meslas.external_dependencies.numpytorch.entropy(tensor, *args, **kwargs)
Parameters:

tensor (torch.Tensor) – probability. Optionally provide dim and keepdim for summation.

Returns:

torch.Tensor

meslas.external_dependencies.numpytorch.expand_all(*args, shape=None)

Expand tensors so that all tensors are of the same size. Tensors must have the same number of dimensions; otherwise, use expand_batch() to prepend dimensions.
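
Example (a minimal sketch, assuming behaviour analogous to torch.broadcast_tensors but implemented with expand, so memory is shared):

>>> import torch
>>> from meslas.external_dependencies import numpytorch as npt
>>> a, b = npt.expand_all(torch.zeros(1, 3), torch.zeros(2, 1))
>>> a.shape, b.shape
(torch.Size([2, 3]), torch.Size([2, 3]))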

meslas.external_dependencies.numpytorch.expand_batch(*args, **kwargs)

Same as repeat_batch() except forcing use_expand=True, to share memory across repeats, i.e., expand first dimensions while keeping last dimensions the same.

Parameters:
  • args – tuple of tensors to repeat.

  • repeat_existing_dims – whether to repeat singleton dims.

  • to_append_dims – if True, append dims if needed; if False, prepend.

  • shape – desired shape of the output. Give None to match max shape of each dim. Give -1 at dims where the max shape is desired.

Returns:

tuple of repeated tensors.

meslas.external_dependencies.numpytorch.expand_upto_dim(args, dim, to_expand_left=True)

Similar to expand_batch(), but keeps some dims unexpanded even if they don’t match.

Parameters:
  • args – iterable yielding torch.Tensor

  • dim – if to_expand_left=True, then arg[:dim] is expanded; otherwise, arg[dim:] is expanded, for each arg in args. Note that dim=-1 leaves the last dim unexpanded. This is necessary to make dim=0 expand the first.

  • to_expand_left – if True, left of dim is expanded while the rest of the dims are kept unchanged.

Returns:

tuple of expanded args

meslas.external_dependencies.numpytorch.float(v)
meslas.external_dependencies.numpytorch.freeze(module)
meslas.external_dependencies.numpytorch.get_jacobian(net, x, noutputs)

From https://gist.github.com/sbarratt/37356c46ad1350d4c30aefbd488a4faa

Parameters:
  • net (torch.nn.Module)

  • x (torch.Tensor)

  • noutputs (int)

Return type:

torch.Tensor

meslas.external_dependencies.numpytorch.interp1d(query: Tensor, value: Tensor, dim=0) → Tensor
Parameters:
  • query – index on dim. Should be a FloatTensor for gradient.

  • value

  • dim

Returns:

interpolated to give value[query] (when dim=0)
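
Example (a hypothetical sketch; it assumes linear interpolation between adjacent entries of value along dim, which the FloatTensor-for-gradient requirement suggests):

>>> import torch
>>> from meslas.external_dependencies import numpytorch as npt
>>> value = torch.tensor([0., 10., 20.])
>>> npt.interp1d(torch.tensor([0.5, 1.25]), value)
tensor([ 5.0000, 12.5000])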

meslas.external_dependencies.numpytorch.inv_gaussian_cdf(x, mu, lam)
meslas.external_dependencies.numpytorch.inv_gaussian_mean_std2params(mu, std)

mu, std -> mu, lam

meslas.external_dependencies.numpytorch.inv_gaussian_pdf(x, mu, lam)

As in https://en.wikipedia.org/wiki/Inverse_Gaussian_distribution

Parameters:
  • x – values to query. Must be positive.

  • mu – the expectation

  • lam – lambda in Wikipedia’s notation

Returns:

p(x; mu, lam)

meslas.external_dependencies.numpytorch.inv_gaussian_pmf_mean_stdev(x: Tensor, mu: Tensor, std: Tensor, dx=None, algo='diff_cdf') → Tensor
Parameters:
  • x – must be a 1-dim tensor along dim.

  • mu

  • std

  • dx

Returns:

meslas.external_dependencies.numpytorch.inv_gaussian_variance(mu, lam)

As in https://en.wikipedia.org/wiki/Inverse_Gaussian_distribution

Parameters:
  • mu – the expectation

  • lam – lambda in Wikipedia’s notation

Returns:

Var[X]

meslas.external_dependencies.numpytorch.inv_gaussian_variance2lam(mu, var)
meslas.external_dependencies.numpytorch.isnan(v)
meslas.external_dependencies.numpytorch.kron(a, b)

Kronecker product of matrices a and b with leading batch dimensions. Batch dimensions are broadcast.

Parameters:
  • a (torch.Tensor)

  • b (torch.Tensor)

Return type:

torch.Tensor
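
Example (a minimal sketch; the output shape follows the usual Kronecker-product convention):

>>> import torch
>>> from meslas.external_dependencies import numpytorch as npt
>>> npt.kron(torch.eye(2), torch.ones(3, 3)).shape   # (2*3) x (2*3)
torch.Size([6, 6])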

meslas.external_dependencies.numpytorch.kw_np2torch(kw)
meslas.external_dependencies.numpytorch.log_normpdf(sample, mu=0.0, sigma=1.0)
meslas.external_dependencies.numpytorch.logistic(x: Tensor) → Tensor
meslas.external_dependencies.numpytorch.logit(p: Tensor) → Tensor
meslas.external_dependencies.numpytorch.lognorm_params_given_mean_stdev(mean: Tensor, stdev: Tensor) → (Tensor, Tensor)
meslas.external_dependencies.numpytorch.lognorm_pmf(x: Tensor, mean: Tensor, stdev: Tensor) → Tensor
Parameters:
  • x – must be monotonic increasing with equal increment on dim 0

  • mean

  • stdev

Returns:

p[k] = P(x[k] < X < x[k + 1]; mean, stdev)

meslas.external_dependencies.numpytorch.lognormal_params2mean_stdev(loc, scale)
meslas.external_dependencies.numpytorch.m2v(mm)
Return type:

torch.Tensor

meslas.external_dependencies.numpytorch.m2v0(mat)

Matrix dims come first, unlike v2m

meslas.external_dependencies.numpytorch.mat2vec0(mat)

Matrix dims come first, unlike v2m

meslas.external_dependencies.numpytorch.matmul0(a, b)

Matrix dims come first, unlike torch.matmul

meslas.external_dependencies.numpytorch.matmul2vec(mm)
Return type:

torch.Tensor

meslas.external_dependencies.numpytorch.matsum(*tensors)

Apply expand_upto_dim(tensors, -2) before adding them together, consistent with torch.matmul().

Parameters:

tensors – iterable of tensors

Returns:

sum of tensors, expanded except for the last two dimensions.

meslas.external_dependencies.numpytorch.matvecmul0(mat, vec)

Matrix and vec dims come first. Vec is expanded to mat first.

meslas.external_dependencies.numpytorch.max_distrib(p: Tensor) → (Tensor, Tensor)

Distribution of the max of independent RVs R0 ~ p[0] and R1 ~ p[1]. When ndims(p) > 2, each pair of p[0, r0, :] and p[1, r1, :] is processed separately. p.sum(1) is taken as the number of trials.

p_max, p_last = max_distrib(p)

p_max(t,1,:): Probability distribution of max(t_1 ~ p(:,1), t_2 ~ p(:,2))
p_last(t,k,:): Probability of t_k happening last at t.

sums(p_last, [1, 2]) gives all 1’s.

Formula from: http://math.stackexchange.com/questions/308230/expectation-of-the-min-of-two-independent-random-variables

Parameters:

p – [id, value, [batch, …]]

Returns:

p_max[value, batch, …], p_last[id, value, batch, …]

meslas.external_dependencies.numpytorch.max_shape(shapes)
meslas.external_dependencies.numpytorch.mean_distrib(p, v, axis=None)
meslas.external_dependencies.numpytorch.min_distrib(p: Tensor) → (Tensor, Tensor)

Distribution of the min of independent RVs R0 ~ p[0] and R1 ~ p[1]. When ndims(p) > 2, each pair of p[0, r0, :] and p[1, r1, :] is processed separately. p.sum(1) is taken as the number of trials.

p_min, p_1st = min_distrib(p)

p_min(t,1,:): Probability distribution of min(t_1 ~ p(:,1), t_2 ~ p(:,2))
p_1st(t,k,:): Probability of t_k happening first at t.

sums(p_1st, [1, 2]) gives all 1’s.

Formula from: http://math.stackexchange.com/questions/308230/expectation-of-the-min-of-two-independent-random-variables

Parameters:

p – [id, value, [batch, …]]

Returns:

p_min[value, batch, …], p_1st[id, value, batch, …]

meslas.external_dependencies.numpytorch.mm0(a, b)

Matrix dims come first, unlike torch.matmul

meslas.external_dependencies.numpytorch.mvm0(mat, vec)

Matrix and vec dims come first. Vec is expanded to mat first.

meslas.external_dependencies.numpytorch.mvnpdf_log(x, mu=None, sigma=None)
Parameters:
  • x – [batch, ndim]

  • mu – [batch, ndim]

  • sigma – [batch, ndim, ndim]

Returns:

log_prob [batch]

Return type:

torch.FloatTensor
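
Example (a minimal sketch; the printed value assumes the standard multivariate normal log-density, log N(0 | 0, I) = -(ndim/2) * log(2*pi)):

>>> import torch
>>> from meslas.external_dependencies import numpytorch as npt
>>> x = torch.zeros(1, 2)            # [batch, ndim]
>>> mu = torch.zeros(1, 2)           # [batch, ndim]
>>> sigma = torch.eye(2)[None]       # [batch, ndim, ndim]
>>> npt.mvnpdf_log(x, mu, sigma)
tensor([-1.8379])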

meslas.external_dependencies.numpytorch.mvnrnd(mu, sigma, sample_shape=())
meslas.external_dependencies.numpytorch.nan2v(v, fill=0)
meslas.external_dependencies.numpytorch.nanmean(v, *args, inplace=False, **kwargs)
meslas.external_dependencies.numpytorch.nansum(v, *args, inplace=False, **kwargs)
meslas.external_dependencies.numpytorch.normrnd(mu=0.0, sigma=1.0, sample_shape=(), return_distrib=False)

Parameters:

return_distrib (bool)

Return type:

Union[(torch.Tensor, torch.distributions.Distribution), torch.Tensor]

meslas.external_dependencies.numpytorch.npy(v: Tensor | ndarray)

Construct a np.ndarray from a tensor; otherwise return the input as is.

Return type:

np.ndarray

meslas.external_dependencies.numpytorch.npys(*args)
meslas.external_dependencies.numpytorch.numpy(v: Tensor | ndarray)

Construct a np.ndarray from a tensor; otherwise return the input as is.

Return type:

np.ndarray

meslas.external_dependencies.numpytorch.onehotrnd(probs=None, logits=None, sample_shape=())
meslas.external_dependencies.numpytorch.p2en(v, ndim_st=1)

Permute the first ndim_st dimensions of an array v to the end.

Parameters:
  • v (torch.Tensor)

  • ndim_st (int)

Return type:

torch.Tensor

meslas.external_dependencies.numpytorch.p2st(v, ndim_en=1)

Permute the last ndim_en dimensions of an array v to the front.

Parameters:
  • v (torch.Tensor)

  • ndim_en (int)

Return type:

torch.Tensor

meslas.external_dependencies.numpytorch.pconc2conc(pconc)
meslas.external_dependencies.numpytorch.permute2en(v, ndim_st=1)

Permute the first ndim_st dimensions of an array v to the end.

Parameters:
  • v (torch.Tensor)

  • ndim_st (int)

Return type:

torch.Tensor

meslas.external_dependencies.numpytorch.permute2st(v, ndim_en=1)

Permute the last ndim_en dimensions of an array v to the front.

Parameters:
  • v (torch.Tensor)

  • ndim_en (int)

Return type:

torch.Tensor
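
Example (a minimal sketch of the assumed dimension shuffling):

>>> import torch
>>> from meslas.external_dependencies import numpytorch as npt
>>> v = torch.zeros(2, 3, 4)
>>> npt.permute2st(v, ndim_en=1).shape   # last dim moved to the front
torch.Size([4, 2, 3])
>>> npt.permute2en(v, ndim_st=1).shape   # first dim moved to the end
torch.Size([3, 4, 2])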

meslas.external_dependencies.numpytorch.prad2unitvec(prad, dim=-1)
meslas.external_dependencies.numpytorch.prepend_dim(v, n_dim_to_prepend=1)
meslas.external_dependencies.numpytorch.prepend_to_ndim(v, n_dim_desired)
meslas.external_dependencies.numpytorch.rad2deg(rad)
meslas.external_dependencies.numpytorch.rand(shape, low=0, high=1)
meslas.external_dependencies.numpytorch.ravel_multi_index(v, shape, **kwargs)

For now, just use np.ravel_multi_index().

Parameters:
  • v (torch.LongTensor)

  • shape (torch.Size, tuple, list)

  • kwargs (dict)

Returns:

torch.LongTensor

meslas.external_dependencies.numpytorch.repeat_all(*args, shape=None, use_expand=False)

Repeat tensors so that all tensors are of the same size. Tensors must have the same number of dimensions; otherwise, use repeat_batch() to prepend dimensions.

Parameters:

shape – desired shape of the output. Give None to match max shape of each dim. Give -1 at dims where the max shape is desired.

meslas.external_dependencies.numpytorch.repeat_batch(*args, repeat_existing_dims=False, to_append_dims=False, shape=None, use_expand=False)

Repeat first dimensions, while keeping last dimensions the same.

Parameters:
  • args – tuple of tensors to repeat.

  • repeat_existing_dims – whether to repeat singleton dims.

  • to_append_dims – if True, append dims if needed; if False, prepend.

  • shape – desired shape of the output. Give None to match max shape of each dim. Give -1 at dims where the max shape is desired.

  • use_expand – True to use torch.expand instead of torch.repeat, to share the same memory across repeats.

Returns:

tuple of repeated tensors.

meslas.external_dependencies.numpytorch.repeat_dim(tensor, repeat, dim)
meslas.external_dependencies.numpytorch.repeat_to_shape(arg, shape)
Parameters:

shape – desired shape of the output

Return type:

torch.Tensor

meslas.external_dependencies.numpytorch.scatter_add(subs, val, dim=0, shape=None)

Parameters:
  • subs (Union[np.ndarray, torch.LongTensor]) – ndim x n indices, suitable for np.ravel_multi_index

  • val (torch.Tensor) – n x … values to add

meslas.external_dependencies.numpytorch.sem(v, dim=0)
meslas.external_dependencies.numpytorch.sem_distrib(p, v, axis=None, n=None)
meslas.external_dependencies.numpytorch.shiftdim(v: Tensor, shift: Tensor, dim=0, pad='repeat')
meslas.external_dependencies.numpytorch.softmax_bias(p, slope, bias)

Symmetric softmax with bias. Only works for binary. Works elementwise. Cannot use too small or large bias (roughly < 1e-3 or > 1 - 1e-3).

Parameters:
  • p (torch.FloatTensor) – between 0 and 1.

  • slope (torch.FloatTensor) – arbitrary real value. 1 gives identity mapping, 0 always 0.5.

  • bias (torch.FloatTensor) – between 1e-3 and 1 - 1e-3. Giving p=bias returns 0.5.

Returns:

transformed probability.

Return type:

torch.FloatTensor
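
Example (based on the property stated above that p equal to bias maps to 0.5; the exact functional form is not shown here):

>>> import torch
>>> from meslas.external_dependencies import numpytorch as npt
>>> npt.softmax_bias(torch.tensor(0.3), torch.tensor(1.), torch.tensor(0.3))
tensor(0.5000)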

meslas.external_dependencies.numpytorch.softmax_mask(w: Tensor, dim=-1, mask: BoolTensor | None = None) → Tensor

Allows having -np.inf in w to mask out, or giving an explicit bool mask.

meslas.external_dependencies.numpytorch.std_distrib(p, v, axis=None)
meslas.external_dependencies.numpytorch.sumto1(v, dim=None, axis=None, keepdim=True)

Make v sum to 1 across dim, i.e., make dim conditioned on the rest. dim can be a tuple.

Parameters:
  • v – tensor.

  • dim – dimensions to be conditioned upon the rest.

  • axis – if given, overrides dim.

Returns:

tensor of the same shape as v.

Return type:

torch.Tensor
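
Example (normalizing each row so it sums to 1):

>>> import torch
>>> from meslas.external_dependencies import numpytorch as npt
>>> npt.sumto1(torch.ones(2, 3), dim=1)
tensor([[0.3333, 0.3333, 0.3333],
        [0.3333, 0.3333, 0.3333]])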

meslas.external_dependencies.numpytorch.t(tensor)
meslas.external_dependencies.numpytorch.tensor(v: float | ndarray | Tensor, min_ndim=1, **kwargs)

Construct a tensor if the input is not one; otherwise return the input as is. Same as enforce_tensor().
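
Example (a minimal sketch; it assumes min_ndim=1 promotes scalars to 1-d tensors):

>>> import torch
>>> from meslas.external_dependencies import numpytorch as npt
>>> npt.tensor(3.)                   # float input is wrapped into a tensor
tensor([3.])
>>> npt.npy(torch.tensor([1., 2.]))  # npy() converts back to a numpy array
array([1., 2.], dtype=float32)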

meslas.external_dependencies.numpytorch.test_kron()
meslas.external_dependencies.numpytorch.test_softmax_bias()
meslas.external_dependencies.numpytorch.unblock_diag(m, n=None, size_block=None)

The inverse of block_diag(). Not vectorized yet.

Parameters:
  • m – block diagonal matrix

  • n – int. Number of blocks

  • size_block – torch.Size. Size of a block.

Returns:

tensor unblocked such that the last sizes are [n] + size_block

meslas.external_dependencies.numpytorch.unravel_index(v, shape, **kwargs)

For now, just use np.unravel_index().

Parameters:
  • v (torch.LongTensor)

  • shape (torch.Size, tuple, list)

  • kwargs (dict)

Returns:

torch.LongTensor
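
Example (a minimal sketch for the companion ravel_multi_index(); it mirrors np.ravel_multi_index semantics, with subs given as [dim, element]):

>>> import torch
>>> from meslas.external_dependencies import numpytorch as npt
>>> subs = torch.tensor([[1], [2]])          # row 1, column 2
>>> npt.ravel_multi_index(subs, (3, 4))      # flat index in a 3 x 4 grid
tensor([6])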

meslas.external_dependencies.numpytorch.v2m(vec)
Return type:

torch.Tensor

meslas.external_dependencies.numpytorch.v2m0(vec)

Vector dim comes first, unlike v2m

meslas.external_dependencies.numpytorch.var_distrib(p, v, axis=None)
meslas.external_dependencies.numpytorch.vec2mat0(vec)

Vector dim comes first, unlike v2m

meslas.external_dependencies.numpytorch.vec2matmul(vec)
Return type:

torch.Tensor

meslas.external_dependencies.numpytorch.vec_on(v, dim, ndim)
meslas.external_dependencies.numpytorch.vec_on_dim(v, dim, ndim)
meslas.external_dependencies.numpytorch.vmpdf(x, mu, scale=None, normalize=True)
meslas.external_dependencies.numpytorch.vmpdf_a_given_b(a_prad, b_prad, pconc)
Parameters:
  • a_prad (torch.Tensor) – between 0 and 1. Maps to 0 to 2*pi.

  • b_prad (torch.Tensor) – between 0 and 1. Maps to 0 to 2*pi.

  • pconc – float

Returns:

p_a_given_b[index_a, index_b]

Return type:

torch.Tensor

meslas.external_dependencies.numpytorch.vmpdf_prad_pconc(prad, ploc, pconc, normalize=True)
Parameters:
  • prad – 0 to 1 maps to 0 to 2*pi radians

  • pconc – 0 to 1 maps to 0 to inf concentration

Return type:

torch.Tensor

Module contents