Commit 183cbf9

Merge branch 'main' into test_quote_fix
2 parents: eb5d8e3 + a508707


50 files changed (+1198, -483 lines)

.github/workflows/markdown-check.yml (+9 -3)

@@ -7,8 +7,14 @@ on:
     branches: [ "main" ]
 
 jobs:
-  markdown-link-check:
+  check-links:
+    name: runner / linkspector
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v4
-      - uses: gaurav-nelson/github-action-markdown-link-check@v1
+      - uses: actions/checkout@v4
+      - name: Run linkspector
+        uses: umbrelladocs/action-linkspector@v1
+        with:
+          github_token: ${{ secrets.github_token }}
+          reporter: github-pr-review
+          fail_on_error: true

.pre-commit-config.yaml (+7)

@@ -13,3 +13,10 @@ repos:
         --extra-keys=metadata.language_info metadata.vscode metadata.kernelspec cell.metadata.vscode,
         --drop-empty-cells
       ]
+  - repo: https://github.com/codespell-project/codespell
+    rev: v2.3.0
+    hooks:
+      - id: codespell
+        args: [ --toml, "pyproject.toml"]
+        additional_dependencies:
+          - tomli

CHANGELOG.md (+11 -2)

@@ -1,9 +1,18 @@
+# v1.8.2 (2025-01-06)
+- Fixed:
+  - Fixed layout and printing issues (https://github.com/sandialabs/pyttb/pull/354)
+  - Fixed tutorial hierarchy (https://github.com/sandialabs/pyttb/pull/343)
+- Improved:
+  - Improved `pyttb_utils` (https://github.com/sandialabs/pyttb/pull/353)
+  - Improved docs for coming from MATLAB (https://github.com/sandialabs/pyttb/pull/352)
+  - Improved shape support in data classes (https://github.com/sandialabs/pyttb/pull/348)
+
 # v1.8.1 (2024-11-11)
 - Fixed:
   - Aligning comparison operator output for data classes (https://github.com/sandialabs/pyttb/pull/331)
 - Improved:
   - Getting starting documentation (https://github.com/sandialabs/pyttb/pull/324)
-  - Development enviroment (https://github.com/sandialabs/pyttb/pull/329, https://github.com/sandialabs/pyttb/pull/330)
+  - Development environment (https://github.com/sandialabs/pyttb/pull/329, https://github.com/sandialabs/pyttb/pull/330)
   - Documentation (https://github.com/sandialabs/pyttb/pull/328, https://github.com/sandialabs/pyttb/pull/334)
 
 # v1.8.0 (2024-10-23)
@@ -84,7 +93,7 @@
   - Addresses ambiguity of -0 by using `exclude_dims` (`numpy.ndarray`) parameter
 - `ktensor.ttv`, `sptensor.ttv`, `tensor.ttv`, `ttensor.ttv`
   - Use `exlude_dims` parameter instead of `-dims`
-  - Explicit nameing of dimensions to exclude
+  - Explicit naming of dimensions to exclude
 - `tensor.ttsv`
   - Use `skip_dim` (`int`) parameter instead of `-dims`
   - Exclude all dimensions up to and including `skip_dim`

CITATION.bib (+3 -3)

@@ -1,7 +1,7 @@
 @misc{pyttb,
   author = {Dunlavy, Daniel M. and Johnson, Nicholas T. and others},
-  month = nov,
-  title = {{pyttb: Python Tensor Toolbox, v1.8.1}},
+  month = jan,
+  title = {{pyttb: Python Tensor Toolbox, v1.8.2}},
   url = {https://github.com/sandialabs/pyttb},
-  year = {2024}
+  year = {2025}
 }

CONTRIBUTING.md (+33 -5)

@@ -35,19 +35,25 @@ current or filing a new [issue](https://github.com/sandialabs/pyttb/issues).
    ```
    git checkout -b my-new-feature-branch
    ```
-1. Formatters and linting
+1. Formatters and linting (These are checked in the full test suite as well)
    1. Run autoformatters and linting from root of project (they will change your code)
-      ```commandline
-      ruff check . --fix
-      ruff format
-      ```
+       ```commandline
+       ruff check . --fix
+       ruff format
+       ```
       1. Ruff's `--fix` won't necessarily address everything and may point out issues that need manual attention
       1. [We](./.pre-commit-config.yaml) optionally support [pre-commit hooks](https://pre-commit.com/) for this
       1. Alternatively, you can run `pre-commit run --all-files` from the command line if you don't want to install the hooks.
    1. Check typing
       ```commandline
       mypy pyttb/
       ```
+      1. Not included in our pre-commit hooks because of slow runtime.
+   1. Check spelling
+      ```commandline
+      codespell
+      ```
+      1. This is also included in the optional pre-commit hooks.
 
 1. Run tests (at desired fidelity)
    1. Just doctests (enabled by default)
@@ -70,6 +76,28 @@ current or filing a new [issue](https://github.com/sandialabs/pyttb/issues).
       ```
    2. Clear notebook outputs if run locally see `nbstripout` in our [pre-commit configuration](.pre-commit-config.yaml)
 
+### Adding tutorials
+
+1. Follow general setup from above
+   1. Checkout a branch to make your changes
+   1. Install from source with dev and doc dependencies
+   1. Verify you can build the existing docs with sphinx
+
+1. Create a new Jupyter notebook in [./docs/source/tutorial](./docs/source/tutorial)
+   1. Our current convention is to prefix the filename with the type of tutorial and all lower case
+
+1. Add a reference to your notebook in [./docs/source/tutorials.rst](./docs/source/tutorials.rst)
+
+1. Rebuild the docs, review locally, and iterate on changes until ready for review
+
+#### Tutorial References
+Generally, inspecting existing documentation or tutorials should provide a reasonable starting point for capabilities,
+but the following links may be useful if that's not sufficient.
+
+1. We use [sphinx](https://www.sphinx-doc.org/) to automatically build our docs and may be useful for `.rst` issues
+
+1. We use [myst-nb](https://myst-nb.readthedocs.io/) to render our notebooks to documentation
+
 ## GitHub Workflow
 
 ### Proposing Changes

README.md (+14 -1)

@@ -32,7 +32,7 @@ low-rank tensor decompositions:
 [`cp_apr`](https://pyttb.readthedocs.io/en/stable/cpapr.html "CP decomposition via Alternating Poisson Regression"),
 [`gcp_opt`](https://pyttb.readthedocs.io/en/stable/gcpopt.html "Generalized CP decomposition"),
 [`hosvd`](https://pyttb.readthedocs.io/en/stable/hosvd.html "Tucker decomposition via Higher Order Singular Value Decomposition"),
-[`tucker_als`](https://pyttb.readthedocs.io/en/stable/tuckerals.html "Tucker decompostion via Alternating Least Squares")
+[`tucker_als`](https://pyttb.readthedocs.io/en/stable/tuckerals.html "Tucker decomposition via Alternating Least Squares")
 
 ## Quick Start
 
@@ -56,6 +56,19 @@ CP_ALS:
 Final f = 7.508253e-01
 ```
 
+### Memory layout
+For historical reasons we use Fortran memory layouts, where numpy by default uses C.
+This is relevant for indexing. In the future we hope to extend support for both.
+```python
+>>> import numpy as np
+>>> c_order = np.arange(8).reshape((2,2,2))
+>>> f_order = np.arange(8).reshape((2,2,2), order="F")
+>>> print(c_order[0,1,1])
+3
+>>> print(f_order[0,1,1])
+6
+```
+
 <!-- markdown-link-check-disable -->
 ### Getting Help
 - [Documentation](https://pyttb.readthedocs.io)
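The C- versus Fortran-order distinction in the README's new memory-layout note can be checked directly with numpy alone; a minimal sketch (variable names follow the README snippet):

```python
import numpy as np

# The same 8 values laid out in C (row-major) vs. Fortran (column-major) order.
c_order = np.arange(8).reshape((2, 2, 2))
f_order = np.arange(8).reshape((2, 2, 2), order="F")

# The same index reads different values because the axes fill in opposite order:
# C order varies the last axis fastest, Fortran order the first.
print(c_order[0, 1, 1])  # 3
print(f_order[0, 1, 1])  # 6

# Reading each array back in its own order recovers the original 0..7 sequence.
assert c_order.ravel(order="C").tolist() == list(range(8))
assert f_order.ravel(order="F").tolist() == list(range(8))
```

Since `pyttb` stores dense tensors in Fortran order, this is the distinction to keep in mind when indexing into the underlying `numpy` buffers.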

conftest.py (+131)

@@ -4,11 +4,13 @@
 # U.S. Government retains certain rights in this software.
 
 import numpy
+import numpy as np
 
 # content of conftest.py
 import pytest
 
 import pyttb
+import pyttb as ttb
 
 
 @pytest.fixture(autouse=True)
@@ -17,6 +19,12 @@ def add_packages(doctest_namespace):  # noqa: D103
     doctest_namespace["ttb"] = pyttb
 
 
+@pytest.fixture(params=[{"order": "F"}, {"order": "C"}])
+def memory_layout(request):
+    """Test C and F memory layouts."""
+    return request.param
+
+
 def pytest_addoption(parser):  # noqa: D103
     parser.addoption(
         "--packaging",
@@ -30,3 +38,126 @@ def pytest_addoption(parser):  # noqa: D103
 def pytest_configure(config):  # noqa: D103
     if not config.option.packaging:
         config.option.markexpr = "not packaging"
+
+
+@pytest.fixture()
+def sample_tensor_2way():  # noqa: D103
+    data = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
+    shape = (2, 3)
+    params = {"data": data, "shape": shape}
+    tensorInstance = ttb.tensor(data, shape)
+    return params, tensorInstance
+
+
+@pytest.fixture()
+def sample_tensor_3way():  # noqa: D103
+    data = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0])
+    shape = (2, 3, 2)
+    params = {"data": np.reshape(data, np.array(shape), order="F"), "shape": shape}
+    tensorInstance = ttb.tensor(data, shape)
+    return params, tensorInstance
+
+
+@pytest.fixture()
+def sample_ndarray_1way():  # noqa: D103
+    shape = (16,)
+    ndarrayInstance = np.reshape(np.arange(1, 17), shape, order="F")
+    params = {"data": ndarrayInstance, "shape": shape}
+    return params, ndarrayInstance
+
+
+@pytest.fixture()
+def sample_ndarray_2way():  # noqa: D103
+    shape = (4, 4)
+    ndarrayInstance = np.reshape(np.arange(1, 17), shape, order="F")
+    params = {"data": ndarrayInstance, "shape": shape}
+    return params, ndarrayInstance
+
+
+@pytest.fixture()
+def sample_ndarray_4way():  # noqa: D103
+    shape = (2, 2, 2, 2)
+    ndarrayInstance = np.reshape(np.arange(1, 17), shape, order="F")
+    params = {"data": ndarrayInstance, "shape": shape}
+    return params, ndarrayInstance
+
+
+@pytest.fixture()
+def sample_tenmat_4way():  # noqa: D103
+    shape = (4, 4)
+    data = np.reshape(np.arange(1, 17), shape, order="F")
+    tshape = (2, 2, 2, 2)
+    rdims = np.array([0, 1])
+    cdims = np.array([2, 3])
+    tenmatInstance = ttb.tenmat()
+    tenmatInstance.tshape = tshape
+    tenmatInstance.rindices = rdims.copy()
+    tenmatInstance.cindices = cdims.copy()
+    tenmatInstance.data = data.copy()
+    params = {
+        "data": data,
+        "rdims": rdims,
+        "cdims": cdims,
+        "tshape": tshape,
+        "shape": shape,
+    }
+    return params, tenmatInstance
+
+
+@pytest.fixture()
+def sample_tensor_4way():  # noqa: D103
+    data = np.arange(1, 17)
+    shape = (2, 2, 2, 2)
+    params = {"data": np.reshape(data, np.array(shape), order="F"), "shape": shape}
+    tensorInstance = ttb.tensor(data, shape)
+    return params, tensorInstance
+
+
+@pytest.fixture()
+def sample_ktensor_2way():  # noqa: D103
+    weights = np.array([1.0, 2.0])
+    fm0 = np.array([[1.0, 2.0], [3.0, 4.0]])
+    fm1 = np.array([[5.0, 6.0], [7.0, 8.0]])
+    factor_matrices = [fm0, fm1]
+    data = {"weights": weights, "factor_matrices": factor_matrices}
+    ktensorInstance = ttb.ktensor(factor_matrices, weights)
+    return data, ktensorInstance
+
+
+@pytest.fixture()
+def sample_ktensor_3way():  # noqa: D103
+    rank = 2
+    shape = (2, 3, 4)
+    vector = np.arange(1, rank * sum(shape) + 1).astype(float)
+    weights = 2 * np.ones(rank).astype(float)
+    vector_with_weights = np.concatenate((weights, vector), axis=0)
+    # vector_with_weights = vector_with_weights.reshape((len(vector_with_weights), 1))
+    # ground truth
+    fm0 = np.array([[1.0, 3.0], [2.0, 4.0]])
+    fm1 = np.array([[5.0, 8.0], [6.0, 9.0], [7.0, 10.0]])
+    fm2 = np.array([[11.0, 15.0], [12.0, 16.0], [13.0, 17.0], [14.0, 18.0]])
+    factor_matrices = [fm0, fm1, fm2]
+    data = {
+        "weights": weights,
+        "factor_matrices": factor_matrices,
+        "vector": vector,
+        "vector_with_weights": vector_with_weights,
+        "shape": shape,
+    }
+    ktensorInstance = ttb.ktensor(factor_matrices, weights)
+    return data, ktensorInstance
+
+
+@pytest.fixture()
+def sample_ktensor_symmetric():  # noqa: D103
+    weights = np.array([1.0, 1.0])
+    fm0 = np.array(
+        [[2.340431417384394, 4.951967353890655], [4.596069112758807, 8.012451489774961]]
+    )
+    fm1 = np.array(
+        [[2.340431417384394, 4.951967353890655], [4.596069112758807, 8.012451489774961]]
+    )
+    factor_matrices = [fm0, fm1]
+    data = {"weights": weights, "factor_matrices": factor_matrices}
+    ktensorInstance = ttb.ktensor(factor_matrices, weights)
+    return data, ktensorInstance
docs/source/index.rst (+12)

@@ -39,6 +39,18 @@ algorithms for computing low-rank tensor models.
 
 Getting Started
 ===============
+For historical reasons we use Fortran memory layouts, where numpy by default uses C.
+This is relevant for indexing. In the future we hope to extend support for both.
+
+.. code-block:: python
+
+    >>> import numpy as np
+    >>> c_order = np.arange(8).reshape((2,2,2))
+    >>> f_order = np.arange(8).reshape((2,2,2), order="F")
+    >>> print(c_order[0,1,1])
+    3
+    >>> print(f_order[0,1,1])
+    6
 
 .. toctree::
    :maxdepth: 1

docs/source/matlab/ktensor.rst (+2 -4)

@@ -22,7 +22,5 @@ Methods
 +-----------------+----------------------+------------------------------------------------------------------------+
 | ``tensor``      | ``to_tensor``        | ``X.to_tensor()``                                                      |
 +-----------------+----------------------+------------------------------------------------------------------------+
-
-MATLAB methods not included in ``pyttb``
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-* ``viz``
+| ``viz``         | ``viz``              | ``X.viz()``                                                            |
++-----------------+----------------------+------------------------------------------------------------------------+

docs/source/tutorial/algorithm_cp_als.ipynb (+2 -2)

@@ -122,7 +122,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Increase the maximium number of iterations\n",
+    "## Increase the maximum number of iterations\n",
     "Note that the previous run kicked out at only 10 iterations, before reaching the specified convegence tolerance. Let's increase the maximum number of iterations and try again, using the same initial guess."
    ]
   },
@@ -337,7 +337,7 @@
    "source": [
     "## Recommendations\n",
     "* Run multiple times with different guesses and select the solution with the best fit.\n",
-    "* Try different ranks and choose the solution that is the best descriptor for your data based on the combination of the fit and the interpretaton of the factors, e.g., by visualizing the results."
+    "* Try different ranks and choose the solution that is the best descriptor for your data based on the combination of the fit and the interpretation of the factors, e.g., by visualizing the results."
    ]
   }
 ],

docs/source/tutorial/algorithm_gcp_opt.ipynb (+1 -1)

@@ -19,7 +19,7 @@
    "tags": []
   },
   "source": [
-    "This document outlines usage and examples for the generalized CP (GCP) tensor decomposition implmented in `pyttb.gcp_opt`. GCP allows alternate objective functions besides sum of squared errors, which is the standard for CP. The code support both dense and sparse input tensors, but the sparse input tensors require randomized optimization methods.\n",
+    "This document outlines usage and examples for the generalized CP (GCP) tensor decomposition implemented in `pyttb.gcp_opt`. GCP allows alternate objective functions besides sum of squared errors, which is the standard for CP. The code support both dense and sparse input tensors, but the sparse input tensors require randomized optimization methods.\n",
     "\n",
     "GCP is described in greater detail in the manuscripts:\n",
     "* D. Hong, T. G. Kolda, J. A. Duersch, Generalized Canonical Polyadic Tensor Decomposition, SIAM Review, 62:133-163, 2020, https://doi.org/10.1137/18M1203626\n",

docs/source/tutorial/algorithm_hosvd.ipynb (+1 -1)

@@ -94,7 +94,7 @@
    "metadata": {},
    "source": [
     "## Generate a core with different accuracies for different shapes\n",
-    "We will create a core `tensor` that has is nearly block diagonal. The blocks are expontentially decreasing in norm, with the idea that we can pick off one block at a time as we increate the prescribed accuracy of the HOSVD. To do this, we define and use a function `tenrandblk()`."
+    "We will create a core `tensor` that has is nearly block diagonal. The blocks are expontentially decreasing in norm, with the idea that we can pick off one block at a time as we increase the prescribed accuracy of the HOSVD. To do this, we define and use a function `tenrandblk()`."
    ]
   },
  {

docs/source/tutorial/class_sptensor.ipynb (+1 -1)

@@ -17,7 +17,7 @@
    "metadata": {},
    "source": [
     "## Creating a `sptensor`\n",
-    "The `sptensor` class stores the data in coordinate format. A sparse `sptensor` can be created by passing in a list of subscripts and values. For example, here we pass in three subscripts and a scalar value. The resuling sparse `sptensor` has three nonzero entries, and the `shape` is the size of the largest subscript in each dimension."
+    "The `sptensor` class stores the data in coordinate format. A sparse `sptensor` can be created by passing in a list of subscripts and values. For example, here we pass in three subscripts and a scalar value. The resulting sparse `sptensor` has three nonzero entries, and the `shape` is the size of the largest subscript in each dimension."
    ]
   },
  {

docs/source/tutorial/class_sumtensor.ipynb (+1 -1)

@@ -54,7 +54,7 @@
    "metadata": {},
    "source": [
     "## Creating sumtensors\n",
-    "A sumtensor `T` can only be delared as a sum of same-shaped tensors T1, T2,...,TN. The summand tensors are stored internally, which define the \"parts\" of the `sumtensor`. The parts of a `sumtensor` can be (dense) tensors (`tensor`), sparse tensors (` sptensor`), Kruskal tensors (`ktensor`), or Tucker tensors (`ttensor`). An example of the use of the sumtensor constructor follows."
+    "A sumtensor `T` can only be declared as a sum of same-shaped tensors T1, T2,...,TN. The summand tensors are stored internally, which define the \"parts\" of the `sumtensor`. The parts of a `sumtensor` can be (dense) tensors (`tensor`), sparse tensors (` sptensor`), Kruskal tensors (`ktensor`), or Tucker tensors (`ttensor`). An example of the use of the sumtensor constructor follows."
    ]
   },
  {
