
Commit b0a2866

Merge pull request #6937 from havarddj/snippet-test-doc-fix
Add documentation of snippet testing; fix lang tag issue in #6934
2 parents fe9407c + a56982d

3 files changed: 74 additions & 8 deletions


Development.md

Lines changed: 57 additions & 1 deletion
@@ -337,7 +337,7 @@ Code Attribution
 ----------------
 
 Each file should begin with a short copyright information, mentioning the people
-who are mainly involved in coding this particular python file. In practice,
+who are mainly involved in coding this particular python file.
 
 
 Testing
@@ -360,6 +360,62 @@ Testing
```
it produces beautiful coverage scores in `lmfdb/cover/index.html`

Code Snippets
-------------

Many of the LMFDB pages include code snippets which show how to generate the objects on the page in computer algebra systems such as SageMath, Magma, PARI/GP or Oscar. To add code snippets to a new page, place the relevant code in a file named `code.yaml`. This file should always define the languages used via the `prompt` tag, for example:
```
prompt:
  sage: 'sage'
  pari: 'gp'
  magma: 'magma'
  oscar: 'oscar'
```

The code snippets should be formatted as template strings, which are then filled in with the page's data when the page is loaded. For example:

```yaml
# From number_fields/code.yaml
field:
  comment: Define the number field
  sage: x = polygen(QQ); K.<a> = NumberField(%s)
  pari: K = bnfinit(%s, 1)
  magma: R<x> := PolynomialRing(Rationals()); K<a> := NumberField(%s);
  oscar: Qx, x = polynomial_ring(QQ); K, a = number_field(%s)
```
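
Filling in such a template amounts to ordinary `%`-formatting. A minimal sketch (the loading code and the example polynomial are illustrative, not how the LMFDB itself renders pages):

```python
# Sketch: instantiate the snippet templates from number_fields/code.yaml.
# The example polynomial is hypothetical.
import yaml

with open("lmfdb/number_fields/code.yaml") as f:
    snippets = yaml.safe_load(f)

poly = "x^2 - x - 1"

for lang, template in snippets["field"].items():
    if lang == "comment":  # the comment tag is not a code template
        continue
    print(f"{lang}: {template % poly}")
# e.g. sage: x = polygen(QQ); K.<a> = NumberField(x^2 - x - 1)
```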

### Snippet testing

Sometimes computer algebra systems make breaking changes which render the code snippets invalid. To catch this, there is a testing system for snippets implemented in `lmfdb/tests/generate_snippet_tests.py`. It runs the code line by line in the various CASes, excluding Magma, and tests not the correctness of the results but only their consistency, by comparing against previous results stored in the `lmfdb/tests/snippet_tests` directory. Normally (and at the time of writing), this is run automatically by GitHub Actions, which generate the relevant log files. However, it is possible to run it manually using the CLI tool

```bash
sage --python ./lmfdb/tests/generate_snippet_tests.py -h
```

which takes a number of arguments, of which exactly one of the positional commands `generate` and `test` is required.
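
For example (illustrative invocations; the available flags are defined at the bottom of `generate_snippet_tests.py`, see the diff further down):

```bash
# Regenerate the stored logs for a single code.yaml file
sage --python ./lmfdb/tests/generate_snippet_tests.py generate -f lmfdb/number_fields/code.yaml

# Run the consistency tests for sage only, writing errors to a file
sage --python ./lmfdb/tests/generate_snippet_tests.py test -o sage -e errors.log
```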

To specify which code snippets should be tested in the GitHub Action, add a tag of the form

```yaml
# e.g. in /lmfdb/number_fields/code.yaml
...
snippet_test:
  testQ:
    label: 1.1.1.1
    langs:
    - sage
    - magma
    - oscar
    - gp
    url: NumberField/1.1.1.1/download/{lang}
```

The `langs` tag here is optional; if omitted, all available languages in the `prompt` tag will be used.
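
Concretely, the per-test language selection behaves roughly like this sketch (the helper `langs_to_test` is hypothetical; the actual check lives inline in `create_snippet_tests`, see the diff further down):

```python
# Sketch of the language selection for one snippet_test entry.
def langs_to_test(item, snippet_langs):
    test_langs = item.get('langs')   # the optional `langs` tag
    if test_langs is None:
        return set(snippet_langs)    # default: all available languages
    return set(snippet_langs) & set(test_langs)

item = {'label': '1.1.1.1', 'langs': ['sage', 'magma', 'oscar', 'gp']}
print(langs_to_test(item, {'sage', 'gp', 'oscar'}))  # {'sage', 'gp', 'oscar'} in some order
```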

### Snippet test and generate GitHub Actions

Part of the code snippet testing system is a pair of GitHub Actions which generate evaluation log files automatically. The first runs when a change to a `code*.yaml` file is merged into the main branch and regenerates the evaluation files. If the output differs from the previous evaluation files, it opens a pull request, allowing you to check manually that the output is as expected. The logic is in `.github/workflows/snippet_generate.yml`.
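
Its trigger might look roughly like this (a hypothetical sketch, not the actual contents of `snippet_generate.yml`):

```yaml
# Hypothetical trigger: run when a code*.yaml file changes on main.
on:
  push:
    branches: [main]
    paths:
      - 'lmfdb/**/code*.yaml'
```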

Additionally, there is a second CI which runs twice a month and checks that the evaluations are consistent; it can also be run on demand. This ensures that if and when SageMath (or another CAS) deprecates a function, we will know without having to wait for someone to update the yaml files, rerun the code or submit a bug report. If this action generates output different from what is stored in the evaluation files, it fails and uploads the diff as an artifact. This is in `.github/workflows/snippet_test.yml`. Furthermore, it creates or updates an issue (see https://github.com/LMFDB/lmfdb/issues/6810 for an example) which keeps track of all the evaluation errors.
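
Again as a sketch (not the actual `snippet_test.yml`), a schedule combined with manual dispatch would look like:

```yaml
# Hypothetical trigger: twice a month, plus on-demand runs.
on:
  schedule:
    - cron: '0 0 1,15 * *'
  workflow_dispatch:
```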

Pro Tip: Debugging
-------------------

lmfdb/elliptic_curves/code.yaml

Lines changed: 1 addition & 0 deletions

@@ -277,6 +277,7 @@ snippet_test:
     url: EllipticCurve/Q/11/a/3/download/{lang}?label=11.a3
   test37a:
     label: 37.a
+    langs:
     - sage
     - magma
     - gp

lmfdb/tests/generate_snippet_tests.py

Lines changed: 16 additions & 7 deletions

@@ -1,6 +1,12 @@
-# Helper function for generating test files
+### CLI tools for testing consistency (but not correctness) of code snippets
+# See the heading "Code Snippets" in Development.md for more details about usage.
+#
+#
+# Author: Håvard Damm-Johnsen <havard-dj@proton.me>
+
 # NB: magma is currently not supported, run manually instead
 
+
 from pathlib import Path
 import yaml
 import argparse

@@ -120,7 +126,6 @@ def _eval_code_file(data, lang, proc, logfile):
     for line in lines:
         if lang == 'magma':
             print(line)
-
         try:
             proc.run_command(line, timeout=60*3)
         except Exception:

@@ -223,12 +228,16 @@ def create_snippet_tests(yaml_file_path=None, ignore_langs=[], test=False, only_
     snippet_test = contents['snippet_test']
 
     snippet_langs = {'gp' if k == 'pari' else k for k in contents['prompt'].keys()}
-    snippet_langs &= langs # intersection of sets
+    snippet_langs &= langs # intersect set with langs
 
     for _, items in snippet_test.items():
         label = items['label']
 
         for lang in snippet_langs:
+            test_langs = items.get('langs')
+            # If we specify languages, only test those. Should fix the problem in PR #6934
+            if test_langs is not None and lang not in test_langs:
+                continue
             url = items['url'].format(lang=lang)
             filename = code_file.stem + "-" + label + "-" + lang + ".log"
 
@@ -271,10 +280,10 @@ def create_snippet_tests(yaml_file_path=None, ignore_langs=[], test=False, only_
 if __name__ == '__main__':
     parser = argparse.ArgumentParser("Generate snippet tests")
     parser.add_argument("cmd", help="*generate* test files or run *test*s", choices=['generate', 'test'])
-    parser.add_argument("-i", "--ignore", help="ignore languages", action='append', nargs='+', default=[])
-    parser.add_argument("-o", "--only", help="only languages", action='append', nargs='+', default=None)
-    parser.add_argument("-f", "--file", help="run on single file", type=str)
-    parser.add_argument("-e", "--error-file", help="write errors to file", type=str)
+    parser.add_argument("-i", "--ignore", help="Ignore languages - these will not be run", action='append', nargs='+', default=[])
+    parser.add_argument("-o", "--only", help="Only languages - only these languages will be run", action='append', nargs='+', default=None)
+    parser.add_argument("-f", "--file", help="Run test or generate on a single file", type=str)
+    parser.add_argument("-e", "--error-file", help="Specify error log file (otherwise stdout)", type=str)
 
     args = parser.parse_args()
 