Code Attribution
----------------
Each file should begin with a short copyright notice, mentioning the people who are mainly involved in writing this particular Python file.
Testing
-------

Running the tests with coverage enabled produces beautiful coverage scores in `lmfdb/cover/index.html`.
Code Snippets
-------------
Many of the LMFDB pages include code snippets which describe how to generate the objects on the page using computer algebra systems such as SageMath, Magma, PARI/GP or Oscar. To add code snippets to a new page, place the relevant code in a file named `code.yaml`. The file should always define the languages used via the `prompt` tag, for example:
```yaml
prompt:
  sage: 'sage'
  pari: 'gp'
  magma: 'magma'
  oscar: 'oscar'
```
The code snippets should be formatted as template strings, which are filled in when the page is loaded. For example:

```yaml
oscar: Qx, x = polynomial_ring(QQ); K, a = number_field(%s)
```
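As an illustration, such a template can be filled in with plain `%`-style substitution (a minimal sketch; the polynomial string is a hypothetical example value, and the LMFDB code uses its own substitution logic):

```python
# Sketch: filling a snippet template that contains a %s placeholder.
template = "Qx, x = polynomial_ring(QQ); K, a = number_field(%s)"

# "x^2 - x - 1" is a made-up defining polynomial for illustration.
snippet = template % "x^2 - x - 1"
print(snippet)
# Qx, x = polynomial_ring(QQ); K, a = number_field(x^2 - x - 1)
```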
### Snippet testing
Sometimes computer algebra systems make breaking changes which render the code snippets invalid. To catch this, there is a testing system for snippets implemented in `lmfdb/tests/generate_snippet_tests.py`. It runs the code line by line in the various CASes, excluding Magma. It does not test correctness of the results, only consistency, by comparing with previous results stored in the `lmfdb/tests/snippet_tests` directory. Normally (and at the time of writing), this is run automatically by GitHub Actions, which generates the relevant log files. However, it is possible to run it manually using the CLI tool, which takes a number of arguments (one of the positional arguments `generate` or `test` is required).
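The consistency check at the heart of this system can be pictured roughly as follows (a simplified sketch under the assumption that each stored log is a plain text file; `check_consistency` is a hypothetical helper, not the actual implementation in `generate_snippet_tests.py`):

```python
from pathlib import Path


def check_consistency(snippet_output: str, log_file: Path) -> bool:
    """Compare fresh CAS output against the stored evaluation log.

    Returns True if they match; the real test fails on a mismatch
    and uploads the diff as an artifact.
    """
    stored = log_file.read_text()
    return snippet_output.strip() == stored.strip()
```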
To specify which code snippets should be tested in the GitHub Action, add a tag of the form
```yaml
# e.g. in /lmfdb/number_fields/code.yaml
...
snippet_test:
  testQ:
    label: 1.1.1.1
    langs:
      - sage
      - magma
      - oscar
      - gp
    url: NumberField/1.1.1.1/download/{lang}
```
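The `{lang}` placeholder in the `url` tag is presumably expanded once per tested language; with standard Python string formatting that would look like this (an assumption about the mechanism, not LMFDB's actual code):

```python
# Sketch: expanding the url template from the snippet_test tag
# for each language listed under langs.
url_template = "NumberField/1.1.1.1/download/{lang}"
urls = [url_template.format(lang=lang) for lang in ["sage", "magma", "oscar", "gp"]]
print(urls[0])
# NumberField/1.1.1.1/download/sage
```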
The `langs` tag here is optional; if it is omitted, all (available) languages in the `prompt` tag will be used.
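This fallback behaviour can be sketched as follows (a hypothetical helper, assuming the parsed yaml is available as plain dicts; not LMFDB's actual code):

```python
def snippet_langs(test_entry: dict, prompt: dict) -> list:
    """Languages to test: the explicit 'langs' list if present,
    otherwise every language defined in the prompt tag."""
    return test_entry.get("langs", list(prompt))


# The prompt tag from the earlier example, as a parsed dict.
prompt = {"sage": "sage", "pari": "gp", "magma": "magma", "oscar": "oscar"}
print(snippet_langs({"label": "1.1.1.1"}, prompt))
# ['sage', 'pari', 'magma', 'oscar']
```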
### Snippet test and generate GitHub Actions
Part of the code snippet testing system is a pair of GitHub Actions which generate evaluation log files automatically. The first one runs when a change to a `code*.yaml` file is merged into the main branch and, if so, regenerates the evaluation files. If the output differs from the previous evaluation files, it opens a pull request, allowing you to check manually that the output is as expected. The logic is in `.github/workflows/snippet_generate.yml`.
Additionally, there is a second CI job which runs twice a month and checks that the evaluations are consistent; it can also be run on demand. This ensures that if and when SageMath (or another CAS) deprecates a function, we will know without having to wait for someone to update the yaml files, rerun the code or submit a bug report. If this action generates output different from what is stored in the evaluation files, it will fail and upload the diff as an artifact. This is in `.github/workflows/snippet_test.yml`. Furthermore, it will create or update an issue (see https://github.com/LMFDB/lmfdb/issues/6810 for example) which keeps track of all the evaluation errors.