Enhance documentation for turn_by_turn; add usage examples for read_tbt and convert_to_tbt functions, clarify writing data process, and detail supported formats and options.
doc/index.rst: 57 additions & 0 deletions
@@ -5,6 +5,63 @@ Welcome to turn_by_turn's documentation!

It provides a custom dataclass ``TbtData`` to do so, with attributes corresponding to the relevant measurement information.

How to Use turn_by_turn
=======================

There are two main ways to create a ``TbtData`` object:

1. **Reading from file (disk):**
   Use ``read_tbt`` to load measurement data from a file on disk. This is the standard entry point for working with measurement files in supported formats.

2. **In-memory conversion:**
   Use ``convert_to_tbt`` to convert data already loaded in memory (such as a pandas DataFrame, a tfs DataFrame, or an ``xtrack.Line``) into a ``TbtData`` object. This is useful for workflows where you generate or manipulate data in Python before standardizing it.

Both methods produce a ``TbtData`` object, which can then be used for further analysis or written out to supported formats.
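As a sketch of the in-memory route, and assuming the common turn-by-turn layout of one row per BPM and one column per turn (an assumption here; check the relevant reader module for the exact shape your ``datatype`` expects), preparing a pandas DataFrame for ``convert_to_tbt`` might look like:

```python
import numpy as np
import pandas as pd

# Hypothetical turn-by-turn positions: one row per BPM, one column per turn.
# The layout is illustrative; the shape convert_to_tbt expects depends on
# the chosen datatype.
n_turns = 4
bpms = ["BPM.1", "BPM.2", "BPM.3"]
rng = np.random.default_rng(0)
df = pd.DataFrame(
    rng.normal(scale=1e-3, size=(len(bpms), n_turns)),
    index=bpms,
    columns=range(n_turns),
)
print(df.shape)  # (3, 4): 3 BPMs, 4 turns

# With turn_by_turn installed, such in-memory data (or a tfs DataFrame /
# xtrack.Line) would then be standardized via convert_to_tbt, e.g.:
#   from turn_by_turn.io import convert_to_tbt
#   tbt_data = convert_to_tbt(df, datatype="madng")
```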
Supported Modules and Limitations
=================================

The following table summarizes which modules support disk reading and in-memory conversion, along with important limitations:

- Only ``madng`` and ``xtrack`` support in-memory conversion.
- Most modules are for disk reading only.
- Some modules (e.g., ``esrf``) are experimental or have limited support.
- For writing, see the next section.
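The limitations above can be captured as a small lookup. The dictionary below is purely illustrative (it is not a real ``turn_by_turn`` structure) and only encodes the bullets just listed:

```python
# Illustrative support matrix encoding the bullets above (not a real
# turn_by_turn structure; entries beyond the listed modules are assumptions).
SUPPORT = {
    "madng":  {"disk": True, "in_memory": True},
    "xtrack": {"disk": True, "in_memory": True},
    "lhc":    {"disk": True, "in_memory": False},
    "esrf":   {"disk": True, "in_memory": False},  # experimental / limited
}

in_memory_capable = sorted(m for m, caps in SUPPORT.items() if caps["in_memory"])
print(in_memory_capable)  # ['madng', 'xtrack']
```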
Writing Data
============

To write a ``TbtData`` object to disk, use the ``write_tbt`` function. By default it writes in the LHC SDDS format; other supported formats can be selected via the ``datatype`` argument. For most workflows, SDDS is the standard output.
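A library-independent sketch of the pattern this describes, with the ``datatype`` argument selecting the writer; all names below are illustrative placeholders, not ``turn_by_turn`` internals:

```python
# Illustrative datatype-based dispatch; function and table names are
# placeholders, not turn_by_turn's actual internals.
def _write_sdds(path, data):
    return f"{path}: written as LHC SDDS"

def _write_madng(path, data):
    return f"{path}: written as MAD-NG TFS"

WRITERS_SKETCH = {"lhc": _write_sdds, "sps": _write_sdds, "madng": _write_madng}

def write_tbt_sketch(path, data, datatype="lhc"):
    """Dispatch to the writer selected by datatype (default: LHC SDDS)."""
    if datatype not in WRITERS_SKETCH:
        raise ValueError(f"Unsupported datatype: {datatype!r}")
    return WRITERS_SKETCH[datatype](path, data)

print(write_tbt_sketch("output.sdds", None))
print(write_tbt_sketch("output.tfs", None, datatype="madng"))
```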
turn_by_turn/io.py: 28 additions & 3 deletions
@@ -2,9 +2,34 @@

IO
--

This module contains high-level I/O functions to read and write turn-by-turn data objects in different formats.

There are two main entry points for users:

1. ``read_tbt``: Reads turn-by-turn data from disk (file-based). Use this when you have a measurement file on disk and want to load it into a ``TbtData`` object. The file format is detected or specified by the ``datatype`` argument.

2. ``convert_to_tbt``: Converts in-memory data (such as a pandas DataFrame, a tfs DataFrame, or an ``xtrack.Line``) to a ``TbtData`` object. Use this when your data is already loaded in memory and you want to standardize it for further processing or writing.
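Both entry points return a ``TbtData`` object. As a library-independent illustration of what such a standardized container holds, here is a minimal stand-in dataclass (field names are assumptions for illustration; see the real ``TbtData`` for the actual attributes):

```python
from dataclasses import dataclass, field

import pandas as pd

@dataclass
class TbtDataSketch:
    """Illustrative stand-in for TbtData, not the real class.

    Holds per-bunch position matrices (rows: BPMs, columns: turns)
    plus minimal metadata; the actual attribute names may differ.
    """
    matrices: list
    nturns: int
    bunch_ids: list = field(default_factory=list)

x = pd.DataFrame([[0.1, 0.2], [0.3, 0.4]], index=["BPM.1", "BPM.2"], columns=[0, 1])
data = TbtDataSketch(matrices=[x], nturns=2, bunch_ids=[0])
print(data.nturns, len(data.matrices))  # 2 1
```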
Writing Data
============

The single entry point for writing is ``write_tbt``. This function writes a ``TbtData`` object to disk, typically in the LHC SDDS format (the default), though other formats are supported via the ``datatype`` argument. The output file extension and format are determined by the ``datatype`` you specify.

- If you specify ``datatype='lhc'``, ``'sps'``, or ``'ascii'``, the output will be in SDDS format and the file extension will be set to ``.sdds`` if not already present (for compatibility with downstream tools).
- If you specify ``datatype='madng'``, the output will be in MAD-NG TFS format (extension ``.tfs`` is recommended).
- Other supported datatypes (see ``WRITERS``) use their respective formats and conventions.
- If you provide the ``noise`` argument, random noise is added to the data before writing; the ``seed`` argument can be used for reproducibility.
- The ``datatype`` argument controls both the output format and any additional options passed to the underlying writer.
- The interface is extensible: new formats can be added by implementing a module with a ``write_tbt`` function and adding it to ``TBT_MODULES`` and ``WRITERS``.
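The ``.sdds`` extension handling described in the first bullet can be sketched independently of the library (the real implementation may differ):

```python
# Sketch of the extension rule for SDDS-producing datatypes; illustrative,
# not turn_by_turn's actual code.
SDDS_DATATYPES = {"lhc", "sps", "ascii"}  # datatypes that write SDDS output

def normalize_output_path(path: str, datatype: str = "lhc") -> str:
    """Append .sdds for SDDS-producing datatypes when not already present."""
    if datatype in SDDS_DATATYPES and not path.endswith(".sdds"):
        path += ".sdds"
    return path

print(normalize_output_path("output"))                        # output.sdds
print(normalize_output_path("output.sdds"))                   # output.sdds
print(normalize_output_path("output.tfs", datatype="madng"))  # output.tfs
```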
Example::

    from turn_by_turn.io import write_tbt

    write_tbt("output.sdds", tbt_data)  # writes in SDDS format by default
    write_tbt("output.tfs", tbt_data, datatype="madng")  # writes in MAD-NG TFS format
    write_tbt("output.sdds", tbt_data, noise=0.01, seed=42)  # add noise before writing
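The ``noise``/``seed`` behaviour in the last example amounts to adding reproducible random noise before writing. A library-independent sketch of the idea (the distribution used here is an assumption, not necessarily what ``write_tbt`` does):

```python
import numpy as np

def add_noise(matrix, noise, seed=None):
    """Return a copy of matrix with zero-mean Gaussian noise of std `noise`.

    Sketch of the concept behind write_tbt's noise/seed arguments; the
    library's exact distribution and implementation may differ.
    """
    rng = np.random.default_rng(seed)
    return matrix + rng.normal(loc=0.0, scale=noise, size=matrix.shape)

clean = np.zeros((3, 4))
a = add_noise(clean, noise=0.01, seed=42)
b = add_noise(clean, noise=0.01, seed=42)
print(np.array_equal(a, b))  # True: the same seed gives reproducible noise
```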
While data can be loaded from the formats of different machines/codes (each format getting its own reader module), writing is at the moment done in the ``LHC``'s **SDDS** format by default unless another supported format is specified. The interface is designed to be easy to extend with new formats.