Commit 3f6945f

quickstart as its own section

1 parent 4b2e16e · commit 3f6945f

3 files changed: +104 −103 lines

doc/index.rst

Lines changed: 5 additions & 101 deletions

@@ -29,112 +29,17 @@ One can also install in a `conda`/`mamba` environment via the `conda-forge` chan
 
    conda install -c conda-forge tfs-pandas
 
+You can now start using the package.
+You can find a :doc:`quickstart guide <quickstart>`.
 
-[... the entire "2 Minutes to tfs-pandas" section (Basic Usage, Compression, Compatibility) is deleted here; it moves verbatim to the new doc/quickstart.rst shown below ...]
-
-Package Reference
-=================
+Contents
+========
 
 .. toctree::
    :maxdepth: 2
 
+   quickstart
    modules/index
 
 
@@ -144,4 +49,3 @@ Indices and tables
 * :ref:`genindex`
 * :ref:`modindex`
 * :ref:`search`
-

doc/modules/index.rst

Lines changed: 2 additions & 2 deletions

@@ -1,5 +1,5 @@
-TFS-Pandas Modules
-==================
+API Reference
+=============
 
 .. automodule:: tfs.collection
    :members:

doc/quickstart.rst (new file)

Lines changed: 97 additions & 0 deletions

@@ -0,0 +1,97 @@
2 Minutes to tfs-pandas
=======================

Yes, 2 minutes.
That's how little it takes!

.. hint::

   You can click the function names in the code examples below to go directly to their documentation.

Basic Usage
-----------

The package is imported as `tfs`, and exports top-level functions for reading and writing:

.. code-block:: python

   import tfs

   # Loading a TFS file is simple
   df = tfs.read("path_to_input.tfs", index="index_column")

   # Writing out to disk is simple too
   tfs.write("path_to_output.tfs", df, save_index="index_column")

Once loaded, you get your data in a `~.TfsDataFrame`, which is a `pandas.DataFrame` with a `dict` of headers attached to it.
You can access and manipulate all data as you would with a `DataFrame`:

.. autolink-preface:: import tfs
.. code-block:: python

   # Access and modify the headers with the .headers attribute
   useful_variable = data_frame.headers["SOME_KEY"]
   data_frame.headers["NEW_KEY"] = some_variable

   # Manipulate data as you do with pandas DataFrames
   data_frame["NEWCOL"] = data_frame.COLUMN_A * data_frame.COLUMN_B

   # You can check the TfsDataFrame validity, and choose the behavior in case of errors
   tfs.frame.validate(data_frame, non_unique_behavior="raise")  # or choose "warn"

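The "DataFrame plus a dict of headers" idea can be modeled in a few lines of plain Python. This is a toy stand-in only: the real `TfsDataFrame` subclasses `pandas.DataFrame`, and `HeadersTable` below is an invented name for illustration.

```python
# Toy stand-in for a TfsDataFrame: tabular data plus a dict of headers.
# (Illustration only -- the real class subclasses pandas.DataFrame.)
class HeadersTable:
    def __init__(self, columns, headers=None):
        self.columns = columns  # column name -> list of values
        self.headers = {} if headers is None else headers

table = HeadersTable(
    {"NAME": ["BPM.1", "BPM.2"], "S": [0.0, 1.5]},
    headers={"TITLE": "optics"},
)
table.headers["NEW_KEY"] = 42  # headers behave like any plain dict
print(table.headers)  # {'TITLE': 'optics', 'NEW_KEY': 42}
```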
Compression
-----------

Since a **TFS** file is text-based, it benefits heavily from compression.
Thankfully, `tfs-pandas` supports automatic reading and writing of various compression formats.
Just use the API as you would normally, and the compression will be handled automatically:

.. autolink-preface:: import tfs
.. code-block:: python

   # Compression format is inferred from the file extension
   df = tfs.read("filename.tfs.gz", index="index_column")

   # Same thing when writing to disk
   tfs.write("path_to_output.tfs.zip", df)
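How much text-based tables gain from compression is easy to check with the standard library alone; the snippet below uses a toy whitespace-separated table standing in for real TFS content, not the actual TFS format.

```python
import gzip

# A toy whitespace-separated table, standing in for TFS file content.
rows = "\n".join(f"BPM.{i:04d}  {i * 0.1:12.6f}  {i * 0.2:12.6f}" for i in range(1000))
raw = rows.encode()
packed = gzip.compress(raw)

print(f"{len(raw)} bytes -> {len(packed)} bytes")  # repetitive text compresses well
assert gzip.decompress(packed) == raw  # and the round-trip is lossless
```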

A special module is provided to interface with the ``HDF5`` format.
First though, one needs to install the package with the `hdf5` extra requirements:

.. code-block:: bash

   python -m pip install --upgrade tfs-pandas[hdf5]

Then, access the functionality from `tfs.hdf`.

.. autolink-preface:: import tfs
.. code-block:: python

   from tfs.hdf import read_hdf, write_hdf

   # Read a TfsDataFrame from an HDF5 file
   df = read_hdf("path_to_input.hdf5", key="key_in_hdf5_file")

   # Write a TfsDataFrame to an HDF5 file
   write_hdf("path_to_output.hdf5", df, key="key_in_hdf5_file")

Compatibility
-------------

Finally, replacement functions are provided for some `pandas` operations which, if used directly, would return a `pandas.DataFrame` instead of a `~.TfsDataFrame`.

.. autolink-preface:: import tfs, pandas as pd
.. code-block:: python

   df1 = tfs.read("file1.tfs")
   df2 = tfs.read("file2.tfs")

   # This returns a pandas.DataFrame and makes you lose the headers
   result = pd.concat([df1, df2])

   # Instead, use our own
   result = tfs.frame.concat([df1, df2])  # you can choose how to merge headers too
   assert isinstance(result, tfs.TfsDataFrame)  # that's ok!
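What "merging headers" means can be sketched with plain dicts. This is a toy illustration of one possible policy, not `tfs.frame.concat`'s actual merge logic, and `merge_headers` is an invented name.

```python
def merge_headers(left: dict, right: dict) -> dict:
    """Toy policy: keep the left frame's value whenever a key clashes."""
    merged = dict(right)
    merged.update(left)  # left's entries overwrite right's on conflict
    return merged

h1 = {"TITLE": "run 1", "ENERGY": 6500.0}
h2 = {"TITLE": "run 2", "DATE": "2023-01-01"}
print(merge_headers(h1, h2))  # TITLE comes from h1; unique keys from both survive
```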

That's it!
Happy using :)
