
Commit 1eede62

update docs for 0.7.0
1 parent 83e6dc8 commit 1eede62

5 files changed: +69 −153 lines changed

CHANGELOG.md (+12 −7)

@@ -1,13 +1,18 @@
-## 0.7.0 (XXXX-XX-XX)
+## 0.7.0 (2024-05-21)
 
 major overhaul for ADQL 2.1 recommendation 2023-12-15
-- COOSYS is not required for the geometry constructors
-- the geometry constructors return the correct datatype (doube precission[])
+- COOSYS is not required for the geometry constructors anymore, since it's deprecated
+- the geometry constructors return the correct datatype (double precision[])
   and correct units (degrees)
-- drop the maintenance/support for the translation from ADQL to MySQL.
-- fix `BOX` constructor
-- new requirements for the `pg_sphere` and postgreSQL
-- ...
+- droped the maintenance/support for the translation from ADQL to MySQL.
+- bumped the version of `antlr4-python3-runtime` to 4.13.1
+- fixed `BOX` constructor, although it's deprecated in ADQL 2.1
+- fixed `CONTAINS` for the case `0=CONTAINS()`
+- fixed `INTERSECTS` for the case `0=INTERSECTS()`
+- new requirements for the `pg_sphere` extension
+  ([link](https://github.com/kimakan/pgsphere/tree/aiprdbms16))
+- removed not supported optional ADQL functions, such as `CENTROID`, `REGION`, etc.
+- replaced `setup.py` by `pyproject.toml` since `python setup.py install` is deprecated
 
 ## 0.6.1 (2022-11-17)
 
README.md (+31 −27)

@@ -8,7 +8,7 @@ Designed to be used in conjunction with [django-daiquri](https://github.com/djan
 as a query processing backend but it can be easily used as a stand-alone tool
 or integrated into another project.
 
-**\*NOTE: Since version 0.7.0, MySQL is not activelly supported/maintained anymore.**
+**\*NOTE: Since version 0.7.0 MySQL is not supported (maintained) anymore.**
 
 
 [![pytest Workflow Status](https://github.com/aipescience/queryparser/actions/workflows/pytest.yml/badge.svg)](https://github.com/aipescience/queryparser/actions/workflows/pytest.yml)
@@ -24,9 +24,7 @@ Installation
 The easiest way to install the package is by using the pip tool:
 
 ```bash
-
-pip install queryparser-python3
-
+python -m pip install queryparser-python3
 ```
 
 Alternatively, you can clone the repository and install it from there.
@@ -39,29 +37,35 @@ Generating the parser from the git repository
 
 To generate the parsers you need `python3` , `java` above version
 7, and `antlr4` (`antlr-4.*-complete.jar` has to be installed inside the
-`/usr/local/lib/` or `/usr/local/bin/` directories).
+`/usr/local/lib/`, `/usr/local/bin/` or root directory of the project).
+
+The current version of `antlr-4.*-complete.jar` can be downloaded via
+
+```bash
+wget http://www.antlr.org/download/antlr-4.13.1-complete.jar
+```
 
 After cloning the project run
 
 ```bash
-make
+make
 ```
 
 and a `lib` directory will be created. After that, run
 
 ```bash
-python setup.py install
+python -m pip install .
 ```
 
 to install the generated parser in your virtual environment.
 
 
 Additional requirements
 -----------------------
-The queryparser assumes that the PostgreSQL database has the extension
-[pg_sphere](https://github.com/kimakan/pgsphere/tree/aiprdbms16) installed. Although the `pg_sphere` is not required for the
-python module, the PostgreSQL **queries will not run** without that extension
-installed on the database.
+The queryparser assumes that the PostgreSQL database has the extension
+[pg_sphere](https://github.com/kimakan/pgsphere/tree/aiprdbms16) installed.
+Although the `pg_sphere` is not required for the python module, the PostgreSQL
+**queries will not run** without this extension installed on the database.
 
 
 Parsing MySQL and PostgreSQL
@@ -74,21 +78,21 @@ Parsing and processing of MySQL queries can be done by creating an instance
 of the `MySQLQueryProcessor` class
 
 ```python
-from queryparser.mysql import MySQLQueryProcessor
-qp = MySQLQueryProcessor()
+from queryparser.mysql import MySQLQueryProcessor
+qp = MySQLQueryProcessor()
 ```
 
 feeding it a MySQL query
 
 ```python
-sql = "SELECT a FROM db.tab;"
-qp.set_query(sql)
+sql = "SELECT a FROM db.tab;"
+qp.set_query(sql)
 ```
 
 and running it with
 
 ```python
-qp.process_query()
+qp.process_query()
 ```
 
 After the processing is completed, the processor object `qp` will include
@@ -101,8 +105,8 @@ PostgreSQL parsing is very similar to MySQL, except it requires importing
 the `PostgreSQLProcessor` class:
 
 ```python
-from queryparser.postgresql import PostgreSQLQueryProcessor
-qp = PostgreSQLQueryProcessor()
+from queryparser.postgresql import PostgreSQLQueryProcessor
+qp = PostgreSQLQueryProcessor()
 ```
 
 The rest of the functionality remains the same.
@@ -115,15 +119,15 @@ Translation of ADQL queries is done similarly by first creating an instance of
 the `ADQLQueryTranslator` class
 
 ```python
-from queryparser.adql import ADQLQueryTranslator
-adql = "SELECT TOP 100 POINT('ICRS', ra, de) FROM db.tab;"
-adt = ADQLQueryTranslator(adql)
+from queryparser.adql import ADQLQueryTranslator
+adql = "SELECT TOP 100 POINT('ICRS', ra, de) FROM db.tab;"
+adt = ADQLQueryTranslator(adql)
 ```
 
 and calling
 
 ```python
-adt.to_postgresql()
+adt.to_postgresql()
 ```
 
 which returns a translated string representing a valid MySQL query if
@@ -133,16 +137,16 @@ the ADQL query had no errors. The PostgreSQL query can then be parsed with the
 Testing
 -------
 
-First, install `pytest`
+First in the root directory of the project, install optional dependencies
+(`PyYAML` and `pytest`) by running
 
 ```bash
-pip install pytest
+python -m pip install .[test]
 ```
 
-then run the test suite for a version of python you would like to test with
+then run the test suite with
 
 ```bash
-pytest lib/
+python -m pytest lib/
 ```
 
-More elaborate testing procedures can be found in the development notes.
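The `set_query()`/`process_query()` flow shown in the README diff above can be sketched schematically. `MiniQueryProcessor` below is a hypothetical stand-in, not the queryparser library: it only mimics the interface shape (construct, `set_query`, `process_query`, then inspect attributes such as `keywords`) so the call sequence can be run end to end without the generated antlr parsers.

```python
# Toy sketch of the processor call pattern from the README.
# NOT the real queryparser implementation: a real processor walks an
# antlr parse tree; this stand-in only extracts a few SQL keywords to
# illustrate the result shape.
import re

class MiniQueryProcessor:
    def __init__(self):
        self.query = None
        self.keywords = []

    def set_query(self, sql):
        # Store the query string for later processing.
        self.query = sql.strip()

    def process_query(self):
        # Populate result attributes; here, just the top-level keywords.
        self.keywords = [w.lower() for w in
                         re.findall(r"\b(SELECT|FROM|WHERE)\b",
                                    self.query, re.IGNORECASE)]

qp = MiniQueryProcessor()
qp.set_query("SELECT a FROM db.tab;")
qp.process_query()
print(qp.keywords)  # ['select', 'from']
```

The same three-step sequence applies to `MySQLQueryProcessor` and `PostgreSQLQueryProcessor`; only the dialect-specific listeners differ.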

docs/development.md (+25 −49)

@@ -26,22 +26,23 @@ package) and activate it:
 
 ```bash
 python -m venv qpenv
-source qpenv /bin/activate
+source qpenv/bin/activate
 ```
 
-After the virtual environment has been activated we can install the package
-from the root directory of the package with
+After the virtual environment has been activated we can build and install
+the package from the root directory of the package with
 
 ```bash
-pip install -r requirements.txt .
+make
+python -m pip install .
 ```
 
 ## Testing
 
 All tests from the test suite can be executed with
 
 ```bash
-pytest lib
+pytest lib/
 ```
 
 Individual dialect functionality (MySQL in this case) with increased verbosity
@@ -65,9 +66,9 @@ can be generated with
 pytest --cov=queryparser --cov-report html lib
 ```
 
-Continuous integration is enabled through Travis CI. The configuration is
-specified inside of `.travis.yml` file. Edit as necessary. Coverage exclusions
-are defined within `.coveragerc`.
+Continuous integration is enabled through GitHub Actions. The configuration is
+specified inside of `.github/workflows/pytest.yml` file. Edit as necessary.
+Coverage exclusions are defined within `.coveragerc`.
 
 ### Writing new tests
 
@@ -148,53 +149,28 @@ The main queryparser class that includes this antlr functionality is called
 `process_query()` that binds the processing together. MySQL and PostgreSQL
 processors inherit from this class and extend it with their own listeners.
 
-### Indexed objects
 
-The need for indexed objects is easiest to explain through an example. Let us
-consider the following fairly typical ADQL query,
-
-```SQL
-SELECT ra, dec FROM gdr2.gaia_source
-WHERE 1=CONTAINS(POINT('ICRS', ra, dec), CIRCLE('ICRS', 31, -19, 0.5));
-```
-
-Translating it to PostgreSQL and using pgsphere functions yields
-
-```SQL
-SELECT * FROM gdr2.gaia_source
-WHERE spoint(RADIANS(ra), RADIANS(dec)) @ scircle(spoint(RADIANS(31.0), RADIANS(-19.0)), RADIANS(0.5));
-```
-
-While the translated query is syntactically fine, it would take a very long time
-to run since the first `spoint` in the translated query needs to be computed
-for the whole catalog every time the query is executed. To avoid this drawback
-we pre-compute its value across
-the whole catalog (let us name it `pos`) and index it. Since we know the value
-of the column `pos` was computed from columns `ra` and `dec` of the catalog,
-we can pass this information to the PostgreSQL processor and it will replace
-its part in the query:
-
-```python
-adt = ADQLQueryTranslator(query)
-pgq = adt.to_postgresql()
-
-iob = {'spoint': ((('gdr2', 'gaia_source', 'ra'),
-                  ('gdr2', 'gaia_source', 'dec'), 'pos'),)}
+## New releases
 
-qp = PostgreSQLQueryProcessor()
-qp.set_query(pgq)
-qp.process_query(indexed_objects=iob)
+### Requirements
+Install `build` and `twine` with
+```bash
+pip install twine
+pip install build
 ```
+Make sure you have accounts on `https://pypi.org/` and `https://test.pypi.org/`,
+and you are `Maintainer` of the `queryparser-python3` project.
 
-In the indexed object dictionary `iob` we define which columns in the database
-should be replaced with which indexed column for each type of pgsphere object
-functions (spoint, scircle, sbox...).
+- https://pypi.org/project/queryparser-python3/
+- https://test.pypi.org/project/queryparser-python3/
 
-## New releases
+### Publishing
 
 1. Change the version number in `src/queryparser/__init__.py`
-2. `python setup.py sdist bdist_wheel`
+2. `python -m build .`
 3. `twine check dist/*`
 4. `twine upload --repository-url https://test.pypi.org/legacy/ dist/*`
-5. `twine upload dist/*`
-6. Create a new release on github.
+5. Check whether the project was correctly uploaded on `test.pypi.org` by executing
+   `python3 -m pip install --index-url https://test.pypi.org/simple/ queryparser-python3`
+6. `twine upload dist/*`
+7. Create a new release on github.

docs/examples.md (−69)

@@ -102,72 +102,3 @@ print(qp.keywords)
 ```
 
 
-### ADQL to PostgreSQL using indexed spoint object
-
-The need indexed objects is explained in the development document. Here we will
-demonstrate how to use them.
-
-Let us start with the following ADQL query
-
-```SQL
-SELECT gaia.source_id, gaia.ra, gaia.dec, gd.r_est
-FROM gdr2.gaia_source gaia, gdr2_contrib.geometric_distance gd
-WHERE 1 = CONTAINS(POINT('ICRS', gaia.ra, gaia.dec),
-                   CIRCLE('ICRS',245.8962, -26.5222, 0.5))
-AND gaia.phot_g_mean_mag < 15
-AND gd.r_est > 1500 AND gd.r_est < 2300
-AND gaia.source_id = gd.source_id;
-```
-
-We first translate it to PostgreSQL
-
-```python
-adt = ADQLQueryTranslator(query)
-postgres_query = adt.to_postgresql()
-```
-
-which yields
-
-```SQL
-SELECT gaia.source_id, gaia.ra, gaia.dec, gd.r_est
-FROM gdr2.gaia_source gaia, gdr2_contrib.geometric_distance gd
-WHERE spoint(RADIANS(gaia.ra), RADIANS(gaia.dec)) @ scircle(spoint(RADIANS(245.8962), RADIANS(-26.5222)), RADIANS(0.5))
-AND gaia.phot_g_mean_mag < 15
-AND gd.r_est > 1500 AND gd.r_est < 2300
-AND gaia.source_id = gd.source_id;
-```
-
-The issue with this query is that the computation of the
-
-```SQL
-spoint(RADIANS(gaia.ra), RADIANS(gaia.dec))
-```
-
-can take a very long time if the table we are querying on is large. To avoid
-that we can pre-compute its value, however, in that case we need to replace
-this `spoint` with the name of the pre-computed column. This can be achieved
-by defining the `indexed_objects` dictionary and passing it to the processor.
-
-```python
-iob = {'spoint': ((('gdr2', 'gaia_source', 'ra'),
-                  ('gdr2', 'gaia_source', 'dec'), 'pos'),)}
-qp = PostgreSQLQueryProcessor()
-qp.set_query(postgres_query)
-qp.process_query(indexed_objects=iob)
-```
-
-The `qp.query` string will now give us
-
-```SQL
-SELECT gaia.source_id, gaia.ra, gaia.dec, gd.r_est
-FROM gdr2.gaia_source gaia, gdr2_contrib.geometric_distance gd
-WHERE gaia.pos @ scircle(spoint(RADIANS(245.8962), RADIANS(-26.5222)), RADIANS(0.5))
-AND gaia.phot_g_mean_mag < 15
-AND gd.r_est > 1500 AND gd.r_est < 2300
-AND gaia.source_id = gd.source_id;
-```
-
-We see that the `spoint` was replaced with the column `gaia.pos`. Although we
-only defined the column as `pos`, we had to attach the alias to it since we
-are using this alias for the table in the query. This is done automatically
-by the processor.
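The indexed-objects substitution described in the removed example can be illustrated with a toy rewrite. This is not queryparser's implementation (the library does this via its parse-tree listeners, driven by the `indexed_objects` dictionary); `replace_spoint` is a hypothetical helper showing only the visible effect: a pre-computed, indexed column replaces the on-the-fly `spoint()` expression.

```python
# Simplified illustration of the effect of
# qp.process_query(indexed_objects=iob) -- a plain string rewrite,
# not the library's tree-based mechanism.
def replace_spoint(query, alias, ra_col, dec_col, indexed_col):
    """Swap spoint(RADIANS(<alias>.<ra>), RADIANS(<alias>.<dec>))
    for the pre-computed column, qualified with the table alias."""
    pattern = (f"spoint(RADIANS({alias}.{ra_col}), "
               f"RADIANS({alias}.{dec_col}))")
    return query.replace(pattern, f"{alias}.{indexed_col}")

q = ("SELECT gaia.source_id FROM gdr2.gaia_source gaia "
     "WHERE spoint(RADIANS(gaia.ra), RADIANS(gaia.dec)) @ scircle(...)")
print(replace_spoint(q, "gaia", "ra", "dec", "pos"))
# the spoint(...) expression becomes gaia.pos
```

Note how the alias qualification (`gaia.pos` rather than bare `pos`) matches what the real processor does automatically.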

pyproject.toml (+1 −1)

@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"
 
 [project]
 name = "queryparser-python3"
-description = "Parses PostgreSQL/MySQL and translates ADQL to PostgreSQL/MySQL."
+description = "Package for parsing PostgreSQL/MySQL and translating ADQL to PostgreSQL/MySQL."
 readme = "README.md"
 dynamic = ["version"]
 license = {text = "Apache-2.0"}
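The `python -m pip install .[test]` command in the README diff implies an extras table in this `pyproject.toml`. That section is not shown in the commit, so the fragment below is only a sketch of what such a table typically looks like, using the two optional dependencies (`PyYAML`, `pytest`) the README names:

```toml
# Hypothetical sketch -- the actual section is not part of this diff.
[project.optional-dependencies]
test = [
    "pytest",
    "PyYAML",
]
```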
