
Commit c623306

Merge pull request #447 from legendu-net/dev
Merge dev into main
2 parents 74939ee + 32357f0 commit c623306

File tree

2 files changed: 15 additions, 6 deletions


README.md

Lines changed: 15 additions & 5 deletions
@@ -26,11 +26,6 @@ This is a Python package that contains misc utils for AI/ML.
 - An improved version of `spark_submit`.
 - Other misc PySpark functions.
 
-## Supported Operating Systems and Python Versions
-
-Python 3.10.x on Linux and macOS.
-It might work on Windows but is not tested on Windows.
-
 ## Installation
 
 ```bash
@@ -41,3 +36,18 @@ Available additional components are `cv`, `docker`, `pdf`, `jupyter`, `admin` an
 ```bash
 pip3 install --user -U aiutil[all]
 ```
+
+## Executable Scripts
+
+- snb: Search for content in Jupyter notebooks.
+- logf: A Spark application log analysis tool for identifying root causes of failed Spark applications.
+- pyspark_submit: Makes it easy to run Scala/Python Spark jobs.
+- pykinit: Makes it easier to authenticate users' personal accounts on Hadoop.
+- match_memory: Query and consume memory.
+
+You can run those executable scripts using uv
+(so that you don't have to manually install this Python package).
+For example,
+
+uvx --from aiutil snb -h
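As a hedged sketch of the same pattern (assuming `uv` is installed and `aiutil` is published on PyPI), the other entry points added to the README can be invoked the same way; the script names below are taken from the `[project.scripts]` table in this commit:

```shell
# Run each aiutil entry point via uvx, which resolves the package into a
# temporary environment instead of requiring a manual `pip3 install`.
uvx --from aiutil snb -h
uvx --from aiutil logf -h
uvx --from aiutil pyspark_submit -h
uvx --from aiutil pykinit -h
uvx --from aiutil match_memory -h
```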

pyproject.toml

Lines changed: 0 additions & 1 deletion
@@ -71,7 +71,6 @@ Repository = "https://github.com/legendu-net/aiutil"
 
 [project.scripts]
 logf = "aiutil.hadoop:logf.main"
-repart_hdfs = "aiutil.hadoop:repart_hdfs.main"
 pyspark_submit = "aiutil.hadoop:pyspark_submit.main"
 pykinit = "aiutil.hadoop:kerberos.main"
 match_memory = "aiutil:memory.main"
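For context, each `[project.scripts]` entry maps a console command to a `module:object` path that the installer wraps in an executable. A minimal sketch of the roughly equivalent manual invocation for the `logf` entry (assuming `aiutil` is installed in the current environment):

```shell
# logf = "aiutil.hadoop:logf.main" means the `logf` command imports the
# module aiutil.hadoop, resolves the attribute path logf.main on it, and
# calls it; roughly equivalent to:
python3 -c "from aiutil.hadoop import logf; logf.main()"
```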

0 commit comments
