Commit f6ba003

Add database schemas, convenience functions, and small tutorial
1 parent b8a5a8b commit f6ba003

13 files changed: +786 −12 lines changed

codecov.yml

Lines changed: 9 additions & 0 deletions

```yaml
coverage:
  status:
    project:
      default:
        target: 95%
    patch:
      default:
        target: 95%
  range: "90...95"
```

docs/Project.toml

Lines changed: 2 additions & 0 deletions

```diff
@@ -1,7 +1,9 @@
 [deps]
 DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
 Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
+DuckDB = "d2f5444f-75bc-4fdf-ac35-56f514c445e1"
 LiveServer = "16fef848-5104-11e9-1b77-fb7a48bbb589"
+Plots = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
 TulipaClustering = "314fac8b-c762-4aa3-9d12-851379729163"

 [compat]
```

docs/src/10-tutorial.md

Lines changed: 177 additions & 0 deletions
# Tutorial

## Explanation

To simplify, let's consider a single profile, for a single year.
Let's denote it as $p_i$, where $i = 1,\dots,N$.
The clustering process consists of:

1. Split the $N$ timesteps into (assumed equal) _periods_ of size `m = period_duration`.
   We can rename $p_i$ as

   $$p_{j,k}, \qquad \text{where} \qquad j = 1,\dots,m, \quad k = 1,\dots,N/m.$$

2. Compute `num_rps` representative periods

   $$r_{j,\ell}, \qquad \text{where} \qquad j = 1,\dots,m, \quad \ell = 1,\dots,\text{num\_rps}.$$

3. While computing the representative periods, we also obtain weights
   $w_{k,\ell}$ between each period $k$ and each representative period $\ell$,
   such that

   $$p_{j,k} = \sum_{\ell = 1}^{\text{num\_rps}} r_{j,\ell} \ w_{k,\ell}, \qquad \forall j = 1,\dots,m, \quad k = 1,\dots,N/m.$$
22+
## High level API/DuckDB API
23+
24+
!!! note "High level API"
25+
This tutorial focuses on the highest level of the API, which requires the
26+
use of a DuckDB connection.
27+
28+
The high-level API of TulipaClustering focuses on using TulipaClustering as part of the [Tulipa workflow](https://tulipaenergy.github.io/TulipaEnergyModel.jl/stable/).
29+
This API consists of three main functions: [`transform_wide_to_long!`](@ref), [`cluster!`](@ref), and [`dummy_cluster!`](@ref).
30+
In this tutorial we'll use all three.
31+
32+
Normally, you will have the DuckDB connection from the larger Tulipa workflow,
33+
so here we will create a temporary connection with fake data to show an example
34+
of the workflow. You can look into the source code of this documentation to see
35+
how to create this fake data.
36+
37+
```@setup duckdb_example
using DuckDB

connection = DBInterface.connect(DuckDB.DB)
DuckDB.query(
    connection,
    "CREATE TABLE profiles_wide AS
    SELECT
        2030 AS year,
        i + 24 * (p - 1) AS timestep,
        4 + 0.3 * cos(4 * 3.14 * i / 24) + random() * 0.2 AS avail,
        solar_rand * greatest(0, (5 + random()) * cos(2 * 3.14 * (i - 12.5) / 24)) AS solar,
        3.6 + 3.6 * sin(3.14 * i / 24) ^ 2 * (1 + 0.3 * random()) AS demand
    FROM
        generate_series(1, 24) AS _timestep(i)
    CROSS JOIN (
        SELECT p, RANDOM() AS solar_rand
        FROM generate_series(1, 7 * 4) AS _period(p)
    )
    ORDER BY timestep
    ",
)
```

Here is the content of that connection:

```@example duckdb_example
using DataFrames, DuckDB

nice_query(str) = DataFrame(DuckDB.query(connection, str))
nice_query("show tables")
```

And here are the first rows of `profiles_wide`:

```@example duckdb_example
nice_query("FROM profiles_wide LIMIT 10")
```

And finally, this is the plot of the data:

```@example duckdb_example
using Plots

table = DuckDB.query(connection, "FROM profiles_wide")
plot(size=(800, 400))
timestep = [row.timestep for row in table]
for profile_name in (:avail, :solar, :demand)
    value = [row[profile_name] for row in table]
    plot!(timestep, value, lab=string(profile_name))
end
plot!()
```

## Transform a wide profiles table into a long table

!!! warning "Required"
    The long table format is a requirement of TulipaClustering, even for the dummy clustering example.

In this context, a wide table is a table where each profile occupies its own column. A long table stacks the profile names in one column, with the corresponding values in a separate column.
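For intuition, this reshaping can be sketched directly with DuckDB's `UNPIVOT` statement on a made-up one-row table (a conceptual sketch only; the actual implementation of [`transform_wide_to_long!`](@ref) may differ):

```julia
using DataFrames, DuckDB

# Conceptual wide-to-long sketch on a made-up table; the actual
# implementation of transform_wide_to_long! may differ.
con = DBInterface.connect(DuckDB.DB)
DuckDB.query(con, "CREATE TABLE wide AS SELECT 1 AS timestep, 0.9 AS avail, 0.4 AS solar")
long = DataFrame(DuckDB.query(
    con,
    "UNPIVOT wide ON avail, solar INTO NAME profile_name VALUE value",
))
# `long` now has one row per (timestep, profile_name) pair.
```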
Given the name of the source table (in this case, `profiles_wide`), we can create a long table with the following call:

```@example duckdb_example
using TulipaClustering

# We will save the output in the 'input' schema
DuckDB.query(connection, "CREATE SCHEMA IF NOT EXISTS input")
transform_wide_to_long!(connection, "profiles_wide", "input.profiles")

nice_query("FROM input.profiles LIMIT 10")
```

Here, we saved the long profiles table in the `input` schema so that it can be used in the clustering below.

## Dummy Clustering

A dummy clustering essentially skips the clustering itself and creates the tables required by the next steps in the Tulipa workflow.

```@example duckdb_example
for table_name in (
    "cluster.rep_periods_data",
    "cluster.rep_periods_mapping",
    "cluster.profiles_rep_periods",
    "cluster.timeframe_data",
)
    DuckDB.query(connection, "DROP TABLE IF EXISTS $table_name")
end

clusters = dummy_cluster!(connection)

nice_query("FROM cluster.rep_periods_data LIMIT 5")
```

```@example duckdb_example
nice_query("FROM cluster.rep_periods_mapping LIMIT 5")
```

```@example duckdb_example
nice_query("FROM cluster.profiles_rep_periods LIMIT 5")
```

```@example duckdb_example
nice_query("FROM cluster.timeframe_data LIMIT 5")
```

## Clustering

We can perform a real clustering by using the [`cluster!`](@ref) function with two extra arguments (see [Explanation](@ref) for their deeper meaning):

- `period_duration`: the length of each split period;
- `num_rps`: the number of representative periods.

149+
```@example duckdb_example
150+
period_duration = 24
151+
num_rps = 3
152+
153+
for table_name in (
154+
"cluster.rep_periods_data",
155+
"cluster.rep_periods_mapping",
156+
"cluster.profiles_rep_periods",
157+
"cluster.timeframe_data",
158+
)
159+
DuckDB.query(connection, "DROP TABLE IF EXISTS $table_name")
160+
end
161+
162+
clusters = cluster!(connection, period_duration, num_rps)
163+
164+
nice_query("FROM cluster.rep_periods_data LIMIT 5")
165+
```
166+
167+
```@example duckdb_example
168+
nice_query("FROM cluster.rep_periods_mapping LIMIT 5")
169+
```
170+
171+
```@example duckdb_example
172+
nice_query("FROM cluster.profiles_rep_periods LIMIT 5")
173+
```
174+
175+
```@example duckdb_example
176+
nice_query("FROM cluster.timeframe_data LIMIT 5")
177+
```

src/TulipaClustering.jl

Lines changed: 2 additions & 0 deletions

```diff
@@ -12,8 +12,10 @@ using SparseArrays
 using Statistics

 include("structures.jl")
+include("data-validation.jl")
 include("io.jl")
 include("weight_fitting.jl")
 include("cluster.jl")
+include("convenience.jl")

 end
```
