The file `assets-timeframe-partitions` specifies how often we want to evaluate the inter-temporal constraints that combine the information of the representative periods. In this example, the file is missing from the folder, meaning that the default of a `uniform` distribution of one period will be used in the model; see the [schemas](@ref schemas) section. This assumption implies that the model will check the inter-storage level every day of the week timeframe.
!!! info
    For the sake of simplicity, we show how using three representative days can recover part of the chronological information of one week. The same method can be applied to more representative periods to analyze the seasonality across a year or longer timeframe.
```@example
using DuckDB, TulipaIO, TulipaEnergyModel

input_dir = "../../test/inputs/Storage" # hide
# input_dir should be the path to the Storage example
```
`docs/src/50-schemas.md`
# [Data pipeline/workflow](@id data)
---

TODO:

- diagrams
- Replace
  > To create these tables we currently use CSV files that follow this same schema and then convert them into tables using TulipaIO, as shown in the basic example of the [Tutorials](@ref basic-example) section.
- Review below

---
Tulipa uses a DuckDB database to store the input data, the representation of variables, constraints, and other internal tables, as well as the output.
This database is passed through the `connection` argument in various parts of the API, most notably [`run_scenario`](@ref) and [`EnergyProblem`](@ref).
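For context, here is a minimal sketch of how a `connection` typically flows through the API. The folder name `my-input-dir` is a hypothetical placeholder for a folder of CSV files that already follow the input schema:

```julia
using DuckDB, TulipaIO, TulipaEnergyModel

# Create an in-memory DuckDB database and load the input CSV files into it.
connection = DBInterface.connect(DuckDB.DB)
TulipaIO.read_csv_folder(connection, "my-input-dir")

# The same connection is then handed over to the model.
energy_problem = TulipaEnergyModel.run_scenario(connection)
```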
## [Minimum data and using defaults](@id minimum_data)
Since `TulipaEnergyModel` is at a late stage in the workflow, its input data requirements are stricter.
Therefore, the input data required by the Tulipa model must follow the schema in the following section.
Dealing with defaults is hard. A missing value might represent two different things to different people. That is why we require the tables to be complete.
However, we also understand that it is not reasonable to expect people to fill in many values that they do not need for their models.
Therefore, we have created the function [`populate_with_defaults!`](@ref) to fill the remaining columns of your tables with default values.
To see the default values, check the tables in [Schemas](@ref schemas) below.
!!! warning "Beware implicit assumptions"
    When data is missing and you automatically fill it with defaults, beware of your assumptions on what that means.
    Check what the default values are and decide whether you want to use them.
    If you think a default does not make sense, open an issue or start a discussion thread.
### Example of using `populate_with_defaults!`
```@example
using TulipaEnergyModel, TulipaIO, DuckDB
```
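The stub above only loads the packages; below is a minimal sketch of the intended usage. The table name `asset` and its two columns are hypothetical placeholders (see the [Schemas](@ref schemas) section for the real definitions), and we assume `populate_with_defaults!` operates on the tables reachable through the `connection`:

```julia
using TulipaEnergyModel, TulipaIO, DuckDB

# In-memory database with one partially filled table (hypothetical data).
connection = DBInterface.connect(DuckDB.DB)
DBInterface.execute(
    connection,
    "CREATE TABLE asset AS SELECT 'ccgt' AS asset, 'producer' AS type",
)

# Fill the columns that were not provided with their default values.
TulipaEnergyModel.populate_with_defaults!(connection)
```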
## Namespaces
After you create a `connection` and load data that follows the schema (see the previous section on [minimum data](@ref minimum_data)), Tulipa creates tables to handle the model data, as well as various internal tables.
To differentiate between these tables, we use prefixes, which should also help separate them from any tables you create yourself.
Here are the different namespaces:
- `input_`: Tables expected by `TulipaEnergyModel`.
- `var_`: Variable indices.
- `cons_`: Constraint indices.
- `expr_`: Expression indices.
- `resolution_`: Unrolled partition blocks of assets and flows.
- `t_*`: Temporary tables.
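For instance, you can inspect the tables in a given namespace directly through the connection. A small sketch, assuming the model has already been built so that the `cons_` tables exist:

```julia
using DuckDB

# Replace this with the connection used to build your model.
connection = DBInterface.connect(DuckDB.DB)

# List all constraint-index tables, i.e., those in the `cons_` namespace.
DBInterface.execute(
    connection,
    "SELECT table_name FROM duckdb_tables() WHERE table_name LIKE 'cons_%'",
)
```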
## [Schemas](@id schemas)
The optimization model parameters in the input data must follow the schema below for each table.
The schemas can be found in `input-schemas.json`. More advanced users can also access the schemas at any time after loading the package by typing `TulipaEnergyModel.schema_per_table_name` in the Julia console, as sketched below.
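A small sketch of that kind of interactive inspection. The `"asset"` key used here is an assumption; check the printed keys for the actual table names:

```julia
using TulipaEnergyModel

# All table names that have a schema definition.
println(keys(TulipaEnergyModel.schema_per_table_name))

# The expected columns and types of one table; the key here is hypothetical.
println(TulipaEnergyModel.schema_per_table_name["asset"])
```

Here is the complete list of model parameters in the schemas per table (or CSV file):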