Line Weight Remains Constant, Line Color is Divergent (#181)
* Line Weight Remains Constant, Line Color is Divergent
* Line Color Update, Legend Update, JSONtoCSV function modified to incorporate County and State information
* Added instructions to DATABASE setup. Fixed line weights and colors.
* Update viz/README.md
* Updated legacy pre-generated json files
* Updated React + Chat plugins + FlowMap
---------
Co-authored-by: Alam, Maksudul <alamm@ornl.gov>
Co-authored-by: pelesh <peless@ornl.gov>
Before launching the visualization, one needs to install these packages. This can be done with the following steps:
1. Install Node Version Manager (NVM). On macOS, use `brew install nvm`.
2. Install [Node.js](https://nodejs.org/en/) version 24 using `nvm install 24`.
3. Select Node 24 with `nvm use 24`.
4. Install Yarn: `npm install --global yarn`.
5. Run `yarn install` in this directory (`viz`) to install all the dependencies.
6. Go to the `viz/backend` subdirectory and run `pip install -r requirements.txt` to install all the Python dependencies.
## Preparing input data files for visualization
The visualization uses a `JSON` formatted file as an input. This `JSON` file has a specific structure (To do: explain structure for the file), and there are several sample files for different networks in the `data` subdirectory.
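Since the exact schema is still to be documented, one quick way to get oriented is to list a sample file's top-level keys with a short Python helper (a sketch; point it at any file in the `data` subdirectory):

```python
import json

def summarize_json(path):
    """Return the sorted top-level keys of a JSON input file,
    or the element count if the top level is an array."""
    with open(path) as f:
        data = json.load(f)
    if isinstance(data, dict):
        return sorted(data)
    return len(data)
```

For example, `summarize_json("data/opflowout.json")` returns the top-level key names of a generated OPFLOW output file.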
This input JSON file can be either created externally or generated as an output of the `OPFLOW` application. When using OPFLOW, the following command will generate the input JSON file. The generated file will be named `opflowout.json`.
Copy the newly generated `opflowout.json` file to the `viz/data` subdirectory. Next, run the Python script `geninputfile.py` from the `viz` folder to load the JSON file into the visualization. Note that the script takes only the file name `opflowout.json` as an argument, not a full path; it expects the file to be present in the `viz/data` folder. The script creates (or overwrites) a file named `viz/src/module_casedata.js`, an application source file that loads the data from `opflowout.json`.
```
python geninputfile.py opflowout.json
```
This creates the `viz/src/module_casedata.js` file. You are now ready to launch the visualization.
Note: If you have created the JSON file externally, simply copy it into the `viz/data` subdirectory and run the `geninputfile.py` script using the above command.
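For intuition, a generator like `geninputfile.py` essentially wraps the JSON data in a small JavaScript module the frontend can import. A minimal sketch of that idea (a hypothetical helper, not the actual ExaGO script):

```python
import json
import os

def write_case_module(json_name, data_dir="data", out_path="src/module_casedata.js"):
    """Sketch: embed the contents of data_dir/json_name into a JavaScript
    module exporting the case data. Hypothetical; not the real geninputfile.py."""
    with open(os.path.join(data_dir, json_name)) as f:
        case_data = json.load(f)
    # Serialize the data back to JSON, which is also valid JS object syntax.
    js = "export const casedata = " + json.dumps(case_data) + ";\n"
    with open(out_path, "w") as f:
        f.write(js)
    return js
```

The generated module can then be imported by the frontend build without any runtime file I/O.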
## Launch visualization
To launch the visualization, run
```
yarn start
```
This will open a webpage with the visualization of the given network. If the network is large, the visualization may take a while to load; if the browser offers to terminate the page or wait, click Wait.
The figures show the visualization of the synthetic electric grid. The data for developing this visualization was created by merging the synthetic datasets for the [Eastern](https://electricgrids.engr.tamu.edu/electric-grid-test-cases/activsg70k/), [Western](https://electricgrids.engr.tamu.edu/electric-grid-test-cases/activsg10k/), and [Texas](https://electricgrids.engr.tamu.edu/electric-grid-test-cases/activsg2000/) interconnects from the [Electric Grid Test Case Repository](https://electricgrids.engr.tamu.edu/).
ChatGrid is a natural language query tool for ExaGO visualizations.
### Dependencies
ChatGrid is built upon the following services and tools.
Behind the scenes, the LLM translates natural language queries into SQL queries to retrieve the data.
1. Convert data formats.
First, we need to convert the ExaGO output `.json` files to `.csv` files. The difference between the two formats is that JSON stores attributes and values as key-value pairs, while CSV stores them as tables. You can write your own script for this conversion or use the provided script.
* Go to the `viz/backend` subdirectory and use the `pip install -r requirements.txt` command to install all the Python dependencies, if not already done in the previous steps. (Note: these steps are tested with Python 3.13.)
To use the provided script, first copy the ExaGO output `.json` file to the `viz/data` subdirectory (if not already done) and run the following script in the `viz/backend` subdirectory (replace the example filename with your JSON filename). This will create three CSV files: `generation.csv`, `bus.csv`, and `transmission_line.csv`. We assume `opflowout.json` is the data file present in the `viz/data` folder.
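To illustrate the dictionary-to-table conversion, here is a minimal sketch of turning one list of JSON records into a CSV table (field names are hypothetical, not the actual ExaGO schema; the provided script handles the real format):

```python
import csv

def json_records_to_csv(records, csv_path):
    """Write a list of JSON-style records (dicts) out as a CSV table.
    Each key becomes a column header; each record becomes a row."""
    # Union of all keys across records, so ragged records still fit the table.
    fieldnames = sorted({key for record in records for key in record})
    with open(csv_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames, restval="")
        writer.writeheader()
        writer.writerows(records)
```

Records missing a key simply get an empty cell in that column.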
Now there should be 5 CSV files in the `viz/backend` folder:
* `bus.csv`
* `generation.csv`
* `us_states.csv`
* `counties.csv`
* `transmission_line.csv`
2. Download PostgreSQL from this [link](https://www.postgresql.org/download/) and install it.
* On macOS, you can install PostgreSQL 14 using Homebrew: `brew install postgresql@14`
128
+
* Start the postgressql service: `rew services start postgresql@14`
129
+
* Create a role: `psql -U "$USER" -d postgres`
130
+
* Execute the create role query: `CREATE ROLE postgres WITH LOGIN SUPERUSER PASSWORD 'ExaGO.2025';` Here `ExaGO.2025` is a password. Change to your preference.
131
+
* Exit to shell by entering `quit` and hitting Enter.
132
+
* From command prompt type: `psql -U postgres -d postgres` If it works and you are in `psql` shell you are done. Exit from the shell using `quit`.
133
+
109
134
110
135
3. Create a PostgreSQL database and import the `.csv` files to it.
b. Be informative and accurate with your table names and attribute names, because this information helps the LLM understand the dataset and perform better when handling user queries.
c. Include US state and county information in your database to support spatial queries related to states or counties.
146
+
147
+
d. To enter the CSV files into database using command prompt do: `PGPASSWORD=ExaGO.2025 ./create_db.sh --db exago_db --schema-sql ./schema.sql --drop --truncate`. Here `exago_70k` is the database name. Use it in the configuration `config.py` file.
e. This will create a database named `exago_db` with password `ExaGO.2025`. This information will be used to update the `config.py` file.
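Conceptually, the data-loading part of `create_db.sh` reduces to one PostgreSQL `COPY` statement per CSV file. A sketch of generating those statements, assuming each table is named after its file:

```python
import os

# The five CSV files expected in viz/backend after the conversion step.
CSV_FILES = ["bus.csv", "generation.csv", "us_states.csv",
             "counties.csv", "transmission_line.csv"]

def copy_statements(csv_dir):
    """Build a PostgreSQL COPY statement for each expected CSV file,
    assuming the target table is named after the file (bus.csv -> bus)."""
    stmts = []
    for name in CSV_FILES:
        table = os.path.splitext(name)[0]
        path = os.path.join(csv_dir, name)
        stmts.append(
            f"COPY {table} FROM '{path}' WITH (FORMAT csv, HEADER true);"
        )
    return stmts
```

Each statement reads the file's header row as column names and bulk-loads the remaining rows into the matching table.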
4. Connect to your database.
Open `config.py` in the `viz/backend` subdirectory and replace `YOUR_OPENAI_KEY` with your OpenAI API key.
<!-- data script -->
### Launch backend
ChatGrid uses Flask to host the service of receiving user queries and returning the data output and text summaries to update the visualizations on the frontend. Please follow the steps below to run the backend server.
* Run the following command in the `viz/backend` subdirectory:
```
python server.py
```
This will start the backend server, which receives user queries, processes them with the LLM, and returns data outputs to the frontend.
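The receive-query/return-data loop described above can be sketched as a minimal Flask endpoint (a toy stand-in, not the actual `server.py`; the real server calls the LLM and the database):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/query", methods=["POST"])
def query():
    """Hypothetical endpoint: accept a natural language query and return
    a (stubbed) data payload plus a text summary for the frontend."""
    user_query = request.get_json().get("query", "")
    # In the real server, the LLM would translate user_query into SQL here
    # and the result set would be fetched from PostgreSQL.
    return jsonify({"summary": f"Received: {user_query}", "data": []})

if __name__ == "__main__":
    app.run(port=5000)
```

The frontend chat window would POST JSON like `{"query": "..."}` to this endpoint and render the returned summary and data.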
Now open the chat window on the frontend, type your queries, and enjoy ChatGrid!