docs/datacube_install.md
========

```
pip install netcdf4
```

Please note that the installed GDAL version should be as close to your system GDAL version as possible.

At the time of this writing, the `gdalinfo` command below outputs 1.11.3, which means that version 1.11.2 is the closest version that satisfies our requirements.
We try to install a non-existent version (99999999999) to have pip print all available versions.

```
gdalinfo --version
pip install gdal==99999999999
```
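
Then install the closest available version reported above, for example:

```
pip install gdal==1.11.2
```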
Now that all requirements have been satisfied, run the setup.py script in the agdc-v2 directory:
**It has come to our attention that the setup.py script can fail the first time it is run due to some NetCDF/Cython issues. Run the script a second time to install if this occurs.**
```
cd ~/Datacube/agdc-v2
python setup.py develop
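# If this fails with NetCDF/Cython errors, run the same command a second time (see the note above).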
```

Open this file in your editor of choice and find the line that starts with 'timezone':

```
timezone = 'UTC'
```
This will ensure that all of the datetime fields in the database are stored in UTC. Next, open the `pg_hba.conf` file found at:
```
/etc/postgresql/9.5/main/pg_hba.conf
```

```
sudo service postgresql restart
```
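
The edits made to `pg_hba.conf` are elided in this excerpt. As a rough sketch (an assumption about a typical setup, not necessarily this guide's exact settings), local connections are switched to `md5` password authentication so that `dc_user` can log in with a password; restart PostgreSQL afterwards as shown above:

```
# assumed example pg_hba.conf entries
local   all             all                                     md5
host    all             all             127.0.0.1/32            md5
```
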
Data Cube Configuration file
---------------
The Data Cube requires a configuration file that points to the correct database and provides credentials. The contents of the `.datacube.conf` file should appear as follows:
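
The file's full contents are elided in this excerpt; a minimal sketch, assuming the `dc_user` account and `datacube` database created below (the password is a placeholder), looks like:

```
[datacube]
db_database: datacube
db_hostname: localhost
db_username: dc_user
db_password: your_password_here
```
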
This will copy the required `.datacube.conf` file to the home directory. The user's home directory is the default location for the configuration file and will be used for all command-line-based Data Cube operations. The next step is to create the database specified in the configuration file.
To create the database, run the following commands:

```
sudo -u postgres createuser --superuser dc_user
```
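
The remaining commands are elided in this excerpt. As a sketch, assuming the database name and user from the configuration file above, the database is then created and the Data Cube schema initialized with:

```
createdb -U dc_user datacube
datacube system init
```
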
If you have PGAdmin3 installed, you can view the default schemas and relationships by connecting to the database named 'datacube' and viewing the tables, views, and indexes in the schema 'agdc'.
Alternatively, you can do the same from the command line. First log in with the command `psql -U dc_user datacube`.
To view schemas, run `\dn` from within `psql`.

View the full documentation of the `psql` command [here](https://www.postgresql.org/docs/9.5/static/app-psql.html).
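
For example, a quick inspection session might look like the following, where `\dn` lists the schemas (the 'agdc' schema should be present) and `\dt agdc.*` lists the tables in that schema:

```
psql -U dc_user datacube
datacube=# \dn
datacube=# \dt agdc.*
datacube=# \q
```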
<a name="next_steps"></a> Next Steps
========
Now that the Data Cube system is installed and initialized, the next step is to ingest some sample data. Our focus is on ARD (Analysis Ready Data) - the best introduction to the ingestion/indexing process is to use a single Landsat 7 or Landsat 8 SR product.
There is a sample ingestion file provided in [the ingestion documentation](ingestion.md) in the "Prerequisites" section.
More generally, download a sample dataset from [Earth Explorer](https://earthexplorer.usgs.gov/) and proceed to the next document in this series, [the ingestion process](ingestion.md). Please ensure that the dataset you download is an SR product - the L\*.tar.gz should contain .tif files with the file pattern `L**_sr_band*.tif`. This will correspond to datasets labeled "Collection 1 Higher-Level".

<a name="faqs"></a> Common problems/FAQs
========

Q:
>Can the Data Cube be accessed from R/C++/IDL/etc.?
A:
>This is not currently directly supported. The Data Cube is a Python-based API. The technology managing data access is PostgreSQL, so theoretically the functionality can be ported to any language that can interact with the database. An additional option is just shelling out from those languages, accessing data using the Python API, then passing the result back to the other program/language.
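
For instance, a minimal helper script along the lines of this sketch (the product name and query extents are hypothetical placeholders, not values from this guide) could be invoked from R via `system()`, writing its result to a NetCDF file for the caller to read:

```
# load_subset.py - hypothetical helper invoked from another language
import datacube

dc = datacube.Datacube()
# Product name and extents below are placeholders - use your own.
data = dc.load(product='ls7_ledaps_general',
               measurements=['red', 'green', 'blue'],
               latitude=(0.0, 1.0),
               longitude=(35.0, 36.0))
data.to_netcdf('/tmp/subset.nc')  # the calling program reads this file
```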
---
Q:
>I want to store more metadata that isn't mentioned in the documentation. Is this possible?
A:
>This entire process is completely customizable. Users can configure exactly what metadata they want to capture for each dataset - we use the default for simplicity's sake.

---

If you have not yet completed our Data Cube Installation Guide, please do so before continuing.

Q:
A:
> If your dataset is already in an optimized format and you don't desire any projection or resampling changes, then you can simply index the data and then begin to use the Data Cube.
> You will have to specify the CRS when loading indexed data, since the ingestion process - which informs the Data Cube about the metadata - has not occurred.
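
For instance (a sketch; the product name, CRS, and resolution are placeholders), loading indexed-only data might look like:

```
import datacube

dc = datacube.Datacube()
# output_crs and resolution must be supplied explicitly because no
# ingestion configuration exists to provide them.
data = dc.load(product='ls8_collection1_sr',
               output_crs='EPSG:4326',
               resolution=(-0.00027, 0.00027),  # (y, x) pixel size in CRS units
               latitude=(0.0, 1.0),
               longitude=(35.0, 36.0))
```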

docs/notebook_install.md
========

To run our Jupyter notebook examples, the following prerequisites must be complete:
The full Data Cube Installation Guide must have been followed and completed before proceeding. This includes:
* You have a local user that is used to run the Data Cube commands/applications
* You have a database user that is used to connect to your 'datacube' database
* The Data Cube is installed and you have successfully run 'datacube system init'
* All code is checked out and you have a virtual environment in the correct directories: `~/Datacube/{data_cube_ui, data_cube_notebooks, datacube_env, agdc-v2}`
If these requirements are not met, please see the associated documentation.
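
As a quick sanity check (assuming the virtual environment from the installation guide), you can verify that the Data Cube can reach its database with:

```
source ~/Datacube/datacube_env/bin/activate
datacube system check
```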
You can view the notebooks without ingesting any data, but to be able to run notebooks with the sample ingested data, the ingestion guide must have been followed and completed. The steps include:
* A sample Landsat 7 scene was downloaded and uncompressed in your `/datacube/original_data` directory
* The ingestion process was completed for that sample Landsat 7 scene

<a name="installation_process"></a> Installation Process
========

Jupyter will create a configuration file in `~/.jupyter/jupyter_notebook_config.py`.
Now set the password and edit the server details. Remember this password for future reference.
```
jupyter notebook password
```
Now edit the Jupyter notebook configuration file `~/.jupyter/jupyter_notebook_config.py` with your favorite text editor.
Edit the generated configuration file to include relevant details.
You'll need to set the relevant entries in the file:
```
c.NotebookApp.ip = '*'
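# The remaining required entries are elided in this excerpt; for example,
# the port is set here (8888 is an assumed value - choose your own):
c.NotebookApp.port = 8888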
```

```
cd ~/Datacube/data_cube_notebooks
jupyter notebook
```
Open a web browser and navigate to the notebook URL. If you are running your browser from the same machine that is hosting the notebooks, you can use `localhost:{jupyter_port_num}` as the URL, where `jupyter_port_num` is the port number set for `c.NotebookApp.port` in the configuration file. If you are connecting from another machine, you will need to enter the public IP address of the server in the URL (which can be determined by running the `ifconfig` command on the server) in place of `localhost`. You should be greeted with a password field. Enter the password from the previous step.

<a name="using_notebooks"></a> Using the Notebooks
========
Now that your notebook server is running and the Data Cube is set up, you can run any of our examples.
Open the notebook titled 'Data_Cube_Test' and run through all of the cells using either the "Run" button on the toolbar or `Shift+Enter`.
You'll see that a connection to the Data Cube is established, some metadata is queried, and some data is loaded and plotted.
<a name="next_steps"></a> Next Steps
========

Q:
>I’m having trouble connecting to my notebook server from another computer.
A:
>There can be a variety of problems that can cause this issue.<br><br>
First check the IP and port number in your notebook configuration file.
Be sure you are connecting to `localhost:<port>` if your browser is running on the same machine as the Jupyter server, and `<IP>:<port>` otherwise.
Also check that your firewall is not blocking the port that it is running on.
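
For example, on Ubuntu with `ufw` (an assumption about your firewall; adapt the port to your configuration), the notebook port can be opened with:

```
sudo ufw allow 8888/tcp
```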