2 files changed: +7 −18 lines changed
Notebook file:
@@ -7,7 +7,7 @@
 "source": [
  "# How to Build Time Series Applications in CrateDB\n",
  "\n",
- "This notebook guides you through an example of how to batch import \n",
+ "This notebook guides you through an example of how to import and work with \n",
  "time series data in CrateDB. It uses Dask to import data into CrateDB.\n",
  "Dask is a framework to parallelize operations on pandas Dataframes.\n",
  "\n",
@@ -60,13 +60,13 @@
 {
  "cell_type": "code",
  "execution_count": null,
- "id": "e0649e64",
+ "id": "a31d75fa072055fe",
  "metadata": {
-  "scrolled": true
+  "collapsed": false
  },
  "outputs": [],
  "source": [
-  "!pip install --upgrade 'cratedb-toolkit' 'dask' 'kaggle' 'pandas==2.0.*' 'pueblo>=0.0.7' 'sqlalchemy-cratedb'"
+  "# !pip install -r requirements.txt"
  ]
 },
 {
@@ -106,17 +106,6 @@
 {
  "cell_type": "code",
  "execution_count": 3,
- "id": "8fcc014a",
- "metadata": {},
- "outputs": [
-  {
-   "name": "stdout",
-   "output_type": "stream",
-   "text": [
-    "Dataset URL: https://www.kaggle.com/datasets/guillemservera/global-daily-climate-data\n"
-   ]
-  }
- ],
 "source": [
  "from pueblo.util.environ import getenvpass\n",
  "from cratedb_toolkit.datasets import load_dataset\n",
Requirements file:
@@ -1,7 +1,7 @@
 cratedb-toolkit[datasets]==0.0.13
-refinitiv-data<1.7
-pandas==1.*
+pandas==2.*
 pycaret==3.3.2
 pydantic<2
-sqlalchemy==1.*
+refinitiv-data<1.7
+sqlalchemy==2.*
 sqlalchemy-cratedb>=0.36.1,<1