Commit de4d886: "logs from latest cluster run"
1 parent: c3168df
4 files changed: +324 -0 lines changed
Lines changed: 88 additions & 0 deletions
@@ -0,0 +1,88 @@
Warning messages:
1: In min(historical_data$fecha, na.rm = TRUE) :
  no non-missing arguments to min; returning Inf
2: In max(historical_data$fecha, na.rm = TRUE) :
  no non-missing arguments to max; returning -Inf
── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
✔ forcats 1.0.0     ✔ stringr 1.5.1
✔ ggplot2 3.5.2     ✔ tibble  3.3.0
✔ purrr   1.1.0     ✔ tidyr   1.3.1
✔ readr   2.1.5
── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
✖ dplyr::between()      masks data.table::between()
✖ dplyr::filter()       masks stats::filter()
✖ dplyr::first()        masks data.table::first()
✖ lubridate::hour()     masks data.table::hour()
✖ lubridate::isoweek()  masks data.table::isoweek()
✖ dplyr::lag()          masks stats::lag()
✖ dplyr::last()         masks data.table::last()
✖ lubridate::mday()     masks data.table::mday()
✖ lubridate::minute()   masks data.table::minute()
✖ lubridate::month()    masks data.table::month()
✖ lubridate::quarter()  masks data.table::quarter()
✖ lubridate::second()   masks data.table::second()
✖ purrr::transpose()    masks data.table::transpose()
✖ lubridate::wday()     masks data.table::wday()
✖ lubridate::week()     masks data.table::week()
✖ lubridate::yday()     masks data.table::yday()
✖ lubridate::year()     masks data.table::year()
ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
Using libcurl 8.7.1 with OpenSSL/3.2.2

Attaching package: ‘curl’

The following object is masked from ‘package:readr’:

    parse_date


Attaching package: ‘jsonlite’

The following object is masked from ‘package:purrr’:

    flatten

Loading required package: R.oo
Loading required package: R.methodsS3
R.methodsS3 v1.8.2 (2022-06-13 22:00:14 UTC) successfully loaded. See ?R.methodsS3 for help.
R.oo v1.27.1 (2025-05-02 21:00:05 UTC) successfully loaded. See ?R.oo for help.

Attaching package: ‘R.oo’

The following object is masked from ‘package:R.methodsS3’:

    throw

The following objects are masked from ‘package:methods’:

    getClasses, getMethods

The following objects are masked from ‘package:base’:

    attach, detach, load, save

R.utils v2.13.0 (2025-02-24 21:20:02 UTC) successfully loaded. See ?R.utils for help.

Attaching package: ‘R.utils’

The following object is masked from ‘package:jsonlite’:

    validate

The following object is masked from ‘package:tidyr’:

    extract

The following object is masked from ‘package:utils’:

    timestamp

The following objects are masked from ‘package:base’:

    cat, commandArgs, getOption, isOpen, nullfile, parse, use, warnings

Error: object 'results' not found
In addition: Warning message:
In Sys.setlocale("LC_ALL", "en_US.UTF-8") :
  OS reports request to set locale to "en_US.UTF-8" cannot be honored
Execution halted
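The "returning Inf" / "returning -Inf" warnings at the top of this log come from calling min()/max() on a fecha column that is absent or entirely NA: with na.rm = TRUE and no non-missing values, base R's min() and max() warn and return Inf and -Inf instead of failing. A minimal base-R guard, as a sketch only (the helper name is hypothetical, not from the repository's code):

```r
# min()/max() on a zero-length or all-NA vector warn and return Inf/-Inf,
# which is what produced "Date range: Inf to -Inf" in the job log.
# Returning NA explicitly makes the failure visible and warning-free.
safe_range <- function(x) {
  x <- x[!is.na(x)]
  if (length(x) == 0) {
    return(c(NA, NA))  # nothing to summarise
  }
  c(min(x), max(x))
}

safe_range(numeric(0))      # NA NA (no warning)
safe_range(c(3, NA, 1, 2))  # 1 3
```

Logging the NA range (or stopping with a clear message) would have surfaced the missing-column problem at load time rather than as a cryptic Inf.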
Lines changed: 81 additions & 0 deletions
@@ -0,0 +1,81 @@
=== SLURM Job Information ===
Job ID: 17923
Job Name: weather_four_datasets
Node: node6
Started at: Wed Aug 27 20:38:30 CEST 2025

Loading modules...
R version: R version 4.4.2 (2024-10-31) -- "Pile of Leaves"
GDAL version: 3.10.0

Starting four-dataset weather collection...
Approach: Original variable names, separate data sources
Expected completion time: 2-6 hours depending on data volumes

- The project is out-of-sync -- use `renv::status()` for details.
========================================
FOUR-DATASET WEATHER COLLECTION
========================================
Approach: Separate datasets, original variable names
Started at: 2025-08-27 20:38:56

=== DATASET 1: HISTORICAL DAILY STATIONS ===
Source: AEMET historical climatological API (2013 to T-4 days)
Output: daily_stations_historical.csv.gz
=== Daily Stations Historical Aggregation ===
Source: AEMET historical climatological API
Period: 2013 to T-4 days (as far as historical API provides)
Variables: Original AEMET names preserved

Using historical file: daily_station_historical.csv.gz
Loaded 1056468 historical records
Date range: Inf to -Inf
Stations: 904
Variables: date, indicativo, ta, tamax, tamin, hr, prec, vv, pres
❌ Dataset 1 failed: Historical data missing 'fecha' column
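Dataset 1 fails here because the cached file carries the English column name date while the aggregation expects AEMET's original fecha (the Variables line above shows the mismatch). A sketch of a pre-flight check; the helper and the rename rule are assumptions, not the repository's code:

```r
# The loaded file exposes 'date' while the pipeline expects AEMET's 'fecha'.
# Renaming defensively, then failing fast with a clear message, avoids the
# later "Date range: Inf to -Inf" symptom entirely.
check_historical_columns <- function(df, required = c("fecha", "indicativo")) {
  if ("date" %in% names(df) && !"fecha" %in% names(df)) {
    names(df)[names(df) == "date"] <- "fecha"  # assumes 'date' is the same field
  }
  missing_cols <- setdiff(required, names(df))
  if (length(missing_cols) > 0) {
    stop("Historical data missing column(s): ",
         paste(missing_cols, collapse = ", "))
  }
  df
}
```

Called immediately after the read (e.g. historical_data <- check_historical_columns(historical_data)), this turns the silent Inf range into either a clean rename or a precise error.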

=== DATASET 2: CURRENT DAILY STATIONS ===
Source: Hourly API aggregated to daily (T-4 days to yesterday)
Output: daily_stations_current.csv.gz
=== Daily Stations Current Aggregation ===
Source: Hourly station data aggregated to daily
Period: Gap between historical data end and yesterday
Variables: Original AEMET names preserved

Loading hourly data from: hourly_station_ongoing.csv.gz
Hourly data shape: 62641 rows, 5 columns
Columns: station_id, datetime, date, variable_type, value
❌ Dataset 2 failed: No station ID column found in hourly data

=== DATASET 3: HOURLY STATIONS ===
Source: AEMET hourly observations API
Output: hourly_station_ongoing.csv.gz
=== Hourly Station Ongoing Collection ===
Source: AEMET hourly observations API
Purpose: Most recent hourly weather observations
Variables: Original AEMET names preserved

Existing hourly file found, age: 15.8 hours
Collecting new hourly data...
<curl_error_recv_error in curl_fetch_memory(paste0("https://opendata.aemet.es/opendata/api/observacion/convencional/todas"), handle = h): Failure when receiving data from the peer [opendata.aemet.es]:
OpenSSL SSL_read: SSL_ERROR_SYSCALL, errno 0>
<curl_error_recv_error in curl_fetch_memory(paste0("https://opendata.aemet.es/opendata/api/observacion/convencional/todas"), handle = h): Failure when receiving data from the peer [opendata.aemet.es]:
OpenSSL SSL_read: SSL_ERROR_SYSCALL, errno 0>
<curl_error_recv_error in curl_fetch_memory(paste0("https://opendata.aemet.es/opendata/api/observacion/convencional/todas"), handle = h): Failure when receiving data from the peer [opendata.aemet.es]:
OpenSSL SSL_read: SSL_ERROR_SYSCALL, errno 0>
<curl_error_recv_error in curl_fetch_memory(paste0("https://opendata.aemet.es/opendata/api/observacion/convencional/todas"), handle = h): Failure when receiving data from the peer [opendata.aemet.es]:
OpenSSL SSL_read: SSL_ERROR_SYSCALL, errno 0>
<curl_error_recv_error in curl_fetch_memory(paste0("https://opendata.aemet.es/opendata/api/observacion/convencional/todas"), handle = h): Failure when receiving data from the peer [opendata.aemet.es]:
OpenSSL SSL_read: SSL_ERROR_SYSCALL, errno 0>
<curl_error_recv_error in curl_fetch_memory(paste0("https://opendata.aemet.es/opendata/api/observacion/convencional/todas"), handle = h): Failure when receiving data from the peer [opendata.aemet.es]:
OpenSSL SSL_read: SSL_ERROR_SYSCALL, errno 0>
[1] "No new data retrieved. Nothing saved."
✅ Hourly collection completed
❌ Dataset 3 failed: object 'hourly_files' not found


=== Job Completion ===
Exit code: 1
Completed at: Wed Aug 27 20:45:47 CEST 2025
❌ Four-dataset collection failed with exit code 1
Check the log file for details
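The six identical curl failures in this log are transient network errors (SSL_read aborted by the peer) retried back-to-back. The log does not show the script's actual retry logic, so the following is only a sketch of one common pattern, with hypothetical names: retry with exponential backoff instead of immediately.

```r
# Retry a download closure with exponential backoff. Transient
# SSL_ERROR_SYSCALL failures like the six above often succeed on a later
# attempt; returning NULL after max_tries lets the caller log
# "No new data retrieved" instead of crashing.
fetch_with_backoff <- function(fetch, max_tries = 6, wait = 2) {
  for (i in seq_len(max_tries)) {
    result <- tryCatch(fetch(), error = function(e) {
      message("Attempt ", i, " failed: ", conditionMessage(e))
      NULL
    })
    if (!is.null(result)) return(result)
    Sys.sleep(wait)
    wait <- wait * 2  # back off: 2 s, 4 s, 8 s, ...
  }
  NULL
}
```

Here fetch would wrap the curl::curl_fetch_memory() call seen in the log; the backoff spacing gives the AEMET endpoint time to recover between attempts.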
Lines changed: 83 additions & 0 deletions
@@ -0,0 +1,83 @@
── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
✔ forcats 1.0.0     ✔ stringr 1.5.1
✔ ggplot2 3.5.2     ✔ tibble  3.3.0
✔ purrr   1.1.0     ✔ tidyr   1.3.1
✔ readr   2.1.5
── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
✖ dplyr::between()      masks data.table::between()
✖ dplyr::filter()       masks stats::filter()
✖ dplyr::first()        masks data.table::first()
✖ lubridate::hour()     masks data.table::hour()
✖ lubridate::isoweek()  masks data.table::isoweek()
✖ dplyr::lag()          masks stats::lag()
✖ dplyr::last()         masks data.table::last()
✖ lubridate::mday()     masks data.table::mday()
✖ lubridate::minute()   masks data.table::minute()
✖ lubridate::month()    masks data.table::month()
✖ lubridate::quarter()  masks data.table::quarter()
✖ lubridate::second()   masks data.table::second()
✖ purrr::transpose()    masks data.table::transpose()
✖ lubridate::wday()     masks data.table::wday()
✖ lubridate::week()     masks data.table::week()
✖ lubridate::yday()     masks data.table::yday()
✖ lubridate::year()     masks data.table::year()
ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
Using libcurl 8.7.1 with OpenSSL/3.2.2

Attaching package: ‘curl’

The following object is masked from ‘package:readr’:

    parse_date


Attaching package: ‘jsonlite’

The following object is masked from ‘package:purrr’:

    flatten

Loading required package: R.oo
Loading required package: R.methodsS3
R.methodsS3 v1.8.2 (2022-06-13 22:00:14 UTC) successfully loaded. See ?R.methodsS3 for help.
R.oo v1.27.1 (2025-05-02 21:00:05 UTC) successfully loaded. See ?R.oo for help.

Attaching package: ‘R.oo’

The following object is masked from ‘package:R.methodsS3’:

    throw

The following objects are masked from ‘package:methods’:

    getClasses, getMethods

The following objects are masked from ‘package:base’:

    attach, detach, load, save

R.utils v2.13.0 (2025-02-24 21:20:02 UTC) successfully loaded. See ?R.utils for help.

Attaching package: ‘R.utils’

The following object is masked from ‘package:jsonlite’:

    validate

The following object is masked from ‘package:tidyr’:

    extract

The following object is masked from ‘package:utils’:

    timestamp

The following objects are masked from ‘package:base’:

    cat, commandArgs, getOption, isOpen, nullfile, parse, use, warnings

Error: object 'results' not found
In addition: Warning message:
In Sys.setlocale("LC_ALL", "en_US.UTF-8") :
  OS reports request to set locale to "en_US.UTF-8" cannot be honored
Execution halted
Lines changed: 72 additions & 0 deletions
@@ -0,0 +1,72 @@
=== SLURM Job Information ===
Job ID: 17930
Job Name: weather_four_datasets
Node: node6
Started at: Wed Aug 27 21:55:39 CEST 2025

Loading modules...
R version: R version 4.4.2 (2024-10-31) -- "Pile of Leaves"
GDAL version: 3.10.0

Starting four-dataset weather collection...
Approach: Original variable names, separate data sources
Expected completion time: 2-6 hours depending on data volumes

- The project is out-of-sync -- use `renv::status()` for details.
========================================
FOUR-DATASET WEATHER COLLECTION
========================================
Approach: Separate datasets, original variable names
Started at: 2025-08-27 21:56:05

=== DATASET 1: HISTORICAL DAILY STATIONS ===
Source: AEMET historical climatological API (2013 to T-4 days)
Output: daily_stations_historical.csv.gz
=== Daily Stations Historical Aggregation ===
Source: AEMET historical climatological API
Period: 2013 to T-4 days (as far as historical API provides)
Variables: Original AEMET names preserved

Using historical file: daily_station_historical.csv.gz
Loaded 46874 historical records
Date range: 20269 to 20323
Stations: 889
Variables: fecha, indicativo, tmed, tmax, tmin, hrMedia, prec, velmedia, presMax

=== Historical Aggregation Complete ===
Output file: data/output/daily_stations_historical.csv.gz
Records: 46874
File size: 0.6 MB
Date range: 20269 to 20323
Stations: 889

Data completeness by variable:
tmed: 98.3% complete
tmax: 98.4% complete
tmin: 98.3% complete
hrMedia: 95.5% complete
prec: 97.2% complete
velmedia: 81.6% complete
presMax: 24.9% complete
✅ Dataset 1 completed in 0.01 minutes
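Dataset 1 now succeeds, but "Date range: 20269 to 20323" reports fecha as raw integers. If those are days since the Unix epoch (an assumption consistent with the magnitudes, not confirmed by the log), converting before printing makes the summary readable:

```r
# "Date range: 20269 to 20323" looks like integer days since 1970-01-01,
# i.e. the fecha column lost its Date class somewhere (e.g. a CSV round
# trip). Converting restores human-readable log output.
fecha_range <- c(20269, 20323)
as.Date(fecha_range, origin = "1970-01-01")
# "2025-06-30" "2025-08-23"
```

Under that assumption the run covers roughly two months ending four days before the job date, which matches the stated "2013 to T-4 days" window only at its tail; that discrepancy may be worth checking separately.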

=== DATASET 2: CURRENT DAILY STATIONS ===
Source: Hourly API aggregated to daily (T-4 days to yesterday)
Output: daily_stations_current.csv.gz
=== Daily Stations Current Aggregation ===
Source: Hourly station data aggregated to daily
Period: Gap between historical data end and yesterday
Variables: Original AEMET names preserved

No hourly data found. Running hourly collection...
[1] "Downloaded 62692 new rows of data with 7 core variables."
[1] "Creating new expanded weather dataset file."
[1] "Total dataset now contains 62692 rows."
❌ Dataset 2 failed: object 'hourly_files' not found


=== Job Completion ===
Exit code: 1
Completed at: Wed Aug 27 21:56:27 CEST 2025
❌ Four-dataset collection failed with exit code 1
Check the log file for details
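Both runs die on "object 'hourly_files' not found", which suggests the variable is only assigned on one code path (for example, when an existing hourly file is detected) and is missing after a fresh collection. A defensive sketch, with hypothetical directory and pattern names:

```r
# Define hourly_files unconditionally from the output directory rather
# than on one branch only; an empty result then becomes a logged skip,
# not an "object not found" crash. Directory and pattern are assumptions.
list_hourly_files <- function(dir = "data/output") {
  list.files(dir, pattern = "^hourly_.*\\.csv\\.gz$", full.names = TRUE)
}

hourly_files <- list_hourly_files()
if (length(hourly_files) == 0) {
  message("No hourly files found; skipping daily aggregation.")
}
```

list.files() returns character(0) for missing or empty directories, so the variable always exists and downstream length checks stay valid on every branch.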
