Description
What happened?
I removed all of my previous workarounds and tried sending 24 GB across 12 parquet files containing 370 million rows via adbc_ingest().
From the logs I can see 1666 parquet files being generated (assuming these are the default 10 MB in size) and PUT.
But ultimately it fails after 5 attempts to run the COPY INTO commands (a sketch of the call pattern follows the traceback below).
File "../miniconda3/lib/python3.9/site-packages/adbc_driver_manager/dbapi.py", line 937, in adbc_ingest
return _blocking_call(self._stmt.execute_update, (), {}, self._stmt.cancel)
File "adbc_driver_manager/_lib.pyx", line 1569, in adbc_driver_manager._lib._blocking_call_impl
File "adbc_driver_manager/_lib.pyx", line 1562, in adbc_driver_manager._lib._blocking_call_impl
File "adbc_driver_manager/_lib.pyx", line 1295, in adbc_driver_manager._lib.AdbcStatement.execute_update
File "adbc_driver_manager/_lib.pyx", line 260, in adbc_driver_manager._lib.check_error
adbc_driver_manager.InternalError: INTERNAL: some files not loaded by COPY command, 901 files remain after 5 retries
ERRO[0619]connection.go:410 gosnowflake.(*snowflakeConn).queryContextInternal error: context canceled
ERRO[0619]connection.go:410 gosnowflake.(*snowflakeConn).queryContextInternal error: context canceled
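For context, this is roughly how the ingest was invoked; a minimal sketch only, where the file paths, DSN, and table name are placeholders rather than the actual values used.

import pyarrow.dataset as ds
import adbc_driver_snowflake.dbapi

# The 12 source files (~24 GB total); paths are placeholders.
parquet_paths = [f"/data/part-{i:02d}.parquet" for i in range(12)]
# Stream all files through one Arrow RecordBatchReader so nothing has to fit in memory at once.
reader = ds.dataset(parquet_paths, format="parquet").scanner().to_reader()

with adbc_driver_snowflake.dbapi.connect(
    "user:password@account/DATABASE/SCHEMA?warehouse=WH"  # placeholder DSN
) as conn:
    with conn.cursor() as cur:
        # Single bulk ingest over the whole ~370M-row stream; the driver stages the data
        # as many small parquet files (PUT) and then runs COPY INTO on them.
        cur.adbc_ingest("TARGET_TABLE", reader, mode="create")
    conn.commit()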
I'm going to try my old workarounds again, using adbc_ingest() with one parquet file at a time, etc.
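For reference, that per-file workaround looks roughly like the sketch below, with the same placeholder paths, DSN, and table name as above.

import pyarrow.parquet as pq
import adbc_driver_snowflake.dbapi

parquet_paths = [f"/data/part-{i:02d}.parquet" for i in range(12)]  # placeholders

with adbc_driver_snowflake.dbapi.connect(
    "user:password@account/DATABASE/SCHEMA?warehouse=WH"  # placeholder DSN
) as conn:
    with conn.cursor() as cur:
        for i, path in enumerate(parquet_paths):
            # Create the table from the first file, then append the remaining ones.
            mode = "create" if i == 0 else "append"
            cur.adbc_ingest("TARGET_TABLE", pq.read_table(path), mode=mode)
            conn.commit()  # commit after each file so a failure only affects one file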
Stack Trace
No response
How can we reproduce the bug?
No response
Environment/Setup
No response