Support parallel DuckDB threads for Postgres table scan #762
Conversation
…lism controlled by guc `duckdb.threads_for_postgres_scan`
Very cool! The perf differences you report are very impressive. I think a bit more code comments would be quite helpful to make understanding this easier.
Similarly to #688 I'm postponing this until after 1.0 though, given it touches a very core part of pg_duckdb.
@@ -1505,6 +1505,7 @@ AppendString(duckdb::Vector &result, Datum value, idx_t offset, bool is_bpchar)

static void
AppendJsonb(duckdb::Vector &result, Datum value, idx_t offset) {
	std::lock_guard<std::recursive_mutex> lock(GlobalProcessLock::GetLock());
Are these the only types that need this additional locking now? It would be good to explicitly state in a comment, for each other type, why it is thread safe. That way we won't forget to check when introducing support for new types.
Yes, only JSON and LIST. Additionally, as mentioned in #750, both of these types have memory issues.
I will add a comment for ConvertPostgresToDuckValue. The overall rule is that a conversion is thread-safe as long as it does not use a Postgres MemoryContext.
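That rule can be illustrated with a small standalone sketch (this is not pg_duckdb code; the names and the shared arena standing in for a Postgres MemoryContext are hypothetical): a conversion that is a pure function of its input needs no lock, while one that mutates shared process state must hold the global lock.

```cpp
#include <mutex>
#include <string>
#include <vector>

// Shared process state standing in for a Postgres MemoryContext
// (hypothetical; for illustration only).
static std::recursive_mutex process_lock;
static std::vector<std::string> shared_arena;

// Thread-safe without a lock: a pure function of its input.
static long ConvertInt(const std::string &datum) {
	return std::stol(datum);
}

// Not thread-safe on its own: it mutates shared state, so it must hold
// the global process lock, analogous to the locked AppendJsonb above.
static std::string ConvertJson(const std::string &datum) {
	std::lock_guard<std::recursive_mutex> guard(process_lock);
	shared_arena.push_back("parsed:" + datum);
	return shared_arena.back();
}
```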
bool is_parallel_scan = local_state.global_state->MaxThreads() > 1;
if (!is_parallel_scan) {
	std::lock_guard<std::recursive_mutex> lock(GlobalProcessLock::GetLock());
I think this difference in behaviour between 1 and more than one threads doesn't completely make sense. Even if max_threads_per_postgres_scan is 1, it's still possible to have two different postgres scans running in parallel. Those two concurrent postgres scans would still benefit from not holding a lock during InsertTupleIntoChunk.
After reading more I now realize this is probably important for the case where we don't use background workers for the scan. So maybe we should keep this functionality. But I think it's worth refactoring and/or commenting on this a bit more, because the logic is currently quite hard to follow.
Ah, okay. I was trying to keep the original code unchanged for the single-thread case.
void
SlotGetAllAttrsUnsafe(TupleTableSlot *slot) {
	slot_getallattrs(slot);
}
This seemed scary to me, but looking closely at the implementation of slot_getallattrs, it doesn't use memory contexts, nor can it throw an error. The only place where it throws an error is:
if (unlikely(attnum > slot->tts_tupleDescriptor->natts))
	elog(ERROR, "invalid attribute number %d", attnum);
But that condition can never be true, because slot->tts_tupleDescriptor->natts is passed as attnum.
Could you merge this function with SlotGetAllAttrs? And add the above information in a code comment for it.
Yes, sure. I am using the term "unsafe" to refer to the fact that it is not protected by PostgresFunctionGuard, even though it does not actually require that protection :)
};

// Local State

#define LOCAL_STATE_SLOT_BATCH_SIZE 32
Why 32? Maybe we should make this configurable?
I was concerned about burdening users with another GUC hyperparameter.
I tested batch sizes of 8, 16, 32, and 64, and found that 32 performs best. BTW, the batch size helps amortize the lock overhead across threads.
TupleTableSlot *InitTupleSlot();
bool
IsParallelScan() const {
	return nworkers_launched > 0;
I cannot find the place where you change nworkers_launched.
It is assigned during the initialization of Postgres parallel workers, which is part of the original code logic. I simply added the 0 initialization and the interface.
SlotGetAllAttrs(slot);
InsertTupleIntoChunk(output, local_state, slot);
for (size_t j = 0; j < valid_slots; j++) {
	MinimalTuple minmal_tuple = reinterpret_cast<MinimalTuple>(local_state.minimal_tuple_buffer[j].data());
@Y-- I need your C++ knowledge. Is this a good way to keep a buffer of MinimalTuples?
One thought I had is that now we do two copies of the minimal tuple:
- Once from the stack into the buffer (in GetnextMinimalTuple)
- Once from the buffer back to the stack (here).
I think if we instead have an array of MinimalTuple that we realloc, instead of using vectors of bytes, then we only need to copy once, and we can pass the minimal tuple from the buffer directly into ExecStoreMinimalTupleUnsafe.
This doesn't do any copy (and thus doesn't extend/modify its lifetime). It forces the compiler to accept that the bytes stored in the minimal_tuple_buffer[j] vector are a MinimalTuple (aka MinimalTupleData *, where MinimalTupleData is itself a struct).
I haven't read the code yet, but my first question would be: why are they vector<uint8_t> instead of vector<MinimalTuple> in the first place?
As @Y-- pointed out, only one copy occurs (from the Postgres parallel workers' shared memory to the buffer).
One benefit of using vector<uint8_t> is that we have an off-the-shelf API to enlarge or shrink the buffer (i.e., resize). Additionally, there is no need to worry about memory leaks, as they are handled by RAII.
It forces the compiler to accept that the bytes stored in the minimal_tuple_buffer[j] vector are a MinimalTuple (aka MinimalTupleData *, where MinimalTupleData is itself a struct).
Sounds like that could cause alignment problems.
Ah, yes. Let me confirm the alignment issue.
}

TupleTableSlot *
ExecStoreMinimalTupleUnsafe(MinimalTuple minmal_tuple, TupleTableSlot *slot, bool shouldFree) {
Similarly to the comment I left above, let's add a comment on why this is safe to use without the lock. Something like:
It's safe to call ExecStoreMinimalTuple without the PostgresFunctionGuard because it does not allocate in memory contexts, and the only error it can throw is when the slot is not a minimal slot. That error is an obvious programming error, so we can ignore it here.
And just like the function above, let's drop the Unsafe from the name. (You probably need to change the body to call the original, like ::ExecStoreMinimalTuple(...).)
@JelteF Thanks for the review! YES, going for 1.1.0 is reasonable.
Currently, we use a single DuckDB thread for Postgres table scans, even though multiple Postgres workers are initialized. This leads to a performance bottleneck when scanning large amounts of data.
This PR parallelizes the conversion from Postgres tuples to DuckDB data chunks. Below are benchmark results on a 5GB TPC-H lineitem table.
Benchmark query: select * from lineitem order by 1 limit 1
(run with duckdb.max_workers_per_postgres_scan = 2; parallelism controlled by duckdb.threads_for_postgres_scan)