When using langextract to extract information from documents containing HTML tables, suppose a table's content is very long: will the chunking logic split the table, leaving subsequent chunks without the table header? That would prevent the LLM from understanding those chunks and ultimately cause extracted data to be lost. Is there a better way to handle this?
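One workaround, as a minimal sketch: pre-chunk long tables yourself so that every chunk is a self-contained `<table>` that repeats the header row, then pass each chunk to the extraction call separately. The `chunk_table` helper below is hypothetical, not part of langextract's API, and assumes the header row and data rows have already been separated.

```python
# Sketch of a hypothetical pre-chunking helper (not a langextract API):
# split a long table into smaller HTML tables that each repeat the header,
# so no chunk depends on context from an earlier chunk.

def chunk_table(header_row: str, data_rows: list[str], max_rows: int) -> list[str]:
    """Split a table into self-contained HTML chunks, each with the header."""
    chunks = []
    for i in range(0, len(data_rows), max_rows):
        body = "".join(data_rows[i:i + max_rows])
        chunks.append(f"<table>{header_row}{body}</table>")
    return chunks

# Toy example with made-up rows:
header = "<tr><th>Name</th><th>Dose</th></tr>"
rows = [f"<tr><td>drug{i}</td><td>{i} mg</td></tr>" for i in range(5)]
chunks = chunk_table(header, rows, max_rows=2)
# Every chunk is a complete table with its header, so a downstream
# extraction call can interpret each chunk independently.
```

Each resulting chunk could then be fed to the extractor as its own document, sidestepping the character-based chunker for table content entirely.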