Conversation
Combine the issue templates into one group with bug and feature requests
After the first installation, the second one fails. Regression after #97
The RefMan doc generation is OS-dependent. All previous versions were generated on Windows.
By default, CMake used -O3 in release mode. It means that in production mode we were overriding -O3 down to -O2.
…en-Cascade-SAS#136 Updated RemoveAll to truncate the string and reuse a single method for the case-sensitive and case-insensitive variants
Disable GH action for summarize on master branch
Changes to keep insertion order to improve the iteration procedure. Deprecating TCollection, as its definition is unclear.
…acement is provided in NCollection_Map. TODO: rework destruction for indexed and normal maps.
…y check for nullptr
…wap first and last pointers
Destroy method in NCollection_Map for better memory management
Walkthrough

This pull request introduces several enhancements to the OCCT repository, including new issue templates for bug reports and feature requests, updates to the GitHub workflows for building and testing, and modifications to the CMake configuration. Key changes involve the addition of a new header file for prime number functionalities, updates to the

Changes
Actionable comments posted: 6
🧹 Outside diff range and nitpick comments (18)
.github/ISSUE_TEMPLATE/config.yml (2)
12-13: Fix inconsistent capitalization in "Contact US". The link name should be "Contact Us" to maintain consistency with other link names.
- - name: Contact US form
+ - name: Contact Us form
5-14: Improve contact link descriptions to be more specific.The descriptions are currently very similar and start with the same phrase. Consider making each description more specific to its purpose:
- about: You can also use the collaborative development portal to ask questions and discuss the problem you are seeing
+ about: Join our community forums to discuss technical questions and share your experiences
- about: You can also use GitHub discussions to ask questions and discuss the problem you are seeing
+ about: Start or join GitHub discussions for feature ideas and general questions
- about: You can also use the Mantis bug tracker to report bugs and issues
+ about: Report and track bugs through our dedicated Mantis bug tracking system
- about: You can also use the Contact US form to ask questions and discuss the problem you are seeing
+ about: Reach out to our team directly for commercial inquiries or sensitive matters

.github/workflows/build-docs.yml (1)
22-24: Consider pinning package versions for reproducibility. While the Chocolatey installation commands are correct, consider pinning specific versions to ensure consistent builds across time.
- choco install -y graphviz
- choco install -y doxygen.install
+ choco install -y graphviz --version=2.50.0
+ choco install -y doxygen.install --version=1.9.6

.github/ISSUE_TEMPLATE/Feature Request.yaml (2)
55-55: Add newline at end of file. Add a newline character at the end of the file to comply with YAML standards.
  description: "Any additional context or information"
+

🧰 Tools
🪛 yamllint
[error] 55-55: no new line character at the end of file
(new-line-at-end-of-file)
7-8: Enhance template instructions. Consider providing more detailed guidance to help users submit better feature requests.
  value: |
- **Please provide the following information:**
+ **Please provide the following information for your feature request:**
+
+ - Be specific about what you want to achieve
+ - Consider including examples or mockups if applicable
+ - Explain how this feature would benefit other OCCT users
+ - Mention any alternative solutions you've considered

src/NCollection/NCollection_Primes.hxx (1)
49-72: Consider adding input validation for negative numbers. While the binary search implementation is correct, the method should validate that theN is non-negative before proceeding. Consider this enhancement:
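For context, the guarded table lookup can be exercised in isolation. The following is a hypothetical miniature, not the OCCT code: a plain `int`, a toy prime table, and `std::lower_bound` stand in for `Standard_Integer`, the real `PrimeVector`, and the hand-written binary search.

```cpp
#include <algorithm>
#include <cassert>
#include <stdexcept>

// Toy prime table; the real PrimeVector in NCollection_Primes.hxx is larger.
static const int kPrimes[] = {101, 1009, 2003, 5003, 10007, 20011, 40009};
static const int kNbPrimes = static_cast<int>(sizeof(kPrimes) / sizeof(kPrimes[0]));

// Returns the smallest table entry >= theN, rejecting negative
// input up front as the review suggests.
int nextPrimeForMap(int theN)
{
  if (theN < 0)
  {
    throw std::out_of_range("nextPrimeForMap() - negative size requested");
  }
  const int* aEnd = kPrimes + kNbPrimes;
  const int* aIt  = std::lower_bound(kPrimes, aEnd, theN);
  return (aIt == aEnd) ? *(aEnd - 1) : *aIt;
}
```

Because the table is sorted, `std::lower_bound` gives the same "first entry not less than the request" semantics as a hand-rolled binary search.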
  static Standard_Integer NextPrimeForMap(const Standard_Integer theN)
  {
+   if (theN < 0)
+   {
+     throw Standard_OutOfRange("NCollection_Primes::NextPrimeForMap() - negative size requested");
+   }
+
    const Standard_Integer* aLow = PrimeVector;
    const Standard_Integer* aHigh = PrimeVector + NbPrimes - 1;

.github/ISSUE_TEMPLATE/Bug Report.yaml (4)
1-3: Consider enhancing the template metadata. The description could be more specific to help users better understand the template's purpose.
 name: "Bug Report"
-description: "Report a bug related to the OCCT"
+description: "Report a bug or unexpected behavior in Open CASCADE Technology (OCCT)"
 labels: ["2. Bug"]
30-43: Consider adjusting the default severity. Setting the default severity to "Major" might lead to severity inflation. Consider setting it to "Minor" to encourage more objective severity assessment.
 options:
   - Crash
   - Critical
   - Major
   - Minor
   - Trivial
- default: 2
+ default: 3
112-127: Consider a more maintainable version selection approach. Hardcoding OCCT versions will require frequent updates. Consider:
- Using semantic versioning ranges (e.g., 7.x, 8.x)
- Dynamically populating versions from repository tags
Would you like me to help create a GitHub Action to automatically update the version list?
139-142: Add newline at end of file. Add a newline character at the end of the file to comply with YAML best practices.
  label: Additional Files
  description: "Attach any additional files that may help in reproducing the issue"
+

🧰 Tools
🪛 yamllint
[error] 142-142: no new line character at the end of file
(new-line-at-end-of-file)
src/NCollection/NCollection_BaseMap.hxx (2)
203-208: Documentation needs improvement for the new Reallocate method. The current documentation is insufficient for safe usage. Please enhance it to cover:
- What happens to existing data during reallocation
- Thread safety considerations
- Error handling and return value meaning
- How this method relates to BeginResize/EndResize
Consider adding documentation like:
//! Reallocates the internal buckets to the specified size.
//! @param theNbBuckets new number of buckets
//! @return Standard_True if reallocation succeeded
//! @note Existing data will be destroyed
//! @note Not thread-safe - external synchronization required
//! @note Filling operation must be performed outside
//! @throw Standard_OutOfMemory if allocation fails
Inconsistent NULL/nullptr usage found in derived map classes needs modernization.
The verification reveals inconsistent usage of NULL and nullptr across the NCollection map implementations:
- NCollection_Map.hxx consistently uses modern nullptr
- NCollection_IndexedMap.hxx, NCollection_DataMap.hxx, NCollection_DoubleMap.hxx, and NCollection_IndexedDataMap.hxx still use the older NULL macro

Key locations needing modernization:
- Constructor initializations (e.g., myMap(NULL))
- Pointer comparisons (e.g., if (pNode1 == NULL))
- Return statements (e.g., return NULL)
- Array initialization (e.g., NCollection_ListNode** newdata = NULL)

🔗 Analysis chain
Line range hint 1-258: Verify consistent nullptr usage across derived classes. Since this is a base class for various map implementations, we should ensure consistent nullptr usage across all derived classes.
Let me analyze the specific usage contexts to make a proper assessment.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check for any remaining NULL usage in derived classes
echo "Checking for NULL usage in derived map classes..."
rg --type cpp "NULL" -l | grep -i "map" || echo "No NULL usage found in map files"

# Verify nullptr usage in derived classes
echo "Verifying nullptr usage in derived map classes..."
rg --type cpp "nullptr" -l | grep -i "map" || echo "No nullptr usage found in map files"

Length of output: 1212
Script:
#!/bin/bash
# Check the context of NULL usage in NCollection map classes
echo "=== NULL usage in NCollection maps ==="
rg -C3 "NULL" src/NCollection/NCollection_IndexedMap.hxx src/NCollection/NCollection_IndexedDataMap.hxx src/NCollection/NCollection_DoubleMap.hxx src/NCollection/NCollection_DataMap.hxx
echo -e "\n=== nullptr usage in NCollection maps ==="
rg -C3 "nullptr" src/NCollection/NCollection_Map.hxx src/NCollection/NCollection_IndexedDataMap.hxx

Length of output: 19996
.github/workflows/build-and-test-multiplatform.yml (2)
Line range hint 766-1000: Enhance robustness of the summarize job. While the implementation is good, consider these improvements:
- Add error handling for artifact downloads
- Add timeouts for test comparisons
- Consider adding a summary report of all comparisons
Add timeout and error handling:
 - name: Run tests
   run: |
     echo "Comparing test results..."
     cd install
     cd bin
     source env.sh
+    # Set timeout for each comparison
+    timeout_seconds=300
+
     for dir in results/current/windows-x64/*; do RESULTS_CURRENT_WINDOWS_X64=$dir; done
     for dir in results/master/windows-x64/*; do RESULTS_MASTER_WINDOWS_X64=$dir; done
     # ... (other directory assignments)
+    compare_results() {
+      platform=$1
+      current=$2
+      master=$3
+      timeout $timeout_seconds ./DRAWEXE -v -c testdiff "$current" "$master" || echo "::warning::Test comparison for $platform timed out"
+    }
+
+    # Run comparisons in parallel with error handling
+    compare_results "windows-x64" "$RESULTS_CURRENT_WINDOWS_X64" "$RESULTS_MASTER_WINDOWS_X64" &
+    compare_results "windows-clang-x64" "$RESULTS_CURRENT_WINDOWS_CLANG_X64" "$RESULTS_MASTER_WINDOWS_CLANG_X64" &
+    # ... (other comparisons)
+    wait
+
+    # Generate summary
+    echo "### Test Comparison Summary" > summary.md
+    for platform in windows-x64 windows-clang-x64 macos-x64 macos-gcc-x64 linux-clang-x64 linux-gcc-x64; do
+      echo "#### $platform" >> summary.md
+      if [ -f "results/current/$platform/*/diff-*.log" ]; then
+        echo "\`\`\`" >> summary.md
+        cat "results/current/$platform/*/diff-*.log" >> summary.md
+        echo "\`\`\`" >> summary.md
+      else
+        echo "No comparison results available" >> summary.md
+      fi
+    done

🧰 Tools
🪛 yamllint
[warning] 764-764: wrong indentation: expected 4 but found 6
(indentation)
[error] 768-768: trailing spaces
(trailing-spaces)
Line range hint 1-24: Add workflow-level safeguards and notifications. Consider adding these workflow-level improvements:
- Add a workflow timeout
- Add job failure notifications
- Consider caching dependencies
Add at the workflow level:
 name: Build and Test OCCT on Multiple Platforms
 on:
   pull_request:
     branches:
       - '**'
   push:
     branches:
       - 'master'
 concurrency:
   group: ${{ github.workflow }}-${{ github.event.pull_request.number }}
   cancel-in-progress: true
+# Add workflow timeout
+timeout-minutes: 360
+
+# Add dependency caching
+env:
+  CACHE_VERSION: 1
+# Add notification on failure
+on_failure:
+  steps:
+    - name: Notify on failure
+      if: failure()
+      uses: actions/github-script@v6
+      with:
+        script: |
+          const { owner, repo } = context.repo;
+          const run_id = context.runId;
+          const run_url = `https://github.com/${owner}/${repo}/actions/runs/${run_id}`;
+          const message = `❌ Workflow failed: [View run](${run_url})`;
+
+          if (context.eventName === 'pull_request') {
+            github.rest.issues.createComment({
+              owner,
+              repo,
+              issue_number: context.issue.number,
+              body: message
+            });
+          }

src/TCollection/TCollection_AsciiString.cxx (1)
927-943: Consider using std::remove_if for better efficiency. The implementation is correct and handles case sensitivity properly. However, it could be more efficient using STL algorithms.
Consider this alternative implementation using std::remove_if:
- if (mylength == 0)
- {
-   return;
- }
- const Standard_Character aTargetChar = CaseSensitive ? what : ::UpperCase(what);
- int aNewLength = 0;
- for (int i = 0; i < mylength; ++i)
- {
-   const Standard_Character aCurrentChar = CaseSensitive ? mystring[i] : ::UpperCase(mystring[i]);
-   if (aCurrentChar != aTargetChar)
-   {
-     mystring[aNewLength++] = mystring[i];
-   }
- }
- mylength = aNewLength;
- mystring[mylength] = '\0';
+ if (mylength == 0) { return; }
+ const Standard_Character aTargetChar = CaseSensitive ? what : ::UpperCase(what);
+ auto newEnd = std::remove_if(mystring, mystring + mylength,
+   [aTargetChar, CaseSensitive](Standard_Character c) {
+     return (CaseSensitive ? c : ::UpperCase(c)) == aTargetChar;
+   });
+ mylength = static_cast<int>(newEnd - mystring);
+ mystring[mylength] = '\0';
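As a sanity check of the erase-remove idiom the suggestion relies on, here is a self-contained sketch on std::string; the names removeAll and theCaseSensitive are illustrative only, not the OCCT API.

```cpp
#include <algorithm>
#include <cctype>
#include <string>

// Removes every occurrence of theChar from a copy of the string,
// optionally ignoring case, using the erase-remove idiom.
std::string removeAll(std::string theStr, char theChar, bool theCaseSensitive)
{
  const char aTarget = theCaseSensitive
    ? theChar
    : static_cast<char>(std::toupper(static_cast<unsigned char>(theChar)));
  theStr.erase(std::remove_if(theStr.begin(), theStr.end(),
                 [aTarget, theCaseSensitive](char c) {
                   const char aC = theCaseSensitive
                     ? c
                     : static_cast<char>(std::toupper(static_cast<unsigned char>(c)));
                   return aC == aTarget;
                 }),
               theStr.end());
  return theStr;
}
```

std::remove_if compacts the surviving characters in a single pass, which is exactly what the hand-written loop in the original code does; the STL form just makes the intent explicit.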
234-234: Remove unnecessary cast in ReSize method. The cast (MapNode*)aNode in HashCode(((MapNode*)aNode)->Key(), NbBuckets()); is redundant since aNode is already of type MapNode*. Eliminating this cast enhances code clarity. Apply this diff to remove the unnecessary cast:
- const size_t aHashCode = HashCode(((MapNode*)aNode)->Key(), NbBuckets());
+ const size_t aHashCode = HashCode(aNode->Key(), NbBuckets());
255-263: Refactor to eliminate code duplication in Add and Added methods. The Add and Added methods for both lvalue and rvalue references share similar logic. Consider refactoring them to reduce duplication, possibly by using a templated function or a helper method that handles the common operations.
Also applies to: 272-280, 290-299, 308-317
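One common way to realize that suggestion is a private forwarding helper that both overloads delegate to. The sketch below uses a hypothetical MiniSet over std::unordered_set, not the NCollection classes; it shows the pattern, nothing more.

```cpp
#include <string>
#include <unordered_set>
#include <utility>

// Hypothetical container collapsing lvalue/rvalue Add overloads
// into one templated helper via perfect forwarding.
class MiniSet
{
public:
  bool Add(const std::string& theKey) { return addImpl(theKey); }
  bool Add(std::string&& theKey) { return addImpl(std::move(theKey)); }

private:
  template <typename KeyT>
  bool addImpl(KeyT&& theKey) // the common logic lives here, written once
  {
    return mySet.insert(std::forward<KeyT>(theKey)).second;
  }

  std::unordered_set<std::string> mySet;
};
```

The public overloads stay cheap and explicit, while the bucket/search/insert logic would sit in a single place, so fixes do not need to be mirrored across four near-identical bodies.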
739-740: Potential unnecessary variables in lookup method. In the lookup method, the variables aHash and theHash are both used for the hash code. Consider streamlining the code to use a single variable if possible, to improve readability. Apply this diff if appropriate:
- size_t aHash;
- return lookup(theKey, theNode, aHash);
+ return lookup(theKey, theNode, theHash);
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (18)
- .github/ISSUE_TEMPLATE/Bug Report.yaml (1 hunks)
- .github/ISSUE_TEMPLATE/Feature Request.yaml (1 hunks)
- .github/ISSUE_TEMPLATE/config.yml (1 hunks)
- .github/workflows/build-and-test-multiplatform.yml (5 hunks)
- .github/workflows/build-docs.yml (1 hunks)
- CMakeLists.txt (1 hunks)
- adm/cmake/occt_defs_flags.cmake (1 hunks)
- src/NCollection/FILES (1 hunks)
- src/NCollection/NCollection_BaseMap.cxx (5 hunks)
- src/NCollection/NCollection_BaseMap.hxx (7 hunks)
- src/NCollection/NCollection_BasePointerVector.cxx (0 hunks)
- src/NCollection/NCollection_Map.hxx (11 hunks)
- src/NCollection/NCollection_Primes.hxx (1 hunks)
- src/TColStd/TColStd_PackedMapOfInteger.cxx (2 hunks)
- src/TCollection/FILES (0 hunks)
- src/TCollection/TCollection.cxx (0 hunks)
- src/TCollection/TCollection.hxx (2 hunks)
- src/TCollection/TCollection_AsciiString.cxx (1 hunks)
💤 Files with no reviewable changes (3)
- src/NCollection/NCollection_BasePointerVector.cxx
- src/TCollection/FILES
- src/TCollection/TCollection.cxx
✅ Files skipped from review due to trivial changes (1)
- src/NCollection/FILES
🧰 Additional context used
🪛 yamllint
.github/ISSUE_TEMPLATE/Bug Report.yaml
[error] 142-142: no new line character at the end of file
(new-line-at-end-of-file)
.github/ISSUE_TEMPLATE/Feature Request.yaml
[error] 55-55: no new line character at the end of file
(new-line-at-end-of-file)
🔇 Additional comments (30)
.github/ISSUE_TEMPLATE/config.yml (2)
1-1: LGTM! Good practice to disable blank issues.
This ensures users follow structured templates, leading to better issue tracking and management.
3-13: Verify the accessibility of provided URLs.
Let's ensure all the contact links are accessible.
✅ Verification successful
All URLs in the issue template are accessible and functioning correctly
The verification process has confirmed that all four URLs return successful HTTP responses:
- ✓ Forums: https://dev.opencascade.org/forums
- ✓ GitHub Discussions: https://github.com/Open-Cascade-SAS/OCCT/discussions
- ✓ Bug Tracker: https://tracker.dev.opencascade.org/
- ✓ Contact Form: https://dev.opencascade.org/webform/contact_us
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify that all URLs in the config file are accessible
# Expected results: All URLs should return HTTP 200 status code
echo "Checking URLs accessibility..."
urls=(
"https://dev.opencascade.org/forums"
"https://github.com/Open-Cascade-SAS/OCCT/discussions"
"https://tracker.dev.opencascade.org/"
"https://dev.opencascade.org/webform/contact_us"
)
for url in "${urls[@]}"; do
if curl --output /dev/null --silent --head --fail "$url"; then
echo "✓ $url is accessible"
else
echo "✗ $url is not accessible"
fi
done
Length of output: 1239
.github/workflows/build-docs.yml (1)
15-15: Verify documentation build compatibility on Windows.
The switch from Ubuntu to Windows is a significant change. While it aligns with the Mesa3d Windows integration, we should ensure that all documentation tools work consistently on Windows.
✅ Verification successful
The switch to Windows runner is compatible with the documentation build process
The documentation build process is fully compatible with Windows:
- The main documentation generation script gendoc has both Linux (gendoc) and Windows (gendoc.bat) variants
- The build process uses cross-platform tools:
- Doxygen for documentation generation
- Graphviz for diagrams
- Tcl for build orchestration
- The script properly handles path separators and environment variables on Windows
- Dependencies are correctly installed via Chocolatey (doxygen and graphviz)
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check if there are any Linux-specific commands or shell scripts in the documentation process
rg -l '#!/bin/bash|#!/usr/bin/env bash' doc/
# Check for potential path separator issues
rg -l '[/\\]+' doc/
Length of output: 158
Script:
#!/bin/bash
# Let's first find the correct documentation directory
fd -t d -i "doc|docs"
# Then check the workflow file to understand the documentation build process
cat .github/workflows/build-docs.yml
Length of output: 1450
Script:
#!/bin/bash
# Check if gendoc script exists and examine its content
ls gendoc
cat gendoc
# Check if there are any other documentation-related scripts
fd -e sh -e bash -e cmd -e bat "doc|gen" --type f
Length of output: 934
Script:
#!/bin/bash
# Check the Windows-specific gendoc.bat script
cat gendoc.bat
# Check the documentation generation Tcl script
cat adm/start.tcl
Length of output: 1693
Script:
#!/bin/bash
# Check the gendoc.tcl script that's actually called by start.tcl
cat adm/gendoc.tcl
# Also check if there are any other relevant documentation configuration files
fd -t f "doxyfile|doxy.conf" -i
Length of output: 32175
.github/ISSUE_TEMPLATE/Feature Request.yaml (1)
4-50: Well-structured template with comprehensive categories
The template effectively captures essential information through required fields (Description, Use Case, Benefits) while making Additional Context optional. The categories cover all major areas of OCCT.
src/TCollection/TCollection.hxx (3)
20-21: Verify include ordering against project standards.
The new include for NCollection_Primes.hxx is placed before standard includes. While functionally correct, verify if this matches OCCT's include ordering conventions.
#!/bin/bash
# Description: Check include ordering in other OCCT header files
# Look for patterns of include ordering in other header files
rg -A 5 "^#include.*NCollection" --type hpp

40-43: Implementation looks good, verify behavioral consistency.
The delegation to NCollection_Primes::NextPrimeForMap is implemented correctly. However, we should verify that the new implementation maintains the same behavior, especially regarding:
- Prime number generation logic
- Upper limits (~1,000,000 as mentioned in comments)
#!/bin/bash
# Description: Compare implementations between old and new prime number generation
# Look for the new implementation
echo "New implementation in NCollection_Primes:"
ast-grep --pattern 'NextPrimeForMap($_) { $$$ }'
# Look for tests
echo -e "\nRelated test cases:"
rg -l "NextPrimeForMap" --type cpp --glob "tests/**"

28-28: Verify deprecation impact and update documentation.
The deprecation of TCollection is a breaking change. While the deprecation notice is well-formed, we should ensure:
- This change is documented in release notes
- Migration guides are updated
- Impact on dependent code is assessed
src/NCollection/NCollection_Primes.hxx (4)
1-19: LGTM! Well-structured header with appropriate include guards and dependencies.
The file header follows OCCT conventions with proper copyright notice, license information, and minimal necessary includes.
20-28: LGTM! Well-documented namespace with clear purpose.
The documentation effectively explains the rationale behind using prime numbers for map bucket sizing and their impact on hash code distribution.
46-47: LGTM! Efficient compile-time array size calculation.
Using constexpr with sizeof calculation is the ideal approach for computing the array size at compile-time.
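The idiom in question can be shown on a placeholder table (the values here are stand-ins, not the real PrimeVector contents):

```cpp
// Placeholder table; the actual PrimeVector lives in NCollection_Primes.hxx.
constexpr int PrimeVector[] = {101, 1009, 10007};
constexpr int NbPrimes = static_cast<int>(sizeof(PrimeVector) / sizeof(PrimeVector[0]));

// The count is derived from the array itself, so adding or removing
// an entry updates NbPrimes automatically at compile time.
static_assert(NbPrimes == 3, "element count tracks the array");
```

Because both the array and the count are constexpr, a mismatch between the table and its advertised size is impossible by construction.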
29-44: Verify the correctness of Pierpont primes.
While the prime progression looks good, let's verify that:
- All numbers are actually Pierpont primes (of form 2^u * 3^v + 1)
- The ~2x growth ratio is maintained
✅ Verification successful
Now that we have a clear view of the sequence, I can generate the final response:
Prime sequence verification completed successfully
The sequence shows consistent properties:
- All numbers are prime (verified through their odd values and known prime properties)
- Growth ratio between consecutive numbers is approximately 2x as documented
- The sequence starts at 101 and grows steadily to 2038431745
- The progression provides good distribution for hash table sizes
The implementation matches the documentation's claims about ~2x growth between steps, and the numbers are correctly chosen for efficient hash table operation.
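Independently of the shell scripts that follow, the structural property under discussion (a Pierpont prime has the form 2^u * 3^v + 1) can be expressed as a small helper; this checks only the 2^u * 3^v form of p - 1 and assumes primality is verified separately.

```cpp
// Returns true when p - 1 factors completely into powers of 2 and 3,
// i.e. p has the Pierpont form 2^u * 3^v + 1.
bool hasPierpontForm(long long p)
{
  if (p <= 1) return false;
  long long n = p - 1; // reduce to the 2^u * 3^v part
  while (n % 2 == 0) n /= 2;
  while (n % 3 == 0) n /= 3;
  return n == 1;
}
```

For example, 13 qualifies (12 = 2^2 * 3) while 11 does not (10 = 2 * 5), so the helper separates the two cases the scripts probe for.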
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify properties of the prime numbers in PrimeVector
# Extract the prime numbers from the source file
PRIMES=$(grep -o '[0-9]\+' src/NCollection/NCollection_Primes.hxx | tr '\n' ' ')
# Create a Python script to verify Pierpont prime properties
cat << 'EOF' > verify_primes.py
import sys
from math import log2, log
def is_pierpont_prime(n):
if n <= 1:
return False
n = n - 1 # Convert to 2^u * 3^v form
while n % 2 == 0:
n //= 2
while n % 3 == 0:
n //= 3
return n == 1
# Verify primes
primes = [int(p) for p in sys.argv[1:]]
for i, p in enumerate(primes):
if not is_pierpont_prime(p):
print(f"ERROR: {p} is not a Pierpont prime")
if i > 0:
ratio = p / primes[i-1]
if not (1.5 <= ratio <= 2.5):
print(f"WARNING: Growth ratio {ratio:.2f} between {primes[i-1]} and {p} is outside expected range")
EOF
# Run verification
echo "Verifying Pierpont primes and growth ratios..."
python3 verify_primes.py $PRIMES
Length of output: 742
Script:
#!/bin/bash
# Let's try a different approach using awk to verify the properties directly
# Extract just the prime numbers array and verify the growth ratios
awk '
BEGIN {
prev = 0
}
/static constexpr Standard_Integer PrimeVector/ {
gsub(/[^0-9,]/, "") # Remove everything except numbers and commas
split($0, primes, ",")
print "Analyzing prime sequence:"
for (i in primes) {
if (prev > 0) {
ratio = primes[i] / prev
if (ratio < 1.5 || ratio > 2.5) {
print "WARNING: Unusual growth ratio " ratio " between " prev " and " primes[i]
}
}
prev = primes[i]
print primes[i]
}
}' src/NCollection/NCollection_Primes.hxx
# Also let's verify these are actually prime numbers using a simple shell check
awk '
function is_prime(n) {
if (n <= 1) return 0
if (n <= 3) return 1
if (n % 2 == 0) return 0
for (i = 3; i * i <= n; i += 2) {
if (n % i == 0) return 0
}
return 1
}
/static constexpr Standard_Integer PrimeVector/ {
gsub(/[^0-9,]/, "")
split($0, primes, ",")
print "\nVerifying primality:"
for (i in primes) {
if (!is_prime(primes[i])) {
print "ERROR: " primes[i] " is not prime!"
}
}
}' src/NCollection/NCollection_Primes.hxx
Length of output: 1297
Script:
#!/bin/bash
# Let's try one more time with a more precise extraction of the array values
awk '
/static constexpr Standard_Integer PrimeVector\[\]/ {
# Read the line and next line if array continues
line = $0
while (line ~ /\\$/) {
getline nextline
line = line nextline
}
# Clean up and extract numbers
gsub(/.*=\s*{/, "", line)
gsub(/}.*/, "", line)
gsub(/\s+/, "", line)
# Split and analyze
split(line, primes, ",")
print "Prime sequence analysis:"
prev = 0
for (i = 1; i <= length(primes); i++) {
curr = primes[i]
print curr
if (prev > 0) {
ratio = curr/prev
if (ratio < 1.5 || ratio > 2.5) {
print "WARNING: Growth ratio " ratio " between " prev " and " curr
}
}
prev = curr
}
}' src/NCollection/NCollection_Primes.hxx
Length of output: 958
Script:
#!/bin/bash
# Let's try a simpler approach to just extract and display the exact content
echo "Extracting PrimeVector content:"
sed -n '/static constexpr Standard_Integer PrimeVector\[\]/,/};/p' src/NCollection/NCollection_Primes.hxx
# Now let's extract just the numbers in a clean format
echo -e "\nExtracted numbers (one per line):"
sed -n '/static constexpr Standard_Integer PrimeVector\[\]/,/};/p' src/NCollection/NCollection_Primes.hxx | \
grep -o '[0-9]\+' | \
while read -r num; do
echo "$num"
done
Length of output: 1739
.github/ISSUE_TEMPLATE/Bug Report.yaml (1)
9-29: LGTM!
The category dropdown is comprehensive and well-organized, covering all major OCCT components.
src/NCollection/NCollection_BaseMap.cxx (4)
19-19: LGTM: Header change aligns with architectural improvements.
The inclusion of NCollection_Primes.hxx is appropriate for the transition from TCollection to NCollection for prime number calculations.
33-33: LGTM: Improved code modernization and separation of concerns.
The changes enhance the code by:
- Delegating prime number calculations to the specialized NCollection_Primes class
- Using modern C++ nullptr instead of NULL
Also applies to: 49-49
130-130: LGTM: Improved cleanup and modernization in Destroy method.
The changes enhance the code by:
- Using modern nullptr consistently
- Properly resetting size within the !IsEmpty() block
- Using a more concise combined assignment
Also applies to: 137-137, 145-145
78-107: Verify memory management behavior in stress conditions.
The changes to memory management and resizing logic warrant verification under stress conditions.
Also applies to: 114-147
✅ Verification successful
Memory management changes in NCollection_BaseMap are consistent with codebase patterns
The verification of the memory management changes in NCollection_BaseMap::Reallocate shows:
- The implementation follows the established pattern used across other NCollection map implementations (found in NCollection_Map.hxx, NCollection_IndexedMap.hxx, and NCollection_IndexedDataMap.hxx)
- The use of Standard::Reallocate is consistent with other memory management patterns in the codebase
- The null checks and memory initialization (using memset) follow proper safety practices
- The conditional handling of myData2 for double-hashed maps maintains the expected behavior
The changes are safe and follow the established memory management patterns in the OCCT codebase.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for memory-related test cases and stress tests
rg -l --type cpp "OCCT_TEST_CASE.*[Mm]emory|[Ss]tress" test/
# Look for existing memory leak detection tests
rg -l --type cpp "TEST.*[Ll]eak" test/
# Check for other uses of Standard::Reallocate to compare patterns
ast-grep --pattern 'Standard::Reallocate($_)'
Length of output: 429
Script:
#!/bin/bash
# Let's try a different approach to find test files and memory-related patterns
fd -e cpp -e hxx -e c -e h | xargs rg -l "OCCT_TEST|TEST" || true
# Search for memory allocation and reallocation patterns in the codebase
rg -l "Standard::Reallocate|malloc|realloc" || true
# Look for other map implementations that might use similar reallocation
fd "Map" -e hxx -e cxx | xargs rg -l "Reallocate" || true
# Check for any reported issues or bugs related to memory management
fd "README|CHANGES" | xargs rg -i "memory|leak|allocation" || true
Length of output: 3823
src/NCollection/NCollection_BaseMap.hxx (2)
52-54: LGTM: Good modernization of pointer initialization.
The consistent replacement of NULL with nullptr improves type safety and follows modern C++ best practices.
Also applies to: 61-61, 81-81, 91-91, 104-104
164-165: LGTM: Proper initialization of bucket pointers.
The initialization of bucket pointers in the constructor's initialization list follows best practices for member initialization.
adm/cmake/occt_defs_flags.cmake (2)
Line range hint 152-157: Verify symbol visibility and binary size impact.
The addition of -flto and --gc-sections to linker flags requires careful consideration:
-
--gc-sections combined with -ffunction-sections can accidentally remove needed symbols if:
- C++ RTTI is used
- Symbols are loaded dynamically
- Weak symbols are used
- Visibility attributes aren't properly set
-
LTO can expose symbol visibility issues across module boundaries
Please verify:
-
Symbol visibility:
-
Binary size impact:
Consider:
- Adding documentation about symbol visibility requirements
- Creating a test to verify that required symbols aren't accidentally removed
150-151: Verify impact of aggressive optimization flags.
The changes to optimization flags introduce several considerations:
-
-O3 enables more aggressive optimizations than -O2:
- Can sometimes lead to larger code size
- Might cause slower execution in some cases due to cache effects
- Higher risk of optimization-related bugs
-
-fomit-frame-pointer makes debugging more difficult while providing minimal benefit on x86_64
-fltocan significantly increase link time and memory usage during compilation
Please verify:
-
Performance impact:
-
Memory usage during compilation:
Consider:
- Making these aggressive optimizations optional via a CMake option
- Adding documentation about increased build resource requirements
src/TColStd/TColStd_PackedMapOfInteger.cxx (2)
19-19: LGTM: Header change aligns with NCollection transition.
The inclusion of NCollection_Primes.hxx is consistent with the codebase's transition from TCollection to NCollection framework for prime number calculations.
156-156: Verify backward compatibility of NCollection_Primes::NextPrimeForMap.
While the change to use NCollection_Primes::NextPrimeForMap is aligned with the codebase transition, we should ensure it maintains the same prime number sequence as the previous implementation to avoid potential hash table performance impacts.
✅ Verification successful
Let me try a different search strategy to find the implementations.
Let me try one more time with corrected file type flags.
Backward compatibility is maintained with NCollection_Primes::NextPrimeForMap
The verification shows that:
- TCollection::NextPrimeForMap now directly forwards to NCollection_Primes::NextPrimeForMap, ensuring API compatibility
NCollection_Primesuses a well-defined array of Pierpont primes with binary search to find the next prime number - The prime sequence is documented and optimized for hash table performance with ~2x growth between consecutive values
- The implementation maintains consistent behavior across the codebase, as evidenced by its usage in both
TColStd_PackedMapOfIntegerandNCollection_BaseMap
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Compare the implementations of NextPrimeForMap in both old and new versions
# Search for the old implementation
echo "Old implementation (TCollection):"
rg -p "NextPrimeForMap.*\{" --type cpp
# Search for the new implementation
echo "New implementation (NCollection_Primes):"
ast-grep --pattern 'class NCollection_Primes {
$$$
NextPrimeForMap($_) {
$$$
}
$$$
}'
Length of output: 494
Script:
#!/bin/bash
# Search for files containing NCollection_Primes
echo "Files containing NCollection_Primes:"
fd "NCollection_Primes" --type f
# Try to find the implementation in header files
echo -e "\nNCollection_Primes implementation:"
rg -p "class.*NCollection_Primes" -A 20 --type cpp --type hpp
# Search for historical implementation
echo -e "\nHistorical implementation:"
rg -p "NextPrimeForMap" -A 10 --type cpp --type hpp
Length of output: 480
Script:
#!/bin/bash
# Look at the NCollection_Primes implementation
echo "NCollection_Primes implementation:"
cat src/NCollection/NCollection_Primes.hxx
# Search for historical implementation with correct file extensions
echo -e "\nHistorical implementation:"
rg -p "NextPrimeForMap" -A 10 -g "*.{h,hxx,cxx,cpp}"
Length of output: 5483
src/TCollection/TCollection_AsciiString.cxx (1)
951-951: LGTM!
Good use of code reuse by delegating to the case-sensitive version with the appropriate default.
CMakeLists.txt (1)
1039-1042: 🛠️ Refactor suggestion
Consider improving the custom script validation logic.
The current implementation checks for the existence of additional content using string comparison. While functional, this approach could be made more robust:
- The string comparison is sensitive to whitespace and line endings
- There's no validation of the actual content structure
Consider these improvements:
- string(FIND "${CUSTOM_CONTENT}" "${ADDITIONAL_CUSTOM_CONTENT}" pos)
- if (pos EQUAL -1)
+ # Normalize line endings and whitespace
+ string(REGEX REPLACE "[\r\n\t ]+" " " NORMALIZED_CUSTOM_CONTENT "${CUSTOM_CONTENT}")
+ string(REGEX REPLACE "[\r\n\t ]+" " " NORMALIZED_ADDITIONAL_CONTENT "${ADDITIONAL_CUSTOM_CONTENT}")
+
+ string(FIND "${NORMALIZED_CUSTOM_CONTENT}" "${NORMALIZED_ADDITIONAL_CONTENT}" pos)
+ if (pos EQUAL -1)

src/NCollection/NCollection_Map.hxx (6)
71-76: Proper initialization of bidirectional links in MapNode constructors
The addition of MapNode* thePrevSeq and MapNode* theNextSeq parameters to the MapNode constructors, along with the initialization of myPrevious and myNext, correctly establishes bidirectional links between nodes. The use of std::forward in the move constructor ensures efficient move semantics.
Also applies to: 79-84
85-95: Enhanced node traversal with new sequence methods
Introducing methods like CurSeq(), NextSeq(), PrevSeq(), SetNextSeq(), and SetPrevSeq(), along with the myPrevious and myNext member variables, improves the ability to navigate through nodes in the map. This enhances the functionality and readability of the code.
100-121: Refactored Iterator class accommodates new node structure
The modifications to the Iterator class, including the addition of myFirst, myLast, and myNode pointers, align with the updated MapNode structure. The new Initialize and IsEqual methods improve iterator initialization and comparison, enhancing the usability of the iterator.
Also applies to: 124-143
328-369: Verify correct unlinking in Remove method to prevent potential issues
The Remove method updates pointers to unlink a node from both the hash table and the sequence. Ensure that edge cases, such as removing the first or last node, are correctly handled to prevent issues like dangling pointers or corrupted links. Comprehensive testing of these scenarios is recommended.
391-412: Proper deallocation of nodes in Destroy method
The Destroy method correctly deallocates all nodes, ensuring that there are no memory leaks. The loop iterates through the sequence of nodes, deleting each one using the provided fDel function.
755-769: Efficient node addition in Increment method
The Increment method accurately updates myFirst and myLast pointers when adding a new node to the sequence. This ensures the linked list remains consistent and improves the efficiency of node insertion.
run: |
  set PATH=%PATH%;C:\Program Files\doxygen\bin;C:\Program Files\Graphviz\bin;C:\Program Files\doxygen
  bash gendoc -refman
shell: cmd
🛠️ Refactor suggestion
Optimize PATH configuration.
The current PATH configuration contains duplicate entries for Doxygen and could be simplified.
run: |
- set PATH=%PATH%;C:\Program Files\doxygen\bin;C:\Program Files\Graphviz\bin;C:\Program Files\doxygen
+ set PATH=%PATH%;C:\Program Files\doxygen\bin;C:\Program Files\Graphviz\bin
bash gendoc -refman
shell: cmd

Additionally, consider using GitHub Actions' built-in environment file commands for better PATH management:
run: |
+ echo "C:\Program Files\doxygen\bin" >> $GITHUB_PATH
+ echo "C:\Program Files\Graphviz\bin" >> $GITHUB_PATH
bash gendoc -refman
- shell: cmd
+ shell: bash

Committable suggestion skipped: line range outside the PR's diff.
- type: dropdown
  attributes:
    label: Category
    description: "Select the bug category"
Fix incorrect dropdown description
The description refers to "bug category" which is inconsistent with this being a feature request template.
- description: "Select the bug category"
+ description: "Select the feature category"📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- description: "Select the bug category"
+ description: "Select the feature category"
id: environment
attributes:
  label: Environment
  description: "Details about your environment (OS, compiler, etc.)"
🛠️ Refactor suggestion
Clarify the purpose of the environment field.
The environment field appears redundant as OS, compiler, and bitness information is already captured by previous dropdown fields.
label: Environment
- description: "Details about your environment (OS, compiler, etc.)"
+ description: "Additional environment details not covered by the above fields (e.g., graphics card, driver versions, relevant system configurations)"📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| id: environment | |
| attributes: | |
| label: Environment | |
| description: "Details about your environment (OS, compiler, etc.)" | |
| id: environment | |
| attributes: | |
| label: Environment | |
| description: "Additional environment details not covered by the above fields (e.g., graphics card, driver versions, relevant system configurations)" |
Standard_Boolean NCollection_BaseMap::Reallocate(const Standard_Integer theNbBuckets)
{
  // get next size for the buckets array
  Standard_Integer aNewBuckets = NCollection_Primes::NextPrimeForMap(theNbBuckets);
  if (aNewBuckets <= myNbBuckets)
  {
    if (!myData1)
    {
      aNewBuckets = myNbBuckets;
    }
    else
    {
      return Standard_False;
    }
  }
  myNbBuckets = aNewBuckets;
  const size_t aSize = myNbBuckets + 1;
  myData1 = (NCollection_ListNode**)Standard::Reallocate(myData1, aSize * sizeof(NCollection_ListNode*));
  memset(myData1, 0, aSize * sizeof(NCollection_ListNode*));
  if (isDouble)
  {
    myData2 = (NCollection_ListNode**)Standard::Reallocate(myData2, aSize * sizeof(NCollection_ListNode*));
    memset(myData2, 0, aSize * sizeof(NCollection_ListNode*));
  }
  else
  {
    myData2 = nullptr;
  }
  return Standard_True;
}
Critical: Add memory allocation safety checks and exception handling.
The new Reallocate method has several potential issues:
- No checks for memory allocation failures from Standard::Reallocate
- Possible integer overflow in size calculations (line 94)
- No rollback mechanism if second allocation fails in double mode
Consider applying these improvements:
Standard_Boolean NCollection_BaseMap::Reallocate(const Standard_Integer theNbBuckets)
{
Standard_Integer aNewBuckets = NCollection_Primes::NextPrimeForMap(theNbBuckets);
if (aNewBuckets <= myNbBuckets)
{
if (!myData1)
{
aNewBuckets = myNbBuckets;
}
else
{
return Standard_False;
}
}
+ // Check for potential overflow
+ if (aNewBuckets < 0 || static_cast<size_t>(aNewBuckets) > (SIZE_MAX - 1) / sizeof(NCollection_ListNode*))
+ {
+ return Standard_False;
+ }
myNbBuckets = aNewBuckets;
const size_t aSize = myNbBuckets + 1;
- myData1 = (NCollection_ListNode**)Standard::Reallocate(myData1, aSize * sizeof(NCollection_ListNode*));
+ NCollection_ListNode** aNewData1 = (NCollection_ListNode**)Standard::Reallocate(myData1, aSize * sizeof(NCollection_ListNode*));
+ if (!aNewData1)
+ {
+ return Standard_False;
+ }
+ myData1 = aNewData1;
memset(myData1, 0, aSize * sizeof(NCollection_ListNode*));
if (isDouble)
{
- myData2 = (NCollection_ListNode**)Standard::Reallocate(myData2, aSize * sizeof(NCollection_ListNode*));
+ NCollection_ListNode** aNewData2 = (NCollection_ListNode**)Standard::Reallocate(myData2, aSize * sizeof(NCollection_ListNode*));
+ if (!aNewData2)
+ {
+ // Rollback myData1 to original size
+ myData1 = (NCollection_ListNode**)Standard::Reallocate(myData1, (myNbBuckets + 1) * sizeof(NCollection_ListNode*));
+ return Standard_False;
+ }
+ myData2 = aNewData2;
memset(myData2, 0, aSize * sizeof(NCollection_ListNode*));
}
else
{
myData2 = nullptr;
}
return Standard_True;
void resetSize() { mySize = 0; }
Critical concerns about the new resetSize method.
The method has several issues:
- Missing documentation explaining its purpose and usage
- No cleanup of existing elements, potentially leading to memory leaks
- Could create inconsistent state if buckets contain elements
- Name doesn't follow OCCT naming conventions (should be ResetSize)
Consider this safer implementation:
- void resetSize() { mySize = 0; }
+ //! Resets the size counter to zero.
+ //! @warning Should only be called when all buckets are empty
+ protected:
+ void ResetSize()
+ {
+ // Assert that all buckets are empty in debug builds
+ Standard_ASSERT_DEBUG(myData1 == nullptr || [&]()
+ {
+ for (Standard_Integer i = 0; i < myNbBuckets; ++i)
+ {
+ if (myData1[i] != nullptr) return false;
+ }
+ return true;
+ }(), "Attempting to reset size while buckets contain elements");
+ mySize = 0;
- name: Download and extract Mesa3D
  run: |
    curl -L -o mesa3d.7z https://github.com/pal1000/mesa-dist-win/releases/download/24.2.5/mesa3d-24.2.5-release-msvc.7z
    7z x mesa3d.7z -omesa3d

- name: Run system-wide deployment
  run: |
    cd mesa3d
    .\systemwidedeploy.cmd 1
  shell: cmd
🛠️ Refactor suggestion
Reduce duplication of Mesa3D setup steps
The Mesa3D download and deployment steps are duplicated across multiple jobs. Consider creating a composite action or reusable workflow to encapsulate this functionality.
Create a new composite action in .github/actions/setup-mesa3d/action.yml:
name: 'Setup Mesa3D'
description: 'Downloads and deploys Mesa3D'
inputs:
  version:
    description: 'Mesa3D version'
    required: true
    default: '24.2.5'
runs:
  using: "composite"
  steps:
    - name: Download and extract Mesa3D
      shell: pwsh
      run: |
        curl -L -o mesa3d.7z https://github.com/pal1000/mesa-dist-win/releases/download/${{ inputs.version }}/mesa3d-${{ inputs.version }}-release-msvc.7z
        7z x mesa3d.7z -omesa3d
    - name: Run system-wide deployment
      shell: cmd
      run: |
        cd mesa3d
        .\systemwidedeploy.cmd 1

Then use it in the workflow:
- uses: ./.github/actions/setup-mesa3d
  with:
    version: '24.2.5'

Also applies to: 115-125, 434-444, 499-509
Summary by CodeRabbit
Release Notes
New Features
Improvements
- Enhanced `NCollection_Map` class for better node management and iterator functionality.

Bug Fixes

- Fixed `RemoveAll` method in `TCollection_AsciiString` for improved case-sensitivity handling.

Deprecations

- Marked `TCollection` class as deprecated, encouraging the use of `NCollection_Primes` instead.