Vtk #944

Draft: wants to merge 4 commits into base: master

Conversation

@nicolasaunai (Member) commented Jan 27, 2025

Just a quick and incomplete script (it only handles E and B) to convert our data to VTKHDF and open it with ParaView.
It is here mainly as a way to learn ParaView and to decide whether this is a viable route for future diag formats.

coderabbitai bot commented Jan 27, 2025

📝 Walkthrough

The pull request modifies the pyphare/pyphare/pharesee/tovtk.py file by introducing a new function ndim_from(npx, npy, npz) to determine the dimensionality of the grid based on input parameters. Existing functions BtoFlatPrimal, EtoFlatPrimal, and primalScalarToFlatPrimal are updated to utilize this new function, allowing them to conditionally handle both 2D and 3D cases for processing magnetic and electric field data. Additionally, a comment in the nbrNodes function clarifies the calculation of the number of nodes.
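The walkthrough names the helper but does not show it; a plausible sketch of what ndim_from does is below (the signature matches the PR, but the body is an assumption, not the actual implementation in tovtk.py):

```python
def ndim_from(npx, npy, npz):
    # Infer grid dimensionality from the number of points per direction:
    # a direction collapsed to a single node does not count as a dimension.
    # (Assumed logic; the exact thresholds in the real tovtk.py may differ.)
    if npz > 1:
        return 3
    if npy > 1:
        return 2
    return 1
```

With this reading, a 2D run whose z direction is collapsed to one node reports 2.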

Changes

pyphare/pyphare/pharesee/tovtk.py
  • Added function: ndim_from(npx, npy, npz).
  • Modified functions:
    • BtoFlatPrimal: adjusted to handle 2D and 3D averaging for magnetic fields.
    • EtoFlatPrimal: adjusted to handle 2D and 3D processing for electric fields.
    • primalScalarToFlatPrimal: updated to manage scalar data for both dimensionalities.
  • Updated a comment in the nbrNodes function for clarity.

Sequence Diagram

sequenceDiagram
    participant Input as HDF5 Input File
    participant Converter as tovtk.py
    participant Output as VTKHDF Output File
    
    Input->>Converter: Read HDF5 File
    Converter->>Converter: Determine Dimensionality
    alt 2D Case
        Converter->>Converter: Process Magnetic Fields (2D)
        Converter->>Converter: Process Electric Fields (2D)
    else 3D Case
        Converter->>Converter: Process Magnetic Fields (3D)
        Converter->>Converter: Process Electric Fields (3D)
    end
    Converter->>Output: Write Converted Data

AMRBoxOffsets = []
dataOffsets = []

cellData_g = lvl.create_group("CellData")

Check notice (Code scanning / CodeQL): Unused local variable. Variable cellData_g is not used.

cellData_g = lvl.create_group("CellData")
pointData_g = lvl.create_group("PointData")
fieldData_g = lvl.create_group("FieldData")

Check notice (Code scanning / CodeQL): Unused local variable. Variable fieldData_g is not used.
cellData_g = lvl.create_group("CellData")
pointData_g = lvl.create_group("PointData")
fieldData_g = lvl.create_group("FieldData")
cellDataOffset_g = steps_lvl.create_group("CellDataOffset")

Check notice (Code scanning / CodeQL): Unused local variable. Variable cellDataOffset_g is not used.
fieldData_g = lvl.create_group("FieldData")
cellDataOffset_g = steps_lvl.create_group("CellDataOffset")
pointDataOffset_g = steps_lvl.create_group("PointDataOffset")
FieldDataOffset_g = steps_lvl.create_group("FieldDataOffset")

Check notice (Code scanning / CodeQL): Unused local variable. Variable FieldDataOffset_g is not used.
coderabbitai bot left a comment
Actionable comments posted: 2

🧹 Nitpick comments (4)
pyphare/pyphare/pharesee/tovtk.py (4)

11-43: Add detailed documentation for the Yee grid to primal conversion.

The function performs complex averaging operations to convert from Yee grid to primal format. Consider adding:

  1. Docstring explaining the input parameters and return value
  2. Mathematical explanation of the averaging operations
  3. Documentation about the 2D to 3D conversion strategy

Here's a suggested docstring:

 def BtoFlatPrimal(ph_bx, ph_by, ph_bz, npx, npy, npz, gn=2):
+    """Convert magnetic field components from Yee grid to flat primal format.
+    
+    Args:
+        ph_bx, ph_by, ph_bz: Magnetic field components on Yee grid
+        npx, npy, npz: Number of points in each dimension
+        gn: Number of ghost nodes (default=2)
+    
+    Returns:
+        numpy.ndarray: Flattened magnetic field in primal format (nbrPoints, 3)
+    """

45-77: Maintain documentation consistency with BtoFlatPrimal.

The function has good inline comments but would benefit from the same level of documentation as suggested for BtoFlatPrimal.

Here's a suggested docstring:

 def EtoFlatPrimal(ph_ex, ph_ey, ph_ez, npx, npy, npz, gn=2):
+    """Convert electric field components from Yee grid to flat primal format.
+    
+    Args:
+        ph_ex, ph_ey, ph_ez: Electric field components on Yee grid
+        npx, npy, npz: Number of points in each dimension
+        gn: Number of ghost nodes (default=2)
+    
+    Returns:
+        numpy.ndarray: Flattened electric field in primal format (nbrPoints, 3)
+    """

79-83: Remove hardcoded 2D assumption.

The function hardcodes a 2D case by setting z-coordinates to 0. Consider making it dimension-agnostic for future extensibility.

 def boxFromPatch(patch):
+    """Extract bounding box from patch attributes.
+    
+    Args:
+        patch: HDF5 group containing patch data
+    
+    Returns:
+        list: [x_min, x_max, y_min, y_max, z_min, z_max]
+    """
     lower = patch.attrs["lower"]
     upper = patch.attrs["upper"]
-    return [lower[0], upper[0], lower[1], upper[1], 0, 0]  # 2D
+    # Handle both 2D and 3D cases
+    z_min = lower[2] if len(lower) > 2 else 0
+    z_max = upper[2] if len(upper) > 2 else 0
+    return [lower[0], upper[0], lower[1], upper[1], z_min, z_max]

185-188: Optimize dictionary key check.

Use not in operator directly on the dictionary instead of calling .keys().

-            if phare_lvl_name not in phare_h5["t"][time_str].keys():
+            if phare_lvl_name not in phare_h5["t"][time_str]:
                 print(f"no level {ilvl} at time {time}")
                 continue

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ce77d6b and fde2af2.

📒 Files selected for processing (1)
  • pyphare/pyphare/pharesee/tovtk.py (1 hunks)
🧰 Additional context used
🪛 Ruff (0.8.2)
pyphare/pyphare/pharesee/tovtk.py

163-163: Local variable cellData_g is assigned to but never used

Remove assignment to unused variable cellData_g

(F841)


165-165: Local variable fieldData_g is assigned to but never used

Remove assignment to unused variable fieldData_g

(F841)


166-166: Local variable cellDataOffset_g is assigned to but never used

Remove assignment to unused variable cellDataOffset_g

(F841)


168-168: Local variable FieldDataOffset_g is assigned to but never used

Remove assignment to unused variable FieldDataOffset_g

(F841)


185-185: Use `key not in dict` instead of `key not in dict.keys()`

Remove .keys()

(SIM118)

⏰ Context from checks skipped due to timeout of 90000ms (6)
  • GitHub Check: build (macos-latest)
  • GitHub Check: Analyze (cpp)
  • GitHub Check: build (macos-14)
  • GitHub Check: build (macos-13)
  • GitHub Check: Analyze (python)
  • GitHub Check: build (ubuntu-latest)
🔇 Additional comments (2)
pyphare/pyphare/pharesee/tovtk.py (2)

1-9: LGTM! Imports are appropriate.

The necessary imports for HDF5 handling, numerical operations, and system operations are present.


85-91: Document the reason for adding 1 twice.

The function adds 1 twice to each dimension when calculating the number of nodes. This needs explanation.

 def nbrNodes(box):
+    """Calculate number of nodes in each dimension.
+    
+    Args:
+        box: List containing [x_min, x_max, y_min, y_max, z_min, z_max]
+    
+    Returns:
+        tuple: (npx, npy, npz) number of nodes in each dimension
+    """
     lower = box[0], box[2], box[4]
     upper = box[1], box[3], box[5]
+    # Adding 1 for converting from cell count to node count
+    # and another 1 for including the end point
     npx = upper[0] - lower[0] + 1 + 1
     npy = upper[1] - lower[1] + 1 + 1
     npz = upper[2] - lower[2] + 1 + 1
✅ Verification successful

Documentation accurately explains the double +1 addition

The double addition of 1 is correct and well-documented. The first +1 converts from cell count to node count (as there's always one more node than cells), and the second +1 ensures the endpoint is included, which is required for VTK format compatibility.
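That reasoning can be checked with a minimal reimplementation of the node count (mirroring the nbrNodes code shown above; the box values below are made up for illustration):

```python
def nbr_nodes(box):
    # box = [x_min, x_max, y_min, y_max, z_min, z_max] in cell indices
    lower = box[0], box[2], box[4]
    upper = box[1], box[3], box[5]
    # +1 turns the inclusive index range into a cell count,
    # +1 more converts cells to nodes (n cells have n + 1 nodes)
    return tuple(u - l + 1 + 1 for l, u in zip(lower, upper))

print(nbr_nodes([0, 9, 0, 4, 0, 0]))  # (11, 6, 2)
```

Note that a 2D box with z bounds (0, 0) yields npz = 2 under this rule.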


coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (5)
pyphare/pyphare/pharesee/tovtk.py (5)

5-8: Add type hints and docstrings to improve code maintainability.

Consider adding type hints and docstrings to all functions to improve code maintainability and IDE support. This will help other developers understand the expected input/output types and function purposes.

Example for the BtoFlatPrimal function:

-def BtoFlatPrimal(ph_bx, ph_by, ph_bz, npx, npy, npz, gn=2):
+from numpy import ndarray
+
+def BtoFlatPrimal(
+    ph_bx: ndarray,
+    ph_by: ndarray,
+    ph_bz: ndarray,
+    npx: int,
+    npy: int,
+    npz: int,
+    gn: int = 2
+) -> ndarray:
+    """Convert magnetic field components from Yee grid to flat primal format.
+
+    Args:
+        ph_bx: X-component of magnetic field on Yee grid
+        ph_by: Y-component of magnetic field on Yee grid
+        ph_bz: Z-component of magnetic field on Yee grid
+        npx: Number of points in X direction
+        npy: Number of points in Y direction
+        npz: Number of points in Z direction
+        gn: Number of ghost nodes (default: 2)
+
+    Returns:
+        ndarray: Flattened magnetic field components in primal format
+    """

Also applies to: 11-11, 45-45, 79-79, 85-85, 94-94


11-43: Consider refactoring field conversion functions to reduce code duplication.

The BtoFlatPrimal and EtoFlatPrimal functions share similar structure. Consider extracting common logic into a base function.

Example refactor:

def _toFlatPrimal(components: dict, npx: int, npy: int, npz: int, gn: int = 2) -> ndarray:
    """Base function for converting field components to flat primal format.
    
    Args:
        components: Dictionary containing field components and their averaging rules
        npx, npy, npz: Number of points in each direction
        gn: Number of ghost nodes
    """
    nbrPoints = npx * npy * npz
    result = np.zeros((nbrPoints, 3), dtype="f")
    
    # Create pure primal arrays
    primal = {k: np.zeros((npx, npy, npz), dtype=np.float32) for k in components}
    
    # Convert each component using its averaging rule
    for k, (data, rule) in components.items():
        primal[k][:, :, 0] = rule(data, gn)
        primal[k][:, :, 1] = primal[k][:, :, 0]  # Copy to z-dimension
        
    # Flatten to output format
    for i, k in enumerate(components):
        result[:, i] = primal[k].flatten(order="F")
        
    return result

Also applies to: 45-77


21-32: Add detailed comments explaining the Yee grid to primal conversion process.

The ghost node handling and averaging process in the dual direction needs better documentation. Also, document that this implementation assumes 2D data.

Add comments like:

# In Yee grid, B-field components are staggered:
# Bx is defined at (i, j+1/2, k+1/2)
# By is defined at (i+1/2, j, k+1/2)
# Bz is defined at (i+1/2, j+1/2, k)
# We average in the dual direction to get values at the primal grid points

Also applies to: 55-66
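To make the staggering concrete, here is a minimal 2D sketch of dual-to-primal averaging for one component (a simplified illustration, not the actual BtoFlatPrimal code; ghost-node handling is omitted):

```python
import numpy as np

def by_to_primal_x(by):
    # By lives at (i+1/2, j): dual in x. Averaging neighboring dual
    # values along x yields values at the primal x nodes.
    return 0.5 * (by[:-1, :] + by[1:, :])

by = np.arange(12, dtype=np.float32).reshape(4, 3)
print(by_to_primal_x(by).shape)  # (3, 3)
```

Averaging along a dual direction shortens that axis by one, which is why the primal arrays are sized from the node counts rather than the raw field shapes.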


85-91: Document the node number calculation logic.

The function adds 1 twice to each dimension. Add comments explaining why this is necessary (e.g., if it's related to cell-centered vs node-centered data).

 def nbrNodes(box):
+    """Calculate number of nodes in each dimension.
+    
+    The +1 is added twice because:
+    1. Convert from cell count to node count (+1)
+    2. [Explain the second +1 here]
+    """

111-113: Use os.path.join for path construction.

Replace string concatenation with os.path.join for more robust path handling across different operating systems.

-    vtk_fn = f"{data_directory}/{phare_fn}.vtkhdf"
+    vtk_fn = os.path.join(data_directory, f"{phare_fn}.vtkhdf")
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fde2af2 and 874c684.

📒 Files selected for processing (1)
  • pyphare/pyphare/pharesee/tovtk.py (1 hunks)
🧰 Additional context used
🪛 Ruff (0.8.2)
pyphare/pyphare/pharesee/tovtk.py

163-163: Local variable cellData_g is assigned to but never used

Remove assignment to unused variable cellData_g

(F841)


165-165: Local variable fieldData_g is assigned to but never used

Remove assignment to unused variable fieldData_g

(F841)


166-166: Local variable cellDataOffset_g is assigned to but never used

Remove assignment to unused variable cellDataOffset_g

(F841)


168-168: Local variable FieldDataOffset_g is assigned to but never used

Remove assignment to unused variable FieldDataOffset_g

(F841)


185-185: Use `key not in dict` instead of `key not in dict.keys()`

Remove .keys()

(SIM118)


223-223: Undefined name b

(F821)

🔇 Additional comments (2)
pyphare/pyphare/pharesee/tovtk.py (2)

163-168: Remove unused group variables or document their future use.

Several group variables are created but never used: cellData_g, fieldData_g, cellDataOffset_g, and FieldDataOffset_g.

🧰 Tools
🪛 Ruff (0.8.2)

163-163: Local variable cellData_g is assigned to but never used

Remove assignment to unused variable cellData_g

(F841)


165-165: Local variable fieldData_g is assigned to but never used

Remove assignment to unused variable fieldData_g

(F841)


166-166: Local variable cellDataOffset_g is assigned to but never used

Remove assignment to unused variable cellDataOffset_g

(F841)


168-168: Local variable FieldDataOffset_g is assigned to but never used

Remove assignment to unused variable FieldDataOffset_g

(F841)


94-97: Add error handling for command-line arguments.

The script assumes the input path is always provided and valid.

works for E, B, n, V
2D
coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (4)
pyphare/pyphare/pharesee/tovtk.py (4)

102-106: Consider providing 3D support instead of hardcoding 2D bounds.

Currently, boxFromPatch returns [lower[0], upper[0], lower[1], upper[1], 0, 0], artificially fixing the z-dimension to 0. If you eventually need 3D support, expand the function to handle [lower[2], upper[2]], or otherwise clarify its 2D-only intent.


155-158: Validate and handle missing command-line arguments.

Relying on sys.argv[1] may cause an IndexError if no arguments are passed. Adding a simple argument-count check and verifying file existence can prevent runtime errors.

 def main():
+    if len(sys.argv) < 2:
+        print("Usage: python tovtk.py <input_h5_file>")
+        sys.exit(1)

     path = sys.argv[1]
+    if not os.path.isfile(path):
+        print(f"Error: File '{path}' does not exist")
+        sys.exit(1)
 
     phare_h5 = h5py.File(path, "r")

231-231: Use dict membership directly instead of calling .keys().

Improve readability and conform to Python best practices by removing redundant .keys().

-            if phare_lvl_name not in phare_h5["t"][time_str].keys():
+            if phare_lvl_name not in phare_h5["t"][time_str]:


297-302: Clarify the dataset shape comment for scalar data.

This comment mentions “shape (current_size, 3)” but in the scalar path, the dataset is actually 1D with shape (current_size,). Updating the comment ensures consistency and avoids confusion.

-        # dataset already created with shape (current_size,3)
+        # dataset already created with shape (current_size,) for scalar fields
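The append pattern under discussion, creating a resizable dataset and growing it as each patch is written, can be sketched with h5py like this (names and shapes are illustrative, not the actual ones in tovtk.py):

```python
import h5py
import numpy as np

with h5py.File("example.vtkhdf", "w") as f:
    g = f.create_group("PointData")
    # scalar field: a 1D dataset with shape (current_size,)
    ds = g.create_dataset("density", shape=(0,), maxshape=(None,), dtype="f")
    for patch in (np.ones(4, dtype="f"), np.zeros(6, dtype="f")):
        current_size = ds.shape[0]
        ds.resize(current_size + patch.shape[0], axis=0)
        ds[current_size:] = patch
    print(ds.shape)  # (10,)
```

A vector field would use shape (0, 3) with maxshape (None, 3) instead, which is why the comment being corrected matters.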
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 874c684 and d9f1328.

📒 Files selected for processing (1)
  • pyphare/pyphare/pharesee/tovtk.py (1 hunks)
🧰 Additional context used
🪛 Ruff (0.8.2)
pyphare/pyphare/pharesee/tovtk.py

209-209: Local variable cellData_g is assigned to but never used

Remove assignment to unused variable cellData_g

(F841)


211-211: Local variable fieldData_g is assigned to but never used

Remove assignment to unused variable fieldData_g

(F841)


212-212: Local variable cellDataOffset_g is assigned to but never used

Remove assignment to unused variable cellDataOffset_g

(F841)


214-214: Local variable FieldDataOffset_g is assigned to but never used

Remove assignment to unused variable FieldDataOffset_g

(F841)


231-231: Use `key not in dict` instead of `key not in dict.keys()`

Remove .keys()

(SIM118)

🔇 Additional comments (1)
pyphare/pyphare/pharesee/tovtk.py (1)

209-214: Remove or document the unused group variables.

Variables cellData_g, fieldData_g, cellDataOffset_g, and FieldDataOffset_g are assigned but never used. They can be removed to reduce clutter or documented if you plan to use them in the future.

-        cellData_g = lvl.create_group("CellData")
         pointData_g = lvl.create_group("PointData")
-        fieldData_g = lvl.create_group("FieldData")
-        cellDataOffset_g = steps_lvl.create_group("CellDataOffset")
         pointDataOffset_g = steps_lvl.create_group("PointDataOffset")
-        FieldDataOffset_g = steps_lvl.create_group("FieldDataOffset")

coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (2)
pyphare/pyphare/pharesee/tovtk.py (2)

102-106: 🛠️ Refactor suggestion

Hardcoded 2D assumption in boxFromPatch needs to be fixed

The function artificially limits functionality by hardcoding z=0, making it incompatible with 3D data.

def boxFromPatch(patch):
    lower = patch.attrs["lower"]
    upper = patch.attrs["upper"]
-    return [lower[0], upper[0], lower[1], upper[1], 0, 0]  # 2D
+    # Handle both 2D and 3D cases
+    if len(lower) > 2:  # 3D case
+        return [lower[0], upper[0], lower[1], upper[1], lower[2], upper[2]]
+    else:  # 2D case
+        return [lower[0], upper[0], lower[1], upper[1], 0, 0]

178-178: 🛠️ Refactor suggestion

Potential uninitialized variable risk

The toFlatPrimal variable might be uninitialized if primalFlattener raises a ValueError. This was flagged in previous static analysis.

-    toFlatPrimal = primalFlattener(phare_fn)
+    try:
+        toFlatPrimal = primalFlattener(phare_fn)
+    except ValueError as e:
+        print(f"Error: {e}")
+        sys.exit(1)
🧹 Nitpick comments (5)
pyphare/pyphare/pharesee/tovtk.py (5)

79-100: Vector flattening implementation has redundant flattening operation

In primalVectorToFlatPrimal, lines 95-97 apply flatten(order="F") to arrays that are already flattened by primalScalarToFlatPrimal on lines 91-93.

-    v[:, 0] = vx.flatten(order="F")
-    v[:, 1] = vy.flatten(order="F")
-    v[:, 2] = vz.flatten(order="F")
+    v[:, 0] = vx
+    v[:, 1] = vy
+    v[:, 2] = vz
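For reference, order="F" is column-major (Fortran-order) flattening, where the first axis varies fastest; a quick standalone illustration, independent of tovtk.py:

```python
import numpy as np

a = np.array([[1, 2],
              [3, 4]])
print(a.flatten(order="F"))  # [1 3 2 4]  first axis varies fastest
print(a.flatten(order="C"))  # [1 2 3 4]  last axis varies fastest
```

Since flattening an already 1D array is a no-op copy, dropping the redundant call changes nothing except avoiding an extra copy.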

240-242: Simplify dictionary key check

The code can be simplified using Python's more idiomatic dictionary membership check.

-            if phare_lvl_name not in phare_h5["t"][time_str].keys():
+            if phare_lvl_name not in phare_h5["t"][time_str]:


275-276: Unnecessary commented code

There's a commented # pass statement that should be removed.

                        pointData.resize(current_size + data.shape[0], axis=0)
                        pointData[current_size:, :] = data
-                    # pass

298-298: Unnecessary commented code

Another commented # pass statement that should be removed.

                        pointData.resize(current_size + data.shape[0], axis=0)
                        pointData[current_size:] = data
-                    # pass

303-303: Fix comment typo

There's a minor typo in the comment.

-            # of of the patch loops at that time
+            # end of the patch loops at that time
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d9f1328 and 69c2531.

📒 Files selected for processing (1)
  • pyphare/pyphare/pharesee/tovtk.py (1 hunks)
🧰 Additional context used
🪛 Ruff (0.8.2)
pyphare/pyphare/pharesee/tovtk.py

218-218: Local variable cellData_g is assigned to but never used

Remove assignment to unused variable cellData_g

(F841)


220-220: Local variable fieldData_g is assigned to but never used

Remove assignment to unused variable fieldData_g

(F841)


221-221: Local variable cellDataOffset_g is assigned to but never used

Remove assignment to unused variable cellDataOffset_g

(F841)


223-223: Local variable FieldDataOffset_g is assigned to but never used

Remove assignment to unused variable FieldDataOffset_g

(F841)


240-240: Use `key not in dict` instead of `key not in dict.keys()`

Remove .keys()

(SIM118)

🔇 Additional comments (5)
pyphare/pyphare/pharesee/tovtk.py (5)

45-76: E field transformation uses the correct averaging technique

The function properly transforms E field components from Yee grid to primal nodes. The comment on line 65 correctly notes that Ez is already at primal nodes in 2D configuration, so no averaging is needed.


162-165: Improved command-line argument handling

The added input validation for command-line arguments is a good improvement from the previous version.


218-223: Remove unused group variables

Several HDF5 group variables are created but never used: cellData_g, fieldData_g, cellDataOffset_g, and FieldDataOffset_g. This was flagged in static analysis.

-        cellData_g = lvl.create_group("CellData")
         pointData_g = lvl.create_group("PointData")
-        fieldData_g = lvl.create_group("FieldData")
-        cellDataOffset_g = steps_lvl.create_group("CellDataOffset")
         pointDataOffset_g = steps_lvl.create_group("PointDataOffset")
-        FieldDataOffset_g = steps_lvl.create_group("FieldDataOffset")


316-317: Script entrypoint looks good

The main function invocation is properly guarded by the if __name__ == "__main__" statement, which is a best practice.


11-42:

❓ Verification inconclusive

B field transformation logic looks correct but relies on 2D assumptions

The function is well-implemented for converting Yee grid B fields to primal nodes through averaging. However, the code only truly handles 2D data despite the 3D array structures, as seen in:

  1. The slicing operations only work with 2D data
  2. Lines 34-36 simply duplicate the same z-slice values

This approach works for the immediate 2D use case but will need refactoring to handle true 3D data in the future.




Action: Confirm 2D Assumptions and Plan for 3D Support

The transformation logic in pyphare/pyphare/pharesee/tovtk.py is correctly implemented for converting Yee grid B fields to primal nodes using averaging. However, note that:

  • The slicing operations and averaging are effectively handling 2D data.
  • Lines 34–36 duplicate the same z-slice values, reinforcing the 2D-only assumption.
  • To support true 3D electromagnetic fields in the future, this implementation will require refactoring.

Please manually verify if any parts of the codebase are intended for 3D field processing and consider future changes accordingly.

return e


def primalScalarToFlatPrimal(ph_scalar, npx, npy, npz, gn=2):

Check notice (Code scanning / CodeQL): Explicit returns mixed with implicit (fall-through) returns. Mixing implicit and explicit returns may indicate an error, as implicit returns always return None.
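A tiny illustration of the pattern CodeQL is flagging (hypothetical code, not the function in tovtk.py):

```python
def lookup(scalar, use_default):
    # Explicit return on one path, implicit fall-through on the other:
    # the fall-through silently returns None.
    if use_default:
        return 0.0
    # no return here -> implicitly returns None

print(lookup(1.5, False))  # None
```

Callers that expect an array or float will fail later, far from the real cause, which is why the notice is worth addressing even in a prototype.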
coderabbitai bot left a comment

Actionable comments posted: 4

♻️ Duplicate comments (3)
pyphare/pyphare/pharesee/tovtk.py (3)

155-158: ⚠️ Potential issue

Add command-line argument validation

The script assumes the input path is always provided and valid, which could lead to runtime errors.

 def main():
 
+    if len(sys.argv) < 2:
+        print("Usage: python tovtk.py <input_h5_file>")
+        sys.exit(1)
+        
     path = sys.argv[1]
+    
+    if not os.path.exists(path):
+        print(f"Error: File {path} does not exist")
+        sys.exit(1)
+        
     phare_h5 = h5py.File(path, "r")

200-201: 🛠️ Refactor suggestion

Hardcoded dimensionality in level spacing

The code appends a 0.0 for z-dimension spacing, again showing the 2D-only assumption.

-        lvl_spacing = [dl / 2**ilvl for dl in lvl_spacing] + [0.0]
+        # Calculate spacing based on dimensionality of the data
+        if len(root_spacing) == 2:
+            lvl_spacing = [dl / 2**ilvl for dl in lvl_spacing] + [0.0]  # Add Z for 2D data
+        else:
+            lvl_spacing = [dl / 2**ilvl for dl in lvl_spacing]  # Keep as-is for 3D data

102-106: 🛠️ Refactor suggestion

Hard-coded 2D assumption in boxFromPatch

The function sets z coordinates to 0, assuming 2D data. This prevents proper 3D data handling.

 def boxFromPatch(patch):
     lower = patch.attrs["lower"]
     upper = patch.attrs["upper"]
-    return [lower[0], upper[0], lower[1], upper[1], 0, 0]  # 2D
+    # Get dimensionality from the patch
+    ndim = len(lower)
+    
+    if ndim == 2:
+        return [lower[0], upper[0], lower[1], upper[1], 0, 0]  # 2D
+    elif ndim == 3:
+        return [lower[0], upper[0], lower[1], upper[1], lower[2], upper[2]]  # 3D
+    else:
+        raise ValueError(f"Unsupported dimensionality: {ndim}")
🧹 Nitpick comments (6)
pyphare/pyphare/pharesee/tovtk.py (6)

95-97: Redundant flattening operation

The line v[:, 0] = vx.flatten(order="F") is redundant because vx is already flattened by primalScalarToFlatPrimal.

-    v[:, 0] = vx.flatten(order="F")
-    v[:, 1] = vy.flatten(order="F")
-    v[:, 2] = vz.flatten(order="F")
+    v[:, 0] = vx
+    v[:, 1] = vy
+    v[:, 2] = vz
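To see why the extra flatten is redundant: `flatten(order="F")` produces a 1-D column-major copy, and flattening a 1-D array again returns the same values. A quick check:

```python
import numpy as np

# primalScalarToFlatPrimal already returns a 1-D array flattened in
# Fortran (column-major) order, so flattening its result again is a no-op.
a = np.arange(6, dtype=np.float32).reshape(2, 3)
flat = a.flatten(order="F")  # column-major: [0, 3, 1, 4, 2, 5]
assert np.array_equal(flat, flat.flatten(order="F"))  # second flatten: no-op
```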

108-114: Add clarifying comment to nbrNodes

The +1+1 in the size calculation is confusing. Add a comment explaining why two +1s are needed.

 def nbrNodes(box):
     lower = box[0], box[2], box[4]
     upper = box[1], box[3], box[5]
-    npx = upper[0] - lower[0] + 1 + 1
-    npy = upper[1] - lower[1] + 1 + 1
-    npz = upper[2] - lower[2] + 1 + 1
+    # +1 for inclusive upper bound, +1 for cell to node conversion (n cells = n+1 nodes)
+    npx = upper[0] - lower[0] + 1 + 1
+    npy = upper[1] - lower[1] + 1 + 1
+    npz = upper[2] - lower[2] + 1 + 1
     return npx, npy, npz
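The two +1s can be sanity-checked on a small box; `nbrNodes` restated from the diff with a worked example:

```python
def nbrNodes(box):
    # box = [x0, x1, y0, y1, z0, z1] in cell indices, bounds inclusive
    lower = box[0], box[2], box[4]
    upper = box[1], box[3], box[5]
    # +1 for the inclusive upper bound, +1 because n cells have n+1 nodes
    npx = upper[0] - lower[0] + 1 + 1
    npy = upper[1] - lower[1] + 1 + 1
    npz = upper[2] - lower[2] + 1 + 1
    return npx, npy, npz


# a 10x5-cell 2D patch (cells 0..9 and 0..4, degenerate z) has 11x6 nodes
print(nbrNodes([0, 9, 0, 4, 0, 0]))  # (11, 6, 2)
```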

134-142: Optimize max_nbr_levels_in function

The function can be simplified using max() and a list comprehension for better readability.

 def max_nbr_levels_in(phare_h5):
-    max_nbr_level = 0
     times_str = list(phare_h5["t"].keys())
-    for time in times_str:
-        nbrLevels = len(phare_h5["t"][time].keys())
-        if max_nbr_level < nbrLevels:
-            max_nbr_level = nbrLevels
-    return max_nbr_level
+    return max([len(phare_h5["t"][time].keys()) for time in times_str], default=0)
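The `default=0` keyword keeps the one-liner equivalent to the original loop when no time steps exist; a quick check with a plain dict standing in for the h5py file:

```python
def max_nbr_levels_in(phare_h5):
    # A plain nested dict mimics the h5py layout here:
    # phare_h5["t"] maps time strings to per-level groups.
    times = list(phare_h5["t"].keys())
    return max((len(phare_h5["t"][t].keys()) for t in times), default=0)


fake_h5 = {"t": {"0.000000": {"pl0": {}, "pl1": {}}, "0.100000": {"pl0": {}}}}
print(max_nbr_levels_in(fake_h5))    # 2
print(max_nbr_levels_in({"t": {}}))  # 0, thanks to default=
```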

257-259: Inconsistent variable naming

The dataset is named pointData_b but seems to be used for general data storage.

                         pointData_b = pointData_g.create_dataset(
                             "data", data=data, maxshape=(None, 3)
                         )
                         
                         # elsewhere
-                        pointData_b.resize(current_size + data.shape[0], axis=0)
-                        pointData_b[current_size:, :] = data
+                        pointData.resize(current_size + data.shape[0], axis=0)
+                        pointData[current_size:, :] = data

Also applies to: 267-267


263-267: Incorrect comment in dataset resizing

The comment mentions b.shape[0] but the code uses data.shape[0].

                     else:
                         # dataset already created with shape (current_size,3)
-                        # we add b.shape[0] points (=npx*npy) to the first dim
+                        # we add data.shape[0] points (=npx*npy) to the first dim
                         # hence need to resize the dataset.
                         pointData_b.resize(current_size + data.shape[0], axis=0)
                         pointData_b[current_size:, :] = data

1-321: Overall structure improvement: extract helper functions

The main function is quite long and handles many responsibilities. Consider refactoring to improve modularity.

Extract functions for:

  1. Creating the VTKHDF file structure
  2. Processing a single patch
  3. Processing a single time step

This would make the code more maintainable and easier to test.

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 69c2531 and 7869a46.

📒 Files selected for processing (4)
  • pyphare/pyphare/pharesee/tovtk.py (1 hunks)
  • pyphare/pyphare/pharesee/tovtk.py (5 hunks)
  • pyphare/pyphare/pharesee/tovtk.py (3 hunks)
  • pyphare/pyphare/pharesee/tovtk.py (5 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • pyphare/pyphare/pharesee/tovtk.py
  • pyphare/pyphare/pharesee/tovtk.py
⏰ Context from checks skipped due to timeout of 90000ms (7)
  • GitHub Check: build (macos-latest)
  • GitHub Check: build (macos-14)
  • GitHub Check: Analyze (cpp)
  • GitHub Check: build (ubuntu-latest, clang)
  • GitHub Check: build (ubuntu-latest, gcc)
  • GitHub Check: Analyze (python)
  • GitHub Check: build (macos-13)
🔇 Additional comments (1)
pyphare/pyphare/pharesee/tovtk.py (1)

209-214: Remove unused group variables

Several group variables are created but never used: cellData_g, fieldData_g, cellDataOffset_g, and FieldDataOffset_g.

-        cellData_g = lvl.create_group("CellData")
         pointData_g = lvl.create_group("PointData")
-        fieldData_g = lvl.create_group("FieldData")
-        cellDataOffset_g = steps_lvl.create_group("CellDataOffset")
         pointDataOffset_g = steps_lvl.create_group("PointDataOffset")
-        FieldDataOffset_g = steps_lvl.create_group("FieldDataOffset")

Comment on lines +45 to +77
        by[:, :, 0] = (
            ph_by[gn - 1 : -gn + 1, gn:-gn][1:, :]
            + ph_by[gn - 1 : -gn + 1, gn:-gn][:-1, :]
        ) * 0.5

        bz[:, :, 0] = 0.5 * (
            0.5 * (ph_bz[domainP1, domain][1:, :] + ph_bz[domainP1, domain][:-1, :])
            + 0.5 * (ph_bz[domain, domainP1][:, 1:] + ph_bz[domain, domainP1][:, :-1])
        )

        bx[:, :, 1] = bx[:, :, 0]
        by[:, :, 1] = by[:, :, 0]
        bz[:, :, 1] = bz[:, :, 0]

    elif ndim_from(npx, npy, npz) == 3:
        # Bx is (primal, dual, dual)
        print("bx", ph_bx.shape)
        print(
            "bx[domain, domainP1, domain][:,1:,:]",
            ph_bx[domain, domainP1, domain][:, 1:, :].shape,
        )
        print(
            "bx[domain, domainP1, domain][:,:-1,:]",
            ph_bx[domain, domainP1, domain][:, :-1, :].shape,
        )
        bx[:, :, :] = 0.5 * (
            0.5
            * (
                ph_bx[domain, domainP1, domain][:, 1:, :]
                + ph_bx[domain, domainP1, domain][:, :-1, :]
            )
            + 0.5
            * (

🛠️ Refactor suggestion

Similar dimensionality issue in EtoFlatPrimal

Like BtoFlatPrimal, this function also handles primarily 2D data. Additionally, the code comment on line 65 explicitly states "ez already primal in 2D".

Apply a similar pattern as suggested for BtoFlatPrimal to handle both 2D and 3D cases appropriately.

Comment on lines +79 to +85
                + ph_bx[domain, domain, domainP1][:, :, :-1]
            )
        )
        # By is (dual, primal, dual)
        by[:, :, :] = 0.5 * (
            0.5
            * (

🛠️ Refactor suggestion

primalScalarToFlatPrimal uses hardcoded 2D handling

The function duplicates the 2D array for z-dimension without checking if the data is actually 3D.

 def primalScalarToFlatPrimal(ph_scalar, npx, npy, npz, gn=2):
-
+    ndim = 2 if npz <= 2 else 3
     scalar3d = np.zeros((npx, npy, npz), dtype="f")
-    scalar3d[:, :, 0] = ph_scalar[gn:-gn, gn:-gn]
-    scalar3d[:, :, 1] = ph_scalar[gn:-gn, gn:-gn]
+    if ndim == 2:
+        scalar3d[:, :, 0] = ph_scalar[gn:-gn, gn:-gn]
+        # Duplicate the 2D slice for all z-slices
+        for i in range(1, npz):
+            scalar3d[:, :, i] = scalar3d[:, :, 0]
+    else:
+        # Handle 3D case when implemented
+        raise NotImplementedError("3D support not yet implemented")
     return scalar3d.flatten(order="F")

Committable suggestion skipped: line range outside the PR's diff.
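The z-duplication loop in the suggestion can also be written with NumPy broadcasting; a sketch under the same shape assumptions (illustrative sizes, not taken from a real diagnostic):

```python
import numpy as np

npx, npy, npz, gn = 4, 3, 2, 2
ph_scalar = np.arange((npx + 2 * gn) * (npy + 2 * gn), dtype="f").reshape(
    npx + 2 * gn, npy + 2 * gn
)

scalar3d = np.zeros((npx, npy, npz), dtype="f")
# Broadcasting writes the ghost-stripped 2D slice into every z plane at
# once, equivalent to the explicit for-loop in the suggested diff.
scalar3d[:, :, :] = ph_scalar[gn:-gn, gn:-gn][:, :, np.newaxis]

assert np.array_equal(scalar3d[:, :, 0], scalar3d[:, :, 1])
```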

Comment on lines +11 to +43
def ndim_from(npx, npy, npz):
    if npx > 2 and npy > 2 and npz > 2:
        return 3
    elif npx > 2 and npy > 2 and npz == 2:
        return 2
    elif npx > 2 and npy == 2 and npz == 2:
        return 1
    else:
        raise ValueError(
            f" cannot infer dimension from (npx, npy, npz) = {npx} {npy} {npz}"
        )


def BtoFlatPrimal(ph_bx, ph_by, ph_bz, npx, npy, npz, gn=2):

    nbrPoints = npx * npy * npz
    b = np.zeros((nbrPoints, 3), dtype="f")

    # pure primal arrays
    bx = np.zeros((npx, npy, npz), dtype=np.float32)
    by = np.zeros((npx, npy, npz), dtype=np.float32)
    bz = np.zeros((npx, npy, npz), dtype=np.float32)

    domainP1 = slice(gn - 1, -gn + 1)
    domain = slice(gn, -gn)

    # converts yee to pure primal
    # we average in the dual direction so we need one extra ghost data point in that direction

    if ndim_from(npx, npy, npz) == 2:
        bx[:, :, 0] = (
            ph_bx[gn:-gn, gn - 1 : -gn + 1][:, 1:]
            + ph_bx[gn:-gn, gn - 1 : -gn + 1][:, :-1]
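`ndim_from`, as quoted above, can be exercised directly: an axis with only 2 nodes marks a degenerate (unused) direction.

```python
def ndim_from(npx, npy, npz):
    # Copied from the snippet above: an axis with exactly 2 nodes
    # is the degenerate direction of a lower-dimensional run.
    if npx > 2 and npy > 2 and npz > 2:
        return 3
    elif npx > 2 and npy > 2 and npz == 2:
        return 2
    elif npx > 2 and npy == 2 and npz == 2:
        return 1
    else:
        raise ValueError(
            f" cannot infer dimension from (npx, npy, npz) = {npx} {npy} {npz}"
        )


print(ndim_from(11, 6, 2))  # 2
print(ndim_from(11, 2, 2))  # 1
```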

🛠️ Refactor suggestion

Dimensionality handling in BtoFlatPrimal needs improvement

The function seems to handle primarily 2D data (z-dimension) by duplicating 2D slices, but the API accepts npz parameter and creates 3D arrays. Consider implementing proper 3D support for full-dimensional data handling.

 def BtoFlatPrimal(ph_bx, ph_by, ph_bz, npx, npy, npz, gn=2):
+    ndim = 2 if npz <= 2 else 3
     nbrPoints = npx * npy * npz
     b = np.zeros((nbrPoints, 3), dtype="f")
 
     # pure primal arrays
     bx = np.zeros((npx, npy, npz), dtype=np.float32)
     by = np.zeros((npx, npy, npz), dtype=np.float32)
     bz = np.zeros((npx, npy, npz), dtype=np.float32)
 
     # converts yee to pure primal
     # we average in the dual direction so we need one extra ghost data point in that direction
-    bx[:, :, 0] = (
-        ph_bx[gn:-gn, gn - 1 : -gn + 1][:, 1:] + ph_bx[gn:-gn, gn - 1 : -gn + 1][:, :-1]
-    ) * 0.5
-    by[:, :, 0] = (
-        ph_by[gn - 1 : -gn + 1, gn:-gn][1:, :] + ph_by[gn - 1 : -gn + 1, gn:-gn][:-1, :]
-    ) * 0.5
-    bz[:, :, 0] = (
-        ph_bz[gn - 1 : -gn + 1, gn - 1 : -gn + 1][1:, 1:]
-        + ph_bz[gn - 1 : -gn + 1, gn - 1 : -gn + 1][:-1, :-1]
-    ) * 0.5
-
-    bx[:, :, 1] = bx[:, :, 0]
-    by[:, :, 1] = by[:, :, 0]
-    bz[:, :, 1] = bz[:, :, 0]
+    if ndim == 2:
+        bx[:, :, 0] = (
+            ph_bx[gn:-gn, gn - 1 : -gn + 1][:, 1:] + ph_bx[gn:-gn, gn - 1 : -gn + 1][:, :-1]
+        ) * 0.5
+        by[:, :, 0] = (
+            ph_by[gn - 1 : -gn + 1, gn:-gn][1:, :] + ph_by[gn - 1 : -gn + 1, gn:-gn][:-1, :]
+        ) * 0.5
+        bz[:, :, 0] = (
+            ph_bz[gn - 1 : -gn + 1, gn - 1 : -gn + 1][1:, 1:]
+            + ph_bz[gn - 1 : -gn + 1, gn - 1 : -gn + 1][:-1, :-1]
+        ) * 0.5
+        
+        # Duplicate 2D data for pseudo-3D representation
+        for i in range(1, npz):
+            bx[:, :, i] = bx[:, :, 0]
+            by[:, :, i] = by[:, :, 0]
+            bz[:, :, i] = bz[:, :, 0]
+    else:
+        # Add 3D handling code here when needed
+        raise NotImplementedError("3D support not yet implemented")
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

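For context, the large expressions in `BtoFlatPrimal` all reduce to the same midpoint average that moves a Yee (dual-staggered) component onto primal nodes; a 1-D sketch:

```python
import numpy as np

# On the Yee mesh, components are staggered: e.g. Bx is dual
# (cell-centred) along y. Averaging two neighbouring dual values gives
# the field at the primal (node) position between them, which is all the
# big expressions in the diff do, one dual direction at a time.
dual = np.array([1.0, 3.0, 5.0, 7.0], dtype=np.float32)
primal = 0.5 * (dual[1:] + dual[:-1])
print(primal)  # [2. 4. 6.] -- n dual values yield n-1 primal values
```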

Comment on lines +250 to +281
def times_in(phare_h5):
    times_str = list(phare_h5["t"].keys())
    times = np.asarray([float(time) for time in times_str])
    times.sort()
    return times


def is_vector_data(patch):
    return len(patch.keys()) == 3


def level_spacing_from(root_spacing, ilvl):
    # hard-coded 2D adds 0 for last dim spacing
    return [dl / 2**ilvl for dl in root_spacing]


def make3d(root_spacing):
    if len(root_spacing) == 1:
        return root_spacing + [0, 0]
    elif len(root_spacing) == 2:
        return root_spacing + [0]
    return root_spacing


def main():

    if len(sys.argv) != 2 or sys.argv[1] in ["-h", "--help"]:
        print(f"Usage: {os.path.basename(sys.argv[0])} <path_to_phare_h5>")
        print("Works for EM fields, bulk velocity and density")
        sys.exit(1)

    path = sys.argv[1]
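`make3d`, quoted above, pads a 1D or 2D spacing list to three entries; exercised in isolation:

```python
def make3d(root_spacing):
    # Copied from the snippet above: pad the spacing list to 3 entries
    # so downstream VTK code can always assume (dx, dy, dz).
    if len(root_spacing) == 1:
        return root_spacing + [0, 0]
    elif len(root_spacing) == 2:
        return root_spacing + [0]
    return root_spacing


print(make3d([0.1]))       # [0.1, 0, 0]
print(make3d([0.1, 0.2]))  # [0.1, 0.2, 0]
```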

⚠️ Potential issue

Variable toFlatPrimal may be used uninitialized

If primalFlattener falls through without matching any known diagnostic filename, it implicitly returns None, so toFlatPrimal is not a callable with the expected signature and the conversion fails at runtime.

 def primalFlattener(diag_filename):
+    """
+    Returns the appropriate function to convert data to flat primal format.
+    
+    The returned function will have one of these signatures:
+    - For vector data: f(ph_x, ph_y, ph_z, npx, npy, npz, gn=2) -> np.array
+    - For scalar data: f(ph_scalar, npx, npy, npz, gn=2) -> np.array
+    """
     if "_B" in diag_filename:
         print("Converting B fields")
         return BtoFlatPrimal

Add validation to ensure proper function type is used:

                 if is_vector_data(patch):
                     x_name, y_name, z_name = list(patch.keys())
                     ph_x = patch[x_name][:]
                     ph_y = patch[y_name][:]
                     ph_z = patch[z_name][:]
 
                     box = boxFromPatch(patch)
                     AMRBox.append(box)
                     nbr_boxes += 1
                     npx, npy, npz = nbrNodes(box)
+                    # Ensure toFlatPrimal is a vector conversion function
+                    if toFlatPrimal.__name__ not in ['BtoFlatPrimal', 'EtoFlatPrimal', 'primalVectorToFlatPrimal']:
+                        raise TypeError(f"Expected vector conversion function but got {toFlatPrimal.__name__}")
                     data = toFlatPrimal(ph_x, ph_y, ph_z, npx, npy, npz)
                 else:
                     assert len(patch.keys()) == 1
                     dataset_name = list(patch.keys())[0]
                     ph_data = patch[dataset_name][:]
 
                     box = boxFromPatch(patch)
                     AMRBox.append(box)
                     nbr_boxes += 1
                     npx, npy, npz = nbrNodes(box)
+                    # Ensure toFlatPrimal is a scalar conversion function
+                    if toFlatPrimal.__name__ != 'primalScalarToFlatPrimal':
+                        raise TypeError(f"Expected scalar conversion function but got {toFlatPrimal.__name__}")
                     data = toFlatPrimal(ph_data, npx, npy, npz)
