Conversation
assert len(cx5_edges) > 0, "cx5_100gbe should have edges"

@pytest.mark.asyncio
async def test_json_file():
Create a fabric, dump it as JSON, and test that JSON. Do the same for YAML. This way we can get rid of the mock_data.
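A minimal sketch of that round-trip using only stdlib json — the dict shape here is illustrative, not the project's actual fabric schema (the real test would build a ClosFatTreeFabric and dump it via the project's own serializer):

```python
import json

# Illustrative fabric shape only -- stands in for a dumped fabric object.
fabric = {"devices": [{"name": "switch", "port_count": 4}]}

dumped = json.dumps(fabric)          # dump the fabric as JSON...
parsed = json.loads(dumped)          # ...then test that JSON directly
assert parsed == fabric, "round-trip should preserve the fabric"
```

The same pattern applies to YAML via PyYAML's `yaml.safe_dump`/`yaml.safe_load`, removing the need for checked-in mock_data files.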
current_dir = os.path.dirname(os.path.abspath(__file__))
mock_data_path = os.path.join(current_dir, "mock_data")
json_path = os.path.join(mock_data_path, "SYS-221H-TNR.json")
output_dir="./viz3"
Use pytest's native tmp_path fixture, or some fixture that returns a tmp directory - also add these output directories to .gitignore if not already present.
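A sketch of what that looks like: pytest injects `tmp_path` as a `pathlib.Path` to a unique per-test directory, so nothing lands in the repo tree (the file write below is just a stand-in for the visualizer's output):

```python
def test_visualizer_output(tmp_path):
    # tmp_path is a per-test temporary directory created and cleaned
    # up by pytest -- no hard-coded "./viz1" left behind in the repo.
    output_dir = tmp_path / "viz"
    output_dir.mkdir()
    # Stand-in for run_visualizer(..., output=str(output_dir)):
    (output_dir / "graph_data.js").write_text("var graphData = [];")
    assert (output_dir / "graph_data.js").exists()
```

Pass `str(output_dir)` wherever the code under test expects a plain string path.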
switch = Switch(port_count=4)
clos_fat_tree = ClosFatTreeFabric(switch, server, 3, [])
output_dir="./viz1"
run_visualizer(infrastructure=clos_fat_tree,hosts="server", switches="switch",output=output_dir)
provide better spacing - I see no space here:
"switch",output=output_dir
@pytest.mark.asyncio
async def test_closfabric_3tier_4radix():
Improper naming convention - provide full names like three tier radix four, or just use test_visualizer_closfabric.
@@ -0,0 +1,85 @@
import pytest
no need to have a folder - you can have the file under src/tests/test_visualizer.py
@@ -0,0 +1,305 @@
{
delete this file - we do not want to keep resources
Done, removed the old files.
@@ -0,0 +1,193 @@
devices:
delete this file - we do not want to keep resources
from infragraph.visualizer.visualize import run_visualizer

def _load_graph_data(output_dir):
not required as we are already using infra objects
keeping _load_graph_data() as it's a shared helper used by multiple tests to parse and validate the generated graph_data.js file. It extracts the JSON from the JS output so tests can assert on node/edge counts.
data = _load_graph_data(output_dir)
assert len(data) > 1, "Should have infrastructure and at least one device view"
add a test for dgx device - maybe a higher variant
added tests for a composability graph in commit 5b584be
@pytest.mark.asyncio
async def test_composed_devices():
    """
use infragraph object rather than yaml
assert os.path.exists(os.path.join(output_dir, "js", "graph_data.js")), "graph_data.js not generated"
assert os.path.exists(os.path.join(output_dir, "css", "style.css")), "CSS file not copied"

data = _load_graph_data(output_dir)
Is there a reason why Api().set_graph() is not being used for loading and validation?
- set_graph() is already called internally in visualize.py (in run_visualizer()).
- _load_graph_data is a test helper that parses the generated graph_data.js output file against which all assertions are running.
This PR contains unit tests for the visualizer and some mock data for testing.
The UTs assert: