Conversation

@iksnagreb

Not sure if this should actually be merged...

TODO: ...

This adds the same constant amount to the estimate for the stitched IP
verification simulation that was already present for the RTL simulation
performance measurement, and also adds the option to override it via
LIVENESS_THRESHOLD in both cases.
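
For illustration, a minimal sketch of that override pattern, assuming the
threshold is read from an environment variable with a fixed fallback; the
function and constant names are made up, not the actual FINN code:

```python
import os

# Assumed default margin; the real constant lives elsewhere in the codebase.
DEFAULT_LIVENESS_THRESHOLD = 10000


def simulation_cycle_budget(estimated_cycles: int) -> int:
    """Add a liveness margin to an estimated cycle count for simulation.

    The margin defaults to a fixed constant but can be overridden through
    the LIVENESS_THRESHOLD environment variable.
    """
    margin = int(os.environ.get("LIVENESS_THRESHOLD", DEFAULT_LIVENESS_THRESHOLD))
    return estimated_cycles + margin
```
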
See iksnagreb/onnx-passes@16f869c for details
on the new data layout annotation and conversion.
Note: This is a minimally invasive adaptation to make the attention
operator handle the batch/head dimension, which is no longer squeezed out
of the model. A proper refactoring should follow later.
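
As a toy illustration of the layout difference being handled (shapes below
are made up, not taken from any model):

```python
import numpy as np

seq_len, embed_dim = 8, 4

# Previously the leading batch/head dimension was squeezed out of the model
# before it reached the attention operator ...
squeezed = np.zeros((seq_len, embed_dim))

# ... now the operator has to accept the tensor with that dimension kept.
with_head_dim = np.zeros((1, seq_len, embed_dim))

assert np.squeeze(with_head_dim, axis=0).shape == squeezed.shape
```
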

Includes a bugfix to the code generation for thresholds embedded into
the attention operator: Apparently comp::less is correct after all...
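
A hypothetical sketch of the code-generation choice this touches; only the
comp::less functor name comes from the note above, the template and type
names are placeholders:

```python
def make_threshold_decl(acc_type: str, out_type: str) -> str:
    """Assemble an illustrative HLS template instantiation for the embedded
    thresholds, using the strict comparison functor comp::less."""
    return f"Thresholds<{acc_type}, {out_type}, comp::less>"


print(make_threshold_decl("ap_int<16>", "ap_uint<4>"))
```
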
@iksnagreb iksnagreb self-assigned this Nov 8, 2025
@github-actions

github-actions bot commented Nov 8, 2025

📋 Docstring Check Report

Checked files:

  • src/finn/builder/build_dataflow_config.py
  • src/finn/builder/build_dataflow_steps.py
  • src/finn/builder/passes.py
  • src/finn/custom_op/fpgadataflow/attention.py
  • src/finn/custom_op/fpgadataflow/hls/attention_hls.py
  • src/finn/custom_op/fpgadataflow/hls/pool_hls.py
  • src/finn/custom_op/fpgadataflow/pool.py
  • src/finn/custom_op/fpgadataflow/reshape.py
  • src/finn/custom_op/fpgadataflow/rtl/__init__.py
  • src/finn/custom_op/fpgadataflow/rtl/reshape_rtl.py
  • src/finn/transformation/fpgadataflow/convert_to_hw_layers.py
  • src/finn/transformation/fpgadataflow/set_folding.py
  • src/finn/transformation/streamline/__init__.py
  • src/finn/transformation/streamline/round_thresholds.py

Docstring check failed!

Missing Docstrings Details:

📄 src/finn/custom_op/fpgadataflow/attention.py:

    • Line 1: module 'attention.py'
    • Line 30: function 'softmax'
    • Line 48: class 'ScaledDotProductAttention'
    • Line 50: function 'ScaledDotProductAttention.__init__'
    • Line 55: function 'ScaledDotProductAttention.get_nodeattr_types'
    • Line 171: function 'ScaledDotProductAttention.shapes'
    • Line 179: function 'ScaledDotProductAttention.folds'
    • Line 187: function 'ScaledDotProductAttention.is_valid_folding'
    • Line 199: function 'ScaledDotProductAttention.iterations'
    • Line 206: function 'ScaledDotProductAttention.make_shape_compatible_op'
    • Line 225: function 'ScaledDotProductAttention.infer_node_datatype'
    • Line 286: function 'ScaledDotProductAttention._execute_node_python'
    • Line 298: function 'ScaledDotProductAttention.act_qk_matmul'
    • Line 315: function 'ScaledDotProductAttention.act_a_softmax'
    • Line 333: function 'ScaledDotProductAttention.act_av_matmul'
    • Line 409: function 'ScaledDotProductAttention._execute_node_cppsim'
    • Line 416: function 'ScaledDotProductAttention._execute_node_rtlsim'
    • Line 423: function 'ScaledDotProductAttention.execute_node'
    • Line 436: function 'ScaledDotProductAttention.verify_node'
    • Line 440: function 'ScaledDotProductAttention.get_input_datatype'
    • Line 479: function 'ScaledDotProductAttention.get_output_datatype'
    • Line 486: function 'ScaledDotProductAttention.get_normal_input_shape'
    • Line 535: function 'ScaledDotProductAttention.get_normal_output_shape'
    • Line 543: function 'ScaledDotProductAttention.get_normal_attention_shape'
    • Line 549: function 'ScaledDotProductAttention.get_folded_input_shape'
    • Line 581: function 'ScaledDotProductAttention.get_folded_output_shape'
    • Line 592: function 'ScaledDotProductAttention.get_folded_attention_shape'
    • Line 602: function 'ScaledDotProductAttention.get_instream_width'
    • Line 612: function 'ScaledDotProductAttention.get_outstream_width'
    • Line 622: function 'ScaledDotProductAttention.minimize_accumulator_width'
    • Line 707: function 'ScaledDotProductAttention.get_number_input_values'
    • Line 714: function 'ScaledDotProductAttention.get_number_output_values'
    • Line 724: function 'ScaledDotProductAttention.get_input_name_by_name'
    • Line 763: function 'ScaledDotProductAttention.get_exp_cycles'

Total missing docstrings: 34

How to Fix:

Please add docstrings to the missing functions, classes, and modules listed above.

Docstring Guidelines:

  • All modules should have a module-level docstring
  • All public functions and methods should have docstrings
  • All private functions should have docstrings
  • All classes should have docstrings
  • Use triple quotes (""") for docstrings
  • Follow PEP 257 conventions
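
For reference, a minimal example of the requested style applied to two of the
flagged items; the one-line summaries are assumptions based on the names, not
the actual behavior:

```python
"""Custom op modeling scaled dot-product attention for FINN dataflow graphs."""


def softmax(x, axis=-1):
    """Compute the softmax of x along the given axis."""


class ScaledDotProductAttention:
    """Abstraction for the hardware scaled dot-product attention operator."""

    def get_nodeattr_types(self):
        """Return the node attribute definitions of this operator."""
```
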

@github-actions

github-actions bot commented Nov 9, 2025

📋 Docstring Check Report

Checked files:

  • src/finn/builder/build_dataflow_config.py
  • src/finn/builder/build_dataflow_steps.py
  • src/finn/builder/passes.py
  • src/finn/custom_op/fpgadataflow/attention.py
  • src/finn/custom_op/fpgadataflow/hls/attention_hls.py
  • src/finn/custom_op/fpgadataflow/hls/pool_hls.py
  • src/finn/custom_op/fpgadataflow/pool.py
  • src/finn/custom_op/fpgadataflow/reshape.py
  • src/finn/custom_op/fpgadataflow/rtl/__init__.py
  • src/finn/custom_op/fpgadataflow/rtl/reshape_rtl.py
  • src/finn/transformation/fpgadataflow/convert_to_hw_layers.py
  • src/finn/transformation/fpgadataflow/set_folding.py
  • src/finn/transformation/streamline/__init__.py
  • src/finn/transformation/streamline/round_thresholds.py

Docstring check failed!

Missing Docstrings Details:

📄 src/finn/custom_op/fpgadataflow/attention.py:

    • Line 1: module 'attention.py'
    • Line 30: function 'softmax'
    • Line 48: class 'ScaledDotProductAttention'
    • Line 50: function 'ScaledDotProductAttention.__init__'
    • Line 55: function 'ScaledDotProductAttention.get_nodeattr_types'
    • Line 171: function 'ScaledDotProductAttention.shapes'
    • Line 179: function 'ScaledDotProductAttention.folds'
    • Line 187: function 'ScaledDotProductAttention.is_valid_folding'
    • Line 199: function 'ScaledDotProductAttention.iterations'
    • Line 206: function 'ScaledDotProductAttention.make_shape_compatible_op'
    • Line 225: function 'ScaledDotProductAttention.infer_node_datatype'
    • Line 286: function 'ScaledDotProductAttention._execute_node_python'
    • Line 298: function 'ScaledDotProductAttention.act_qk_matmul'
    • Line 315: function 'ScaledDotProductAttention.act_a_softmax'
    • Line 333: function 'ScaledDotProductAttention.act_av_matmul'
    • Line 409: function 'ScaledDotProductAttention._execute_node_cppsim'
    • Line 416: function 'ScaledDotProductAttention._execute_node_rtlsim'
    • Line 423: function 'ScaledDotProductAttention.execute_node'
    • Line 436: function 'ScaledDotProductAttention.verify_node'
    • Line 440: function 'ScaledDotProductAttention.get_input_datatype'
    • Line 479: function 'ScaledDotProductAttention.get_output_datatype'
    • Line 486: function 'ScaledDotProductAttention.get_normal_input_shape'
    • Line 535: function 'ScaledDotProductAttention.get_normal_output_shape'
    • Line 543: function 'ScaledDotProductAttention.get_normal_attention_shape'
    • Line 549: function 'ScaledDotProductAttention.get_folded_input_shape'
    • Line 581: function 'ScaledDotProductAttention.get_folded_output_shape'
    • Line 592: function 'ScaledDotProductAttention.get_folded_attention_shape'
    • Line 602: function 'ScaledDotProductAttention.get_instream_width'
    • Line 612: function 'ScaledDotProductAttention.get_outstream_width'
    • Line 622: function 'ScaledDotProductAttention.minimize_accumulator_width'
    • Line 707: function 'ScaledDotProductAttention.get_number_input_values'
    • Line 714: function 'ScaledDotProductAttention.get_number_output_values'
    • Line 724: function 'ScaledDotProductAttention.get_input_name_by_name'
    • Line 763: function 'ScaledDotProductAttention.get_exp_cycles'

Total missing docstrings: 34

How to Fix:

Please add docstrings to the missing functions, classes, and modules listed above.

Docstring Guidelines:

  • All modules should have a module-level docstring
  • All public functions and methods should have docstrings
  • All private functions should have docstrings
  • All classes should have docstrings
  • Use triple quotes (""") for docstrings
  • Follow PEP 257 conventions

@github-actions

github-actions bot commented Nov 9, 2025

📋 Docstring Check Report

Checked files:

  • src/finn/builder/build_dataflow_config.py
  • src/finn/builder/build_dataflow_steps.py
  • src/finn/builder/passes.py
  • src/finn/custom_op/fpgadataflow/attention.py
  • src/finn/custom_op/fpgadataflow/hls/attention_hls.py
  • src/finn/custom_op/fpgadataflow/hls/pool_hls.py
  • src/finn/custom_op/fpgadataflow/pool.py
  • src/finn/custom_op/fpgadataflow/reshape.py
  • src/finn/custom_op/fpgadataflow/rtl/__init__.py
  • src/finn/custom_op/fpgadataflow/rtl/reshape_rtl.py
  • src/finn/transformation/fpgadataflow/convert_to_hw_layers.py
  • src/finn/transformation/fpgadataflow/set_folding.py
  • src/finn/transformation/streamline/__init__.py
  • src/finn/transformation/streamline/round_thresholds.py

Docstring check failed!

Missing Docstrings Details:

📄 src/finn/custom_op/fpgadataflow/attention.py:

    • Line 1: module 'attention.py'
    • Line 30: function 'softmax'
    • Line 48: class 'ScaledDotProductAttention'
    • Line 50: function 'ScaledDotProductAttention.__init__'
    • Line 55: function 'ScaledDotProductAttention.get_nodeattr_types'
    • Line 171: function 'ScaledDotProductAttention.shapes'
    • Line 179: function 'ScaledDotProductAttention.folds'
    • Line 187: function 'ScaledDotProductAttention.is_valid_folding'
    • Line 199: function 'ScaledDotProductAttention.iterations'
    • Line 206: function 'ScaledDotProductAttention.make_shape_compatible_op'
    • Line 225: function 'ScaledDotProductAttention.infer_node_datatype'
    • Line 286: function 'ScaledDotProductAttention._execute_node_python'
    • Line 298: function 'ScaledDotProductAttention.act_qk_matmul'
    • Line 315: function 'ScaledDotProductAttention.act_a_softmax'
    • Line 333: function 'ScaledDotProductAttention.act_av_matmul'
    • Line 409: function 'ScaledDotProductAttention._execute_node_cppsim'
    • Line 416: function 'ScaledDotProductAttention._execute_node_rtlsim'
    • Line 423: function 'ScaledDotProductAttention.execute_node'
    • Line 436: function 'ScaledDotProductAttention.verify_node'
    • Line 440: function 'ScaledDotProductAttention.get_input_datatype'
    • Line 479: function 'ScaledDotProductAttention.get_output_datatype'
    • Line 486: function 'ScaledDotProductAttention.get_normal_input_shape'
    • Line 535: function 'ScaledDotProductAttention.get_normal_output_shape'
    • Line 543: function 'ScaledDotProductAttention.get_normal_attention_shape'
    • Line 549: function 'ScaledDotProductAttention.get_folded_input_shape'
    • Line 581: function 'ScaledDotProductAttention.get_folded_output_shape'
    • Line 592: function 'ScaledDotProductAttention.get_folded_attention_shape'
    • Line 602: function 'ScaledDotProductAttention.get_instream_width'
    • Line 612: function 'ScaledDotProductAttention.get_outstream_width'
    • Line 622: function 'ScaledDotProductAttention.minimize_accumulator_width'
    • Line 709: function 'ScaledDotProductAttention.get_number_input_values'
    • Line 716: function 'ScaledDotProductAttention.get_number_output_values'
    • Line 726: function 'ScaledDotProductAttention.get_input_name_by_name'
    • Line 765: function 'ScaledDotProductAttention.get_exp_cycles'

Total missing docstrings: 34

How to Fix:

Please add docstrings to the missing functions, classes, and modules listed above.

Docstring Guidelines:

  • All modules should have a module-level docstring
  • All public functions and methods should have docstrings
  • All private functions should have docstrings
  • All classes should have docstrings
  • Use triple quotes (""") for docstrings
  • Follow PEP 257 conventions

This makes it more convenient to reuse the same dataflow build config
for different models, which usually require different verification data.
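
A hypothetical sketch of the resulting usage; none of the names below are the
actual interface, they only illustrate keeping per-model verification data out
of a shared build config:

```python
def run_build(model: str, build_config: str,
              verify_input: str, verify_expected_output: str) -> None:
    """Stand-in for a build entry point that takes verification data separately."""
    print(f"building {model} with {build_config}, "
          f"checking {verify_input} against {verify_expected_output}")


# One shared dataflow build config, different verification data per model.
for model in ("model_a", "model_b"):
    run_build(
        model=f"{model}.onnx",
        build_config="dataflow_build_config.json",
        verify_input=f"{model}/input.npy",
        verify_expected_output=f"{model}/expected_output.npy",
    )
```
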
@github-actions

📋 Docstring Check Report

Checked files:

  • src/finn/builder/build_dataflow_config.py
  • src/finn/builder/build_dataflow_steps.py
  • src/finn/builder/passes.py
  • src/finn/custom_op/fpgadataflow/attention.py
  • src/finn/custom_op/fpgadataflow/hls/attention_hls.py
  • src/finn/custom_op/fpgadataflow/hls/pool_hls.py
  • src/finn/custom_op/fpgadataflow/pool.py
  • src/finn/custom_op/fpgadataflow/reshape.py
  • src/finn/custom_op/fpgadataflow/rtl/__init__.py
  • src/finn/custom_op/fpgadataflow/rtl/reshape_rtl.py
  • src/finn/interface/run_finn.py
  • src/finn/transformation/fpgadataflow/convert_to_hw_layers.py
  • src/finn/transformation/fpgadataflow/set_folding.py
  • src/finn/transformation/streamline/__init__.py
  • src/finn/transformation/streamline/round_thresholds.py

Docstring check failed!

Missing Docstrings Details:

📄 src/finn/custom_op/fpgadataflow/attention.py:

    • Line 1: module 'attention.py'
    • Line 30: function 'softmax'
    • Line 48: class 'ScaledDotProductAttention'
    • Line 50: function 'ScaledDotProductAttention.__init__'
    • Line 55: function 'ScaledDotProductAttention.get_nodeattr_types'
    • Line 171: function 'ScaledDotProductAttention.shapes'
    • Line 179: function 'ScaledDotProductAttention.folds'
    • Line 187: function 'ScaledDotProductAttention.is_valid_folding'
    • Line 199: function 'ScaledDotProductAttention.iterations'
    • Line 206: function 'ScaledDotProductAttention.make_shape_compatible_op'
    • Line 225: function 'ScaledDotProductAttention.infer_node_datatype'
    • Line 286: function 'ScaledDotProductAttention._execute_node_python'
    • Line 298: function 'ScaledDotProductAttention.act_qk_matmul'
    • Line 315: function 'ScaledDotProductAttention.act_a_softmax'
    • Line 333: function 'ScaledDotProductAttention.act_av_matmul'
    • Line 409: function 'ScaledDotProductAttention._execute_node_cppsim'
    • Line 416: function 'ScaledDotProductAttention._execute_node_rtlsim'
    • Line 423: function 'ScaledDotProductAttention.execute_node'
    • Line 436: function 'ScaledDotProductAttention.verify_node'
    • Line 440: function 'ScaledDotProductAttention.get_input_datatype'
    • Line 479: function 'ScaledDotProductAttention.get_output_datatype'
    • Line 486: function 'ScaledDotProductAttention.get_normal_input_shape'
    • Line 535: function 'ScaledDotProductAttention.get_normal_output_shape'
    • Line 543: function 'ScaledDotProductAttention.get_normal_attention_shape'
    • Line 549: function 'ScaledDotProductAttention.get_folded_input_shape'
    • Line 581: function 'ScaledDotProductAttention.get_folded_output_shape'
    • Line 592: function 'ScaledDotProductAttention.get_folded_attention_shape'
    • Line 602: function 'ScaledDotProductAttention.get_instream_width'
    • Line 612: function 'ScaledDotProductAttention.get_outstream_width'
    • Line 622: function 'ScaledDotProductAttention.minimize_accumulator_width'
    • Line 709: function 'ScaledDotProductAttention.get_number_input_values'
    • Line 716: function 'ScaledDotProductAttention.get_number_output_values'
    • Line 726: function 'ScaledDotProductAttention.get_input_name_by_name'
    • Line 765: function 'ScaledDotProductAttention.get_exp_cycles'

📄 src/finn/interface/run_finn.py:

    • Line 1: module 'run_finn.py'
    • Line 37: function '_resolve_module_path'
    • Line 128: function 'main_group'
    • Line 181: function 'build'
    • Line 284: function 'run'
    • Line 317: function 'bench'
    • Line 348: function 'test'
    • Line 361: function 'deps'
    • Line 373: function 'update'
    • Line 379: function 'config'
    • Line 384: function '_command_get_settings'
    • Line 394: function 'config_list'
    • Line 402: function 'config_get'
    • Line 413: function 'config_set'
    • Line 433: function 'config_create'
    • Line 447: function 'main'

Total missing docstrings: 50

How to Fix:

Please add docstrings to the missing functions, classes, and modules listed above.

Docstring Guidelines:

  • All modules should have a module-level docstring
  • All public functions and methods should have docstrings
  • All private functions should have docstrings
  • All classes should have docstrings
  • Use triple quotes (""") for docstrings
  • Follow PEP 257 conventions

@github-actions

📋 Docstring Check Report

Checked files:

  • src/finn/builder/build_dataflow_config.py
  • src/finn/builder/build_dataflow_steps.py
  • src/finn/builder/passes.py
  • src/finn/custom_op/fpgadataflow/attention.py
  • src/finn/custom_op/fpgadataflow/hls/attention_hls.py
  • src/finn/custom_op/fpgadataflow/hls/pool_hls.py
  • src/finn/custom_op/fpgadataflow/pool.py
  • src/finn/custom_op/fpgadataflow/reshape.py
  • src/finn/custom_op/fpgadataflow/rtl/__init__.py
  • src/finn/custom_op/fpgadataflow/rtl/reshape_rtl.py
  • src/finn/interface/run_finn.py
  • src/finn/transformation/fpgadataflow/convert_to_hw_layers.py
  • src/finn/transformation/fpgadataflow/set_folding.py
  • src/finn/transformation/streamline/__init__.py
  • src/finn/transformation/streamline/round_thresholds.py

Docstring check failed!

Missing Docstrings Details:

📄 src/finn/custom_op/fpgadataflow/attention.py:

    • Line 1: module 'attention.py'
    • Line 30: function 'softmax'
    • Line 48: class 'ScaledDotProductAttention'
    • Line 50: function 'ScaledDotProductAttention.__init__'
    • Line 55: function 'ScaledDotProductAttention.get_nodeattr_types'
    • Line 171: function 'ScaledDotProductAttention.shapes'
    • Line 179: function 'ScaledDotProductAttention.folds'
    • Line 187: function 'ScaledDotProductAttention.is_valid_folding'
    • Line 199: function 'ScaledDotProductAttention.iterations'
    • Line 206: function 'ScaledDotProductAttention.make_shape_compatible_op'
    • Line 225: function 'ScaledDotProductAttention.infer_node_datatype'
    • Line 286: function 'ScaledDotProductAttention._execute_node_python'
    • Line 298: function 'ScaledDotProductAttention.act_qk_matmul'
    • Line 315: function 'ScaledDotProductAttention.act_a_softmax'
    • Line 333: function 'ScaledDotProductAttention.act_av_matmul'
    • Line 409: function 'ScaledDotProductAttention._execute_node_cppsim'
    • Line 416: function 'ScaledDotProductAttention._execute_node_rtlsim'
    • Line 423: function 'ScaledDotProductAttention.execute_node'
    • Line 436: function 'ScaledDotProductAttention.verify_node'
    • Line 440: function 'ScaledDotProductAttention.get_input_datatype'
    • Line 479: function 'ScaledDotProductAttention.get_output_datatype'
    • Line 486: function 'ScaledDotProductAttention.get_normal_input_shape'
    • Line 535: function 'ScaledDotProductAttention.get_normal_output_shape'
    • Line 543: function 'ScaledDotProductAttention.get_normal_attention_shape'
    • Line 549: function 'ScaledDotProductAttention.get_folded_input_shape'
    • Line 581: function 'ScaledDotProductAttention.get_folded_output_shape'
    • Line 592: function 'ScaledDotProductAttention.get_folded_attention_shape'
    • Line 602: function 'ScaledDotProductAttention.get_instream_width'
    • Line 612: function 'ScaledDotProductAttention.get_outstream_width'
    • Line 622: function 'ScaledDotProductAttention.minimize_accumulator_width'
    • Line 709: function 'ScaledDotProductAttention.get_number_input_values'
    • Line 716: function 'ScaledDotProductAttention.get_number_output_values'
    • Line 726: function 'ScaledDotProductAttention.get_input_name_by_name'
    • Line 765: function 'ScaledDotProductAttention.get_exp_cycles'

📄 src/finn/interface/run_finn.py:

    • Line 1: module 'run_finn.py'
    • Line 37: function '_resolve_module_path'
    • Line 128: function 'main_group'
    • Line 181: function 'build'
    • Line 284: function 'run'
    • Line 317: function 'bench'
    • Line 348: function 'test'
    • Line 361: function 'deps'
    • Line 373: function 'update'
    • Line 379: function 'config'
    • Line 384: function '_command_get_settings'
    • Line 394: function 'config_list'
    • Line 402: function 'config_get'
    • Line 413: function 'config_set'
    • Line 433: function 'config_create'
    • Line 447: function 'main'

Total missing docstrings: 50

How to Fix:

Please add docstrings to the missing functions, classes, and modules listed above.

Docstring Guidelines:

  • All modules should have a module-level docstring
  • All public functions and methods should have docstrings
  • All private functions should have docstrings
  • All classes should have docstrings
  • Use triple quotes (""") for docstrings
  • Follow PEP 257 conventions

Note: This is not included by default, as inserting DWCs is already part
of the default step_set_fifo_depth.
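
A sketch of where an explicit DWC-insertion step would sit if one opted in;
apart from step_set_fifo_depth, which the note refers to, the step names here
are assumptions:

```python
# Default-like flow: DWC insertion already happens inside step_set_fifo_depth.
default_like_steps = [
    "step_hw_codegen",
    "step_set_fifo_depth",
    "step_create_stitched_ip",
]

# Opt-in flow with the standalone DWC-insertion step added explicitly.
explicit_dwc_steps = [
    "step_hw_codegen",
    "step_insert_dwc",
    "step_set_fifo_depth",
    "step_create_stitched_ip",
]
```
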
@github-actions

📋 Docstring Check Report

Checked files:

  • src/finn/builder/build_dataflow_config.py
  • src/finn/builder/build_dataflow_steps.py
  • src/finn/builder/passes.py
  • src/finn/custom_op/fpgadataflow/attention.py
  • src/finn/custom_op/fpgadataflow/hls/attention_hls.py
  • src/finn/custom_op/fpgadataflow/hls/pool_hls.py
  • src/finn/custom_op/fpgadataflow/pool.py
  • src/finn/custom_op/fpgadataflow/reshape.py
  • src/finn/custom_op/fpgadataflow/rtl/__init__.py
  • src/finn/custom_op/fpgadataflow/rtl/reshape_rtl.py
  • src/finn/interface/run_finn.py
  • src/finn/transformation/fpgadataflow/convert_to_hw_layers.py
  • src/finn/transformation/fpgadataflow/set_folding.py
  • src/finn/transformation/streamline/__init__.py
  • src/finn/transformation/streamline/round_thresholds.py

Docstring check failed!

Missing Docstrings Details:

📄 src/finn/custom_op/fpgadataflow/attention.py:

    • Line 1: module 'attention.py'
    • Line 30: function 'softmax'
    • Line 48: class 'ScaledDotProductAttention'
    • Line 50: function 'ScaledDotProductAttention.__init__'
    • Line 55: function 'ScaledDotProductAttention.get_nodeattr_types'
    • Line 171: function 'ScaledDotProductAttention.shapes'
    • Line 179: function 'ScaledDotProductAttention.folds'
    • Line 187: function 'ScaledDotProductAttention.is_valid_folding'
    • Line 199: function 'ScaledDotProductAttention.iterations'
    • Line 206: function 'ScaledDotProductAttention.make_shape_compatible_op'
    • Line 225: function 'ScaledDotProductAttention.infer_node_datatype'
    • Line 286: function 'ScaledDotProductAttention._execute_node_python'
    • Line 298: function 'ScaledDotProductAttention.act_qk_matmul'
    • Line 315: function 'ScaledDotProductAttention.act_a_softmax'
    • Line 333: function 'ScaledDotProductAttention.act_av_matmul'
    • Line 409: function 'ScaledDotProductAttention._execute_node_cppsim'
    • Line 416: function 'ScaledDotProductAttention._execute_node_rtlsim'
    • Line 423: function 'ScaledDotProductAttention.execute_node'
    • Line 436: function 'ScaledDotProductAttention.verify_node'
    • Line 440: function 'ScaledDotProductAttention.get_input_datatype'
    • Line 479: function 'ScaledDotProductAttention.get_output_datatype'
    • Line 486: function 'ScaledDotProductAttention.get_normal_input_shape'
    • Line 535: function 'ScaledDotProductAttention.get_normal_output_shape'
    • Line 543: function 'ScaledDotProductAttention.get_normal_attention_shape'
    • Line 549: function 'ScaledDotProductAttention.get_folded_input_shape'
    • Line 581: function 'ScaledDotProductAttention.get_folded_output_shape'
    • Line 592: function 'ScaledDotProductAttention.get_folded_attention_shape'
    • Line 602: function 'ScaledDotProductAttention.get_instream_width'
    • Line 612: function 'ScaledDotProductAttention.get_outstream_width'
    • Line 622: function 'ScaledDotProductAttention.minimize_accumulator_width'
    • Line 709: function 'ScaledDotProductAttention.get_number_input_values'
    • Line 716: function 'ScaledDotProductAttention.get_number_output_values'
    • Line 726: function 'ScaledDotProductAttention.get_input_name_by_name'
    • Line 765: function 'ScaledDotProductAttention.get_exp_cycles'

📄 src/finn/interface/run_finn.py:

    • Line 1: module 'run_finn.py'
    • Line 37: function '_resolve_module_path'
    • Line 128: function 'main_group'
    • Line 181: function 'build'
    • Line 284: function 'run'
    • Line 317: function 'bench'
    • Line 348: function 'test'
    • Line 361: function 'deps'
    • Line 373: function 'update'
    • Line 379: function 'config'
    • Line 384: function '_command_get_settings'
    • Line 394: function 'config_list'
    • Line 402: function 'config_get'
    • Line 413: function 'config_set'
    • Line 433: function 'config_create'
    • Line 447: function 'main'

Total missing docstrings: 50

How to Fix:

Please add docstrings to the missing functions, classes, and modules listed above.

Docstring Guidelines:

  • All modules should have a module-level docstring
  • All public functions and methods should have docstrings
  • All private functions should have docstrings
  • All classes should have docstrings
  • Use triple quotes (""") for docstrings
  • Follow PEP 257 conventions

Note: This is merely a workaround; a proper fix to address changing
types after reordering should follow.
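
A tiny numeric illustration (values made up) of why such reordering can change
types in the first place:

```python
# Moving an Add past a Mul rewrites (x + 3) * 0.5 as x * 0.5 + 1.5: the
# integer constant 3 turns into the non-integer 1.5, so a datatype
# annotation attached to that constant would no longer be valid.
x = 7
assert (x + 3) * 0.5 == x * 0.5 + 1.5
```
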
@github-actions

📋 Docstring Check Report

Checked files:

  • src/finn/builder/build_dataflow_config.py
  • src/finn/builder/build_dataflow_steps.py
  • src/finn/builder/passes.py
  • src/finn/custom_op/fpgadataflow/attention.py
  • src/finn/custom_op/fpgadataflow/hls/attention_hls.py
  • src/finn/custom_op/fpgadataflow/hls/pool_hls.py
  • src/finn/custom_op/fpgadataflow/pool.py
  • src/finn/custom_op/fpgadataflow/reshape.py
  • src/finn/custom_op/fpgadataflow/rtl/__init__.py
  • src/finn/custom_op/fpgadataflow/rtl/reshape_rtl.py
  • src/finn/interface/run_finn.py
  • src/finn/transformation/fpgadataflow/convert_to_hw_layers.py
  • src/finn/transformation/fpgadataflow/set_folding.py
  • src/finn/transformation/streamline/__init__.py
  • src/finn/transformation/streamline/reorder.py
  • src/finn/transformation/streamline/round_thresholds.py

Docstring check failed!

Missing Docstrings Details:

📄 src/finn/custom_op/fpgadataflow/attention.py:

    • Line 1: module 'attention.py'
    • Line 30: function 'softmax'
    • Line 48: class 'ScaledDotProductAttention'
    • Line 50: function 'ScaledDotProductAttention.__init__'
    • Line 55: function 'ScaledDotProductAttention.get_nodeattr_types'
    • Line 171: function 'ScaledDotProductAttention.shapes'
    • Line 179: function 'ScaledDotProductAttention.folds'
    • Line 187: function 'ScaledDotProductAttention.is_valid_folding'
    • Line 199: function 'ScaledDotProductAttention.iterations'
    • Line 206: function 'ScaledDotProductAttention.make_shape_compatible_op'
    • Line 225: function 'ScaledDotProductAttention.infer_node_datatype'
    • Line 286: function 'ScaledDotProductAttention._execute_node_python'
    • Line 298: function 'ScaledDotProductAttention.act_qk_matmul'
    • Line 315: function 'ScaledDotProductAttention.act_a_softmax'
    • Line 333: function 'ScaledDotProductAttention.act_av_matmul'
    • Line 409: function 'ScaledDotProductAttention._execute_node_cppsim'
    • Line 416: function 'ScaledDotProductAttention._execute_node_rtlsim'
    • Line 423: function 'ScaledDotProductAttention.execute_node'
    • Line 436: function 'ScaledDotProductAttention.verify_node'
    • Line 440: function 'ScaledDotProductAttention.get_input_datatype'
    • Line 479: function 'ScaledDotProductAttention.get_output_datatype'
    • Line 486: function 'ScaledDotProductAttention.get_normal_input_shape'
    • Line 535: function 'ScaledDotProductAttention.get_normal_output_shape'
    • Line 543: function 'ScaledDotProductAttention.get_normal_attention_shape'
    • Line 549: function 'ScaledDotProductAttention.get_folded_input_shape'
    • Line 581: function 'ScaledDotProductAttention.get_folded_output_shape'
    • Line 592: function 'ScaledDotProductAttention.get_folded_attention_shape'
    • Line 602: function 'ScaledDotProductAttention.get_instream_width'
    • Line 612: function 'ScaledDotProductAttention.get_outstream_width'
    • Line 622: function 'ScaledDotProductAttention.minimize_accumulator_width'
    • Line 709: function 'ScaledDotProductAttention.get_number_input_values'
    • Line 716: function 'ScaledDotProductAttention.get_number_output_values'
    • Line 726: function 'ScaledDotProductAttention.get_input_name_by_name'
    • Line 765: function 'ScaledDotProductAttention.get_exp_cycles'

📄 src/finn/interface/run_finn.py:

    • Line 1: module 'run_finn.py'
    • Line 37: function '_resolve_module_path'
    • Line 128: function 'main_group'
    • Line 181: function 'build'
    • Line 284: function 'run'
    • Line 317: function 'bench'
    • Line 348: function 'test'
    • Line 361: function 'deps'
    • Line 373: function 'update'
    • Line 379: function 'config'
    • Line 384: function '_command_get_settings'
    • Line 394: function 'config_list'
    • Line 402: function 'config_get'
    • Line 413: function 'config_set'
    • Line 433: function 'config_create'
    • Line 447: function 'main'

📄 src/finn/transformation/streamline/reorder.py:

    • Line 1: module 'reorder.py'
    • Line 56: function 'MoveAddPastMul.apply'
    • Line 120: function 'MoveScalarMulPastMatMul.apply'
    • Line 177: function 'MoveScalarAddPastMatMul.apply'
    • Line 234: function 'MoveAddPastConv.apply'
    • Line 312: function 'MoveScalarMulPastConv.apply'
    • Line 361: function 'MoveScalarMulPastConvTranspose.apply'
    • Line 410: function 'MoveMulPastDWConv.apply'
    • Line 472: function 'MoveMulPastMaxPool.apply'
    • Line 543: function 'MoveLinearPastEltwiseAdd.move_node'
    • Line 563: function 'MoveLinearPastEltwiseAdd.apply'
    • Line 652: function 'MoveScalarLinearPastInvariants.apply'
    • Line 731: function 'MakeMaxPoolNHWC.apply'
    • Line 805: function 'MakeScaleResizeNHWC.apply'
    • Line 906: function 'MoveOpPastFork.__init__'
    • Line 910: function 'MoveOpPastFork.apply'
    • Line 974: class 'MoveAddPastFork'
    • Line 975: function 'MoveAddPastFork.__init__'
    • Line 979: class 'MoveMulPastFork'
    • Line 980: function 'MoveMulPastFork.__init__'
    • Line 984: class 'MoveLinearPastFork'
    • Line 985: function 'MoveLinearPastFork.__init__'
    • Line 989: class 'MoveTransposePastFork'
    • Line 990: function 'MoveTransposePastFork.__init__'
    • Line 994: function 'permute_shape'
    • Line 1006: function 'MoveScalarLinearPastSplit.__init__'
    • Line 1011: function 'MoveScalarLinearPastSplit.apply'
    • Line 1057: class 'MoveTransposePastSplit'
    • Line 1058: function 'MoveTransposePastSplit.__init__'
    • Line 1063: function 'MoveTransposePastSplit.apply'
    • Line 1110: function 'MoveMaxPoolPastMultiThreshold.apply'
    • Line 1173: function 'MoveFlattenPastTopK.apply'
    • Line 1233: function 'MoveFlattenPastAffine.apply'
    • Line 1319: function 'MoveTransposePastScalarMul.apply'
    • Line 1381: function 'MoveIdenticalOpPastJoinOp.__init__'
    • Line 1425: function 'MoveIdenticalOpPastJoinOp.apply'
    • Line 1460: class 'MoveTransposePastJoinAdd'
    • Line 1461: function 'MoveTransposePastJoinAdd.__init__'
    • Line 1464: function 'MoveTransposePastJoinAdd.are_producers_identical'
    • Line 1474: class 'MoveTransposePastJoinMul'
    • Line 1475: function 'MoveTransposePastJoinMul.__init__'
    • Line 1478: function 'MoveTransposePastJoinMul.are_producers_identical'
    • Line 1488: class 'MoveMulPastJoinAdd'
    • Line 1489: function 'MoveMulPastJoinAdd.__init__'
    • Line 1492: function 'MoveMulPastJoinAdd.are_producers_identical'
    • Line 1504: class 'MoveAddPastJoinAdd'
    • Line 1505: function 'MoveAddPastJoinAdd.__init__'
    • Line 1508: function 'MoveAddPastJoinAdd.are_producers_identical'
    • Line 1529: class 'MoveTransposePastJoinConcat'
    • Line 1530: function 'MoveTransposePastJoinConcat.__init__'
    • Line 1533: function 'MoveTransposePastJoinConcat.are_producers_identical'
    • Line 1542: function 'MoveTransposePastJoinConcat.move_node'
    • Line 1579: function 'MoveAffinePastJoinConcat.__init__'
    • Line 1582: function 'MoveAffinePastJoinConcat.are_producers_identical_scalar_ops'
    • Line 1591: function 'MoveAffinePastJoinConcat.are_producers_channelwise_ops'
    • Line 1604: function 'MoveAffinePastJoinConcat.move_node'
    • Line 1654: class 'MoveMulPastJoinConcat'
    • Line 1655: function 'MoveMulPastJoinConcat.__init__'
    • Line 1659: class 'MoveAddPastJoinConcat'
    • Line 1660: function 'MoveAddPastJoinConcat.__init__'
    • Line 1666: class 'MoveSqueezePastMultiThreshold'
    • Line 1668: function 'MoveSqueezePastMultiThreshold.apply'
    • Line 1728: class 'MoveSqueezePastMatMul'
    • Line 1730: function 'MoveSqueezePastMatMul.apply'
    • Line 1792: class 'MoveTransposePastEltwise'
    • Line 1794: function 'MoveTransposePastEltwise.apply'
    • Line 1897: class 'MoveAddPastMatMul'
    • Line 1899: function 'MoveAddPastMatMul.apply'
    • Line 1999: class 'MoveConstMulPastJoinMul'
    • Line 2001: function 'MoveConstMulPastJoinMul.apply'
    • Line 2076: class 'MoveMulPastAdd'
    • Line 2078: function 'MoveMulPastAdd.apply'
    • Line 2159: class 'MoveScalarLinearPastFork'
    • Line 2161: function 'MoveScalarLinearPastFork.apply'
    • Line 2213: class 'MoveChannelwiseLinearPastFork'
    • Line 2215: function 'MoveChannelwiseLinearPastFork.apply'
    • Line 2264: function 'MoveChannelwiseLinearPastFork.can_broadcast_to'
    • Line 2319: class 'MoveScalesPastIm2Col'
    • Line 2321: function 'MoveScalesPastIm2Col.apply'

Total missing docstrings: 129

How to Fix:

Please add docstrings to the missing functions, classes, and modules listed above.

Docstring Guidelines:

  • All modules should have a module-level docstring
  • All public functions and methods should have docstrings
  • All private functions should have docstrings
  • All classes should have docstrings
  • Use triple quotes (""") for docstrings
  • Follow PEP 257 conventions

@github-actions

📋 Docstring Check Report

Checked files:

  • src/finn/builder/build_dataflow_config.py
  • src/finn/builder/build_dataflow_steps.py
  • src/finn/builder/passes.py
  • src/finn/custom_op/fpgadataflow/attention.py
  • src/finn/custom_op/fpgadataflow/hls/attention_hls.py
  • src/finn/custom_op/fpgadataflow/hls/pool_hls.py
  • src/finn/custom_op/fpgadataflow/pool.py
  • src/finn/custom_op/fpgadataflow/reshape.py
  • src/finn/custom_op/fpgadataflow/rtl/__init__.py
  • src/finn/custom_op/fpgadataflow/rtl/reshape_rtl.py
  • src/finn/interface/run_finn.py
  • src/finn/transformation/fpgadataflow/attention_heads.py
  • src/finn/transformation/fpgadataflow/convert_to_hw_layers.py
  • src/finn/transformation/fpgadataflow/set_folding.py
  • src/finn/transformation/streamline/__init__.py
  • src/finn/transformation/streamline/reorder.py
  • src/finn/transformation/streamline/round_thresholds.py

Docstring check failed!

Missing Docstrings Details:

📄 src/finn/custom_op/fpgadataflow/attention.py:

    • Line 1: module 'attention.py'
    • Line 30: function 'softmax'
    • Line 48: class 'ScaledDotProductAttention'
    • Line 50: function 'ScaledDotProductAttention.__init__'
    • Line 55: function 'ScaledDotProductAttention.get_nodeattr_types'
    • Line 171: function 'ScaledDotProductAttention.shapes'
    • Line 179: function 'ScaledDotProductAttention.folds'
    • Line 187: function 'ScaledDotProductAttention.is_valid_folding'
    • Line 199: function 'ScaledDotProductAttention.iterations'
    • Line 206: function 'ScaledDotProductAttention.make_shape_compatible_op'
    • Line 225: function 'ScaledDotProductAttention.infer_node_datatype'
    • Line 286: function 'ScaledDotProductAttention._execute_node_python'
    • Line 298: function 'ScaledDotProductAttention.act_qk_matmul'
    • Line 315: function 'ScaledDotProductAttention.act_a_softmax'
    • Line 333: function 'ScaledDotProductAttention.act_av_matmul'
    • Line 409: function 'ScaledDotProductAttention._execute_node_cppsim'
    • Line 416: function 'ScaledDotProductAttention._execute_node_rtlsim'
    • Line 423: function 'ScaledDotProductAttention.execute_node'
    • Line 436: function 'ScaledDotProductAttention.verify_node'
    • Line 440: function 'ScaledDotProductAttention.get_input_datatype'
    • Line 479: function 'ScaledDotProductAttention.get_output_datatype'
    • Line 486: function 'ScaledDotProductAttention.get_normal_input_shape'
    • Line 535: function 'ScaledDotProductAttention.get_normal_output_shape'
    • Line 543: function 'ScaledDotProductAttention.get_normal_attention_shape'
    • Line 549: function 'ScaledDotProductAttention.get_folded_input_shape'
    • Line 581: function 'ScaledDotProductAttention.get_folded_output_shape'
    • Line 592: function 'ScaledDotProductAttention.get_folded_attention_shape'
    • Line 602: function 'ScaledDotProductAttention.get_instream_width'
    • Line 612: function 'ScaledDotProductAttention.get_outstream_width'
    • Line 622: function 'ScaledDotProductAttention.minimize_accumulator_width'
    • Line 709: function 'ScaledDotProductAttention.get_number_input_values'
    • Line 716: function 'ScaledDotProductAttention.get_number_output_values'
    • Line 726: function 'ScaledDotProductAttention.get_input_name_by_name'
    • Line 765: function 'ScaledDotProductAttention.get_exp_cycles'

📄 src/finn/interface/run_finn.py:

    • Line 1: module 'run_finn.py'
    • Line 37: function '_resolve_module_path'
    • Line 128: function 'main_group'
    • Line 181: function 'build'
    • Line 284: function 'run'
    • Line 317: function 'bench'
    • Line 348: function 'test'
    • Line 361: function 'deps'
    • Line 373: function 'update'
    • Line 379: function 'config'
    • Line 384: function '_command_get_settings'
    • Line 394: function 'config_list'
    • Line 402: function 'config_get'
    • Line 413: function 'config_set'
    • Line 433: function 'config_create'
    • Line 447: function 'main'

📄 src/finn/transformation/fpgadataflow/attention_heads.py:

    • Line 1: module 'attention_heads.py'
    • Line 44: class 'InferMultiHeads'
    • Line 46: function 'InferMultiHeads.apply'
    • Line 342: class 'MoveSplitMultiHeadsPastMultiThreshold'
    • Line 344: function 'MoveSplitMultiHeadsPastMultiThreshold.apply'
    • Line 457: class 'MoveMergeMultiHeadsPastMultiThreshold'
    • Line 459: function 'MoveMergeMultiHeadsPastMultiThreshold.apply'
    • Line 585: function 'is_multi_head_attention'
    • Line 615: class 'UnrollMultiHeadAttention'
    • Line 617: function 'UnrollMultiHeadAttention.apply'

📄 src/finn/transformation/streamline/reorder.py:

    • Line 1: module 'reorder.py'
    • Line 56: function 'MoveAddPastMul.apply'
    • Line 120: function 'MoveScalarMulPastMatMul.apply'
    • Line 177: function 'MoveScalarAddPastMatMul.apply'
    • Line 234: function 'MoveAddPastConv.apply'
    • Line 312: function 'MoveScalarMulPastConv.apply'
    • Line 361: function 'MoveScalarMulPastConvTranspose.apply'
    • Line 410: function 'MoveMulPastDWConv.apply'
    • Line 472: function 'MoveMulPastMaxPool.apply'
    • Line 543: function 'MoveLinearPastEltwiseAdd.move_node'
    • Line 563: function 'MoveLinearPastEltwiseAdd.apply'
    • Line 652: function 'MoveScalarLinearPastInvariants.apply'
    • Line 731: function 'MakeMaxPoolNHWC.apply'
    • Line 805: function 'MakeScaleResizeNHWC.apply'
    • Line 906: function 'MoveOpPastFork.__init__'
    • Line 910: function 'MoveOpPastFork.apply'
    • Line 974: class 'MoveAddPastFork'
    • Line 975: function 'MoveAddPastFork.__init__'
    • Line 979: class 'MoveMulPastFork'
    • Line 980: function 'MoveMulPastFork.__init__'
    • Line 984: class 'MoveLinearPastFork'
    • Line 985: function 'MoveLinearPastFork.__init__'
    • Line 989: class 'MoveTransposePastFork'
    • Line 990: function 'MoveTransposePastFork.__init__'
    • Line 994: function 'permute_shape'
    • Line 1006: function 'MoveScalarLinearPastSplit.__init__'
    • Line 1011: function 'MoveScalarLinearPastSplit.apply'
    • Line 1057: class 'MoveTransposePastSplit'
    • Line 1058: function 'MoveTransposePastSplit.__init__'
    • Line 1063: function 'MoveTransposePastSplit.apply'
    • Line 1110: function 'MoveMaxPoolPastMultiThreshold.apply'
    • Line 1173: function 'MoveFlattenPastTopK.apply'
    • Line 1233: function 'MoveFlattenPastAffine.apply'
    • Line 1319: function 'MoveTransposePastScalarMul.apply'
    • Line 1381: function 'MoveIdenticalOpPastJoinOp.__init__'
    • Line 1425: function 'MoveIdenticalOpPastJoinOp.apply'
    • Line 1460: class 'MoveTransposePastJoinAdd'
    • Line 1461: function 'MoveTransposePastJoinAdd.__init__'
    • Line 1464: function 'MoveTransposePastJoinAdd.are_producers_identical'
    • Line 1474: class 'MoveTransposePastJoinMul'
    • Line 1475: function 'MoveTransposePastJoinMul.__init__'
    • Line 1478: function 'MoveTransposePastJoinMul.are_producers_identical'
    • Line 1488: class 'MoveMulPastJoinAdd'
    • Line 1489: function 'MoveMulPastJoinAdd.__init__'
    • Line 1492: function 'MoveMulPastJoinAdd.are_producers_identical'
    • Line 1504: class 'MoveAddPastJoinAdd'
    • Line 1505: function 'MoveAddPastJoinAdd.__init__'
    • Line 1508: function 'MoveAddPastJoinAdd.are_producers_identical'
    • Line 1529: class 'MoveTransposePastJoinConcat'
    • Line 1530: function 'MoveTransposePastJoinConcat.__init__'
    • Line 1533: function 'MoveTransposePastJoinConcat.are_producers_identical'
    • Line 1542: function 'MoveTransposePastJoinConcat.move_node'
    • Line 1579: function 'MoveAffinePastJoinConcat.__init__'
    • Line 1582: function 'MoveAffinePastJoinConcat.are_producers_identical_scalar_ops'
    • Line 1591: function 'MoveAffinePastJoinConcat.are_producers_channelwise_ops'
    • Line 1604: function 'MoveAffinePastJoinConcat.move_node'
    • Line 1654: class 'MoveMulPastJoinConcat'
    • Line 1655: function 'MoveMulPastJoinConcat.__init__'
    • Line 1659: class 'MoveAddPastJoinConcat'
    • Line 1660: function 'MoveAddPastJoinConcat.__init__'
    • Line 1666: class 'MoveSqueezePastMultiThreshold'
    • Line 1668: function 'MoveSqueezePastMultiThreshold.apply'
    • Line 1728: class 'MoveSqueezePastMatMul'
    • Line 1730: function 'MoveSqueezePastMatMul.apply'
    • Line 1792: class 'MoveTransposePastEltwise'
    • Line 1794: function 'MoveTransposePastEltwise.apply'
    • Line 1897: class 'MoveAddPastMatMul'
    • Line 1899: function 'MoveAddPastMatMul.apply'
    • Line 1999: class 'MoveConstMulPastJoinMul'
    • Line 2001: function 'MoveConstMulPastJoinMul.apply'
    • Line 2076: class 'MoveMulPastAdd'
    • Line 2078: function 'MoveMulPastAdd.apply'
    • Line 2159: class 'MoveScalarLinearPastFork'
    • Line 2161: function 'MoveScalarLinearPastFork.apply'
    • Line 2213: class 'MoveChannelwiseLinearPastFork'
    • Line 2215: function 'MoveChannelwiseLinearPastFork.apply'
    • Line 2264: function 'MoveChannelwiseLinearPastFork.can_broadcast_to'
    • Line 2319: class 'MoveScalesPastIm2Col'
    • Line 2321: function 'MoveScalesPastIm2Col.apply'

Total missing docstrings: 139

How to Fix:

Please add docstrings to the missing functions, classes, and modules listed above.

Docstring Guidelines:

  • All modules should have a module-level docstring
  • All public functions and methods should have docstrings
  • All private functions should have docstrings
  • All classes should have docstrings
  • Use triple quotes (""") for docstrings
  • Follow PEP 257 conventions
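
For illustration, a minimal sketch of the kind of docstrings the checker expects, using the flagged softmax helper and ScaledDotProductAttention.__init__ from attention.py as examples; the signatures below are placeholders and do not reflect the actual code in that file:

"""Hypothetical module docstring for the scaled dot-product attention custom op."""

import numpy as np


def softmax(x, axis=-1):
    """Compute a numerically stable softmax of x along the given axis."""
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / np.sum(e, axis=axis, keepdims=True)


class ScaledDotProductAttention:
    """Hardware abstraction of a scaled dot-product attention operator."""

    def __init__(self, onnx_node, **kwargs):
        """Remember the wrapped ONNX node (placeholder signature for illustration)."""
        self.onnx_node = onnx_node
        self.kwargs = kwargs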

@github-actions

📋 Docstring Check Report

Checked files:

  • src/finn/builder/build_dataflow_config.py
  • src/finn/builder/build_dataflow_steps.py
  • src/finn/builder/passes.py
  • src/finn/custom_op/fpgadataflow/attention.py
  • src/finn/custom_op/fpgadataflow/hls/attention_hls.py
  • src/finn/custom_op/fpgadataflow/hls/pool_hls.py
  • src/finn/custom_op/fpgadataflow/pool.py
  • src/finn/custom_op/fpgadataflow/reshape.py
  • src/finn/interface/run_finn.py
  • src/finn/transformation/fpgadataflow/attention_heads.py
  • src/finn/transformation/fpgadataflow/convert_to_hw_layers.py
  • src/finn/transformation/streamline/__init__.py
  • src/finn/transformation/streamline/reorder.py
  • src/finn/transformation/streamline/round_thresholds.py

Docstring check failed!

Missing Docstrings Details:

📄 src/finn/custom_op/fpgadataflow/attention.py:

    • Line 1: module 'attention.py'
    • Line 30: function 'softmax'
    • Line 48: class 'ScaledDotProductAttention'
    • Line 50: function 'ScaledDotProductAttention.__init__'
    • Line 55: function 'ScaledDotProductAttention.get_nodeattr_types'
    • Line 171: function 'ScaledDotProductAttention.shapes'
    • Line 179: function 'ScaledDotProductAttention.folds'
    • Line 187: function 'ScaledDotProductAttention.is_valid_folding'
    • Line 199: function 'ScaledDotProductAttention.iterations'
    • Line 206: function 'ScaledDotProductAttention.make_shape_compatible_op'
    • Line 225: function 'ScaledDotProductAttention.infer_node_datatype'
    • Line 286: function 'ScaledDotProductAttention._execute_node_python'
    • Line 298: function 'ScaledDotProductAttention.act_qk_matmul'
    • Line 315: function 'ScaledDotProductAttention.act_a_softmax'
    • Line 333: function 'ScaledDotProductAttention.act_av_matmul'
    • Line 409: function 'ScaledDotProductAttention._execute_node_cppsim'
    • Line 416: function 'ScaledDotProductAttention._execute_node_rtlsim'
    • Line 423: function 'ScaledDotProductAttention.execute_node'
    • Line 436: function 'ScaledDotProductAttention.verify_node'
    • Line 440: function 'ScaledDotProductAttention.get_input_datatype'
    • Line 479: function 'ScaledDotProductAttention.get_output_datatype'
    • Line 486: function 'ScaledDotProductAttention.get_normal_input_shape'
    • Line 535: function 'ScaledDotProductAttention.get_normal_output_shape'
    • Line 543: function 'ScaledDotProductAttention.get_normal_attention_shape'
    • Line 549: function 'ScaledDotProductAttention.get_folded_input_shape'
    • Line 581: function 'ScaledDotProductAttention.get_folded_output_shape'
    • Line 592: function 'ScaledDotProductAttention.get_folded_attention_shape'
    • Line 602: function 'ScaledDotProductAttention.get_instream_width'
    • Line 612: function 'ScaledDotProductAttention.get_outstream_width'
    • Line 622: function 'ScaledDotProductAttention.minimize_accumulator_width'
    • Line 709: function 'ScaledDotProductAttention.get_number_input_values'
    • Line 716: function 'ScaledDotProductAttention.get_number_output_values'
    • Line 726: function 'ScaledDotProductAttention.get_input_name_by_name'
    • Line 765: function 'ScaledDotProductAttention.get_exp_cycles'

📄 src/finn/interface/run_finn.py:

    • Line 1: module 'run_finn.py'
    • Line 37: function '_resolve_module_path'
    • Line 128: function 'main_group'
    • Line 181: function 'build'
    • Line 284: function 'run'
    • Line 317: function 'bench'
    • Line 348: function 'test'
    • Line 361: function 'deps'
    • Line 373: function 'update'
    • Line 379: function 'config'
    • Line 384: function '_command_get_settings'
    • Line 394: function 'config_list'
    • Line 402: function 'config_get'
    • Line 413: function 'config_set'
    • Line 433: function 'config_create'
    • Line 447: function 'main'

📄 src/finn/transformation/fpgadataflow/attention_heads.py:

    • Line 1: module 'attention_heads.py'
    • Line 44: class 'InferMultiHeads'
    • Line 46: function 'InferMultiHeads.apply'
    • Line 342: class 'MoveSplitMultiHeadsPastMultiThreshold'
    • Line 344: function 'MoveSplitMultiHeadsPastMultiThreshold.apply'
    • Line 457: class 'MoveMergeMultiHeadsPastMultiThreshold'
    • Line 459: function 'MoveMergeMultiHeadsPastMultiThreshold.apply'
    • Line 585: function 'is_multi_head_attention'
    • Line 615: class 'UnrollMultiHeadAttention'
    • Line 617: function 'UnrollMultiHeadAttention.apply'

📄 src/finn/transformation/streamline/reorder.py:

    • Line 1: module 'reorder.py'
    • Line 56: function 'MoveAddPastMul.apply'
    • Line 120: function 'MoveScalarMulPastMatMul.apply'
    • Line 177: function 'MoveScalarAddPastMatMul.apply'
    • Line 234: function 'MoveAddPastConv.apply'
    • Line 312: function 'MoveScalarMulPastConv.apply'
    • Line 361: function 'MoveScalarMulPastConvTranspose.apply'
    • Line 410: function 'MoveMulPastDWConv.apply'
    • Line 472: function 'MoveMulPastMaxPool.apply'
    • Line 543: function 'MoveLinearPastEltwiseAdd.move_node'
    • Line 563: function 'MoveLinearPastEltwiseAdd.apply'
    • Line 652: function 'MoveScalarLinearPastInvariants.apply'
    • Line 731: function 'MakeMaxPoolNHWC.apply'
    • Line 805: function 'MakeScaleResizeNHWC.apply'
    • Line 906: function 'MoveOpPastFork.__init__'
    • Line 910: function 'MoveOpPastFork.apply'
    • Line 974: class 'MoveAddPastFork'
    • Line 975: function 'MoveAddPastFork.__init__'
    • Line 979: class 'MoveMulPastFork'
    • Line 980: function 'MoveMulPastFork.__init__'
    • Line 984: class 'MoveLinearPastFork'
    • Line 985: function 'MoveLinearPastFork.__init__'
    • Line 989: class 'MoveTransposePastFork'
    • Line 990: function 'MoveTransposePastFork.__init__'
    • Line 994: function 'permute_shape'
    • Line 1006: function 'MoveScalarLinearPastSplit.__init__'
    • Line 1011: function 'MoveScalarLinearPastSplit.apply'
    • Line 1057: class 'MoveTransposePastSplit'
    • Line 1058: function 'MoveTransposePastSplit.__init__'
    • Line 1063: function 'MoveTransposePastSplit.apply'
    • Line 1110: function 'MoveMaxPoolPastMultiThreshold.apply'
    • Line 1173: function 'MoveFlattenPastTopK.apply'
    • Line 1233: function 'MoveFlattenPastAffine.apply'
    • Line 1319: function 'MoveTransposePastScalarMul.apply'
    • Line 1381: function 'MoveIdenticalOpPastJoinOp.__init__'
    • Line 1425: function 'MoveIdenticalOpPastJoinOp.apply'
    • Line 1460: class 'MoveTransposePastJoinAdd'
    • Line 1461: function 'MoveTransposePastJoinAdd.__init__'
    • Line 1464: function 'MoveTransposePastJoinAdd.are_producers_identical'
    • Line 1474: class 'MoveTransposePastJoinMul'
    • Line 1475: function 'MoveTransposePastJoinMul.__init__'
    • Line 1478: function 'MoveTransposePastJoinMul.are_producers_identical'
    • Line 1488: class 'MoveMulPastJoinAdd'
    • Line 1489: function 'MoveMulPastJoinAdd.__init__'
    • Line 1492: function 'MoveMulPastJoinAdd.are_producers_identical'
    • Line 1504: class 'MoveAddPastJoinAdd'
    • Line 1505: function 'MoveAddPastJoinAdd.__init__'
    • Line 1508: function 'MoveAddPastJoinAdd.are_producers_identical'
    • Line 1529: class 'MoveTransposePastJoinConcat'
    • Line 1530: function 'MoveTransposePastJoinConcat.__init__'
    • Line 1533: function 'MoveTransposePastJoinConcat.are_producers_identical'
    • Line 1542: function 'MoveTransposePastJoinConcat.move_node'
    • Line 1579: function 'MoveAffinePastJoinConcat.__init__'
    • Line 1582: function 'MoveAffinePastJoinConcat.are_producers_identical_scalar_ops'
    • Line 1591: function 'MoveAffinePastJoinConcat.are_producers_channelwise_ops'
    • Line 1604: function 'MoveAffinePastJoinConcat.move_node'
    • Line 1654: class 'MoveMulPastJoinConcat'
    • Line 1655: function 'MoveMulPastJoinConcat.__init__'
    • Line 1659: class 'MoveAddPastJoinConcat'
    • Line 1660: function 'MoveAddPastJoinConcat.__init__'
    • Line 1666: class 'MoveSqueezePastMultiThreshold'
    • Line 1668: function 'MoveSqueezePastMultiThreshold.apply'
    • Line 1728: class 'MoveSqueezePastMatMul'
    • Line 1730: function 'MoveSqueezePastMatMul.apply'
    • Line 1792: class 'MoveTransposePastEltwise'
    • Line 1794: function 'MoveTransposePastEltwise.apply'
    • Line 1897: class 'MoveAddPastMatMul'
    • Line 1899: function 'MoveAddPastMatMul.apply'
    • Line 1999: class 'MoveConstMulPastJoinMul'
    • Line 2001: function 'MoveConstMulPastJoinMul.apply'
    • Line 2076: class 'MoveMulPastAdd'
    • Line 2078: function 'MoveMulPastAdd.apply'
    • Line 2159: class 'MoveScalarLinearPastFork'
    • Line 2161: function 'MoveScalarLinearPastFork.apply'
    • Line 2213: class 'MoveChannelwiseLinearPastFork'
    • Line 2215: function 'MoveChannelwiseLinearPastFork.apply'
    • Line 2264: function 'MoveChannelwiseLinearPastFork.can_broadcast_to'
    • Line 2319: class 'MoveScalesPastIm2Col'
    • Line 2321: function 'MoveScalesPastIm2Col.apply'

Total missing docstrings: 139

How to Fix:

Please add docstrings to the missing functions, classes, and modules listed above.

Docstring Guidelines:

  • All modules should have a module-level docstring
  • All public functions and methods should have docstrings
  • All private functions should have docstrings
  • All classes should have docstrings
  • Use triple quotes (""") for docstrings
  • Follow PEP 257 conventions


@github-actions

📋 Docstring Check Report

Checked files:

  • src/finn/builder/build_dataflow_config.py
  • src/finn/builder/build_dataflow_steps.py
  • src/finn/builder/passes.py
  • src/finn/custom_op/fpgadataflow/attention.py
  • src/finn/custom_op/fpgadataflow/hls/attention_hls.py
  • src/finn/custom_op/fpgadataflow/hls/pool_hls.py
  • src/finn/custom_op/fpgadataflow/pool.py
  • src/finn/custom_op/fpgadataflow/reshape.py
  • src/finn/interface/run_finn.py
  • src/finn/transformation/fpgadataflow/attention_heads.py
  • src/finn/transformation/fpgadataflow/convert_to_hw_layers.py
  • src/finn/transformation/streamline/__init__.py
  • src/finn/transformation/streamline/reorder.py
  • src/finn/transformation/streamline/round_thresholds.py

Docstring check failed!

Missing Docstrings Details:

📄 src/finn/custom_op/fpgadataflow/attention.py:

    • Line 1: module 'attention.py'
    • Line 30: function 'softmax'
    • Line 48: class 'ScaledDotProductAttention'
    • Line 50: function 'ScaledDotProductAttention.__init__'
    • Line 55: function 'ScaledDotProductAttention.get_nodeattr_types'
    • Line 171: function 'ScaledDotProductAttention.shapes'
    • Line 179: function 'ScaledDotProductAttention.folds'
    • Line 187: function 'ScaledDotProductAttention.is_valid_folding'
    • Line 199: function 'ScaledDotProductAttention.iterations'
    • Line 206: function 'ScaledDotProductAttention.make_shape_compatible_op'
    • Line 225: function 'ScaledDotProductAttention.infer_node_datatype'
    • Line 286: function 'ScaledDotProductAttention._execute_node_python'
    • Line 298: function 'ScaledDotProductAttention.act_qk_matmul'
    • Line 315: function 'ScaledDotProductAttention.act_a_softmax'
    • Line 333: function 'ScaledDotProductAttention.act_av_matmul'
    • Line 409: function 'ScaledDotProductAttention._execute_node_cppsim'
    • Line 416: function 'ScaledDotProductAttention._execute_node_rtlsim'
    • Line 423: function 'ScaledDotProductAttention.execute_node'
    • Line 436: function 'ScaledDotProductAttention.verify_node'
    • Line 440: function 'ScaledDotProductAttention.get_input_datatype'
    • Line 479: function 'ScaledDotProductAttention.get_output_datatype'
    • Line 486: function 'ScaledDotProductAttention.get_normal_input_shape'
    • Line 535: function 'ScaledDotProductAttention.get_normal_output_shape'
    • Line 543: function 'ScaledDotProductAttention.get_normal_attention_shape'
    • Line 549: function 'ScaledDotProductAttention.get_folded_input_shape'
    • Line 581: function 'ScaledDotProductAttention.get_folded_output_shape'
    • Line 592: function 'ScaledDotProductAttention.get_folded_attention_shape'
    • Line 602: function 'ScaledDotProductAttention.get_instream_width'
    • Line 612: function 'ScaledDotProductAttention.get_outstream_width'
    • Line 622: function 'ScaledDotProductAttention.minimize_accumulator_width'
    • Line 709: function 'ScaledDotProductAttention.get_number_input_values'
    • Line 716: function 'ScaledDotProductAttention.get_number_output_values'
    • Line 726: function 'ScaledDotProductAttention.get_input_name_by_name'
    • Line 765: function 'ScaledDotProductAttention.get_exp_cycles'

📄 src/finn/interface/run_finn.py:

    • Line 1: module 'run_finn.py'
    • Line 37: function '_resolve_module_path'
    • Line 128: function 'main_group'
    • Line 181: function 'build'
    • Line 284: function 'run'
    • Line 317: function 'bench'
    • Line 348: function 'test'
    • Line 361: function 'deps'
    • Line 373: function 'update'
    • Line 379: function 'config'
    • Line 384: function '_command_get_settings'
    • Line 394: function 'config_list'
    • Line 402: function 'config_get'
    • Line 413: function 'config_set'
    • Line 433: function 'config_create'
    • Line 447: function 'main'

📄 src/finn/transformation/fpgadataflow/attention_heads.py:

    • Line 1: module 'attention_heads.py'
    • Line 44: class 'InferMultiHeads'
    • Line 46: function 'InferMultiHeads.apply'
    • Line 342: class 'MoveSplitMultiHeadsPastMultiThreshold'
    • Line 344: function 'MoveSplitMultiHeadsPastMultiThreshold.apply'
    • Line 457: class 'MoveMergeMultiHeadsPastMultiThreshold'
    • Line 459: function 'MoveMergeMultiHeadsPastMultiThreshold.apply'
    • Line 585: function 'is_multi_head_attention'
    • Line 615: class 'UnrollMultiHeadAttention'
    • Line 617: function 'UnrollMultiHeadAttention.apply'

📄 src/finn/transformation/streamline/reorder.py:

    • Line 1: module 'reorder.py'
    • Line 56: function 'MoveAddPastMul.apply'
    • Line 120: function 'MoveScalarMulPastMatMul.apply'
    • Line 177: function 'MoveScalarAddPastMatMul.apply'
    • Line 234: function 'MoveAddPastConv.apply'
    • Line 312: function 'MoveScalarMulPastConv.apply'
    • Line 361: function 'MoveScalarMulPastConvTranspose.apply'
    • Line 410: function 'MoveMulPastDWConv.apply'
    • Line 472: function 'MoveMulPastMaxPool.apply'
    • Line 543: function 'MoveLinearPastEltwiseAdd.move_node'
    • Line 563: function 'MoveLinearPastEltwiseAdd.apply'
    • Line 652: function 'MoveScalarLinearPastInvariants.apply'
    • Line 731: function 'MakeMaxPoolNHWC.apply'
    • Line 805: function 'MakeScaleResizeNHWC.apply'
    • Line 906: function 'MoveOpPastFork.__init__'
    • Line 910: function 'MoveOpPastFork.apply'
    • Line 974: class 'MoveAddPastFork'
    • Line 975: function 'MoveAddPastFork.__init__'
    • Line 979: class 'MoveMulPastFork'
    • Line 980: function 'MoveMulPastFork.__init__'
    • Line 984: class 'MoveLinearPastFork'
    • Line 985: function 'MoveLinearPastFork.__init__'
    • Line 989: class 'MoveTransposePastFork'
    • Line 990: function 'MoveTransposePastFork.__init__'
    • Line 994: function 'permute_shape'
    • Line 1006: function 'MoveScalarLinearPastSplit.__init__'
    • Line 1011: function 'MoveScalarLinearPastSplit.apply'
    • Line 1057: class 'MoveTransposePastSplit'
    • Line 1058: function 'MoveTransposePastSplit.__init__'
    • Line 1063: function 'MoveTransposePastSplit.apply'
    • Line 1110: function 'MoveMaxPoolPastMultiThreshold.apply'
    • Line 1173: function 'MoveFlattenPastTopK.apply'
    • Line 1233: function 'MoveFlattenPastAffine.apply'
    • Line 1319: function 'MoveTransposePastScalarMul.apply'
    • Line 1381: function 'MoveIdenticalOpPastJoinOp.__init__'
    • Line 1425: function 'MoveIdenticalOpPastJoinOp.apply'
    • Line 1460: class 'MoveTransposePastJoinAdd'
    • Line 1461: function 'MoveTransposePastJoinAdd.__init__'
    • Line 1464: function 'MoveTransposePastJoinAdd.are_producers_identical'
    • Line 1474: class 'MoveTransposePastJoinMul'
    • Line 1475: function 'MoveTransposePastJoinMul.__init__'
    • Line 1478: function 'MoveTransposePastJoinMul.are_producers_identical'
    • Line 1488: class 'MoveMulPastJoinAdd'
    • Line 1489: function 'MoveMulPastJoinAdd.__init__'
    • Line 1492: function 'MoveMulPastJoinAdd.are_producers_identical'
    • Line 1504: class 'MoveAddPastJoinAdd'
    • Line 1505: function 'MoveAddPastJoinAdd.__init__'
    • Line 1508: function 'MoveAddPastJoinAdd.are_producers_identical'
    • Line 1529: class 'MoveTransposePastJoinConcat'
    • Line 1530: function 'MoveTransposePastJoinConcat.__init__'
    • Line 1533: function 'MoveTransposePastJoinConcat.are_producers_identical'
    • Line 1542: function 'MoveTransposePastJoinConcat.move_node'
    • Line 1579: function 'MoveAffinePastJoinConcat.__init__'
    • Line 1582: function 'MoveAffinePastJoinConcat.are_producers_identical_scalar_ops'
    • Line 1591: function 'MoveAffinePastJoinConcat.are_producers_channelwise_ops'
    • Line 1604: function 'MoveAffinePastJoinConcat.move_node'
    • Line 1654: class 'MoveMulPastJoinConcat'
    • Line 1655: function 'MoveMulPastJoinConcat.__init__'
    • Line 1659: class 'MoveAddPastJoinConcat'
    • Line 1660: function 'MoveAddPastJoinConcat.__init__'
    • Line 1666: class 'MoveSqueezePastMultiThreshold'
    • Line 1668: function 'MoveSqueezePastMultiThreshold.apply'
    • Line 1728: class 'MoveSqueezePastMatMul'
    • Line 1730: function 'MoveSqueezePastMatMul.apply'
    • Line 1792: class 'MoveTransposePastEltwise'
    • Line 1794: function 'MoveTransposePastEltwise.apply'
    • Line 1897: class 'MoveAddPastMatMul'
    • Line 1899: function 'MoveAddPastMatMul.apply'
    • Line 1999: class 'MoveConstMulPastJoinMul'
    • Line 2001: function 'MoveConstMulPastJoinMul.apply'
    • Line 2076: class 'MoveMulPastAdd'
    • Line 2078: function 'MoveMulPastAdd.apply'
    • Line 2159: class 'MoveScalarLinearPastFork'
    • Line 2161: function 'MoveScalarLinearPastFork.apply'
    • Line 2213: class 'MoveChannelwiseLinearPastFork'
    • Line 2215: function 'MoveChannelwiseLinearPastFork.apply'
    • Line 2264: function 'MoveChannelwiseLinearPastFork.can_broadcast_to'
    • Line 2319: class 'MoveScalesPastIm2Col'
    • Line 2321: function 'MoveScalesPastIm2Col.apply'

Total missing docstrings: 139

How to Fix:

Please add docstrings to the missing functions, classes, and modules listed above (a minimal example is sketched after the guidelines below).

Docstring Guidelines:

  • All modules should have a module-level docstring
  • All public functions and methods should have docstrings
  • All private functions should have docstrings
  • All classes should have docstrings
  • Use triple quotes (""") for docstrings
  • Follow PEP 257 conventions
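
For reference, the sketch below shows the expected style on an invented module: a module-level docstring, a class docstring, and docstrings on __init__ and apply. The class name, signature, and return convention are illustrative assumptions only, not the actual FINN API.

    """Illustrative module docstring: one-line summary of what the module provides.

    An extended description may follow after a blank line, per PEP 257.
    """


    class MoveExampleOpPastFork:
        """Illustrative rewrite that moves an example op past a fork node."""

        def __init__(self, op_name="Add"):
            """Remember which operator type this rewrite should match."""
            self.op_name = op_name

        def apply(self, model):
            """Apply the rewrite to model and report whether the graph changed."""
            graph_modified = False
            # The graph-rewriting logic of the real transformations would go
            # here; this sketch only demonstrates docstring placement.
            return model, graph_modified

The same one-line summary in triple quotes, placed at the top of each flagged file, covers the module-level entries in the list above.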

