Commit a10884c (1 parent: c6c1599)

[TMVA] fix some inconsistencies and warnings found by doxygen

Fixes several doxygen warnings in TMVA.

12 files changed, +75 −58 lines

tmva/sofie/README.md

+39 −28

@@ -18,57 +18,68 @@ Build ROOT with the cmake option tmva-sofie enabled.
 cmake ../root -Dtmva-sofie=ON
 make -j8
 ```
-
+
 ## Usage
 SOFIE works in a parser-generator working architecture. With SOFIE, the user gets an [ONNX](https://github.com/root-project/root/tree/master/tmva/sofie_parsers), [Keras](https://github.com/root-project/root/blob/master/tmva/pymva/src/RModelParser_Keras.cxx) and a [PyTorch](https://github.com/root-project/root/blob/master/tmva/pymva/src/RModelParser_PyTorch.cxx) parser for translating models in respective formats into SOFIE's internal representation.

 From ROOT command line, or in a ROOT macro, we can proceed with an ONNX model:

-using namespace TMVA::Experimental;
-SOFIE::RModelParser_ONNX parser;
-SOFIE::RModel model = parser.Parse(“./example_model.onnx”);
-model.Generate();
-model.OutputGenerated(“./example_output.hxx”);
+```c++
+using namespace TMVA::Experimental;
+SOFIE::RModelParser_ONNX parser;
+SOFIE::RModel model = parser.Parse(“./example_model.onnx”);
+model.Generate();
+model.OutputGenerated(“./example_output.hxx”);
+```

 And an C++ header file and a `.dat` file containing the model weights will be generated. You can also use

-model.PrintRequiredInputTensors();
+```c++
+model.PrintRequiredInputTensors();
+```

 to check the required size and type of input tensor for that particular model, and use

-model.PrintInitializedTensors();
+```c++
+model.PrintInitializedTensors();
+```

 to check the tensors (weights) already included in the model.

 To use the generated inference code:

-#include "example_output.hxx"
-float input[INPUT_SIZE];
+```c++
+#include "example_output.hxx"
+float input[INPUT_SIZE];
+std::vector<float> out = TMVA_SOFIE_example_model::infer(input);

-// Generated header file shall contain a Session class which requires initialization to load the corresponding weights.
-TMVA_SOFIE_example_model::Session s("example_model.dat")
+// Generated header file shall contain a Session class which requires initialization to load the corresponding weights.
+TMVA_SOFIE_example_model::Session s("example_model.dat")

-// Once instantiated the session object's infer method can be used
-std::vector<float> out = s.infer(input);
+// Once instantiated the session object's infer method can be used
+std::vector<float> out = s.infer(input);
+```
+
+With the default settings, the weights are contained in a separate binary file, but if the user instead wants them to be in the generated header file itself, they can use approproiate generation options.

-With the default settings, the weights are contained in a separate binary file, but if the user instead wants them to be in the generated header file itself, they can use approproiate generation options.
-
-model.Generate(Options::kNoWeightFile);
+```c++
+model.Generate(Options::kNoWeightFile);
+```

-Other such options includes `Options::kNoSession` (for not generating the Session class, and instead keeping the infer function independent).
+Other such options includes `Options::kNoSession` (for not generating the Session class, and instead keeping the infer function independent).
 SOFIE also supports generating inference code with RDataFrame as inputs, refer to the tutorials below for examples.

-
+
 ## Additional Links

 - **Tutorials**
-  - [TMVA_SOFIE_Inference](https://github.com/root-project/root/blob/master/tutorials/tmva/TMVA_SOFIE_Inference.py)
-  - [TMVA_SOFIE_Keras](https://github.com/root-project/root/blob/master/tutorials/tmva/TMVA_SOFIE_Keras.C)
-  - [TMVA_SOFIE_Keras_HiggsModel](https://github.com/root-project/root/blob/master/tutorials/tmva/TMVA_SOFIE_Keras_HiggsModel.C)
-  - [TMVA_SOFIE_ONNX](https://github.com/root-project/root/blob/master/tutorials/tmva/TMVA_SOFIE_ONNX.C)
-  - [TMVA_SOFIE_PyTorch](https://github.com/root-project/root/blob/master/tutorials/tmva/TMVA_SOFIE_PyTorch.C)
-  - [TMVA_SOFIE_RDataFrame](https://github.com/root-project/root/blob/master/tutorials/tmva/TMVA_SOFIE_RDataFrame.C)
-  - [TMVA_SOFIE_RDataFrame](https://github.com/root-project/root/blob/master/tutorials/tmva/TMVA_SOFIE_RDataFrame.py)
-  - [TMVA_SOFIE_RDataFrame_JIT](https://github.com/root-project/root/blob/master/tutorials/tmva/TMVA_SOFIE_RDataFrame_JIT.C)
-  - [TMVA_SOFIE_RSofieReader](https://github.com/root-project/root/blob/master/tutorials/tmva/TMVA_SOFIE_RSofieReader.C)
+  - [TMVA_SOFIE_Inference](https://github.com/root-project/root/blob/master/tutorials/tmva/TMVA_SOFIE_Inference.py)
+  - [TMVA_SOFIE_Keras](https://github.com/root-project/root/blob/master/tutorials/tmva/TMVA_SOFIE_Keras.C)
+  - [TMVA_SOFIE_Keras_HiggsModel](https://github.com/root-project/root/blob/master/tutorials/tmva/TMVA_SOFIE_Keras_HiggsModel.C)
+  - [TMVA_SOFIE_ONNX](https://github.com/root-project/root/blob/master/tutorials/tmva/TMVA_SOFIE_ONNX.C)
+  - [TMVA_SOFIE_PyTorch](https://github.com/root-project/root/blob/master/tutorials/tmva/TMVA_SOFIE_PyTorch.C)
+  - [TMVA_SOFIE_RDataFrame](https://github.com/root-project/root/blob/master/tutorials/tmva/TMVA_SOFIE_RDataFrame.C)
+  - [TMVA_SOFIE_RDataFrame](https://github.com/root-project/root/blob/master/tutorials/tmva/TMVA_SOFIE_RDataFrame.py)
+  - [TMVA_SOFIE_RDataFrame_JIT](https://github.com/root-project/root/blob/master/tutorials/tmva/TMVA_SOFIE_RDataFrame_JIT.C)
+  - [TMVA_SOFIE_RSofieReader](https://github.com/root-project/root/blob/master/tutorials/tmva/TMVA_SOFIE_RSofieReader.C)

tmva/tmva/inc/TMVA/DNN/Architectures/Cuda.h

+2 −2

@@ -399,7 +399,7 @@ class TCuda

    /** @name Regularization
     * For each regularization type two functions are required, one named
-    * <tt><Type>Regularization</tt> that evaluates the corresponding
+    * <tt>`<Type>`Regularization</tt> that evaluates the corresponding
     * regularization functional for a given weight matrix and the
     * <tt>Add`<Type>`RegularizationGradients</tt>, that adds the regularization
     * component in the gradients to the provided matrix.
@@ -424,7 +424,7 @@ class TCuda

    /** @name Initialization
     * For each initialization method, one function in the low-level interface
-    * is provided. The naming scheme is <p>Initialize<Type></p> for a given
+    * is provided. The naming scheme is <p>Initialize`<Type>`</p> for a given
     * initialization method Type.
     */
    ///@{

tmva/tmva/inc/TMVA/DNN/Architectures/Reference.h

+4 −4

@@ -316,10 +316,10 @@ class TReference
   //____________________________________________________________________________

   /** @name Regularization
-   * For each regularization type two functions are required, one named
-   * <tt><Type>Regularization</tt> that evaluates the corresponding
+   * For each regularization type, two functions are required, one named
+   * `<Type>Regularization` that evaluates the corresponding
    * regularization functional for a given weight matrix and the
-   * <tt>Add`<Type>`RegularizationGradients</tt>, that adds the regularization
+   * `Add<Type>RegularizationGradients`, that adds the regularization
    * component in the gradients to the provided matrix.
    */
   ///@{
@@ -342,7 +342,7 @@ class TReference

   /** @name Initialization
    * For each initialization method, one function in the low-level interface
-   * is provided. The naming scheme is <p>Initialize<Type></p> for a given
+   * is provided. The naming scheme is `Initialize<Type>` for a given
    * initialization method Type.
    */
   ///@{

tmva/tmva/inc/TMVA/Executor.h

+9 −3

@@ -46,9 +46,15 @@ class Executor {
    //////////////////////////////////////
    /// Default constructor of TMVA Executor class
    /// if ROOT::EnableImplicitMT has not been called then by default a serial executor will be created
-   /// A user can create a thread pool and enable multi-thread execution by calling TMVA::Config::Instance()::EnableMT(nthreads)
-   /// For releasing the thread pool used by TMVA one can do it by calling TMVA::Config::Instance()::DisableMT() or
-   /// calling TMVA::Config::Instance()::EnableMT() with only one thread
+   /// A user can create a thread pool and enable multi-thread excution by calling
+   ///
+   /// ~~~{.cpp}
+   /// TMVA::Config::Instance()::%EnableMT(int nthreads);
+   /// ~~~
+   ///
+   /// For releasing the thread pool used by TMVA one can do it by calling
+   ///
+   /// TMVA::Config::Instance()::%DisableMT();
    ////////////////////////////////////////////
    Executor() {
       // enable MT in TMVA if ROOT::IsImplicitMT is enabled

tmva/tmva/src/DNN/Architectures/Cpu/ActivationFunctions.hxx

+1 −1

@@ -79,7 +79,7 @@ void TCpu<AFloat>::ReluDerivative(TCpuTensor<AFloat> & B,

//______________________________________________________________________________
template<typename AFloat>
-void TCpu<AFloat>::Sigmoid(TCpuTensor<AFloat> & B)
+void TCpu<AFloat>::Sigmoid(TCpu<AFloat>::Tensor_t & B)
{
   auto f = [](AFloat x) {return 1.0 / (1.0 + exp(-x));};
   B.Map(f);

tmva/tmva/src/DNN/Architectures/Cpu/Dropout.hxx

+2 −2

@@ -23,8 +23,8 @@ namespace DNN {
template<typename AFloat>
void TCpu<AFloat>::DropoutForward(TCpuTensor<AFloat> & A,
                                  TDescriptors * /*descriptors*/,
-                                 TWorkspace * /*workspace*/,
-                                 AFloat dropoutProbability)
+                                 TWorkspace * /*workspace*/,
+                                 TCpu<AFloat>::Scalar_t dropoutProbability)
{
   AFloat *data = A.GetData();

tmva/tmva/src/DNN/Architectures/Cpu/Initialization.hxx

+2 −2

@@ -150,7 +150,7 @@ void TCpu<AFloat>::InitializeIdentity(TCpuMatrix<AFloat> & A)

//______________________________________________________________________________
template<typename AFloat>
-void TCpu<AFloat>::InitializeZero(TCpuMatrix<AFloat> & A)
+void TCpu<AFloat>::InitializeZero(TCpu<AFloat>::Matrix_t & A)
{
   size_t m,n;
   m = A.GetNrows();
@@ -164,7 +164,7 @@ void TCpu<AFloat>::InitializeZero(TCpuMatrix<AFloat> & A)
}
//______________________________________________________________________________
template <typename AFloat>
-void TCpu<AFloat>::InitializeZero(TCpuTensor<AFloat> &A)
+void TCpu<AFloat>::InitializeZero(TCpu<AFloat>::Tensor_t &A)
{
   size_t n = A.GetSize();

tmva/tmva/src/DNN/Architectures/Cpu/OutputFunctions.hxx

+2 −2

@@ -22,8 +22,8 @@ namespace DNN
{

template<typename AFloat>
-void TCpu<AFloat>::Sigmoid(TCpuMatrix<AFloat> & B,
-                           const TCpuMatrix<AFloat> & A)
+void TCpu<AFloat>::Sigmoid(TCpu<AFloat>::Matrix_t & B,
+                           const TCpu<AFloat>::Matrix_t & A)
{
   auto f = [](AFloat x) {return 1.0 / (1.0 + exp(-x));};
   B.MapFrom(f, A);

tmva/tmva/src/DNN/Architectures/Cpu/Propagation.hxx

+3 −3

@@ -30,8 +30,8 @@ namespace DNN {


template <typename AFloat>
-void TCpu<AFloat>::MultiplyTranspose(TCpuMatrix<AFloat> &output, const TCpuMatrix<AFloat> &input,
-                                     const TCpuMatrix<AFloat> &Weights)
+void TCpu<AFloat>::MultiplyTranspose(TCpu<AFloat>::Matrix_t &output, const TCpu<AFloat>::Matrix_t &input,
+                                     const TCpu<AFloat>::Matrix_t &Weights)
{

   int m = (int)input.GetNrows();
@@ -72,7 +72,7 @@ void TCpu<AFloat>::MultiplyTranspose(TCpuMatrix<AFloat> &output, const TCpuMatri
}

template <typename AFloat>
-void TCpu<AFloat>::AddRowWise(TCpuMatrix<AFloat> &output, const TCpuMatrix<AFloat> &biases)
+void TCpu<AFloat>::AddRowWise(TCpu<AFloat>::Matrix_t &output, const TCpu<AFloat>::Matrix_t &biases)
{
#ifdef R__HAS_TMVACPU
   int m = (int)output.GetNrows();

tmva/tmva/src/DNN/Architectures/Reference/ActivationFunctions.hxx

+5 −5

@@ -73,16 +73,16 @@ inline void TReference<Real_t>::ReluDerivative(TMatrixT<Real_t> & B,

//______________________________________________________________________________
template<typename Real_t>
-void TReference<Real_t>::Sigmoid(TMatrixT<Real_t> & A)
+void TReference<Real_t>::Sigmoid(TMatrixT<Real_t> & B)
{
   size_t m,n;
-   m = A.GetNrows();
-   n = A.GetNcols();
+   m = B.GetNrows();
+   n = B.GetNcols();

   for (size_t i = 0; i < m; i++) {
      for (size_t j = 0; j < n; j++) {
-         Real_t sig = 1.0 / (1.0 + std::exp(-A(i,j)));
-         A(i,j) = sig;
+         Real_t sig = 1.0 / (1.0 + std::exp(-B(i,j)));
+         B(i,j) = sig;
      }
   }
}

tmva/tmva/src/DNN/Architectures/Reference/Dropout.hxx

+5 −5

@@ -26,19 +26,19 @@ namespace DNN
//______________________________________________________________________________

template<typename Real_t>
-void TReference<Real_t>::DropoutForward(TMatrixT<Real_t> & B, TDescriptors*, TWorkspace*, Real_t dropoutProbability)
+void TReference<Real_t>::DropoutForward(TReference<Real_t>::Tensor_t & A, TDescriptors*, TWorkspace*, Real_t dropoutProbability)
{
   size_t m,n;
-   m = B.GetNrows();
-   n = B.GetNcols();
+   m = A.GetNrows();
+   n = A.GetNcols();

   for (size_t i = 0; i < m; i++) {
      for (size_t j = 0; j < n; j++) {
         Real_t r = gRandom->Uniform();
         if (r >= dropoutProbability) {
-            B(i,j) = 0.0;
+            A(i,j) = 0.0;
         } else {
-            B(i,j) /= dropoutProbability;
+            A(i,j) /= dropoutProbability;
         }
      }
   }

tmva/tmva/src/MethodPDERS.cxx

+1 −1

@@ -223,7 +223,7 @@ TMVA::MethodPDERS::~MethodPDERS( void )
/// - Unscaled
/// - RMS
/// - kNN
-/// - Adaptive <default>
+/// - Adaptive `<default>`
///
/// - KernelEstimator `<string>` Kernel estimation function
///   available values are:
