
Commit 9e0411a

Merge: Big-Endian & Documentation Patches (#291)
2 parents: 6e843db + 714c615

File tree

11 files changed: +170 −162 lines

.github/workflows/release.yml

Lines changed: 5 additions & 1 deletion

```diff
@@ -459,9 +459,13 @@ jobs:
       - name: Dry Run Publish to NPM
         if: github.ref != 'refs/heads/main'
         run: npm publish --dry-run
+        env:
+          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
       - name: Publish to NPM
         if: github.ref == 'refs/heads/main'
-        run: npm publish --access public
+        run: npm publish --provenance --access public
+        env:
+          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}

   publish_rust:
     name: Publish Rust Crate
```

README.md

Lines changed: 12 additions & 11 deletions

```diff
@@ -281,7 +281,7 @@ You can learn more about the technical implementation details in the following b
 > The code was compiled with GCC 12, using glibc v2.35.
 > The benchmarks performed on Arm-based Graviton3 AWS `c7g` instances and `r7iz` Intel Sapphire Rapids.
 > Most modern Arm-based 64-bit CPUs will have similar relative speedups.
-> Variance withing x86 CPUs will be larger.
+> Variance within x86 CPUs will be larger.

 Similar speedups are often observed even when compared to BLAS and LAPACK libraries underlying most numerical computing libraries, including NumPy and SciPy in Python.
 Broader benchmarking results:
```
````diff
@@ -299,7 +299,7 @@ The same applies to processing `float16` and `bfloat16` values with `float32` pr

 ### Installation

-Use the following snippet to install SimSIMD and list available hardware acceleration options available on your machine:
+Use the following snippet to install SimSIMD and list hardware acceleration options available on your machine:

 ```sh
 pip install simsimd
````
````diff
@@ -321,7 +321,7 @@ vec2 = np.random.randn(1536).astype(np.float32)
 dist = simsimd.cosine(vec1, vec2)
 ```

-Supported functions include `cosine`, `inner`, `sqeuclidean`, `hamming`, `jaccard`, `kulbackleibler`, `jensenshannon`, and `intersect`.
+Supported functions include `cosine`, `inner`, `sqeuclidean`, `hamming`, `jaccard`, `kullbackleibler`, `jensenshannon`, and `intersect`.
 Dot products are supported for both real and complex numbers:

 ```py
````
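For context on the hunk above: `cosine` here denotes the cosine *distance*, i.e. one minus the cosine similarity. A NumPy-only sketch of the same quantity (an illustrative helper, not the SimSIMD API):

```python
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine distance: 1 - (a . b) / (|a| * |b|)."""
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Orthogonal vectors have zero similarity, hence distance 1.0.
vec1 = np.array([1.0, 0.0, 0.0], dtype=np.float32)
vec2 = np.array([0.0, 1.0, 0.0], dtype=np.float32)
print(cosine_distance(vec1, vec2))  # -> 1.0
```

SimSIMD's kernels compute the same formula, just vectorized and without the intermediate Python objects.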
```diff
@@ -555,6 +555,7 @@ Luckily, to downcast `f32` to `bf16` you only have to drop the last 16 bits:
 import numpy as np
 import simsimd as simd

+ndim = 1536
 a = np.random.randn(ndim).astype(np.float32)
 b = np.random.randn(ndim).astype(np.float32)

```
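The trick this demo relies on is that `bfloat16` is simply the top 16 bits of an IEEE `float32`, so the downcast is a bit shift. A NumPy sketch of that truncation (helper names are illustrative):

```python
import numpy as np

def f32_to_bf16_bits(x: np.ndarray) -> np.ndarray:
    """Downcast float32 to bfloat16 by dropping the low 16 bits of each value."""
    return (x.view(np.uint32) >> 16).astype(np.uint16)

x = np.array([1.0, 3.140625, -2.5], dtype=np.float32)
bits = f32_to_bf16_bits(x)                              # raw bfloat16 bit patterns
back = (bits.astype(np.uint32) << 16).view(np.float32)  # widen back to float32
```

Values whose mantissa fits in bfloat16's 7 fraction bits (like the three above) round-trip exactly; others lose low-order precision to the truncation.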
```diff
@@ -971,8 +972,8 @@ To override compilation settings and switch between runtime and compile-time dis
 #include <simsimd/simsimd.h>

 int main() {
-    simsimd_i8_t i8[1536];
-    simsimd_i8_t u8[1536];
+    simsimd_i8_t i8s[1536];
+    simsimd_u8_t u8s[1536];
     simsimd_f64_t f64s[1536];
     simsimd_f32_t f32s[1536];
     simsimd_f16_t f16s[1536];
```
```diff
@@ -1023,10 +1024,10 @@ int main() {
     simsimd_dot_bf16(bf16s, bf16s, 1536, &product);

     // SimSIMD provides complex types with `real` and `imag` fields
-    simsimd_f64c_t f64s[768];
-    simsimd_f32c_t f32s[768];
-    simsimd_f16c_t f16s[768];
-    simsimd_bf16c_t bf16s[768];
+    simsimd_f64c_t f64cs[768];
+    simsimd_f32c_t f32cs[768];
+    simsimd_f16c_t f16cs[768];
+    simsimd_bf16c_t bf16cs[768];
     simsimd_distance_t products[2]; // real and imaginary parts

     // Complex inner product between two vectors
```
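The complex kernels above pack their result into a two-element array holding the real and imaginary parts. A NumPy sketch of the unconjugated complex inner product (whether a given SimSIMD handle conjugates an argument is not shown in this diff, so treat that as an assumption):

```python
import numpy as np

a = np.array([1 + 2j, 3 - 1j], dtype=np.complex64)
b = np.array([2 - 1j, 1 + 4j], dtype=np.complex64)

product = np.sum(a * b)                   # unconjugated: sum of a_i * b_i
products = [product.real, product.imag]   # packed like the two-element output above
```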
```diff
@@ -1103,7 +1104,7 @@ To explicitly disable half-precision support, define the following macro before
 > This flag does just that and is used to produce the `simsimd.so` shared library, as well as the Python and other bindings.

 For Arm: `SIMSIMD_TARGET_NEON`, `SIMSIMD_TARGET_SVE`, `SIMSIMD_TARGET_SVE2`, `SIMSIMD_TARGET_NEON_F16`, `SIMSIMD_TARGET_SVE_F16`, `SIMSIMD_TARGET_NEON_BF16`, `SIMSIMD_TARGET_SVE_BF16`.
-For x86: (`SIMSIMD_TARGET_HASWELL`, `SIMSIMD_TARGET_SKYLAKE`, `SIMSIMD_TARGET_ICE`, `SIMSIMD_TARGET_GENOA`, `SIMSIMD_TARGET_SAPPHIRE`, `SIMSIMD_TARGET_TURIN`, `SIMSIMD_TARGET_SIERRA`.
+For x86: `SIMSIMD_TARGET_HASWELL`, `SIMSIMD_TARGET_SKYLAKE`, `SIMSIMD_TARGET_ICE`, `SIMSIMD_TARGET_GENOA`, `SIMSIMD_TARGET_SAPPHIRE`, `SIMSIMD_TARGET_TURIN`, `SIMSIMD_TARGET_SIERRA`.

 > By default, SimSIMD automatically infers the target architecture and pre-compiles as many kernels as possible.
 > In some cases, you may want to explicitly disable some of the kernels.
```
````diff
@@ -1305,7 +1306,7 @@ Both functions are defined for non-negative numbers, and the logarithm is a key

 ### Mixed Precision in Fused-Multiply-Add and Weighted Sums

 The Fused-Multiply-Add (FMA) operation is a single operation that combines element-wise multiplication and addition with different scaling factors.
-The Weighted Sum is it's simplified variant without element-wise multiplication.
+The Weighted Sum is its simplified variant without element-wise multiplication.

 ```math
 \text{FMA}_i(A, B, C, \alpha, \beta) = \alpha \cdot A_i \cdot B_i + \beta \cdot C_i
````
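The formula above, and its weighted-sum simplification, map directly onto NumPy, which is a handy way to sanity-check the kernels:

```python
import numpy as np

def fma(a, b, c, alpha, beta):
    """FMA_i(A, B, C, alpha, beta) = alpha * A_i * B_i + beta * C_i."""
    return alpha * a * b + beta * c

def wsum(a, b, alpha, beta):
    """Weighted Sum: the FMA variant without the element-wise multiplication."""
    return alpha * a + beta * b

a = np.array([1.0, 2.0])
b = np.array([3.0, 4.0])
c = np.array([5.0, 6.0])
out = fma(a, b, c, 2.0, 0.5)  # -> [8.5, 19.0]
```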

include/simsimd/binary.h

Lines changed: 12 additions & 11 deletions

```diff
@@ -7,7 +7,8 @@
  * Contains:
  * - Bit-level Hamming distance
  * - Bit-level Jaccard distance (Tanimoto coefficient)
- * - TODO: Hamming distance for integer vectors - `u16`, `u32`
+ * - TODO: Hamming distance for integer vectors - `u32`
+ * - TODO: Jaccard distance for integer vectors - `u32` and `u32u32` count-min-sketches from StringZilla
 *
 * For hardware architectures:
 * - Arm: NEON, SVE
```
```diff
@@ -57,24 +58,24 @@ extern "C" {
 // clang-format off

 /* Serial backends for bitsets and integers. */
-SIMSIMD_PUBLIC void simsimd_hamming_b8_serial(simsimd_b8_t const* a, simsimd_b8_t const* b, simsimd_size_t n_words, simsimd_distance_t* distance);
-SIMSIMD_PUBLIC void simsimd_jaccard_b8_serial(simsimd_b8_t const* a, simsimd_b8_t const* b, simsimd_size_t n_words, simsimd_distance_t* distance);
+SIMSIMD_PUBLIC void simsimd_hamming_b8_serial(simsimd_b8_t const* a, simsimd_b8_t const* b, simsimd_size_t n_words, simsimd_distance_t* result);
+SIMSIMD_PUBLIC void simsimd_jaccard_b8_serial(simsimd_b8_t const* a, simsimd_b8_t const* b, simsimd_size_t n_words, simsimd_distance_t* result);

 /* Arm NEON backend for bitsets and integers. */
-SIMSIMD_PUBLIC void simsimd_hamming_b8_neon(simsimd_b8_t const* a, simsimd_b8_t const* b, simsimd_size_t n_words, simsimd_distance_t* distance);
-SIMSIMD_PUBLIC void simsimd_jaccard_b8_neon(simsimd_b8_t const* a, simsimd_b8_t const* b, simsimd_size_t n_words, simsimd_distance_t* distance);
+SIMSIMD_PUBLIC void simsimd_hamming_b8_neon(simsimd_b8_t const* a, simsimd_b8_t const* b, simsimd_size_t n_words, simsimd_distance_t* result);
+SIMSIMD_PUBLIC void simsimd_jaccard_b8_neon(simsimd_b8_t const* a, simsimd_b8_t const* b, simsimd_size_t n_words, simsimd_distance_t* result);

 /* Arm SVE backend for bitsets and integers. */
-SIMSIMD_PUBLIC void simsimd_hamming_b8_sve(simsimd_b8_t const* a, simsimd_b8_t const* b, simsimd_size_t n_words, simsimd_distance_t* distance);
-SIMSIMD_PUBLIC void simsimd_jaccard_b8_sve(simsimd_b8_t const* a, simsimd_b8_t const* b, simsimd_size_t n_words, simsimd_distance_t* distance);
+SIMSIMD_PUBLIC void simsimd_hamming_b8_sve(simsimd_b8_t const* a, simsimd_b8_t const* b, simsimd_size_t n_words, simsimd_distance_t* result);
+SIMSIMD_PUBLIC void simsimd_jaccard_b8_sve(simsimd_b8_t const* a, simsimd_b8_t const* b, simsimd_size_t n_words, simsimd_distance_t* result);

 /* x86 AVX2 backend for bitsets and integers for Intel Haswell CPUs and newer, needs only POPCNT extensions. */
-SIMSIMD_PUBLIC void simsimd_hamming_b8_haswell(simsimd_b8_t const* a, simsimd_b8_t const* b, simsimd_size_t n_words, simsimd_distance_t* distance);
-SIMSIMD_PUBLIC void simsimd_jaccard_b8_haswell(simsimd_b8_t const* a, simsimd_b8_t const* b, simsimd_size_t n_words, simsimd_distance_t* distance);
+SIMSIMD_PUBLIC void simsimd_hamming_b8_haswell(simsimd_b8_t const* a, simsimd_b8_t const* b, simsimd_size_t n_words, simsimd_distance_t* result);
+SIMSIMD_PUBLIC void simsimd_jaccard_b8_haswell(simsimd_b8_t const* a, simsimd_b8_t const* b, simsimd_size_t n_words, simsimd_distance_t* result);

 /* x86 AVX512 backend for bitsets and integers for Intel Ice Lake CPUs and newer, using VPOPCNTDQ extensions. */
-SIMSIMD_PUBLIC void simsimd_hamming_b8_ice(simsimd_b8_t const* a, simsimd_b8_t const* b, simsimd_size_t n_words, simsimd_distance_t* distance);
-SIMSIMD_PUBLIC void simsimd_jaccard_b8_ice(simsimd_b8_t const* a, simsimd_b8_t const* b, simsimd_size_t n_words, simsimd_distance_t* distance);
+SIMSIMD_PUBLIC void simsimd_hamming_b8_ice(simsimd_b8_t const* a, simsimd_b8_t const* b, simsimd_size_t n_words, simsimd_distance_t* result);
+SIMSIMD_PUBLIC void simsimd_jaccard_b8_ice(simsimd_b8_t const* a, simsimd_b8_t const* b, simsimd_size_t n_words, simsimd_distance_t* result);
 // clang-format on

 SIMSIMD_PUBLIC unsigned char simsimd_popcount_b8(simsimd_b8_t x) {
```
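Per the header comment, the `b8` kernels treat their inputs as packed bit vectors. A plain-Python sketch of what the Hamming and Jaccard (Tanimoto) kernels compute over each 8-bit word (the helper names and the empty-union convention are illustrative):

```python
def hamming_b8(a: bytes, b: bytes) -> int:
    """Bit-level Hamming distance: count of differing bits across all words."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def jaccard_b8(a: bytes, b: bytes) -> float:
    """Bit-level Jaccard distance: 1 - |intersection| / |union| of the set bits."""
    inter = sum(bin(x & y).count("1") for x, y in zip(a, b))
    union = sum(bin(x | y).count("1") for x, y in zip(a, b))
    return 1.0 - inter / union if union else 0.0  # convention for two empty bitsets

print(hamming_b8(b"\xff\x0f", b"\x0f\x0f"))  # -> 4
```

The SIMD backends replace the per-byte loop with POPCNT/VPOPCNTDQ-style instructions but compute the same counts.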

include/simsimd/dot.h

Lines changed: 1 addition & 1 deletion

```diff
@@ -1150,7 +1150,7 @@ SIMSIMD_PUBLIC void simsimd_dot_i8_haswell(simsimd_i8_t const *a_scalars, simsim
     // __m256i ab_i16_vec = _mm256_maddubs_epi16(a_i8_abs_vec, b_i8_flipped_vec);
     //
     // The problem with this approach, however, is the `-128` value in the second vector.
-    // Flipping it's sign will do nothing, and the result will be incorrect.
+    // Flipping its sign will do nothing, and the result will be incorrect.
     // This can easily lead to noticeable numerical errors in the final result.
     simsimd_size_t idx_scalars = 0;
     for (; idx_scalars + 32 <= count_scalars; idx_scalars += 32) {
```
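The comment in this hunk hinges on two's complement: an 8-bit integer spans −128..127, so −128 has no positive counterpart and negating it wraps back to −128, which is why the sign-flipping trick before `_mm256_maddubs_epi16` breaks on that value. A small Python sketch of the wrap (the helper is hypothetical, not SimSIMD code):

```python
def negate_i8(x: int) -> int:
    """Negate x with 8-bit two's complement wrap-around, as the hardware does."""
    return (-x + 128) % 256 - 128

print(negate_i8(5))     # -> -5, the ordinary case
print(negate_i8(-128))  # -> -128: flipping the sign of -128 does nothing
```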

include/simsimd/probability.h

Lines changed: 26 additions & 26 deletions

```diff
@@ -5,8 +5,8 @@
 * @date October 20, 2023
 *
 * Contains:
- * - Kullback-Leibler divergence
- * - Jensen–Shannon divergence
+ * - Kullback-Leibler divergence (TODO: Rename handle to `kld`)
+ * - Jensen–Shannon divergence (TODO: Rename handle to `jsd`)
 *
 * For datatypes:
 * - 32-bit floating point numbers
```
```diff
@@ -35,52 +35,52 @@ extern "C" {
 * By default they use 32-bit arithmetic, unless the arguments themselves contain 64-bit floats.
 * For double-precision computation check out the "*_accurate" variants of those "*_serial" functions.
 */
-SIMSIMD_PUBLIC void simsimd_kl_f64_serial(simsimd_f64_t const* a, simsimd_f64_t const* b, simsimd_size_t n, simsimd_distance_t* divergence);
-SIMSIMD_PUBLIC void simsimd_js_f64_serial(simsimd_f64_t const* a, simsimd_f64_t const* b, simsimd_size_t n, simsimd_distance_t* divergence);
-SIMSIMD_PUBLIC void simsimd_kl_f32_serial(simsimd_f32_t const* a, simsimd_f32_t const* b, simsimd_size_t n, simsimd_distance_t* divergence);
-SIMSIMD_PUBLIC void simsimd_js_f32_serial(simsimd_f32_t const* a, simsimd_f32_t const* b, simsimd_size_t n, simsimd_distance_t* divergence);
-SIMSIMD_PUBLIC void simsimd_kl_f16_serial(simsimd_f16_t const* a, simsimd_f16_t const* b, simsimd_size_t n, simsimd_distance_t* divergence);
-SIMSIMD_PUBLIC void simsimd_js_f16_serial(simsimd_f16_t const* a, simsimd_f16_t const* b, simsimd_size_t n, simsimd_distance_t* divergence);
-SIMSIMD_PUBLIC void simsimd_kl_bf16_serial(simsimd_bf16_t const* a, simsimd_bf16_t const* b, simsimd_size_t n, simsimd_distance_t* divergence);
-SIMSIMD_PUBLIC void simsimd_js_bf16_serial(simsimd_bf16_t const* a, simsimd_bf16_t const* b, simsimd_size_t n, simsimd_distance_t* divergence);
+SIMSIMD_PUBLIC void simsimd_kl_f64_serial(simsimd_f64_t const* a, simsimd_f64_t const* b, simsimd_size_t n, simsimd_distance_t* result);
+SIMSIMD_PUBLIC void simsimd_js_f64_serial(simsimd_f64_t const* a, simsimd_f64_t const* b, simsimd_size_t n, simsimd_distance_t* result);
+SIMSIMD_PUBLIC void simsimd_kl_f32_serial(simsimd_f32_t const* a, simsimd_f32_t const* b, simsimd_size_t n, simsimd_distance_t* result);
+SIMSIMD_PUBLIC void simsimd_js_f32_serial(simsimd_f32_t const* a, simsimd_f32_t const* b, simsimd_size_t n, simsimd_distance_t* result);
+SIMSIMD_PUBLIC void simsimd_kl_f16_serial(simsimd_f16_t const* a, simsimd_f16_t const* b, simsimd_size_t n, simsimd_distance_t* result);
+SIMSIMD_PUBLIC void simsimd_js_f16_serial(simsimd_f16_t const* a, simsimd_f16_t const* b, simsimd_size_t n, simsimd_distance_t* result);
+SIMSIMD_PUBLIC void simsimd_kl_bf16_serial(simsimd_bf16_t const* a, simsimd_bf16_t const* b, simsimd_size_t n, simsimd_distance_t* result);
+SIMSIMD_PUBLIC void simsimd_js_bf16_serial(simsimd_bf16_t const* a, simsimd_bf16_t const* b, simsimd_size_t n, simsimd_distance_t* result);

 /* Double-precision serial backends for all numeric types.
 * For single-precision computation check out the "*_serial" counterparts of those "*_accurate" functions.
 */
-SIMSIMD_PUBLIC void simsimd_kl_f32_accurate(simsimd_f32_t const* a, simsimd_f32_t const* b, simsimd_size_t n, simsimd_distance_t* divergence);
-SIMSIMD_PUBLIC void simsimd_js_f32_accurate(simsimd_f32_t const* a, simsimd_f32_t const* b, simsimd_size_t n, simsimd_distance_t* divergence);
-SIMSIMD_PUBLIC void simsimd_kl_f16_accurate(simsimd_f16_t const* a, simsimd_f16_t const* b, simsimd_size_t n, simsimd_distance_t* divergence);
-SIMSIMD_PUBLIC void simsimd_js_f16_accurate(simsimd_f16_t const* a, simsimd_f16_t const* b, simsimd_size_t n, simsimd_distance_t* divergence);
-SIMSIMD_PUBLIC void simsimd_kl_bf16_accurate(simsimd_bf16_t const* a, simsimd_bf16_t const* b, simsimd_size_t n, simsimd_distance_t* divergence);
-SIMSIMD_PUBLIC void simsimd_js_bf16_accurate(simsimd_bf16_t const* a, simsimd_bf16_t const* b, simsimd_size_t n, simsimd_distance_t* divergence);
+SIMSIMD_PUBLIC void simsimd_kl_f32_accurate(simsimd_f32_t const* a, simsimd_f32_t const* b, simsimd_size_t n, simsimd_distance_t* result);
+SIMSIMD_PUBLIC void simsimd_js_f32_accurate(simsimd_f32_t const* a, simsimd_f32_t const* b, simsimd_size_t n, simsimd_distance_t* result);
+SIMSIMD_PUBLIC void simsimd_kl_f16_accurate(simsimd_f16_t const* a, simsimd_f16_t const* b, simsimd_size_t n, simsimd_distance_t* result);
+SIMSIMD_PUBLIC void simsimd_js_f16_accurate(simsimd_f16_t const* a, simsimd_f16_t const* b, simsimd_size_t n, simsimd_distance_t* result);
+SIMSIMD_PUBLIC void simsimd_kl_bf16_accurate(simsimd_bf16_t const* a, simsimd_bf16_t const* b, simsimd_size_t n, simsimd_distance_t* result);
+SIMSIMD_PUBLIC void simsimd_js_bf16_accurate(simsimd_bf16_t const* a, simsimd_bf16_t const* b, simsimd_size_t n, simsimd_distance_t* result);

 /* SIMD-powered backends for Arm NEON, mostly using 32-bit arithmetic over 128-bit words.
 * By far the most portable backend, covering most Arm v8 devices, over a billion phones, and almost all
 * server CPUs produced before 2023.
 */
-SIMSIMD_PUBLIC void simsimd_kl_f32_neon(simsimd_f32_t const* a, simsimd_f32_t const* b, simsimd_size_t n, simsimd_distance_t* divergence);
-SIMSIMD_PUBLIC void simsimd_js_f32_neon(simsimd_f32_t const* a, simsimd_f32_t const* b, simsimd_size_t n, simsimd_distance_t* divergence);
-SIMSIMD_PUBLIC void simsimd_kl_f16_neon(simsimd_f16_t const* a, simsimd_f16_t const* b, simsimd_size_t n, simsimd_distance_t* divergence);
-SIMSIMD_PUBLIC void simsimd_js_f16_neon(simsimd_f16_t const* a, simsimd_f16_t const* b, simsimd_size_t n, simsimd_distance_t* divergence);
+SIMSIMD_PUBLIC void simsimd_kl_f32_neon(simsimd_f32_t const* a, simsimd_f32_t const* b, simsimd_size_t n, simsimd_distance_t* result);
+SIMSIMD_PUBLIC void simsimd_js_f32_neon(simsimd_f32_t const* a, simsimd_f32_t const* b, simsimd_size_t n, simsimd_distance_t* result);
+SIMSIMD_PUBLIC void simsimd_kl_f16_neon(simsimd_f16_t const* a, simsimd_f16_t const* b, simsimd_size_t n, simsimd_distance_t* result);
+SIMSIMD_PUBLIC void simsimd_js_f16_neon(simsimd_f16_t const* a, simsimd_f16_t const* b, simsimd_size_t n, simsimd_distance_t* result);

 /* SIMD-powered backends for AVX2 CPUs of Haswell generation and newer, using 32-bit arithmetic over 256-bit words.
 * First demonstrated in 2011, at least one Haswell-based processor was still being sold in 2022 — the Pentium G3420.
 * Practically all modern x86 CPUs support AVX2, FMA, and F16C, making it a perfect baseline for SIMD algorithms.
 * On other hand, there is no need to implement AVX2 versions of `f32` and `f64` functions, as those are
 * properly vectorized by recent compilers.
 */
-SIMSIMD_PUBLIC void simsimd_kl_f16_haswell(simsimd_f16_t const* a, simsimd_f16_t const* b, simsimd_size_t n, simsimd_distance_t* divergence);
-SIMSIMD_PUBLIC void simsimd_js_f16_haswell(simsimd_f16_t const* a, simsimd_f16_t const* b, simsimd_size_t n, simsimd_distance_t* divergence);
+SIMSIMD_PUBLIC void simsimd_kl_f16_haswell(simsimd_f16_t const* a, simsimd_f16_t const* b, simsimd_size_t n, simsimd_distance_t* result);
+SIMSIMD_PUBLIC void simsimd_js_f16_haswell(simsimd_f16_t const* a, simsimd_f16_t const* b, simsimd_size_t n, simsimd_distance_t* result);

 /* SIMD-powered backends for various generations of AVX512 CPUs.
 * Skylake is handy, as it supports masked loads and other operations, avoiding the need for the tail loop.
 * Ice Lake added VNNI, VPOPCNTDQ, IFMA, VBMI, VAES, GFNI, VBMI2, BITALG, VPCLMULQDQ, and other extensions for integral operations.
 * Sapphire Rapids added tiled matrix operations, but we are most interested in the new mixed-precision FMA instructions.
 */
-SIMSIMD_PUBLIC void simsimd_kl_f32_skylake(simsimd_f32_t const* a, simsimd_f32_t const* b, simsimd_size_t n, simsimd_distance_t* divergence);
-SIMSIMD_PUBLIC void simsimd_js_f32_skylake(simsimd_f32_t const* a, simsimd_f32_t const* b, simsimd_size_t n, simsimd_distance_t* divergence);
-SIMSIMD_PUBLIC void simsimd_kl_f16_sapphire(simsimd_f16_t const* a, simsimd_f16_t const* b, simsimd_size_t n, simsimd_distance_t* divergence);
-SIMSIMD_PUBLIC void simsimd_js_f16_sapphire(simsimd_f16_t const* a, simsimd_f16_t const* b, simsimd_size_t n, simsimd_distance_t* divergence);
+SIMSIMD_PUBLIC void simsimd_kl_f32_skylake(simsimd_f32_t const* a, simsimd_f32_t const* b, simsimd_size_t n, simsimd_distance_t* result);
+SIMSIMD_PUBLIC void simsimd_js_f32_skylake(simsimd_f32_t const* a, simsimd_f32_t const* b, simsimd_size_t n, simsimd_distance_t* result);
+SIMSIMD_PUBLIC void simsimd_kl_f16_sapphire(simsimd_f16_t const* a, simsimd_f16_t const* b, simsimd_size_t n, simsimd_distance_t* result);
+SIMSIMD_PUBLIC void simsimd_js_f16_sapphire(simsimd_f16_t const* a, simsimd_f16_t const* b, simsimd_size_t n, simsimd_distance_t* result);
 // clang-format on

 #define SIMSIMD_MAKE_KL(name, input_type, accumulator_type, load_and_convert, epsilon) \
```
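The `SIMSIMD_MAKE_KL` macro above takes an `epsilon` parameter to keep the logarithm finite near zero. A NumPy sketch of both divergences with a similar epsilon guard (the exact smoothing SimSIMD applies may differ):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-9):
    """Kullback-Leibler divergence: sum(p_i * log(p_i / q_i)), epsilon-smoothed."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def js_divergence(p, q, eps=1e-9):
    """Jensen-Shannon divergence: half-sum of KL divergences against the midpoint."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = 0.5 * (p + q)
    return 0.5 * kl_divergence(p, m, eps) + 0.5 * kl_divergence(q, m, eps)

p = [0.25, 0.25, 0.25, 0.25]
q = [0.5, 0.5, 0.0, 0.0]
print(kl_divergence(p, p))  # -> 0.0 for identical distributions
```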
