
Commit 091ee6d

Author: mthomas
Commit message: updated README
1 parent df8464a commit 091ee6d

File tree

1 file changed: +10 -6 lines changed


README.md

Lines changed: 10 additions & 6 deletions
@@ -15,15 +15,15 @@ Keep the directory structure the way it is and put your data in the 'audio' and
 │ ├── 01_generate_spectrograms.ipynb
 │ ├── ...
 │ └── ...
-├── audio <- ! put your input soundfiles in this folder !
+├── audio <- ! put your input soundfiles in this folder or unzip the provided example audio !
 │ ├── call_1.wav
 │ ├── call_2.wav
 │ └── ...
 ├── functions <- contains functions that will be called in analysis scripts
 │ ├── audio_functions.py
 │ ├── ...
 │ └── ...
-├── data <- ! put a .csv metadata file of your input in this folder !
+├── data <- ! put a .csv metadata file of your input in this folder or use the provided example csv !
 │ └── info_file.csv
 ├── parameters
 │ └── spec_params.py <- this file contains parameters for spectrogramming (fft_win, fft_hop...)
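The tree above references parameters/spec_params.py as the home of the spectrogramming settings. A minimal sketch of what such a parameter file might contain follows; the names fft_win and fft_hop come from the README, but the values and the n_mels entry are illustrative assumptions, not the repository's defaults:

```python
# Illustrative sketch of a parameters/spec_params.py file.
# fft_win and fft_hop are named in the README; the concrete values
# and the n_mels entry below are assumptions for demonstration only.
fft_win = 0.03     # FFT window length (assumed to be in seconds)
fft_hop = 0.00375  # hop between successive FFT windows (assumed unit)
n_mels = 40        # hypothetical extra parameter: number of mel bands
```

Keeping all spectrogram parameters in one module like this lets every notebook import identical settings, so spectrograms stay comparable across analysis steps.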
@@ -125,10 +125,14 @@ and select the first jupyter notebook file to start your analysis (see section "

 #### 2.2.1. Audio files

-Use the provided example data (LINK) or put your own dataset of sound files in the subfolder "/audio" (make sure that the /audio folder contains __only__ your input files, nothing else).
-Each sound file should contain a single vocalization or syllable.
+All audio input files need to be in a subfolder /audio. This folder should not contain any other files.
+
+To use the provided example data of meerkat calls, please unzip the file 'audio_please_unzip.zip' and verify that all audio files have been unpacked into an /audio folder according to the structure described in Section 1.
+
+To use your own data, create a subfolder "/audio" and put your sound files there (make sure that the /audio folder contains __only__ your input files, nothing else). Each sound file should contain a single vocalization or syllable.
 (You may have to detect and extract such vocal elements first, if working with acoustic recordings.)

+
 Ideally, start and end of the sound file correspond exactly to start and end of the vocalization.
 If there are delays in the onset of the vocalizations, these should be the same for all sound files.
 Otherwise, vocalizations may appear dissimilar or distant in latent space simply because their onset times are different.
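The added setup instructions (unzip the example archive, then make sure /audio holds only input soundfiles) can be sketched as a small helper script. This is a minimal sketch: unpack_example_audio and non_wav_files are hypothetical helper names, not part of the repository.

```python
# Hypothetical helpers (not part of the repository) for the two setup
# steps described above: unpacking the example archive and verifying
# that the audio folder contains only .wav input files.
import zipfile
from pathlib import Path

def unpack_example_audio(zip_path, target_dir="."):
    """Extract the provided example archive into target_dir."""
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(target_dir)

def non_wav_files(audio_dir):
    """Return names of files in audio_dir that are not .wav soundfiles."""
    return [p.name for p in Path(audio_dir).iterdir()
            if p.is_file() and p.suffix.lower() != ".wav"]
```

Running non_wav_files on the /audio folder before starting the notebooks gives a quick way to catch stray files that would otherwise be picked up as input.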
@@ -137,10 +141,10 @@ but note that it comes at the cost of increased computation time.
137141

138142
#### 2.2.2. [Optional: Info file]
139143

140-
Use the .csv file provided with the example data (LINK) or, if you are using your own data,´ add a ";"-separated .csv file with headers containing the filenames of the input audio, some labels and any other additional metadata (if available) in the subfolder "/data".
144+
Use the provided info_file.csv file for the example audio data or, if you are using your own data,´ add a ";"-separated info_file.csv file with headers containing the filenames of the input audio, some labels and any other additional metadata (if available) in the subfolder "/data".
141145
If some or all labels are unknown, there should still be a label column and unkown labels should be marked with "unknown".
142146

143-
Structure of info_file.csv:
147+
Structure of info_file.csv must be:
144148

145149
| filename | label | ... | ....
146150
-----------------------------------------
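The info-file structure above can be parsed with Python's standard csv module. A minimal sketch follows, assuming only what the README states (";"-separated, a header row with filename and label columns, "unknown" for missing labels); read_info_file and the example label value are illustrative, not repository code:

```python
# Minimal sketch of reading a ";"-separated info_file.csv as described
# above. read_info_file is a hypothetical helper; the "alarm" label in
# the example rows is an illustrative placeholder value.
import csv
import io

def read_info_file(f):
    """Parse a ';'-separated info file into a list of row dicts."""
    return list(csv.DictReader(f, delimiter=";"))

# In practice f would be open("data/info_file.csv"); an in-memory
# example keeps the sketch self-contained.
example = io.StringIO(
    "filename;label\n"
    "call_1.wav;alarm\n"
    "call_2.wav;unknown\n"
)
rows = read_info_file(example)
```

Because csv.DictReader keys every row by the header names, downstream code can look up labels by filename without caring how many extra metadata columns the file carries.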
