To run the notebooks using uv:

```shell
uv run --with jupyter jupyter lab
```
Python functions for sonification of TESS light curves. Requires Python 3.8. Uses STRAUSS to generate audio from the x,y data of each light curve, and conda for package management.
- Install dependencies in their own conda environment:

  ```shell
  conda env create -f environment.yml
  conda activate tess-audio-utils
  ```
- Run `tess_examples.py` or `tess_subjects.py` to generate a set of WAV files with STRAUSS.
- Once you've got the WAV files, you can convert them to MP3 with the ffmpeg shell scripts.
- `create_subjects.py` will create new Panoptes subjects from a list of PH-TESS subject IDs and the MP3 files that you've created in the previous steps.
- `create_subjects.py`: Create new Panoptes subjects from a list of PH-TESS subject IDs and a set of MP3 files.
- `tess_examples.py`: Generate a set of WAV files from a list of PH-TESS subject IDs.
- `tess_subjects.py`: Generate a set of WAV files for a given PH-TESS subject set ID.
- `subject_utils.py`: A collection of utility functions for reading light curve data and converting it to sound.

`fetch_light_curve`: Return the light curve data, given the URL of a JSON file.

```python
from subject_utils import fetch_light_curve

days, fluxes = fetch_light_curve(url)
```
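As an illustration of what `fetch_light_curve` might do internally, here is a sketch, not the repository's actual implementation. It assumes the JSON file holds a list of `[time, flux]` pairs; the real PH-TESS format may differ.

```python
import json
import urllib.request

def parse_light_curve(points):
    """Split a list of [time, flux] pairs into two series (assumed format)."""
    days = [p[0] for p in points]
    fluxes = [p[1] for p in points]
    return days, fluxes

def fetch_light_curve(url):
    """Download the light-curve JSON and return (days, fluxes)."""
    with urllib.request.urlopen(url) as resp:
        return parse_light_curve(json.load(resp))
```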
`light_curve_url`: Return the URL of the light curve data for a given subject.

```python
from subject_utils import light_curve_url

url = light_curve_url(subject)
```
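A plausible shape for this helper, assuming Panoptes subjects carry a `locations` list in which each entry maps a MIME type to a URL. This is hypothetical; the real function may read different subject metadata.

```python
def light_curve_url(subject):
    """Pick the JSON location from a subject's locations list
    (assumed structure: [{mime_type: url}, ...])."""
    for location in subject.locations:
        for mime, url in location.items():
            if mime == 'application/json':
                return url
    raise ValueError('no JSON light curve location found')
```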
`normalise_light_curve`: Normalise the light curve data to a [0…1] range in both axes.

```python
from subject_utils import normalise_light_curve

x, y = normalise_light_curve(days, fluxes)
```
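The normalisation itself is a standard min-max rescale. A minimal sketch, assuming `subject_utils` does something equivalent:

```python
import numpy as np

def normalise_light_curve(days, fluxes):
    """Rescale both axes of a light curve to the [0, 1] range."""
    x = np.asarray(days, dtype=float)
    y = np.asarray(fluxes, dtype=float)
    x = (x - x.min()) / (x.max() - x.min())
    y = (y - y.min()) / (y.max() - y.min())
    return x, y
```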
`sonify`: Create a STRAUSS sonification, which can be rendered and saved. The `x`, `y` data series must be converted to a set of STRAUSS parameters first.

```python
from subject_utils import sonify

data = {
    'pitch': 1.,
    'time_evo': x,
    'azimuth': (x * 0.5 + 0.25) % 1,
    'polar': 0.5,
    'pitch_shift': (y * 10 + 1) ** -0.7
}
soni = sonify(data)
soni.render()
```
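To see what those mappings do numerically, here is an illustration with synthetic data (not repository code): `azimuth` sweeps the pan position from 0.25 to 0.75 as time advances, while `pitch_shift` maps normalised flux onto a multiplier that falls from 1 at flux 0 to about 0.19 at flux 1, so brighter points sound lower.

```python
import numpy as np

# Synthetic normalised light curve, standing in for real data.
x = np.linspace(0.0, 1.0, 5)   # time, normalised to [0, 1]
y = np.linspace(0.0, 1.0, 5)   # flux, normalised to [0, 1]

azimuth = (x * 0.5 + 0.25) % 1      # pan: 0.25 -> 0.75 across the curve
pitch_shift = (y * 10 + 1) ** -0.7  # 1.0 at flux 0, ~0.19 at flux 1
```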
- `examples2mp3.sh`: convert the WAV files in `wav/examples` to MP3 files in `mp3/examples`.
- `sims2mp3.sh`: convert the WAV files in `wav/sims` to MP3 files in `mp3/sims`.
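The conversion amounts to mirroring each WAV path under `mp3/` with an `.mp3` suffix and running ffmpeg on each pair (for one file, an invocation like `ffmpeg -i input.wav output.mp3`). A Python sketch of just the path mapping, as an illustration of what the shell scripts do:

```python
from pathlib import Path

def mp3_path(wav_path):
    """Map e.g. wav/examples/foo.wav to mp3/examples/foo.mp3 (illustrative)."""
    p = Path(wav_path)
    return str(Path('mp3', *p.parts[1:]).with_suffix('.mp3'))
```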