
Implement audio absolute time (DSP time) and scheduled play #105510


Open

wants to merge 1 commit into master

Conversation

PizzaLovers007

@PizzaLovers007 PizzaLovers007 commented Apr 17, 2025

Implementation is based on @fire's notes from 4 years ago: https://gist.github.com/fire/b9ed7853e7be24ab1d5355ef01f46bf1

The absolute time is calculated based on the total mixed frames and the mix rate. This means it only updates when a mix step happens.

Specific play_scheduled behaviors:

  • If a sound is playing, play_scheduled() will stop that sound (with single polyphony). This matches the behavior of play().
  • If a sound is scheduled, then paused, then resumed before the schedule happens, the sound still plays at the correct scheduled time.
  • If a playing sound is paused, then play_scheduled() is called, the sound will restart from the beginning. This matches the behavior of play().
  • With a higher max_polyphony, multiple sounds can be scheduled, and playing sounds can continue playing.
  • play_scheduled is unaffected by pitch scale.
  • play_scheduled does not support samples. The "Stream" default playback type is required for Web builds (ideally with threads enabled).

Scheduled stop is not implemented due to limited use cases.

Fixes godotengine/godot-proposals#1151.
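The mechanics described above (an absolute clock derived from total mixed frames, and start frames computed against it) can be modeled outside the engine. This is an illustrative Python mock of the idea, not engine code; `Server`, `mix_step`, and the constants are invented for the sketch:

```python
MIX_RATE = 44100   # frames per second
BUFFER = 512       # frames mixed per mix step

class Server:
    def __init__(self):
        self.mix_frames = 0   # total frames mixed so far
        self.scheduled = []   # (start_frame, name) pairs waiting to start
        self.started = []     # names that have begun playing

    def absolute_time(self):
        # Only advances when a mix step happens, like get_absolute_time().
        return self.mix_frames / MIX_RATE

    def play_scheduled(self, name, start_time):
        self.scheduled.append((int(start_time * MIX_RATE), name))

    def mix_step(self):
        end = self.mix_frames + BUFFER
        for entry in list(self.scheduled):
            start_frame, name = entry
            if start_frame < end:  # the start lands inside this buffer
                self.started.append(name)
                self.scheduled.remove(entry)
        self.mix_frames = end

srv = Server()
srv.play_scheduled("tick", 0.02)  # 0.02 s = frame 882
while srv.absolute_time() < 0.02:
    srv.mix_step()
print(srv.started)  # → ['tick']
```

In the real implementation the playback would also skip the frames between the buffer start and its scheduled start frame, so the first audible sample lands exactly on the scheduled frame; the mock only tracks when playback begins.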


@PizzaLovers007 PizzaLovers007 requested a review from a team as a code owner April 18, 2025 01:25
@fire
Member

fire commented Apr 18, 2025

I removed the "breaks compat" label because we decided to only change the new parameters to double.

Edited:

At least that's the approach we want to take. It may still break compat, but that would be a bug, not by design.


@PizzaLovers007
Author

Added, thanks for the pioneer work and review!

PizzaLovers007 added a commit to PizzaLovers007/godot that referenced this pull request Apr 18, 2025
Two reasons to change this:
* At a default mix rate of 44100, the playback position in seconds can experience rounding errors with a 32-bit type if the value is high enough. 44100 requires 15 free bits in the mantissa to be within 1/2 an audio frame, so the cutoff is 512 seconds before rounding issues occur (512=2^9, and 9+15 > 23 mantissa bits in 32-bit float).
* Internally, AudioStreamPlayback and AudioStream use a 64-bit type for playback position.

See this discussion: godotengine#105510 (comment)
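The rounding cutoff claimed in this commit message can be sanity-checked numerically. A small Python sketch (the helper name is mine) that measures the spacing of 32-bit floats against half an audio frame at 44100 Hz:

```python
import struct

def f32_ulp(x):
    # Gap between x and the next representable 32-bit float above it.
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    return struct.unpack('<f', struct.pack('<I', bits + 1))[0] - x

HALF_FRAME = 0.5 / 44100  # ~11.3 microseconds

print(f32_ulp(512.0) > HALF_FRAME)  # True: at 512 s a float32 step already exceeds half a frame
print(f32_ulp(8.0) < HALF_FRAME)    # True: small positions still resolve individual frames
```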
@@ -1263,6 +1277,14 @@ void AudioServer::start_playback_stream(Ref<AudioStreamPlayback> p_playback, con
playback_node->highshelf_gain.set(p_highshelf_gain);
playback_node->attenuation_filter_cutoff_hz.set(p_attenuation_cutoff_hz);

uint64_t scheduled_start_frame = uint64_t(p_scheduled_start_time * get_mix_rate());
if (scheduled_start_frame > 0 && scheduled_start_frame < mix_frames) {
Member

How does this treat the web browser situation where the frame rate goes near zero and all the deadlines are missed when tabbed away?

Author

@PizzaLovers007 PizzaLovers007 Apr 19, 2025

For the threads version, the audio process runs on a separate thread that doesn't get throttled when tabbed out, so the absolute time continues incrementing at regular speed. The game logic that does the scheduling stops running, though, so the metronome stops and eventually the music as well (the song loop is manually scheduled). Returning to the tab causes the metronome to be off, because the song was scheduled before the current time (as shown in the console error logs), but later scheduling brings everything back to normal.

I uploaded a nothreads version as well. Surprisingly, the onmessage callback on the audio worklet that triggers the audio process on the C++ side doesn't get throttled, so things work similarly to the threads version.

One key difference is that with lower framerates on nothreads, the audio gets crackly and the absolute time lags behind, since fewer mixes happen than there should. With "Use play_scheduled()" toggled off, the metronome (OS time based) becomes unsynced from the song due to the missed mix deadlines.

Member

@fire fire Apr 19, 2025

Does it spam overlap at the restart?

Edited: Should it?

Author

With the current implementation the song does spam restart. Simplified code:

func _process(delta: float) -> void:
    if curr_time > last_scheduled_time:
        last_scheduled_time += song_length
        play_scheduled(last_scheduled_time)

You can definitely prevent this behavior with something like:

func _process(delta: float) -> void:
    if curr_time > last_scheduled_time:
        var next_song_loop = ceil((curr_time - initial_start_time) / song_length)
        last_scheduled_time = initial_start_time + song_length * next_song_loop
        play_scheduled(last_scheduled_time)

Member

@fire fire Apr 20, 2025

I'd like audio team input on whether we should spam or ignore the events when the deadline has passed.

This sort of thing happens on mobile device sleep and web browser sleep.

Contributor

Yes, when a Godot app goes into the background or is hidden by another window, I've noticed all manner of unpredictable timing issues and changes in frame rate. (These play havoc with my VoIP code.) This play_scheduled() feature is low-level enough that almost all of these problems can be fixed by adding GDScript code to handle the cases. I don't think it's possible to anticipate all the problems until it is put into use in various demanding projects. Can it be marked as @experimental, as I have noticed on some features such as AnimationNodeExtension?

Author

@PizzaLovers007 PizzaLovers007 Apr 20, 2025

Marking as experimental sounds good to me.

I'd imagine most of the problems would be due to some mishandling on the GDScript side, like how my demo implementation is flawed. Some prevention mechanisms would be good to add in the future as issues come up.

@mk56-spn

mk56-spn commented Apr 19, 2025

Pardon the intrusion with a matter that is related, if somewhat tangential, to the current discussion: https://docs.godotengine.org/en/stable/tutorials/audio/sync_with_audio.html

The current method of syncing visuals with gameplay is finicky and poor in accuracy. It is my understanding that the monotonic nature of DSP time should allow proper, non-jittery advancement of notes in rhythm games, as well as any music-synced lerped movement, right?

I bring it up because the linked gist from fire (https://gist.github.com/fire/b9ed7853e7be24ab1d5355ef01f46bf1) had syncing visuals as a goal when implementing this, but that doesn't seem present in the demo, so I wondered if it wasn't among the functionality being introduced in this PR.

@fire
Member

fire commented Apr 19, 2025

Maybe we can create demos for https://github.com/godotengine/godot-demo-projects I talked about:

Use cases:

  • metronome
  • a graphic that moves towards you, which you have to hit in sync at an end line to play a note
  • playing an audio clip of a chicken cry at 8 am and then a second cry at 8:05 am

@PizzaLovers007
Author

You would implement it as mix_frame_plus_lookahead = mix_frame + LOOKAHEAD_BUFFER_SIZE to account for the offset at which the frames are laid down in the buffer. It's only 1.4ms, so below the threshold of perception, but would be relevant for a use case when aligning the exact phase of a sound wave.

Oh gotcha, yeah this is a much easier way to handle it. I've made the change.

Maybe play_scheduled() should add min(0, AudioServer.get_absolute_time() - absolute_time) to the from_position so as to trim the front of the sound sample when you set the time slightly in the past. This would make it more interesting, and would make it easy to synchronize longer tracks that you didn't get to the start of in time.

I think it's better to keep the existing behavior for a couple of reasons:

  • Someone coming from a Unity background may find this behavior unexpected since Unity's PlayScheduled plays immediately if it's in the past.
  • This can be done on the GDScript side manually.

I do think this can be useful, and we can add a param in the future to enable this behavior if this becomes a common use case.
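The "do it on the GDScript side" option boils down to clamping arithmetic. A hedged Python sketch of a hypothetical helper (the name and the returned pair are mine, not a proposed API); note the quoted suggestion writes min(0, …), but trimming the front needs a non-negative offset, hence max here:

```python
def scheduled_play_args(now, scheduled_time):
    """If the deadline has passed, start immediately and skip the part of the
    sound that would already have played; otherwise keep the schedule as-is.
    Returns (start_time, from_position)."""
    late = max(0.0, now - scheduled_time)   # how far in the past the deadline is
    start_time = max(now, scheduled_time)   # never schedule earlier than "now"
    return start_time, late

print(scheduled_play_args(10.0, 9.5))   # → (10.0, 0.5): missed by half a second
print(scheduled_play_args(10.0, 11.0))  # → (11.0, 0.0): still in the future
```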

@PizzaLovers007
Author

Maybe we can create demos for https://github.com/godotengine/godot-demo-projects I talked about:

Use cases:

  • metronome
  • a graphic that moves towards you, which you have to hit in sync at an end line to play a note
  • playing an audio clip of a chicken cry at 8 am and then a second cry at 8:05 am

I honestly don't see this PR helping too much with the 2nd case since get_absolute_time() is based on the audio frames mixed rather than a "current" time. In reality it'll always return a value sometime in the future. $Player.get_playback_position() + AudioServer.get_time_since_last_mix() - AudioServer.get_output_latency() is a better option even if it is slightly jittery. Some logic can be applied to smooth out the jitter, which I can work on a demo project for.

The 3rd case would certainly be more exact with play_scheduled(), though I suspect this level of accuracy isn't needed for minutes long delays.
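The jitter-smoothing idea mentioned here can be illustrated with a much simpler filter than the 1€ filter used later: extrapolate the clock at the nominal rate every visual frame and pull it gently toward the stepped measurement. Purely illustrative Python; the function and constants are mine:

```python
def smooth_clock(raw_samples, dt, alpha=0.2):
    """Blend a stepped clock (which only jumps on mix steps) with per-frame
    extrapolation so the output advances smoothly every visual frame."""
    est = raw_samples[0]
    out = []
    for raw in raw_samples:
        est += dt                   # extrapolate at the nominal rate
        est += alpha * (raw - est)  # correct toward the jittery measurement
        out.append(est)
    return out

# A clock that only jumps every 4th visual frame (one mix per 4 frames at 60 FPS).
raw = [0.0, 0.0, 0.0, 0.0, 4 / 60, 4 / 60, 4 / 60, 4 / 60, 8 / 60]
smoothed = smooth_clock(raw, 1 / 60)
# The smoothed clock increases every frame instead of in steps.
assert all(b > a for a, b in zip(smoothed, smoothed[1:]))
```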

@mk56-spn

Maybe we can create demos for https://github.com/godotengine/godot-demo-projects I talked about:
Use cases:

  • metronome
  • a graphic that moves towards you, which you have to hit in sync at an end line to play a note
  • playing an audio clip of a chicken cry at 8 am and then a second cry at 8:05 am

I honestly don't see this PR helping too much with the 2nd case since get_absolute_time() is based on the audio frames mixed rather than a "current" time. In reality it'll always return a value sometime in the future. $Player.get_playback_position() + AudioServer.get_time_since_last_mix() - AudioServer.get_output_latency() is a better option even if it is slightly jittery. Some logic can be applied to smooth out the jitter, which I can work on a demo project for.

The 3rd case would certainly be more exact with play_scheduled(), though I suspect this level of accuracy isn't needed for minutes long delays.

If your second point holds true, that's a shame. I'm guessing most consumers of DSP time that really need it are rhythm games (the only other large-scale usage I can think of is a MIDI player or other audio software), most of which have a scrolling playfield (Taiko, SDVX, IIDX, etc.). Any amount of jitter is basically an experience killer, and the jitter really is quite bad; even in ideal scenarios where you're running both the audio driver and the logic at unreasonably taxing rates, the jitter is bad. As it stands, Godot is not really viable for any sort of competitive rhythm game in my experience. There is one major exception, Project Heartbeat, but that uses a fork that tears out and replaces half the audio internals, so it might as well be an Unreal Engine game as far as that goes.

I hope your smoothing-out solution pans out though, and don't take this as a criticism of this PR, which was very much necessary; it just seems like it needs accompanying adaptation to really reach its true potential usage-wise.

@fire
Member

fire commented Apr 20, 2025

@goatchurchprime
Contributor

The small godot audio meeting today (5 attendees) thought this one looked good.

@PizzaLovers007
Author

@mk56-spn As promised, here's the rhythm game demo project with the playback position smoothing using the 1€ filter: godotengine/godot-demo-projects#1197.

Member

@Mickeon Mickeon left a comment

Part of what originally confused me about the purpose of this PR is that there's no documented reason for this method to exist.

What I mean is, at a glance, most users will look at both play and play_scheduled. They will see that the latter has a longer name, longer description, and takes longer to set up. As such, they may stick to _process() callbacks, Timer nodes, etc. for accurate audio purposes. There's barely any mention, or any examples as to why this should be used over play().

I have no suggestions about that, but I would heavily recommend at least providing one.

Member

@Mickeon Mickeon left a comment

I'm generally questioning an implementation detail. I have not looked much at the prior conversation, so I'm not sure if this has ever been brought up.

If AudioServer.get_absolute_time is so integral to AudioStreamPlayer.play_scheduled, why should it be fetched every time in user code? Why should play_scheduled require an absolute time, instead of relative time?
Essentially my suggestion is as follows:

# Before:
var future_time = AudioServer.get_absolute_time() + 1
player1.play_scheduled(future_time)

# After:
player1.play_scheduled(1)

Regarding the precision of these methods, I wonder if #81873 would be of any inspiration, or if it would even supersede AudioServer.get_absolute_time in some situations?

The absolute time is calculated based on the total mixed frames and the mix rate. This means it only updates when a mix step happens.

Specific play_scheduled behaviors:
- If a sound is playing, play_scheduled() will stop that sound. This matches the behavior of play().
- If a sound is scheduled, then paused, then resumed before the schedule happens, the sound still plays at the correct scheduled time.
- If a playing sound is paused, then play_scheduled() is called, the sound will restart from the beginning. This matches the behavior of play().
- With a higher max_polyphony, multiple sounds can be scheduled.
- play_scheduled is unaffected by pitch scale
- play_scheduled does not support samples

Scheduled stop is not implemented due to limited use cases.

Co-authored-by: K. S. Ernest (iFire) Lee <[email protected]>
@PizzaLovers007
Author

PizzaLovers007 commented Apr 29, 2025

Here's a trimmed down version of the metronome in my demo project:

var _start_absolute_time: float
var _scheduled_time: float

func _process(_delta: float) -> void:
	# Once the currently scheduled tick has started, begin scheduling the next
	# one.
	var curr_time := AudioServer.get_absolute_time()
	if curr_time > _scheduled_time:
		var next_tick := ceili((curr_time - _start_absolute_time) / beat_length)
		_scheduled_time = _start_absolute_time + next_tick * beat_length
		play_scheduled(_scheduled_time)

func start(start_absolute_time: float) -> void:
	_start_absolute_time = start_absolute_time
	_scheduled_time = start_absolute_time
	play_scheduled(_scheduled_time)

If we were to change the API to be relative, reimplementing it would look something like:

var _delay: float

func _process(delta: float) -> void:
	_delay -= delta
	if _delay > 0:
		return

	# Now that the currently scheduled tick has started, begin scheduling the
	# next one.
	var curr_time := song_player.get_playback_position()
	var next_tick := ceili(curr_time / beat_length)
	var relative_delay := next_tick * beat_length - curr_time
	play_scheduled(relative_delay)
	_delay = relative_delay

func start(delay: float) -> void:
	_delay = delay
	play_scheduled(_delay)

For scheduling sounds at the very start, using relative seems a lot easier since you only need to care about the initial delay. However once the game logic advances past that first frame, you need some sort of "current time" in both cases in order to know when to schedule sounds (in the above example, the next metronome tick).

Using player.get_playback_position() without any correction would actually be ok today since the value only changes on mix steps. This would align the delay param with the mix buffer. The issue is that play_scheduled becomes strongly coupled with an active player, so you would essentially require something to be playing at all times, even if it's a pure silence track. The reliance on the value only changing on mix step is also quite brittle since this can change at any time like with the merge of #81873.

Time.get_ticks_usec() is another option as a "current time", but it's out of sync with the mix steps and thus you would lose out on any sample-accurate audio timing (the whole reason for this PR).

To me, the monotonic and audio thread synced nature of AudioServer.get_absolute_time() provides the best functionality even if the ergonomics can be a bit clunky needing to fetch it in user code. Quoting what @goatchurchprime said above:

It looks like the most common application of this feature is to play a sound on the next beat. If we were to look back in 5 years time on all the uses of this function in people's code, it might turn out that the function play_on_next_absolute_beat(beats_per_minute, skip_next_beats=0) would have been a more targeted implementation that didn't require get_absolute_time() to be exposed. Unfortunately we can't know this at this point in time.

IMO this follows best engine practice: expose the low-level API to users that need it so they can build their own solutions.

@adamscott
Member

(I'm so sorry for the review delay. 🙇) I will summarize here the thread I made on the Godot Developers Chat.

First and foremost, thank you @PizzaLovers007 for your great PR. You put a lot of work into it, and it shows!

So.

The need for a new PR

I think a new PR is needed, as a little twist is needed in how the functionality is implemented. I think nobody here doubts the necessity of adding such a feature to Godot.

And @PizzaLovers007, you should totally leave this one intact (just to keep the history of the code and such), and start a new one if you're still interested!

The problem with the current implementation is that it relies heavily on the AudioStreamPlayer and its underlying AudioStreamPlayback. And it introduces hacks to make it work with the new absolute time start parameter.

We need to ponder the fact that:

  • AudioStreamPlayers aren't built to handle "future" playback, and I don't know how we could make them work with that in mind.
    • Like, what happens when you mix and match play_scheduled() and play() / stop()? It is not even clear conceptually.
  • AudioStreamPlayers are actually frontends to AudioStreamPlayback, and it's the playbacks that are actually registered on the AudioServer.

I think we should take inspiration from the Web Audio API's AudioScheduledSourceNode. The Web Audio API doesn't really work like our current way of handling sound, but I think it can give us great insight into how to handle precise scheduled playback.

An AudioScheduledSourceNode only plays once. Once ended, it cannot be restarted. This is great because it limits weird edge cases. And fortunately enough, it's quite easy on resources to create new Web Audio nodes.

And the API is so small: there's nothing really you can do once it has started. All you can do is disconnect the node from its audio destination.

An alternative way to handle scheduled play

What about AudioStreamPlaybackScheduled? Instead of adding new parameters to support internal scheduled time and such (if kept, the current PR would also have needed a new parameter for when to "stop" the sound, which is currently missing), we should instead have a brand new playback that is specifically made to handle such tasks.

We can keep using AudioStreamPlayers too, but we could change the function name to really hammer in that you don't "play" the stream when "scheduling".

// When `p_start == -1.0`, it would be the equivalent of "now"
// I initially thought about `AudioStreamPlayer.schedule_play()`, but I think the new name states better what it does... and especially what it returns.
Ref<AudioStreamPlaybackScheduled> AudioStreamPlayer::start_scheduled_stream_playback(double p_start = -1.0, double p_stream_offset = 0.0);

// No default here because it's an optional call. Thanks @PizzaLovers007 for the suggestion.
void AudioStreamPlaybackScheduled::set_scheduled_stop(double p_end);

This has the added bonus of giving control to the user. The user can summon any number of AudioStreamPlaybackScheduled instances, but will have to handle them.

It can be useful to summon multiple streams in advance too. Currently, in this PR, the metronome plays on time when the process is at 1 FPS, but fails a lot of the time to actually play, because the main code didn't have the opportunity to queue up new ticks. As a metronome is predictable, you could easily whip up something like this:

var player_in_range := true
var playback_instances: Array[AudioStreamPlaybackScheduled] = []
const MAX_PLAYBACK_INSTANCES := 10

func _on_playback_end(playback: AudioStreamPlaybackScheduled) -> void:
    playback_instances.erase(playback)

func _process(_delta: float) -> void:
    if player_in_range:
        while playback_instances.size() < MAX_PLAYBACK_INSTANCES:
            var playback := $player.start_scheduled_stream_playback(AudioServer.get_current_time() + playback_instances.size())
            playback.connect("playback_ended", _on_playback_end.bind(playback), CONNECT_ONE_SHOT)
            playback_instances.push_back(playback)
    else:
        for playback_instance in playback_instances:
            AudioServer.stop_playback_stream(playback_instance)

PizzaLovers007 added a commit to PizzaLovers007/godot that referenced this pull request May 2, 2025
This is a rewrite of godotengine#105510 that moves the silence frames logic into a separate AudioStreamPlaybackScheduled class. The rewrite allows both the AudioServer and the player (mostly) to treat it as if it were a generic playback. It also simplifies the addition of new features.

Main differences:
- play_scheduled returns an AudioStreamPlaybackScheduled instance, which is tied to the player that created it.
- The start time can be changed after scheduling.
- You can now set an end time for the playback.
- The scheduled playback can be cancelled separately from other playbacks on the player.

Co-authored-by: K. S. Ernest (iFire) Lee <[email protected]>
@PizzaLovers007
Author

Thanks for the review! I've given this some more thought after trying to implement it (WIP) as well as seeing some of the post-rework discussions in the chat.

I can agree that the scheduling logic should be moved to its own playback. However, my main worry about dissociating the playback from the player is that for settings like volume and panning, as soon as you schedule the sound you'd need to interact with the AudioServer directly. This doesn't seem too bad for the simple AudioStreamPlayer, but as soon as you consider the panning/doppler effects from AudioStreamPlayer2D and AudioStreamPlayer3D, it becomes very apparent that the scheduling API would be useless there. AudioServer APIs related to playbacks are also not yet bound in GDScript (not hard to do, but it is blocking).

Additionally from your example:

  1. player_in_range implies that the sound trails off as you get further from the audio player. This would need to be controlled manually.
  2. "playback_ended" is usually signaled from the player, which pulls its data from the AudioServer. Having the playback emit the signal would mean the AudioServer needs to tell the playback that it's done playing (no polling option since playbacks aren't nodes).

If we want to expand on the playback idea but keep it tied to the player, I came up with few alternatives:

Alternative 1: Add the playback to the player

void AudioStreamPlayer::add_playback(Ref<AudioStreamPlayback> p_playback);

Benefits of this would be that the user is free to create any kind of playback from any stream but still reap the benefits of the player frontends. The drawback here would be that the playback itself may have nothing to do with the attached AudioStream. I'm also not sure if there would be many (or any) other AudioStreamPlayback types that would make use of this.

Alternative 2: AudioStreamScheduled

With this, the user can set the start/end time on the player, then call play() to snapshot those values. They can be changed later when retrieving them from the player.

class AudioStreamScheduled : public AudioStream {
  Ref<AudioStream> base_stream;
  uint64_t scheduled_start_frame;
  uint64_t scheduled_end_frame;
};

class AudioStreamPlaybackScheduled : public AudioStreamPlayback {
  Ref<AudioStreamPlayback> base_playback;
  uint64_t scheduled_start_frame;
  uint64_t scheduled_end_frame;
};

On the GDScript side, it'd look something like:

var scheduled_stream = AudioStreamScheduled.new(original_stream)
player.stream = scheduled_stream

var playbacks = []
for i in range(10):
  scheduled_stream.scheduled_start_time = AudioServer.get_absolute_time() + i + 1
  player.play()  # play() snapshots the start/end settings
  playbacks.append(player.get_stream_playback())

# Settings can be changed later
var scheduled_playback = playbacks[0] as AudioStreamPlaybackScheduled
scheduled_playback.scheduled_start_time = ...
scheduled_playback.scheduled_end_time = ...

This wouldn't require any changes to player code, but the usability of this seems worse.

Alternative 2A: Single playback

Similar to alternative 2, but interacting with it would be more like AudioStreamPlaybackPolyphonic.

class AudioStreamScheduled : public AudioStream {
};

class AudioStreamPlaybackScheduled : public AudioStreamPlayback {
  struct Schedules {
    uint64_t start_frame = 0;
    uint64_t end_frame = 0;
    double from_pos = 0;
  };

  Ref<AudioStream> base_stream;
  LocalVector<Schedules> schedules;

  int add_schedule(double start_time = -1, double from_pos = 0, double end_time = -1);
  void set_start_time(int index, double start_time);
  void set_from_pos(int index, double from_pos);
  void set_end_time(int index, double end_time);
};

I'm not very convinced by any of these options though, and I think your suggestion with start_scheduled_stream_playback + associating the scheduled playback with the player is the one that makes the most sense to me. Let me know what you think!

PizzaLovers007 added a commit to PizzaLovers007/godot that referenced this pull request May 6, 2025
This is a rewrite of godotengine#105510 that moves the silence frames logic into a separate AudioStreamPlaybackScheduled class. The rewrite allows both the AudioServer and the player (mostly) to treat it as if it were a generic playback. It also simplifies the addition of new features.

Main differences:
- play_scheduled returns an AudioStreamPlaybackScheduled instance, which is tied to the player that created it.
- The start time can be changed after scheduling.
- You can now set an end time for the playback.
- The scheduled playback can be cancelled separately from other playbacks on the player.

Co-authored-by: K. S. Ernest (iFire) Lee <[email protected]>
@adamscott
Member

@PizzaLovers007 We discussed your proposal during the last audio meeting and I had an idea of a counter proposal. I just didn't have the time yet to do so. I'll try to do it today or in the next days.


Successfully merging this pull request may close these issues.

Add an absolute time (DSP time) feature to play audio effects at specific intervals