
[BUG] Android Only - Invalid Audio Stream Emitted If Stopped Before Hitting Interval Point #292

@techgaun

Environment

  • expo-audio-studio version: 2.18.1
  • Expo SDK version: 52.0.46
  • Platform & OS version: Android 16
  • Device: Pixel 9 Pro

Description

On Android, if you stop the recorder before it reaches the interval threshold, the emitted audio is only about 1068 bytes. This looks very similar to an issue I reported in the past: #212

I am using a 30-second interval. If I let the recording run past 30 seconds, the entire audio data for that 30-second chunk is there, but if I stop at, for example, 25 seconds (or at any point between 0 and 29 seconds, before the interval is hit), the emitted audio stream is only 1068 bytes and contains no audio data.
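For what it's worth, 1068 bytes is exactly a canonical 44-byte PCM WAV header plus a single 1024-byte buffer, which may hint that only one buffer is flushed on early stop. This is my interpretation, not something confirmed in the library source:

```typescript
// Back-of-the-envelope check (not library code): a canonical PCM WAV
// header is 44 bytes, and the Android side appears to emit 1024-byte
// buffers (see eventDataSize in the logs below).
const WAV_HEADER_BYTES = 44;
const ANDROID_BUFFER_BYTES = 1024;

// Matches the ~1068-byte streams observed when stopping early.
const emittedOnEarlyStop = WAV_HEADER_BYTES + ANDROID_BUFFER_BYTES; // 1068
```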

Cross-Platform Validation

This only happens on Android; iOS works fine.

AudioPlayground Apps

Important: The AudioPlayground app exposes all features of the expo-audio-studio library with full customization:

  • In the Record tab, toggle "Show Advanced Settings" to access all recording parameters
  • In the Trim tab, you can test all audio processing capabilities
  • All visualization and transcription features are available to test
  • Try different audio devices, sample rates, and compression settings

Can this issue be reproduced in the AudioPlayground app?

  • Yes, on all platforms (Web, iOS, Android)
  • Yes, but only on specific platforms (please specify):
  • No, it only happens in my app. I did not see an option to specify recording interval.

Reproduction steps in the AudioPlayground app (including any settings you modified):

Is the behavior consistent across platforms?

  • Yes, the issue occurs the same way on all platforms I tested
  • No, the behavior differs between platforms (please describe the differences)

It only occurs on Android. It looks like this was fixed for iOS in fcamblor@7458be8, but it still seems to be an issue on Android.

On Android 16, the event data size is always 1024 bytes in this scenario.

Configuration

const config: RecordingConfig = {
  interval: CHUNK_DURATION_MS,
  keepAwake: true,
  enableProcessing: true,
  sampleRate: 16000,
  channels: 1,
  encoding: 'pcm_16bit',
  output: {
    primary: {
      enabled: false,
    },
    compressed: {
      enabled: false,
    },
  },
  onAudioStream: async (audioData: AudioDataEvent) => {
    // business logic to handle audioData
  },
}
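With this config (16 kHz, mono, 16-bit PCM), a full 30-second interval should yield about 960 kB of raw data, so 1024 bytes is roughly three orders of magnitude short. A quick sanity-check calculation (illustrative only; the helper name is mine):

```typescript
// Raw PCM size: sampleRate * channels * bytesPerSample * seconds.
function expectedPcmBytes(
  sampleRate: number,
  channels: number,
  bytesPerSample: number,
  durationMs: number
): number {
  return Math.round(sampleRate * channels * bytesPerSample * (durationMs / 1000));
}

expectedPcmBytes(16000, 1, 2, 30_000); // 960000 bytes for a full 30 s interval
expectedPcmBytes(16000, 1, 2, 25_000); // 800000 bytes expected when stopping at 25 s
```

For comparison, the stopRecording result in the logs below reports `size: 224768` for a ~7086 ms recording, close to `expectedPcmBytes(16000, 1, 2, 7086)` = 226752, which may suggest the data is captured internally but not emitted through the stream event.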

Logs

 DEBUG  start recording with validated config {"channels": 1, "enableProcessing": true, "encoding": "pcm_16bit", "interval": 30000, "keepAwake": true, "onAudioStream": [Function onAudioStream], "output": {"compressed": {"enabled": false}, "primary": {"enabled": false}}, "sampleRate": 16000}
 DEBUG  Enabling audio analysis listener
 LOG  result {"bitDepth": 16, "channels": 1, "compression": null, "fileUri": "null", "mimeType": "audio/wav", "sampleRate": 16000}
 DEBUG  Status: paused: false isRecording: true durationMs: 73 size: 2048 null
 LOG  Recording Session Started Successfully
 DEBUG  [handleAudioAnalysis] Received audio analysis: maxDuration=10000 analysis.dataPoints=1 analysisData.dataPoints=0
 DEBUG  [handleAudioAnalysis] Combined data points before trimming: numberOfSegments=100 visualizationDuration=10000 combinedDataPointsLength=1 vs maxDataPoints=100
 DEBUG  [handleAudioAnalysis] Updated analysis data: durationMs=100 {"dataPoints": 1}
 DEBUG  [handleAudioAnalysis] Received audio analysis: maxDuration=10000 analysis.dataPoints=6 analysisData.dataPoints=1
 DEBUG  [handleAudioAnalysis] Combined data points before trimming: numberOfSegments=100 visualizationDuration=10000 combinedDataPointsLength=7 vs maxDataPoints=100
 DEBUG  [handleAudioAnalysis] Updated analysis data: durationMs=700 {"dataPoints": 7}
 DEBUG  [handleAudioAnalysis] Received audio analysis: maxDuration=10000 analysis.dataPoints=6 analysisData.dataPoints=7
 DEBUG  [handleAudioAnalysis] Combined data points before trimming: numberOfSegments=100 visualizationDuration=10000 combinedDataPointsLength=13 vs maxDataPoints=100
 DEBUG  [handleAudioAnalysis] Updated analysis data: durationMs=1300 {"dataPoints": 13}
 DEBUG  Status: paused: false isRecording: true durationMs: 1110 size: 34816 null
 DEBUG  [handleAudioAnalysis] Received audio analysis: maxDuration=10000 analysis.dataPoints=5 analysisData.dataPoints=13
 DEBUG  [handleAudioAnalysis] Combined data points before trimming: numberOfSegments=100 visualizationDuration=10000 combinedDataPointsLength=18 vs maxDataPoints=100
 DEBUG  [handleAudioAnalysis] Updated analysis data: durationMs=1800 {"dataPoints": 18}
 DEBUG  Status: paused: false isRecording: true durationMs: 2089 size: 66048 null
 DEBUG  [handleAudioAnalysis] Received audio analysis: maxDuration=10000 analysis.dataPoints=6 analysisData.dataPoints=18
 DEBUG  [handleAudioAnalysis] Combined data points before trimming: numberOfSegments=100 visualizationDuration=10000 combinedDataPointsLength=24 vs maxDataPoints=100
 DEBUG  [handleAudioAnalysis] Updated analysis data: durationMs=2400 {"dataPoints": 24}
 DEBUG  [handleAudioAnalysis] Received audio analysis: maxDuration=10000 analysis.dataPoints=6 analysisData.dataPoints=24
 DEBUG  [handleAudioAnalysis] Combined data points before trimming: numberOfSegments=100 visualizationDuration=10000 combinedDataPointsLength=30 vs maxDataPoints=100
 DEBUG  [handleAudioAnalysis] Updated analysis data: durationMs=3000 {"dataPoints": 30}
 DEBUG  Status: paused: false isRecording: true durationMs: 3088 size: 98304 null
 DEBUG  [handleAudioAnalysis] Received audio analysis: maxDuration=10000 analysis.dataPoints=5 analysisData.dataPoints=30
 DEBUG  [handleAudioAnalysis] Combined data points before trimming: numberOfSegments=100 visualizationDuration=10000 combinedDataPointsLength=35 vs maxDataPoints=100
 DEBUG  [handleAudioAnalysis] Updated analysis data: durationMs=3500 {"dataPoints": 35}
 DEBUG  [handleAudioAnalysis] Received audio analysis: maxDuration=10000 analysis.dataPoints=5 analysisData.dataPoints=35
 DEBUG  [handleAudioAnalysis] Combined data points before trimming: numberOfSegments=100 visualizationDuration=10000 combinedDataPointsLength=40 vs maxDataPoints=100
 DEBUG  [handleAudioAnalysis] Updated analysis data: durationMs=4000 {"dataPoints": 40}
 DEBUG  Status: paused: false isRecording: true durationMs: 4089 size: 130560 null
 DEBUG  [handleAudioAnalysis] Received audio analysis: maxDuration=10000 analysis.dataPoints=6 analysisData.dataPoints=40
 DEBUG  [handleAudioAnalysis] Combined data points before trimming: numberOfSegments=100 visualizationDuration=10000 combinedDataPointsLength=46 vs maxDataPoints=100
 DEBUG  [handleAudioAnalysis] Updated analysis data: durationMs=4600 {"dataPoints": 46}
 DEBUG  [handleAudioAnalysis] Received audio analysis: maxDuration=10000 analysis.dataPoints=6 analysisData.dataPoints=46
 DEBUG  [handleAudioAnalysis] Combined data points before trimming: numberOfSegments=100 visualizationDuration=10000 combinedDataPointsLength=52 vs maxDataPoints=100
 DEBUG  [handleAudioAnalysis] Updated analysis data: durationMs=5200 {"dataPoints": 52}
 DEBUG  Status: paused: false isRecording: true durationMs: 5090 size: 162304 null
 DEBUG  [handleAudioAnalysis] Received audio analysis: maxDuration=10000 analysis.dataPoints=6 analysisData.dataPoints=52
 DEBUG  [handleAudioAnalysis] Combined data points before trimming: numberOfSegments=100 visualizationDuration=10000 combinedDataPointsLength=58 vs maxDataPoints=100
 DEBUG  [handleAudioAnalysis] Updated analysis data: durationMs=5800 {"dataPoints": 58}
 DEBUG  [handleAudioAnalysis] Received audio analysis: maxDuration=10000 analysis.dataPoints=6 analysisData.dataPoints=58
 DEBUG  [handleAudioAnalysis] Combined data points before trimming: numberOfSegments=100 visualizationDuration=10000 combinedDataPointsLength=64 vs maxDataPoints=100
 DEBUG  [handleAudioAnalysis] Updated analysis data: durationMs=6400 {"dataPoints": 64}
 DEBUG  Status: paused: false isRecording: true durationMs: 6088 size: 194048 null
 DEBUG  [handleAudioAnalysis] Received audio analysis: maxDuration=10000 analysis.dataPoints=6 analysisData.dataPoints=64
 DEBUG  [handleAudioAnalysis] Combined data points before trimming: numberOfSegments=100 visualizationDuration=10000 combinedDataPointsLength=70 vs maxDataPoints=100
 DEBUG  [handleAudioAnalysis] Updated analysis data: durationMs=7000 {"dataPoints": 70}
 DEBUG  [handleAudioAnalysis] Received audio analysis: maxDuration=10000 analysis.dataPoints=5 analysisData.dataPoints=70
 DEBUG  [handleAudioAnalysis] Combined data points before trimming: numberOfSegments=100 visualizationDuration=10000 combinedDataPointsLength=75 vs maxDataPoints=100
 DEBUG  [handleAudioAnalysis] Updated analysis data: durationMs=7500 {"dataPoints": 75}
 LOG  pauseRecording (in tryAgain)
 LOG  Recording Session Paused
 DEBUG  stoping recording
 DEBUG  [handleAudioEvent] Received audio event: {"compression": null, "deltaSize": 1024, "encodedLength": 1368, "fileUri": "null", "lastEmittedSize": 0, "mimeType": "audio/wav", "position": 0, "streamUuid": "05b4999c-7cd7-4cd2-a652-c8b48e811614", "totalSize": 0}
 LOG  got audio data, eventDataSize: 1024 totalSize: 0 {"compression": undefined, "data": "qP/O/9n/v/+0/7L/qv/N/9//3f/d/+T/zP/I/97/1f/N/8z/0f+3/63/t//B/9z/8//f/9T/3P/W/8f/0//p/8r/2//m/9v/0//n/wQABgBAAFwAQwA9AGUAZgByAJQApACxAM4A0ADZAOEAzgDJAN0AxQDEANoA2AC5AMsA2gDcAKoArQCiAG8AdwB1AFsAYABWACsAEQAGAOn/9v/z/+P/xv/M/+//7f/v//D/6v/m/wIABgAXABsAKwA1AFMAXQBVAEwARgBgAIUArwC7AKAAnACaALcAowC8AMAApQCGAJIAdQBpAIsAkwBzAIEAkQB+AHUAegCHAHgAmACkAIAAbwB/AGcAXgBsAE8ANABUAEsALgA0ADwAGQAKADIAQAAnADoAIwABAPP/CAAHAP///P/h/8r/vP/O/8X/wv+4/6P/kP+g/6D/nf+i/6v/p/+h/7X/qv+S/5D/kv+C/4T/jP+I/47/jP+f/5v/p/+h/xoAqv+d/57/w//T/8//yP/V/9//1//Y/87/0v/b/+H/uv/A/8L/rf+b/6b/qv+f/4X/l/+e/5X/mv+7/7L/k/+R/6r/r/+s/9b/4v/E/77/2f/f/9n/BgApACIAMABQAEgAQQBiAIEAjwCpALkAnwChALYAuADFAMoA0wDKAL4AuAC4AJAAmQC8AMgAsACgAJcAjACXAJ8AmwCPAJAAfQB8AH4AgABgAFIARgAzABMAHgAeAAsACwAGAPP/5f/n/9n/wP+t/6//xf/L/8D/nf+K/3T/a/+C/47/a/9N/zn/Jv8s/zT/If8I//n+5/7w/t7+3P7n/gr/Af8Q/yL/Mf8w/0j/WP9R/2L/dv9+/3//jP+b/6z/vP/n/+j/7P8AABoAHQA9AFMAZgBhAE0ARgBDAE4AVgB+AHcAdACAAJ0AkgCSAI8AmwCbAKcApgCVAIwAjQCaAKAArQCsAJcAcwBvAHAAawBmAGwAZwBbAFYAWgBJAD8APQA4ADcAIwAYACcAOgAxACAAGgACANH/2f/n/+P/3//q/9j/2P/T/8n/uv+6/8T/yv/a/+v/4//c/9j/y//H/8j/wP/B/8L/v//M/93/6f/i/+z/9//n/+P/8//4/+j/6v/p/9D/xv/m/9f/wP/P/9r/zf/D/8b/q/+b/5v/sv+3/8v/tv+9/7v/wv/D/8X/wf/I/8L/1f/y//P/3v/Q/93/1//a/9n/4//k/+v/7//3//H/3P/W/9X/1v+7/9T/1f/U/8j/1v/X/9X/7/8EAAkA8v/q/+v/1f/K/9j/2f/B/7r/tP+0/6r/uv+i/5f/of+N/3n/lf+H/37/k/+g/63/o/+s/7j/qf+h/7//uf+4/9H/6f/S/w==", "eventDataSize": 1024, "fileUri": "null", "position": 0, "totalSize": 0}
 DEBUG  Recording interruption event received: {"isPaused": false, "reason": "recordingStopped"}
 DEBUG  [r1] Received recording interruption event: {"isPaused": false, "reason": "recordingStopped"}
 DEBUG  [r1] recordingConfigRef.current exists: true
 DEBUG  [r1] No recording interruption callback configured
 DEBUG  recording stopped {"analysisData": {"amplitudeRange": {"max": 0.469879150390625, "min": 0}, "bitDepth": 32, "dataPoints": [[Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object], [Object]], "durationMs": 7500, "extractionTimeMs": 0, "numberOfChannels": 1, "rmsRange": {"max": -Infinity, "min": Infinity}, "sampleRate": 44100, "samples": 0, "segmentDurationMs": 100}, "bitDepth": 16, "channels": 1, "compression": null, "createdAt": 1758656421732, "durationMs": 7086, "fileUri": "", "filename": "stream-only", "mimeType": "audio/wav", "sampleRate": 16000, "size": 224768}
 LOG  stopRecording completed
 DEBUG  Status: paused: false isRecording: false durationMs: undefined size: 0 undefined
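One note on the `handleAudioEvent` log entry: it is at least internally consistent, since `deltaSize: 1024` and `encodedLength: 1368` match exactly — base64 encodes n bytes into 4·⌈n/3⌉ characters (with padding):

```typescript
// Length of the (padded) base64 encoding of n bytes.
function base64Length(nBytes: number): number {
  return 4 * Math.ceil(nBytes / 3);
}

base64Length(1024); // 1368, matching encodedLength in the log above
```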

Expected Behavior

  • All audio data is captured regardless of when the recording is stopped.

Actual Behavior

  • The emitted audio data is always 1024 bytes and does not contain any actual audio for that portion of the recording.

Additional Context

Checklist

  • I have updated to the latest version of expo-audio-studio
  • I have checked the documentation
  • I have tested in the official AudioPlayground app
  • I have verified microphone permissions are properly set up
  • I have filled out all required fields in this template
