AVAudioEngine and offline manual rendering mode - can't write buffers with higher frequencies to the output file


I'm reading an input file, and using offline manual rendering mode I want to perform amplitude modulation and write the result to an output file.

For the sake of testing, I generate pure sine waves. This works well for frequencies below 6,000 Hz. For higher frequencies (my goal is to work with ca. 20,000 Hz), the signal in the output file is audibly distorted, and the spectrum ends at 8,000 Hz; instead of a single pure peak there are multiple peaks between 0 and 8,000 Hz.

Here's my code snippet:

    let outputFile: AVAudioFile

    do {
        let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
        let outputURL = documentsURL.appendingPathComponent("output.caf")
        outputFile = try AVAudioFile(forWriting: outputURL, settings: sourceFile.fileFormat.settings)
    } catch {
        fatalError("Unable to open output audio file: \(error).")
    }

    var sampleTime: Float32 = 0

    while engine.manualRenderingSampleTime < sourceFile.length {
        do {
            let frameCount = sourceFile.length - engine.manualRenderingSampleTime
            let framesToRender = min(AVAudioFrameCount(frameCount), buffer.frameCapacity)
            
            let status = try engine.renderOffline(framesToRender, to: buffer)
            
            switch status {
            
            case .success:
                // The data rendered successfully. Write it to the output file.
                let sampleRate = Float(mixer.outputFormat(forBus: 0).sampleRate)

                let modulationFrequency: Float = 20000.0

                for i in 0..<Int(buffer.frameLength) {
                    let val = sinf(2.0 * .pi * modulationFrequency * sampleTime / sampleRate)
                    // TODO: perform modulation later
                    buffer.floatChannelData?.pointee[i] = val
                    sampleTime += 1.0
                }

                try outputFile.write(from: buffer)
                
            case .insufficientDataFromInputNode:
                // Applicable only when using the input node as one of the sources.
                break
                
            case .cannotDoInCurrentContext:
                // The engine couldn't render in the current render call.
                // Retry in the next iteration.
                break
                
            case .error:
                // An error occurred while rendering the audio.
                fatalError("The manual rendering failed.")
            @unknown default:
                fatalError("unknown error")
            }
        } catch {
            fatalError("The manual rendering failed: \(error).")
        }
    }

My question: is there something wrong with my code? Or does anybody have an idea how to produce output files containing sine waves of higher frequencies?

I suppose that the manual rendering mode is not fast enough to deal with higher frequencies.
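Independent of the engine, a minimal sanity check is to write a high-frequency sine straight to a file; if that file is clean, the writing path itself is fine. A sketch, assuming a 48 kHz mono float format (file name and parameters are illustrative):

import AVFoundation

// Sanity check: write one second of a 20 kHz sine straight to a CAF file,
// bypassing the engine entirely. Format and file name are illustrative.
let sampleRate = 48_000.0
let format = AVAudioFormat(standardFormatWithSampleRate: sampleRate, channels: 1)!
let frameCount = AVAudioFrameCount(sampleRate)  // one second of audio
let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frameCount)!
buffer.frameLength = frameCount

let frequency: Float = 20_000.0
for frame in 0..<Int(buffer.frameLength) {
    buffer.floatChannelData![0][frame] = sinf(2.0 * .pi * frequency * Float(frame) / Float(sampleRate))
}

let url = FileManager.default.temporaryDirectory.appendingPathComponent("sine.caf")
let file = try! AVAudioFile(forWriting: url, settings: format.settings,
                            commonFormat: format.commonFormat, interleaved: format.isInterleaved)
try! file.write(from: buffer)

If this file shows a single clean peak at 20 kHz in a spectrum analyzer, the problem lies in the engine configuration rather than in AVAudioFile.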


Update: in the meantime, I analyzed the output file with Audacity. Above is the waveform at 1,000 Hz, below the same at 20,000 Hz: [screenshot: waveforms at 1,000 Hz and 20,000 Hz]

When I zoom in, I see the following: [screenshot: zoomed-in waveforms]

Comparing the spectra of the two output files, I get the following: [screenshot: spectrum comparison]

It's strange that at higher frequencies the amplitude tends toward zero. In addition, I see more frequencies in the second spectrum.

A new question arising from this outcome is whether the following algorithm is correct:

// Process the audio in `renderBuffer` here
for i in 0..<Int(renderBuffer.frameLength) {
    let val = sinf(2.0 * .pi * 1000.0 * Float(index) / Float(sampleRate))
    renderBuffer.floatChannelData?.pointee[i] = val
    index += 1
}

I checked the sample rate, which is 48,000 Hz. I know that when the sampling frequency is greater than twice the maximum frequency of the signal being sampled (the Nyquist criterion), the original signal can be faithfully reconstructed.
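To make that concrete: at 48,000 Hz the Nyquist frequency is 24,000 Hz, so a 20,000 Hz tone should be representable. A tone above Nyquist, however, folds back to fs - f. A small sketch of that rule (the example values are illustrative):

func aliasedFrequency(_ f: Double, sampleRate fs: Double) -> Double {
    // A tone at f <= fs/2 is representable as-is; for fs/2 < f < fs
    // it aliases (folds back) to fs - f.
    f <= fs / 2 ? f : fs - f
}

print(aliasedFrequency(20_000, sampleRate: 48_000)) // 20000.0 (below Nyquist)
print(aliasedFrequency(40_000, sampleRate: 48_000)) // 8000.0 (folds back to 8 kHz)

Energy folding back below Nyquist like this is what aliasing looks like in a spectrum analyzer.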

Update 2:

I changed the settings as follows:

    settings[AVFormatIDKey] = kAudioFormatAppleLossless
    settings[AVAudioFileTypeKey] = kAudioFileCAFType
    settings[AVSampleRateKey] = readBuffer.format.sampleRate
    settings[AVNumberOfChannelsKey] = 1
    settings[AVLinearPCMIsFloatKey] = (readBuffer.format.commonFormat == .pcmFormatInt32)
    settings[AVSampleRateConverterAudioQualityKey] = AVAudioQuality.max
    settings[AVLinearPCMBitDepthKey] = 32
    settings[AVEncoderAudioQualityKey] = AVAudioQuality.max

Now the quality of the output signal is better, but not perfect. I get higher amplitudes, but there is always more than one frequency peak in the spectrum analyzer. Maybe a workaround could be to apply a high-pass filter?
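If one wanted to try that, here is a minimal sketch that inserts a single-band AVAudioUnitEQ high-pass into the engine graph (the `engine` and `player` nodes and the cutoff frequency are assumed/illustrative):

// Illustrative sketch: a single-band high-pass via AVAudioUnitEQ.
// `engine` and `player` are assumed to exist as in the snippets above.
let eq = AVAudioUnitEQ(numberOfBands: 1)
eq.bands[0].filterType = .highPass
eq.bands[0].frequency = 15_000  // Hz, illustrative cutoff
eq.bands[0].bypass = false

engine.attach(eq)
engine.connect(player, to: eq, format: nil)
engine.connect(eq, to: engine.mainMixerNode, format: nil)

Whether this helps depends on where the spurious components originate; an EQ only filters what reaches it inside the graph.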

In the meantime, I worked with a kind of signal generator, streaming the manipulated buffer (containing sine waves) directly to the loudspeaker - in this case, the output is perfect. I think that routing the signal to a file causes these issues.
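For reference, such a live signal generator can be built with an AVAudioSourceNode render block (a sketch modeled on Apple's signal-generator pattern; names and the 20,000 Hz frequency are illustrative):

import AVFoundation

// Minimal live signal generator: an AVAudioSourceNode renders a sine
// directly into the engine, which plays it through the loudspeaker.
let engine = AVAudioEngine()
let outputFormat = engine.outputNode.outputFormat(forBus: 0)
let sampleRate = Float(outputFormat.sampleRate)
let frequency: Float = 20_000.0
var phase: Float = 0.0

let source = AVAudioSourceNode { _, _, frameCount, audioBufferList -> OSStatus in
    let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
    let increment = 2.0 * Float.pi * frequency / sampleRate
    for frame in 0..<Int(frameCount) {
        let value = sinf(phase)
        phase += increment
        if phase >= 2.0 * .pi { phase -= 2.0 * .pi }
        // Write the same sample to every channel buffer.
        for buffer in buffers {
            let samples = UnsafeMutableBufferPointer<Float>(buffer)
            samples[frame] = value
        }
    }
    return noErr
}

engine.attach(source)
engine.connect(source, to: engine.mainMixerNode, format: outputFormat)
try! engine.start()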

1 Answer

Answered by sbooth:

The speed of manual rendering mode is not the issue, as speed in the context of manual rendering is somewhat irrelevant.

Here is skeleton code for manual rendering from a source file to an output file:

// Open the input file
let file = try! AVAudioFile(forReading: URL(fileURLWithPath: "/tmp/test.wav"))

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()

engine.attach(player)

engine.connect(player, to: engine.mainMixerNode, format: nil)

// Run the engine in manual rendering mode using chunks of 512 frames
let renderSize: AVAudioFrameCount = 512

// Use the file's processing format as the rendering format
let renderFormat = AVAudioFormat(commonFormat: file.processingFormat.commonFormat,
                                 sampleRate: file.processingFormat.sampleRate,
                                 channels: file.processingFormat.channelCount,
                                 interleaved: true)!
let renderBuffer = AVAudioPCMBuffer(pcmFormat: renderFormat, frameCapacity: renderSize)!

try! engine.enableManualRenderingMode(.offline, format: renderFormat, maximumFrameCount: renderBuffer.frameCapacity)

try! engine.start()
player.play()

// The render format is also the output format
let output = try! AVAudioFile(forWriting: URL(fileURLWithPath: "/tmp/foo.wav"),
                              settings: renderFormat.settings,
                              commonFormat: renderFormat.commonFormat,
                              interleaved: renderFormat.isInterleaved)

// Read using a buffer sized to produce `renderSize` frames of output
let readBuffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat, frameCapacity: renderSize)!

// Process the file
while true {
    do {
        // Processing is finished if all frames have been read
        if file.framePosition == file.length {
            break
        }

        try file.read(into: readBuffer)
        player.scheduleBuffer(readBuffer, completionHandler: nil)

        let result = try engine.renderOffline(readBuffer.frameLength, to: renderBuffer)

        // Process the audio in `renderBuffer` here

        // Write the audio
        try output.write(from: renderBuffer)
        if result != .success {
            break
        }
    }
    catch {
        break
    }
}

player.stop()
engine.stop()
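For the "Process the audio in `renderBuffer` here" step, here is a minimal sketch that overwrites the rendered audio with a test sine. Note that `renderFormat` above is interleaved, so all channels share one chunk of memory (`fillWithSine` and the 1,000 Hz tone are illustrative):

var index = 0

// Fill an interleaved float buffer with a sine wave on every channel.
// For interleaved buffers, `floatChannelData?.pointee` points at one
// chunk of memory where samples are packed frame-by-frame; `stride`
// is the interleaved channel count.
func fillWithSine(_ buffer: AVAudioPCMBuffer, frequency: Float, sampleRate: Float) {
    guard let data = buffer.floatChannelData?.pointee else { return }
    let stride = buffer.stride
    for frame in 0..<Int(buffer.frameLength) {
        let val = sinf(2.0 * .pi * frequency * Float(index) / sampleRate)
        for channel in 0..<stride {
            data[frame * stride + channel] = val  // same value on every channel
        }
        index += 1
    }
}

// In the loop above, in place of the processing comment:
// fillWithSine(renderBuffer, frequency: 1000.0, sampleRate: Float(renderFormat.sampleRate))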

Here is a snippet showing how to set the same sample rate throughout the engine:

// Replace:
// engine.connect(player, to: engine.mainMixerNode, format: nil)

// With:
let busFormat = AVAudioFormat(standardFormatWithSampleRate: file.fileFormat.sampleRate, channels: file.fileFormat.channelCount)

engine.disconnectNodeInput(engine.outputNode, bus: 0)
engine.connect(engine.mainMixerNode, to: engine.outputNode, format: busFormat)

engine.connect(player, to: engine.mainMixerNode, format: busFormat)

Verify the sample rates are the same throughout with:

NSLog("%@", engine)

This prints a graph description similar to:

________ GraphDescription ________
AVAudioEngineGraph 0x7f8194905af0: initialized = 0, running = 0, number of nodes = 3

     ******** output chain ********

     node 0x600001db9500 {'auou' 'ahal' 'appl'}, 'U'
         inputs = 1
             (bus0, en1) <- (bus0) 0x600001d80b80, {'aumx' 'mcmx' 'appl'}, [ 2 ch,  48000 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved]

     node 0x600001d80b80 {'aumx' 'mcmx' 'appl'}, 'U'
         inputs = 1
             (bus0, en1) <- (bus0) 0x600000fa0200, {'augn' 'sspl' 'appl'}, [ 2 ch,  48000 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved]
         outputs = 1
             (bus0, en1) -> (bus0) 0x600001db9500, {'auou' 'ahal' 'appl'}, [ 2 ch,  48000 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved]

     node 0x600000fa0200 {'augn' 'sspl' 'appl'}, 'U'
         outputs = 1
             (bus0, en1) -> (bus0) 0x600001d80b80, {'aumx' 'mcmx' 'appl'}, [ 2 ch,  48000 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved]
______________________________________