I am relatively new to macOS programming but not to programming generally. I'd prefer to work in (Objective-)C/C++ rather than Swift. I need to open a specific audio device for output and stream live audio data from the network to that device. The device has a custom Audio Server plug-in driver that we have source for. I'm feeling really stupid trying to figure out from the Apple documentation what I need to call to do these things. Can anyone help answer the following:
1) What are some of the appropriate APIs to use to do this? I'm thinking I need CoreAudio and AudioQueue, but I'm too ignorant here to be sure. Any references to similar example applications would be appreciated. Book recommendations would be appreciated, too.
2) How do I open my specific, custom driver for output? Does it have something to do with the UUID I see in the driver code, or is the driver identified some other way? I need my program to find the custom driver without any human assistance like picking from a selection list.
3) A dumb question, because I haven't seen it addressed clearly in the sample applications I've looked at: I downloaded the CAPlayThrough sample (https://developer.apple.com/library/archive/samplecode/CAPlayThrough/Introduction/Intro.html) and mostly understand it, but one thing in particular escapes me. How do I write my "pushed" in-memory data from the network to the output device? Do I need some kind of callback that reads from a ring buffer that the network live stream is written to?
ADDENDUM (3/24/2020): Based on further research, I've answered my main questions but still have an issue that I think is out of scope. I will give my answer below and write up a new question.
The Core Audio API would be fine. See `AudioDeviceCreateIOProcID` and `AudioDeviceStart` in `CoreAudio/AudioHardware.h` (there's a sketch of how they fit together at the end of this answer). For some reason, the Apple documentation site doesn't have the docs for them, so you have to find them in the header file. Or you could use `AVAudioEngine` (but not `AVAudioPlayer`). Depending on your other requirements, it might be easier to use an existing program like GStreamer or VLC.

CAPlayThrough could be an OK place to start, but it uses `AUGraph`, which is deprecated. I'm not sure whether you'll need the varispeed audio unit it uses to adjust for differences in the sample rates, but you can at least get started without it. This looks like it does something similar without using `AUGraph`: https://github.com/pje/WavTap/blob/master/App/AudioTee.cpp
The driver would publish an audio output device, and that device would have a property called `kAudioDevicePropertyModelUID` (see `AudioHardwareBase.h`) with a constant value you can match against. You can check it by double-clicking on the device in HALLab. Your program could use `AudioObjectGetPropertyData` to get the `kAudioHardwarePropertyDevices` property of the "audio system object" (`kAudioObjectSystemObject`) and then get the `kAudioDevicePropertyModelUID` property of each device. If you're using C++, you might want to use the Public Utility classes to help with that.
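A minimal C++ sketch of that enumeration, assuming the driver publishes some constant model UID string (the value you pass in is whatever you see in HALLab; error handling is abbreviated):

```cpp
#include <CoreAudio/CoreAudio.h>
#include <vector>

// Returns the AudioObjectID of the first device whose model UID matches,
// or kAudioObjectUnknown if none does.
static AudioObjectID FindDeviceByModelUID(CFStringRef wantedModelUID) {
    AudioObjectPropertyAddress addr = {
        kAudioHardwarePropertyDevices,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };

    // Ask the system object how big the device list is, then fetch it.
    UInt32 size = 0;
    if (AudioObjectGetPropertyDataSize(kAudioObjectSystemObject, &addr,
                                       0, nullptr, &size) != noErr)
        return kAudioObjectUnknown;

    std::vector<AudioObjectID> devices(size / sizeof(AudioObjectID));
    if (AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr,
                                   0, nullptr, &size, devices.data()) != noErr)
        return kAudioObjectUnknown;

    // Read kAudioDevicePropertyModelUID (a CFString) from each device
    // and compare it to the UID the driver publishes.
    for (AudioObjectID device : devices) {
        AudioObjectPropertyAddress modelAddr = {
            kAudioDevicePropertyModelUID,
            kAudioObjectPropertyScopeGlobal,
            kAudioObjectPropertyElementMaster
        };
        CFStringRef modelUID = nullptr;
        UInt32 uidSize = sizeof(modelUID);
        if (AudioObjectGetPropertyData(device, &modelAddr, 0, nullptr,
                                       &uidSize, &modelUID) == noErr && modelUID) {
            bool match = CFStringCompare(modelUID, wantedModelUID, 0)
                         == kCFCompareEqualTo;
            CFRelease(modelUID);
            if (match)
                return device;
        }
    }
    return kAudioObjectUnknown;
}
```

Note that not every device implements `kAudioDevicePropertyModelUID`, which is why the error result of the inner `AudioObjectGetPropertyData` call is checked rather than treated as fatal.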
The driver is identified by its bundle ID. You can use the `kAudioHardwarePropertyPlugInForBundleID` property to get the `AudioObjectID` of the audio object that represents the driver (i.e. the plug-in). You can also find the device through the plug-in object.
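A sketch of that lookup. The bundle ID "com.example.mydriver" is a placeholder; substitute your driver's actual bundle identifier. This property takes an `AudioValueTranslation` struct: a bundle-ID string goes in, the plug-in's `AudioObjectID` comes out.

```cpp
#include <CoreAudio/CoreAudio.h>

// Translate a driver bundle ID into the plug-in's AudioObjectID.
// Returns kAudioObjectUnknown if no plug-in matches.
static AudioObjectID FindPlugInByBundleID(CFStringRef bundleID) {
    AudioObjectID plugIn = kAudioObjectUnknown;

    AudioValueTranslation translation = {
        &bundleID, sizeof(bundleID),   // in:  CFStringRef bundle ID
        &plugIn,   sizeof(plugIn)      // out: AudioObjectID of the plug-in
    };
    AudioObjectPropertyAddress addr = {
        kAudioHardwarePropertyPlugInForBundleID,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };
    UInt32 size = sizeof(translation);
    AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr,
                               0, nullptr, &size, &translation);
    return plugIn;
}

// Usage: AudioObjectID plugIn =
//            FindPlugInByBundleID(CFSTR("com.example.mydriver"));
```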
The function you pass to `AudioDeviceCreateIOProcID` will be called every IO cycle and given a buffer to fill with the samples for that cycle.
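That also answers question 3: the usual pattern is exactly an IO callback that drains a ring buffer the network code fills. A sketch, where `PopSamples` is a hypothetical lock-free ring-buffer read and the device is assumed to take interleaved Float32 samples (check the actual stream format in a real program):

```cpp
#include <CoreAudio/CoreAudio.h>
#include <cstring>

// Hypothetical lock-free ring-buffer read, filled by the network thread.
// Returns the number of samples actually copied into dst.
extern size_t PopSamples(float *dst, size_t maxSamples);

static OSStatus OutputIOProc(AudioObjectID inDevice,
                             const AudioTimeStamp *inNow,
                             const AudioBufferList *inInputData,
                             const AudioTimeStamp *inInputTime,
                             AudioBufferList *outOutputData,
                             const AudioTimeStamp *inOutputTime,
                             void *inClientData) {
    for (UInt32 i = 0; i < outOutputData->mNumberBuffers; ++i) {
        AudioBuffer &buf = outOutputData->mBuffers[i];
        float *dst = static_cast<float *>(buf.mData);
        size_t wanted = buf.mDataByteSize / sizeof(float);
        // Zero first so an underrun plays silence rather than garbage.
        std::memset(buf.mData, 0, buf.mDataByteSize);
        PopSamples(dst, wanted);
    }
    return noErr;
}

// Registering and starting it on the device found earlier:
//   AudioDeviceIOProcID procID = nullptr;
//   AudioDeviceCreateIOProcID(device, OutputIOProc, nullptr, &procID);
//   AudioDeviceStart(device, procID);
//   ...and on shutdown:
//   AudioDeviceStop(device, procID);
//   AudioDeviceDestroyIOProcID(device, procID);
```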