Part I: Understanding Core Audio

1. Overview of Core Audio

Core Audio is the engine behind any sound played on a Mac or an iOS device. Its procedural API is exposed in C, which makes it directly available in Objective-C and C++, and usable from any other language that can call C functions, such as Java with the Java Native Interface, or Ruby via RubyInline. From an audio standpoint, Core Audio is high level because it is highly agnostic. It abstracts away both the implementation details of the hardware and the details of individual audio formats.
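To make “procedural C API” concrete, here is a minimal sketch (not from the book) of what a single Core Audio call looks like in plain C; the file path and compile line are hypothetical. The convention to notice is that functions return an OSStatus error code and deliver their real results through pointer parameters.

    // Plain C, no Objective-C required. Hypothetical compile line:
    //   clang open.c -framework AudioToolbox -framework CoreFoundation
    #include <AudioToolbox/AudioToolbox.h>
    #include <stdio.h>

    int main(void) {
        // Hypothetical path; any audio file Core Audio understands will do.
        CFURLRef url = CFURLCreateWithFileSystemPath(kCFAllocatorDefault,
                                                     CFSTR("/tmp/example.m4a"),
                                                     kCFURLPOSIXPathStyle,
                                                     false);
        AudioFileID audioFile = NULL;
        // Core Audio convention: the return value is an OSStatus error code,
        // and the result arrives through the final pointer parameter.
        OSStatus err = AudioFileOpenURL(url, kAudioFileReadPermission,
                                        0 /* no file-type hint */, &audioFile);
        printf("AudioFileOpenURL returned %d\n", (int)err);
        if (err == noErr) AudioFileClose(audioFile);
        CFRelease(url);
        return 0;
    }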

To an application developer, Core Audio is suspiciously low level. “If you’re coding in C, you’re doing something wrong,” or so the saying goes. The problem is, very little sits above Core Audio. Audio turns out to be a difficult problem, and all but the most trivial use cases require more decision making than even the gnarliest Objective-C framework demands. The good news is that the times you don’t need Core Audio are easy enough to spot, and the tasks you can do without it are pretty simple (see the sidebar “When Not to Use Core Audio”).

When you use Core Audio, you’ll likely find it a far different experience from nearly anything else you’ve used in your Cocoa programming career. Even if you’ve called into other C-based Apple frameworks, such as Quartz or Core Foundation, you’ll likely be surprised by Core Audio’s style and conventions.

This chapter looks at what’s in Core Audio and where to find it. It then broadly surveys some of Core Audio’s most distinctive conventions, which you’ll sample by writing a simple application that exercises Core Audio’s ability to work with audio metadata in files. This will give you your first taste of properties, which enable much of the work you’ll do throughout the book.
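As a preview, and as a sketch rather than the chapter’s finished example, here is roughly what that metadata application looks like. It assumes a command-line tool that receives an audio file path in argv[1], and it shows the two-step property idiom: ask for the value’s size with AudioFileGetPropertyInfo, then fetch the value with AudioFileGetProperty.

    #import <Foundation/Foundation.h>
    #import <AudioToolbox/AudioToolbox.h>
    #include <assert.h>

    int main(int argc, const char *argv[]) {
        @autoreleasepool {
            if (argc < 2) {
                printf("Usage: metadata /path/to/audiofile\n");
                return -1;
            }
            NSString *path = [[NSString stringWithUTF8String:argv[1]]
                                 stringByExpandingTildeInPath];
            NSURL *url = [NSURL fileURLWithPath:path];

            // Under ARC, the toll-free bridge to CFURLRef needs __bridge;
            // under manual reference counting, a plain cast suffices.
            AudioFileID audioFile;
            OSStatus err = AudioFileOpenURL((__bridge CFURLRef)url,
                                            kAudioFileReadPermission,
                                            0, // no file-type hint
                                            &audioFile);
            assert(err == noErr);

            // Step 1: ask how big the property's value is.
            UInt32 dictionarySize = 0;
            err = AudioFileGetPropertyInfo(audioFile,
                                           kAudioFilePropertyInfoDictionary,
                                           &dictionarySize, NULL);
            assert(err == noErr);

            // Step 2: fetch the value, a CFDictionaryRef of metadata
            // (artist, title, duration, and so on, format permitting).
            CFDictionaryRef dictionary = NULL;
            err = AudioFileGetProperty(audioFile,
                                       kAudioFilePropertyInfoDictionary,
                                       &dictionarySize, &dictionary);
            assert(err == noErr);

            NSLog(@"metadata: %@", dictionary);
            CFRelease(dictionary);
            AudioFileClose(audioFile);
        }
        return 0;
    }

Asserting on every OSStatus is crude, and friendlier error handling comes later; the shape of the calls is the point here.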

When Not to Use Core Audio

The primary scenario for not using Core Audio is simple playback from a file: On a Mac, you can use AppKit’s NSSound; on iOS, you can use the AVAudioPlayer class from the AV Foundation framework. iOS also provides the AVAudioRecorder for recording to a file. The Mac has no equivalent Objective-C API for recording, although it does have QuickTime and QTKit; you could treat your audio as QTMovie objects and pick up some playback, recording, and mixing functionality. However, QuickTime’s video orientation and its philosophy of being an editing API for multimedia documents make it a poor fit for purely audio tasks. The same can be said of AV Foundation’s AVPlayer and AVCaptureSession classes, which debuted in iOS 4 and became the heir apparent to QuickTime on the Mac in 10.7 (Lion).
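For comparison, here is a minimal sketch of the no-Core-Audio path, assuming an iOS project using ARC with a bundled file named song.m4a (both assumptions, not from the text):

    #import <AVFoundation/AVFoundation.h>

    // Keep a strong reference to the player (for example, in an instance
    // variable) for as long as playback should continue; if the player
    // is deallocated, the sound stops.
    NSURL *url = [[NSBundle mainBundle] URLForResource:@"song"
                                         withExtension:@"m4a"];
    NSError *error = nil;
    AVAudioPlayer *player = [[AVAudioPlayer alloc]
                                initWithContentsOfURL:url error:&error];
    if (player) {
        [player play];
    } else {
        NSLog(@"Could not create player: %@", error);
    }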

Beyond the simplest playback and recording cases—and, in particular, if you want to do anything with the audio, such as mixing, changing formats, applying effects, or working directly with the audio data—you’ll want to adopt Core Audio.

