Mobile devices generally have two separate hardware mechanisms for audio output. One is the small earpiece speaker used for the transmission of voice data, primarily during phone calls. The other is a much fuller speaker used for nearly any other activity: games, applications, music and video playback, and so on.
Depending upon the needs of a particular application, AIR developers now have the ability to target a specific hardware speaker on the device. This is done by setting the audioPlaybackMode property of the flash.media.SoundMixer class to either the AudioPlaybackMode.MEDIA or the AudioPlaybackMode.VOICE constant, both defined in the flash.media.AudioPlaybackMode class.
Another interesting addition is the ability to override the default voice-playback routing through the SoundMixer.useSpeakerphoneForVoice Boolean property.
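For instance, the two properties can be combined to route playback through the earpiece in voice mode, or to force it out of the speakerphone instead. A minimal sketch (the specific values chosen here are illustrative, not taken from the listing below):

    import flash.media.AudioPlaybackMode;
    import flash.media.SoundMixer;

    // Route all audio through the voice (earpiece) speaker...
    SoundMixer.audioPlaybackMode = AudioPlaybackMode.VOICE;

    // ...but override that routing so voice audio uses the speakerphone
    SoundMixer.useSpeakerphoneForVoice = true;

Note that useSpeakerphoneForVoice has an effect only while audioPlaybackMode is set to AudioPlaybackMode.VOICE; in media mode the fuller speaker is used regardless.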
package {

    import flash.display.Sprite;
    import flash.display.StageAlign;
    import flash.display.StageScaleMode;
    import flash.events.Event;
    import flash.media.AudioPlaybackMode;
    import flash.media.Sound;
    import flash.media.SoundMixer;
    import flash.net.URLRequest;
    import flash.text.TextField;
    import flash.text.TextFormat;

    [SWF(backgroundColor="#000000")]

    public class SoundSpeaker extends Sprite {

        private var traceField:TextField;
        private var sound:Sound;
        private var id3:Boolean;

        public function SoundSpeaker() {
            super();
            stage.scaleMode = StageScaleMode.NO_SCALE;
            stage.align = StageAlign.TOP_LEFT;
            generateDisplayObjects();
            setupSoundMixer();
        }

        // Create a TextField to display ID3 and playback information
        protected function generateDisplayObjects():void {
            var defaultFormat:TextFormat = new TextFormat();
            defaultFormat.font = "Arial";
            defaultFormat.size = 36;
            defaultFormat.color = 0xFFFFFF;
            traceField = new TextField();
            traceField.backgroundColor = 0x000000;
            traceField.alpha = 0.7;
            traceField.width = stage.stageWidth;
            traceField.height = stage.stageHeight;
            traceField.wordWrap = true;
            traceField.multiline = true;
            traceField.background = true;
            traceField.defaultTextFormat = defaultFormat;
            addChild(traceField);
        }

        // Target the media speaker for playback; swap in the
        // commented line to target the voice speaker instead
        private function setupSoundMixer():void {
            SoundMixer.audioPlaybackMode = AudioPlaybackMode.MEDIA;
            //SoundMixer.audioPlaybackMode = AudioPlaybackMode.VOICE;
            SoundMixer.useSpeakerphoneForVoice = false;
            setupSoundandLoad();
        }

        // Load an MP3 file and begin playback
        private function setupSoundandLoad():void {
            sound = new Sound(new URLRequest("assets/drowning.mp3"));
            sound.addEventListener(Event.ID3, id3Loaded);
            sound.play();
        }

        // Output ID3 metadata once it becomes available
        protected function id3Loaded(event:Event):void {
            if(!id3){
                traceField.appendText("Playing: " + sound.id3.songName + "\n");
                traceField.appendText("From the album: " + sound.id3.album + "\n");
                traceField.appendText("By the artist: " + sound.id3.artist + "\n");
                traceField.appendText("Released in: " + sound.id3.year + "\n");
                traceField.appendText("Audio Playback Mode: " + SoundMixer.audioPlaybackMode);
                id3 = true;
            }
        }
    }
}
The result of this code can be seen in Figure 6-2, running on an Android device.