Interface Summary

- SoundEventType: The SoundEventType interface defines the type of a sound event.
- SoundTrackListener: SoundTrackListener is an interface for implementing a listener that receives events from a sound sequencer.

Class Summary

- Sound: Sound is a class for keeping sound data.
- SoundPlayer: The SoundPlayer class manages "virtual sound source" tracks capable of being configured inside the sound source of the handset.
- SoundTrack: SoundTrack is a class providing functions for sound data playback.
Provides sound player-related classes and interfaces.
The VSCL API sound player provides the following two media playing functions.
A sound player is a component capable of overlapping playback of multiple melodies or other sounds, independently and asynchronously, for use with games, music applications and the like. A Java application can use a sound player for simultaneous playback of background music and sound effect melodies, or for playing multi-part music.
It is possible to embed user events in sound data. When playback reaches an embedded user event, the event is reported to the Java application. Using this event notification feature, a Java application can synchronize screen display with sound playback.
The facility for playing back melody data is described here.
Playing a single note requires a facility for emitting a sound of the designated timbre and pitch, called a sound channel. The sound emitting system as a whole, consisting of one or more sound channels, a mixer for combining the output from each channel, and their respective control facilities, is called a sound source. A 16-voice polyphonic specification, for example, means a sound source with 16 sound channels. The number of sound channels in a sound source is referred to as the maximum number of simultaneous voices. Figure 1 shows a model of a 16-voice polyphonic sound source.
Figure 1. A Polyphonic Sound Source
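The sound-source model above can be sketched in Java. This is an illustrative model only, not part of the VSCL API: the class and method names are assumptions, and a sine wave stands in for an arbitrary timbre.

```java
/** One sound channel: emits a sound of a designated timbre and pitch. */
class SoundChannel {
    double frequencyHz;   // pitch currently assigned to this channel
    boolean active;       // whether the channel is sounding

    double sample(double t) {
        // A sine voice stands in for an arbitrary timbre.
        return active ? Math.sin(2 * Math.PI * frequencyHz * t) : 0.0;
    }
}

/** A sound source: N channels (the maximum simultaneous voices) plus a mixer. */
class SoundSource {
    final SoundChannel[] channels;

    SoundSource(int maxVoices) {
        channels = new SoundChannel[maxVoices];
        for (int i = 0; i < maxVoices; i++) channels[i] = new SoundChannel();
    }

    int maxSimultaneousVoices() { return channels.length; }

    /** The mixer: combine the output of every channel into one signal. */
    double mix(double t) {
        double sum = 0.0;
        for (SoundChannel c : channels) sum += c.sample(t);
        return sum / channels.length;
    }
}

public class SourceDemo {
    public static void main(String[] args) {
        SoundSource source = new SoundSource(16);   // 16-voice polyphonic
        source.channels[0].frequencyHz = 440.0;     // A4 on channel 0
        source.channels[0].active = true;
        System.out.println(source.maxSimultaneousVoices()); // prints 16
    }
}
```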
Melody data is musical notation information, consisting of all the information for each musical note. Of this information, items such as the sound channel, the interval (pitch), and note on/mute timing are recorded as sequence data on a time line.
Playing back this information requires a facility for reading the control information in sequence, under timer control, and issuing control instructions to the sound source. This facility is called a sequencer. Figure 2 illustrates sequencer processing.
Figure 2. A Sequencer Processing Model (Melody Playback Mode)
In the example in Figure 2, melody sequence data is recorded as "sound channel," "interval," "on/mute" and so on for each time unit (tick). The sequencer reads each sequence data entry, setting the designated processing in the designated sound channel at the designated time. A melody is played by combining a sequencer with the sound source described above. A sequencer paired with a sound source is here called a track (or sequence track).
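The sequencer model of Figure 2 can be sketched as follows. The names are illustrative, not VSCL API: sequence data is a list of (tick, channel, note, on/mute) records, and the sequencer walks them in time order, applying each one to the designated channel at the designated tick.

```java
import java.util.*;

/** One sequence data record: what to do, on which channel, at which tick. */
class SeqEvent {
    final int tick, channel, note; final boolean on;
    SeqEvent(int tick, int channel, int note, boolean on) {
        this.tick = tick; this.channel = channel; this.note = note; this.on = on;
    }
}

/** Reads sequence data under timer control and drives the sound channels. */
class Sequencer {
    // note currently sounding on each channel, or -1 when muted
    final int[] channelNote;

    Sequencer(int channels) {
        channelNote = new int[channels];
        Arrays.fill(channelNote, -1);
    }

    /** Advance tick by tick, applying every event whose time has come. */
    void run(List<SeqEvent> sequence, int endTick) {
        sequence.sort(Comparator.comparingInt(e -> e.tick));
        int i = 0;
        for (int tick = 0; tick <= endTick; tick++) {            // the timer
            while (i < sequence.size() && sequence.get(i).tick == tick) {
                SeqEvent e = sequence.get(i++);
                channelNote[e.channel] = e.on ? e.note : -1;     // control the source
            }
        }
    }
}

public class SequencerDemo {
    public static void main(String[] args) {
        Sequencer seq = new Sequencer(16);
        List<SeqEvent> melody = new ArrayList<>(List.of(
            new SeqEvent(0, 0, 60, true),    // tick 0: channel 0, note-on C4
            new SeqEvent(4, 0, 60, false),   // tick 4: channel 0, mute
            new SeqEvent(4, 1, 64, true)));  // tick 4: channel 1, note-on E4
        seq.run(melody, 4);
        // After tick 4: channel 0 is muted, channel 1 is holding note 64.
        System.out.println(seq.channelNote[0] + " " + seq.channelNote[1]);
    }
}
```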
Use of a sound source in "sound playback mode" is described here.
In sound playback mode, multiple melodies (called "sounds" here for convenience) are played back independently. This requires multiple tracks. A cell phone, however, normally is equipped with only one sound source. For this reason, sound playback is performed by grouping a number of sound channels in the sound source control facility, treating each as an independent sound source. As a result, multiple virtual tracks are created. Figure 3 shows a system in which a sound source having 16 maximum simultaneous voices is divided into four virtual sound sources of four voices each for sound playback.
Figure 3. Sound Source and Sequencer (in Sound Playback Mode)
The maximum number of simultaneous voices in each virtual track is up to the implementation, and sound data must be created with this condition in mind. The VSCL API makes it possible for a running Java application to find out the number of virtual tracks available for use.
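The division in Figure 3 amounts to partitioning the channels of one physical sound source into groups, each acting as an independent virtual sound source. A minimal sketch, with illustrative names that are not part of the VSCL API:

```java
/** A virtual track: a contiguous slice of the physical source's channels. */
class VirtualTrack {
    final int firstChannel, voices;
    VirtualTrack(int firstChannel, int voices) {
        this.firstChannel = firstChannel;
        this.voices = voices;
    }
}

public class TrackDivisionDemo {
    /** Split a source with maxVoices channels into equal virtual tracks. */
    static VirtualTrack[] divide(int maxVoices, int trackCount) {
        if (maxVoices % trackCount != 0)
            throw new IllegalArgumentException("channels must divide evenly");
        int per = maxVoices / trackCount;
        VirtualTrack[] tracks = new VirtualTrack[trackCount];
        for (int i = 0; i < trackCount; i++)
            tracks[i] = new VirtualTrack(i * per, per);
        return tracks;
    }

    public static void main(String[] args) {
        // 16 maximum simultaneous voices -> 4 virtual tracks of 4 voices each
        VirtualTrack[] tracks = divide(16, 4);
        for (VirtualTrack t : tracks)
            System.out.println("channels " + t.firstChannel + ".."
                               + (t.firstChannel + t.voices - 1));
    }
}
```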
The assignment of sound channels to each virtual track cannot be changed dynamically. Instead, two or more tracks can be combined as if they were one track, and played in synchronization with each other.
Figure 4. Simultaneous Playback of Multiple Tracks
In Figure 4, tracks 0, 1, and 2 are combined into a 12-voice track, which along with track 3 results in two tracks. When multiple tracks are combined, one track is called the "master track" and the others "slave tracks" for convenience. The slave tracks are played back in synchronization with the master track. In Figure 4, track 0 is the master track and tracks 1 and 2 are slave tracks; when track 0 is played in this state, tracks 1 and 2 are played back at the same time.
It is not possible to assign the same track to more than one combination at once: for example, combining tracks 0, 1, and 2 while also combining tracks 2 and 3 is not allowed. As long as such duplication is avoided, however, any combination of tracks is possible, even combining all tracks in sound playback mode into a single track.
Sound data must be set independently in each track even when multiple tracks are combined.
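The combination rules above can be modeled as a simple validity check: a track may belong to at most one combination, and a combined track's voice count is the sum of its members' voices. This is an illustrative model with assumed names, not the VSCL API (the real mapping is set with SoundTrack's setSubjectTo()).

```java
public class TrackGroupDemo {
    final int[] voicesPerTrack;   // voices in each virtual track, e.g. {4, 4, 4, 4}
    final Integer[] masterOf;     // masterOf[i] = master of slave track i, or null

    TrackGroupDemo(int[] voicesPerTrack) {
        this.voicesPerTrack = voicesPerTrack;
        this.masterOf = new Integer[voicesPerTrack.length];
    }

    /** Make 'slave' follow 'master'; rejects duplicate combination membership. */
    void combine(int master, int slave) {
        if (masterOf[slave] != null || masterOf[master] != null)
            throw new IllegalStateException("track already in a combination");
        masterOf[slave] = master;
    }

    /** Total voices of the combination led by 'master'. */
    int combinedVoices(int master) {
        int total = voicesPerTrack[master];
        for (int i = 0; i < masterOf.length; i++)
            if (masterOf[i] != null && masterOf[i] == master)
                total += voicesPerTrack[i];
        return total;
    }

    public static void main(String[] args) {
        TrackGroupDemo g = new TrackGroupDemo(new int[] {4, 4, 4, 4});
        g.combine(0, 1);   // track 0 is master; tracks 1 and 2 are its slaves
        g.combine(0, 2);
        System.out.println(g.combinedVoices(0));  // prints 12
        // g.combine(2, 3) would throw: track 2 is already in a combination
    }
}
```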
A media player uses a sound source in melody playback mode. Melody playback mode is a mode in which a single melody is played, so only one track is needed; Figure 2 shows a configuration example in melody playback mode. A sound player, on the other hand, uses the sound source in sound playback mode, as shown in Figure 3.
In these ways, a media player and sound player differ in their use of the sound source. Since an ordinary handset hardware configuration does not allow for both kinds of playback modes together, simultaneous media player and sound player use is not supported.
The main classes and interfaces making up the "sound player" are shown in Figure 5.
Figure 5. Main Sound Player Classes
The procedure for playing back sound data is outlined here.
1. Call the static SoundPlayer method getPlayer() to get a SoundPlayer object.
2. Create a Sound object, designating in the constructor the sound data to be played.
3. Call the SoundPlayer getTrack() method to get a SoundTrack object.
4. Set the Sound object in the obtained SoundTrack object using setSound(), and call setEventListener() to register a listener implemented using the SoundTrackListener interface. If multiple tracks are to be played in synchronization, call setSubjectTo() to map the slave SoundTrack objects to the master SoundTrack object.
5. Once sound data has been set for a SoundTrack object, play it by calling play(). In addition to the normal play() version, a version is available that designates a repetition count. Either during or before playback, the volume and panpot (left-right position) can be set using setVolume() and setPanpot(), respectively, or mute() can be called to mute the sound. Synchronous playback is started by calling play() on the SoundTrack designated as the master track.
6. Stop a playing SoundTrack by calling stop(); pause() and resume() are used to pause and resume playback. Synchronous playback is stopped by calling stop() on the SoundTrack designated as the master track.
7. While playback of a SoundTrack is stopped, removeSound() can be used to clear its current sound setting; setSound() can then be used to set new sound data.
8. When a SoundTrack is no longer needed, return it to the sound player by calling the SoundPlayer disposeTrack() method.

Within the limit of physically available tracks, steps 3 to 8 above can be repeated to create multiple SoundTrack objects, each of which can be played independently. Moreover, steps 4 to 8 can be repeated for an individual SoundTrack to play different sound data.
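The procedure above can be sketched in Java against the VSCL classes. This is a hedged sketch only: it cannot be compiled outside a VSCL handset environment, and the exact method signatures, parameter types, value ranges, and the Sound constructor's argument form are assumptions based on the description above.

```java
SoundPlayer player = SoundPlayer.getPlayer();   // get the SoundPlayer
Sound sound = new Sound(soundData);             // sound data; argument form assumed
SoundTrack track = player.getTrack();           // obtain a SoundTrack

track.setSound(sound);                          // set the sound data
track.setEventListener(myListener);             // myListener implements
                                                // SoundTrackListener (callback
                                                // signature not shown here)

track.setVolume(volume);                        // optional: volume and panpot
track.setPanpot(panpot);                        // (value ranges implementation-defined)
track.play();                                   // start playback

track.stop();                                   // stop playback
track.removeSound();                            // while stopped: clear the sound
player.disposeTrack(track);                     // return the track when done
```

For synchronized playback, setSubjectTo() would be called on the slave tracks before playing, and play() and stop() would be issued only on the master track.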