Class that performs audio classification on audio data.
mp.tasks.audio.AudioClassifier(
    graph_config: calculator_pb2.CalculatorGraphConfig,
    running_mode: mp.tasks.audio.RunningMode,
    packet_callback: Optional[Callable[[Mapping[str, packet_module.Packet]], None]] = None
) -> None
This API expects a TFLite model with mandatory TFLite Model Metadata that contains the mandatory AudioProperties of the solo input audio tensor and the optional (but recommended) category labels as AssociatedFiles with type TENSOR_AXIS_LABELS per output classification tensor.
Input tensor:
- (kTfLiteFloat32)

At least one output tensor with:
- (kTfLiteFloat32)
- `[1 x N]` array where `N` represents the number of categories.
- optional (but recommended) category labels as AssociatedFiles with type TENSOR_AXIS_LABELS, containing one label per line. The first such AssociatedFile (if any) is used to fill the `category_name` field of the results. The `display_name` field is filled from the AssociatedFile (if any) whose locale matches the `display_names_locale` field of the `AudioClassifierOptions` used at creation time ("en" by default, i.e. English). If none of these are available, only the `index` field of the results will be filled.
Raises | |
---|---|
`ValueError` | The packet callback is not properly set based on the task's running mode. |
Methods
classify
classify(
    audio_clip: mp.tasks.components.containers.AudioData
) -> List[mp.tasks.audio.AudioClassifierResult]
Performs audio classification on the provided audio clip.
The audio clip is represented as a MediaPipe AudioData object. The method accepts audio clips of various lengths and sample rates; the corresponding audio sample rate must be provided within the AudioData object.
The input audio clip may be longer than what the model is able to process in a single inference. When this occurs, the input audio clip is split into multiple chunks starting at different timestamps. For this reason, this function returns a vector of ClassificationResult objects, each associated with a timestamp corresponding to the start (in milliseconds) of the chunk data that was classified, e.g.:
```
ClassificationResult #0 (first chunk of data):
  timestamp_ms: 0 (starts at 0ms)
  classifications #0 (single head model):
    category #0:
      category_name: "Speech"
      score: 0.6
    category #1:
      category_name: "Music"
      score: 0.2
ClassificationResult #1 (second chunk of data):
  timestamp_ms: 800 (starts at 800ms)
  classifications #0 (single head model):
    category #0:
      category_name: "Speech"
      score: 0.5
    category #1:
      category_name: "Silence"
      score: 0.1
```
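The mapping from chunk offsets to the per-result timestamps can be sketched in plain Python. This is an illustration only, not MediaPipe internals; the chunk length used below (0.8 s) is an assumption chosen to match the example above, and the real split depends on the model's input size.

```python
def chunk_start_timestamps_ms(num_samples: int, sample_rate: int,
                              chunk_samples: int) -> list:
    """Start time (ms) of each chunk a long clip would be split into.

    Hypothetical helper for illustration: the task performs this split
    internally based on the model's input tensor size.
    """
    starts = []
    offset = 0
    while offset < num_samples:
        starts.append(offset * 1000 // sample_rate)
        offset += chunk_samples
    return starts

# A 2-second clip at 16 kHz with 0.8 s (12800-sample) chunks yields
# results stamped at 0, 800, and 1600 ms, as in the example above.
print(chunk_start_timestamps_ms(32000, 16000, 12800))  # [0, 800, 1600]
```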
Args | |
---|---|
`audio_clip` | MediaPipe AudioData. |
Returns | |
---|---|
An AudioClassifierResult object that contains a list of classification result objects, each associated with a timestamp corresponding to the start (in milliseconds) of the chunk data that was classified. |
Raises | |
---|---|
`ValueError` | If any of the input arguments is invalid, such as the sample rate not being provided in the AudioData object. |
`RuntimeError` | If audio classification failed to run. |
classify_async
classify_async(
    audio_block: mp.tasks.components.containers.AudioData,
    timestamp_ms: int
) -> None
Sends audio data (a block in a continuous audio stream) to perform audio classification.
Only use this method when the AudioClassifier is created with the audio stream running mode. The input timestamps should be monotonically increasing for adjacent calls of this method. This method returns immediately after the input audio data is accepted. The results will be available via the `result_callback` provided in the `AudioClassifierOptions`. The `classify_async` method is designed to process audio stream data such as microphone input.
The input audio data may be longer than what the model is able to process in a single inference. When this occurs, the input audio block is split into multiple chunks. For this reason, the callback may be called multiple times (once per chunk) for each call to this function.
The `result_callback` provides:
- An `AudioClassifierResult` object that contains a list of classifications.
- The input timestamp in milliseconds.
Args | |
---|---|
`audio_block` | MediaPipe AudioData. |
`timestamp_ms` | The timestamp of the input audio data in milliseconds. |
Raises | |
---|---|
`ValueError` | If any of the following: 1) the sample rate is not provided in the AudioData object, or 2) the input timestamp is not monotonically increasing. |
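The monotonically increasing timestamp contract can be sketched as a small validator. This is an illustrative stand-in, not MediaPipe internals; it only shows the kind of check that makes a repeated or out-of-order timestamp raise `ValueError`.

```python
class TimestampValidator:
    """Illustrative sketch of the strictly increasing timestamp contract."""

    def __init__(self):
        self._last_ms = -1  # no timestamp accepted yet

    def check(self, timestamp_ms: int) -> None:
        if timestamp_ms <= self._last_ms:
            raise ValueError(
                f"Input timestamp {timestamp_ms} ms is not monotonically "
                f"increasing (last accepted was {self._last_ms} ms).")
        self._last_ms = timestamp_ms

v = TimestampValidator()
v.check(0)
v.check(800)      # OK: strictly increasing
try:
    v.check(800)  # rejected: equal to the previous timestamp
except ValueError as e:
    print(e)
```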
close
close() -> None
Shuts down the mediapipe audio task instance.
Raises | |
---|---|
`RuntimeError` | If the mediapipe audio task failed to close. |
create_audio_record
create_audio_record(
num_channels: int, sample_rate: int, required_input_buffer_size: int
) -> audio_record.AudioRecord
Creates an AudioRecord instance to record audio stream.
The returned AudioRecord instance is initialized; the client needs to call the appropriate method to start recording.
Note that MediaPipe audio tasks will automatically up/down sample the input to fit the sample rate required by the model. The default sample rate of the MediaPipe pretrained audio model, YAMNet, is 16 kHz.
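The idea behind the automatic up/down sampling can be illustrated with a naive linear-interpolation resampler. MediaPipe performs this step internally with its own implementation; the function below is only a sketch of the concept, not the library's code.

```python
def resample(samples, src_rate: int, dst_rate: int):
    """Naive linear resampler from src_rate to dst_rate (illustration only)."""
    if src_rate == dst_rate:
        return list(samples)
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate          # fractional source index
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        # Linear interpolation between the two nearest source samples.
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

# One second captured at 44.1 kHz shrinks to 16000 samples at the
# 16 kHz rate expected by YAMNet.
print(len(resample([0.0] * 44100, 44100, 16000)))  # 16000
```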
Args | |
---|---|
`num_channels` | The number of audio channels. |
`sample_rate` | The audio sample rate. |
`required_input_buffer_size` | The required input buffer size in number of float elements. |
Returns | |
---|---|
An AudioRecord instance. |
Raises | |
---|---|
`ValueError` | If there's a problem creating the AudioRecord instance. |
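Since `required_input_buffer_size` is expressed in float elements, a buffer covering a given duration of interleaved audio needs rate × seconds × channels floats. The helper below is a hypothetical sizing formula for illustration, not part of the MediaPipe API.

```python
def input_buffer_size(sample_rate: int, num_channels: int,
                      seconds: float) -> int:
    """Float elements needed to buffer `seconds` of interleaved audio.

    Hypothetical helper: assumes one float per sample per channel.
    """
    return int(sample_rate * seconds * num_channels)

# One second of 16 kHz mono audio takes 16000 float elements.
print(input_buffer_size(16000, 1, 1.0))  # 16000
```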
create_from_model_path
@classmethod
create_from_model_path( model_path: str ) -> 'AudioClassifier'
Creates an `AudioClassifier` object from a TensorFlow Lite model and the default `AudioClassifierOptions`.
Note that the created `AudioClassifier` instance is in audio clips mode, for classifying on independent audio clips.
Args | |
---|---|
`model_path` | Path to the model. |
Returns | |
---|---|
AudioClassifier object that's created from the model file and the default AudioClassifierOptions. |
Raises | |
---|---|
`ValueError` | If failed to create AudioClassifier object from the provided file, such as an invalid file path. |
`RuntimeError` | If other types of error occurred. |
create_from_options
@classmethod
create_from_options(
    options: mp.tasks.audio.AudioClassifierOptions
) -> 'AudioClassifier'
Creates the `AudioClassifier` object from audio classifier options.
Args | |
---|---|
`options` | Options for the audio classifier task. |
Returns | |
---|---|
AudioClassifier object that's created from `options`. |
Raises | |
---|---|
`ValueError` | If failed to create AudioClassifier object from AudioClassifierOptions, such as missing the model. |
`RuntimeError` | If other types of error occurred. |
__enter__
__enter__()
Returns `self` upon entering the runtime context.
__exit__
__exit__(
unused_exc_type, unused_exc_value, unused_traceback
)
Shuts down the mediapipe audio task instance on exit of the context manager.
Raises | |
---|---|
`RuntimeError` | If the mediapipe audio task failed to close. |
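The `__enter__`/`__exit__` pair means a `with` block guarantees `close()` runs even if the body raises. The stand-in class below mirrors that contract; `FakeTask` is an illustration, not the real AudioClassifier.

```python
class FakeTask:
    """Stand-in illustrating the task's context-manager contract."""

    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

    def __enter__(self):
        return self          # the task itself is bound by `with ... as`

    def __exit__(self, exc_type, exc_value, traceback):
        self.close()         # runs even if the with-body raised

with FakeTask() as task:
    assert not task.closed   # still open inside the block
print(task.closed)  # True: closed automatically on exit
```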