I'm making an application for sound analysis and spotlight control: the spotlight's colors change to the beat of the music. I use the TarsosDSP library for the analysis, and I additionally added the FFmpeg-Kit library to convert the audio to WAV (PCM 16-bit little-endian) so the AudioDispatcher can work with it. The problem is that when the audio is passed in the correct format, the dispatcher starts and immediately ends: the boolean process() method is never executed, but processingFinished() is. I've verified that the stream starts, the file is not empty, and it is converted to the correct format, BUT getFrameLength() on the audio stream built from the file path returns -1, i.e. the frame length is never filled in. I've already searched everywhere, including the library's code on GitHub and several AI assistants, and I can't solve this. Is the problem in AudioDispatcher and AudioDispatcherFactory.fromPipe()?
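To rule out a broken header in the converted file, the frame count can be computed from the WAV header by hand. This is a minimal stdlib-only sketch of my own (WavFrameCount is a hypothetical helper, not part of TarsosDSP): it walks the RIFF chunks and divides the data-chunk size by the frame size, returning -1 when the data size is unset (0xFFFFFFFF, which ffmpeg writes when it cannot seek back to patch the header) or the header is unreadable:

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class WavFrameCount {
    // Returns the number of PCM frames in a WAV stream, or -1 if the
    // data-chunk size is unset/invalid or the header cannot be read.
    static long frameLength(InputStream in) {
        try {
            DataInputStream d = new DataInputStream(in);
            d.skipBytes(12); // "RIFF" <file size> "WAVE"
            int channels = 0, bitsPerSample = 0;
            while (true) {
                byte[] id = new byte[4];
                d.readFully(id);
                // chunk sizes are little-endian; readInt() is big-endian, so reverse
                long size = Integer.reverseBytes(d.readInt()) & 0xFFFFFFFFL;
                String chunk = new String(id, StandardCharsets.US_ASCII);
                if (chunk.equals("fmt ")) {
                    byte[] fmt = new byte[(int) size];
                    d.readFully(fmt);
                    channels = (fmt[2] & 0xFF) | ((fmt[3] & 0xFF) << 8);
                    bitsPerSample = (fmt[14] & 0xFF) | ((fmt[15] & 0xFF) << 8);
                } else if (chunk.equals("data")) {
                    if (size == 0xFFFFFFFFL || channels == 0 || bitsPerSample == 0) {
                        return -1; // unknown data size: a frame count cannot be derived
                    }
                    return size / (channels * (bitsPerSample / 8));
                } else {
                    d.skipBytes((int) size); // ignore LIST/fact/etc. chunks
                }
            }
        } catch (IOException e) {
            return -1; // truncated or unreadable header
        }
    }
}
```

If this returns a sane frame count for the ffmpeg output file, the header itself is fine and the problem lies elsewhere.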
private void playAndAnalyzeAudio(String filePath, Uri uri) {
    if (mediaPlayer != null)
        mediaPlayer.release();
    mediaPlayer = MediaPlayer.create(requireContext(), uri);
    new Thread(() -> {
        extractAudio(inputFilePath, outputFilePath);
        getActivity().runOnUiThread(() -> {
            mediaPlayer = MediaPlayer.create(requireContext(), uri);
            if (mediaPlayer != null) {
                mediaPlayer.start();  // Start music after analysis
                startSendingData();   // Start sending data
            }
        });
    }).start();
}
private void analyzeAudio(String filePath) {
    try {
        AudioDispatcher audioDispatcher = AudioDispatcherFactory.fromPipe(filePath, 44100, 1024, 0);
        MFCC mfcc = new MFCC(1024, 44100, 13, 50, 20, 10000);
        audioDispatcher.addAudioProcessor(mfcc);
        Log.d("AUDIO_ANALYSIS", "Starting audio file analysis... " + audioDispatcher);
        audioDispatcher.addAudioProcessor(new AudioProcessor() {
            @Override
            public boolean process(AudioEvent audioEvent) {
                Log.d("AUDIO_ANALYSIS", "Processing audio...");
                float[] amplitudes = audioEvent.getFloatBuffer();
                Log.d("AUDIO_ANALYSIS", "Buffer size: " + amplitudes.length);
                float[] mfccs = mfcc.getMFCC();
                if (mfccs == null) {
                    Log.e("AUDIO_ANALYSIS", "MFCC was not generated!");
                    return true;
                }
                float currentBass = mfccs[0] + mfccs[1];
                float totalEnergy = 0;
                for (float amp : amplitudes) {
                    totalEnergy += Math.abs(amp);
                }
                Log.d("AUDIO_ANALYSIS", "Bass Energy: " + currentBass + ", Total Energy: " + totalEnergy);
                if (currentBass > BASS_THRESHOLD || totalEnergy > ENERGY_THRESHOLD) {
                    changeColor();
                    Log.d("SONG", "Color was changed to: " + currentColor);
                    brightness = MAX_BRIGHTNESS;
                } else {
                    brightness *= 0.9f;
                }
                return true;
            }

            @Override
            public void processingFinished() {
                getActivity().runOnUiThread(() -> Toast.makeText(requireContext(), "Analysis finished", Toast.LENGTH_SHORT).show());
            }
        });

        File file = new File(filePath);
        if (!file.exists() || file.length() == 0) {
            Log.e("AUDIO_ANALYSIS", "Error: file is empty! " + filePath);
            return;
        } else {
            Log.d("AUDIO_ANALYSIS", "File exists, size: " + file.length() + " bytes.");
        }
        Log.d("AUDIO_ANALYSIS", "Start of analysis: " + filePath);

        File ffmpegFile = new File(getContext().getCacheDir(), "ffmpeg");
        if (!ffmpegFile.setExecutable(true)) {
            Log.e("AUDIO_ANALYSIS", "No execute permission for ffmpeg!");
        } else {
            Log.d("AUDIO_ANALYSIS", "Execute permission for ffmpeg granted!");
        }

        new Thread(() -> {
            Log.d("AUDIO_ANALYSIS", "Starting dispatcher...");
            audioDispatcher.run();
            Log.d("AUDIO_ANALYSIS", "Dispatcher ended.");
        }).start();
    } catch (Exception e) {
        e.printStackTrace();
        Toast.makeText(requireContext(), "Analysis error", Toast.LENGTH_SHORT).show();
    }
}
public void extractAudio(String inputFilePath, String outputFilePath) {
    File outputFile = new File(outputFilePath);
    if (outputFile.exists()) {
        outputFile.delete(); // Delete the existing file
    }
    // Build the command to extract the audio (paths quoted in case they contain spaces)
    String command = "-i \"" + inputFilePath + "\" -vn -acodec pcm_s16le -ar 44100 -ac 2 \"" + outputFilePath + "\"";
    // Use FFmpegKit to execute the command
    FFmpegKit.executeAsync(command, session -> {
        if (session.getReturnCode().isSuccess()) {
            Log.d("AUDIO_EXTRACT", "Audio extracted successfully: " + outputFilePath);
            analyzeAudio(outputFilePath); // Continue with audio analysis
        } else {
            Log.e("AUDIO_EXTRACT", "Audio extraction error: " + session.getFailStackTrace());
        }
    });
}
Sorry for the amount of code; I tried to describe the problem in detail. I tried replacing AudioDispatcherFactory.fromPipe() with AudioDispatcherFactory.fromFile(), but that method isn't available on Android, only in desktop Java, judging by the error ("javax.sound..., unexpected error, method not available"). I also tried changing the command string in extractAudio() and the arguments of fromPipe(), but without success. I want my audio file to be analyzed correctly with the AudioDispatcher, and then the analysis data to be transferred to the Arduino. Right now the logs show "Color: null, value: 0.0".
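One workaround I'm considering is building the AudioDispatcher directly from the converted file with TarsosDSP's UniversalAudioInputStream, which bypasses both fromPipe() (it needs an ffmpeg executable that the library can spawn) and fromFile() (it needs javax.sound, which Android lacks). This is only a sketch under two assumptions: the WAV produced by FFmpegKit has a plain 44-byte header with no extra chunks, and the conversion is changed to mono (-ac 1 instead of my current -ac 2) so the format passed here matches the file:

```java
import java.io.FileInputStream;
import java.io.IOException;

import be.tarsos.dsp.AudioDispatcher;
import be.tarsos.dsp.io.TarsosDSPAudioFormat;
import be.tarsos.dsp.io.UniversalAudioInputStream;

public class WavDispatcher {
    // Build an AudioDispatcher from a PCM WAV file without ffmpeg pipes or javax.sound.
    static AudioDispatcher fromWav(String wavPath) throws IOException {
        FileInputStream in = new FileInputStream(wavPath);
        in.skip(44); // skip the canonical 44-byte WAV header (assumption: no LIST/fact chunks)
        // 44100 Hz, 16-bit, mono, signed, little-endian -- must match the ffmpeg arguments
        TarsosDSPAudioFormat format = new TarsosDSPAudioFormat(44100, 16, 1, true, false);
        return new AudioDispatcher(new UniversalAudioInputStream(in, format), 1024, 0);
    }
}
```

Would this be the right direction, or is there a way to make fromPipe() itself work on Android?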