AES E-Library

A Scalable AI Architecture for Audio and Multimodal Analysis on Mobile Devices: A Case of Environmental Monitoring

The increasing need for real-time environmental monitoring has made Artificial Intelligence (AI) and Machine Learning (ML) models for audio classification and multimodal sensing essential for detecting and analyzing pollution-related sounds. When mobile devices are used to capture and process audiovisual content, such models offer significant potential for research, storytelling, and public awareness. This study proposes a scalable, modular architecture that provides direct and flexible access to AI models for multimodal and audio processing. A hybrid CNN-LSTM model is trained for real-time environmental sound classification and deployed as a service. Building on prior 1D CNN-based approaches and incorporating temporal dependencies, the new model achieves an AUC of 0.91, demonstrating improved accuracy and generalization. The system leverages a REST API and Docker-based containerization, allowing independent AI services to be deployed and supporting mobile and IoT use cases. The architecture accommodates both pre-trained and custom models, accessible from any device via a unified interface, confirming that a hybrid CNN-LSTM topology can support effective real-time sound classification within a modular, containerized framework.
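The abstract describes packaging each AI model as an independent, containerized service reachable over a REST API. A minimal Dockerfile sketch of that pattern is shown below; the file names, port, and `uvicorn service:app` entry point are illustrative assumptions, not details taken from the paper.

```dockerfile
# Hypothetical sketch of containerizing one sound-classification
# service; the paper's actual layout is not given in the abstract.
FROM python:3.11-slim
WORKDIR /app

# Install the service's Python dependencies (assumed requirements.txt)
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the model weights and REST service code into the image
COPY . .

# Expose the REST endpoint of the classification service
EXPOSE 8000
CMD ["uvicorn", "service:app", "--host", "0.0.0.0", "--port", "8000"]
```

Under this pattern, each model ships as its own image, so a mobile or IoT client only needs the service's URL; swapping in a custom model means rebuilding and redeploying that one container without touching the rest of the system.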

 

Permalink: https://aes2.org/publications/elibrary-page/?id=23002


