Aucher: Multi-modal queries on live audio streams in real-time

Zeyi Wen, Mingyu Liang, Bingsheng He, Zexin Xia, Bo Li

Research output: Conference paper (chapter in book/conference proceedings, peer-reviewed)

1 Citation (Scopus)


This paper demonstrates a real-time search system called Aucher for live audio streams. Audio streaming services (e.g., Mixlr, Ximalaya, Lizhi and Facebook Live Audio) have become increasingly popular with the wide use of smartphones. Because of the popularity of audio broadcasting, the data volume of live audio streams is ever increasing, which makes searching and indexing these streams an important and challenging problem. Aucher is a system prototype that supports both voice search and keyword search on audio streams. We achieve real-time query response with a novel index that exploits log-structured merge-trees and supports multi-modal search. Moreover, our system handles insertions about four times faster, with better memory efficiency, than the state-of-the-art solution. We plan to demonstrate searching live audio streams by keyword and by voice, illustrate the trade-off among freshness, popularity and relevance in query results, showcase hot-term search, and show the ability to search live audio streams in real time.
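The abstract describes the index only at a high level: keyword postings from transcribed audio are absorbed into an in-memory buffer and periodically flushed into immutable sorted runs, in the style of a log-structured merge-tree, so that insertions stay fast while queries merge results across the buffer and the runs. The sketch below is a toy illustration of that idea, not Aucher's actual implementation; the class name, the `memtable_limit` and `merge_fanout` parameters, and the posting layout are all assumptions for illustration.

```python
from collections import defaultdict

class LSMInvertedIndex:
    """Toy LSM-style inverted index over terms extracted from live audio.

    New postings go into an in-memory memtable; when the memtable holds
    too many terms it is frozen into an immutable sorted run, and runs
    are compacted into one once too many accumulate.
    """

    def __init__(self, memtable_limit=4, merge_fanout=2):
        self.memtable = defaultdict(list)  # term -> [(timestamp, stream_id)]
        self.memtable_limit = memtable_limit
        self.runs = []                     # list of {term: sorted postings}
        self.merge_fanout = merge_fanout

    def insert(self, stream_id, timestamp, terms):
        """Index the terms recognized in one chunk of a live stream."""
        for term in terms:
            self.memtable[term].append((timestamp, stream_id))
        if len(self.memtable) >= self.memtable_limit:
            self._flush()

    def _flush(self):
        # Freeze the memtable into an immutable sorted run.
        run = {term: sorted(postings) for term, postings in self.memtable.items()}
        self.runs.append(run)
        self.memtable = defaultdict(list)
        if len(self.runs) > self.merge_fanout:
            self._compact()

    def _compact(self):
        # Merge all runs into a single run to bound query fan-out.
        merged = defaultdict(list)
        for run in self.runs:
            for term, postings in run.items():
                merged[term].extend(postings)
        self.runs = [{term: sorted(p) for term, p in merged.items()}]

    def query(self, term):
        """Return postings for a term, freshest (largest timestamp) first."""
        postings = list(self.memtable.get(term, []))
        for run in self.runs:
            postings.extend(run.get(term, []))
        return sorted(postings, reverse=True)
```

Because writes only ever append to the memtable and flushes produce immutable runs, insertion cost stays low regardless of index size; a query pays a small merge across the memtable and the runs, and sorting by timestamp naturally favors fresher results, matching the freshness/relevance trade-off the demonstration highlights.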

Original language: English
Title of host publication: Proceedings - 2019 IEEE 35th International Conference on Data Engineering, ICDE 2019
Place of publication: USA
Publisher: IEEE, Institute of Electrical and Electronics Engineers
Number of pages: 4
ISBN (Electronic): 9781538674741
Publication status: Published - 1 Apr 2019
Externally published: Yes
Event: 35th IEEE International Conference on Data Engineering, ICDE 2019 - Macau, China
Duration: 8 Apr 2019 - 11 Apr 2019

Publication series

Name: Proceedings - International Conference on Data Engineering
ISSN (Print): 1084-4627


Conference: 35th IEEE International Conference on Data Engineering, ICDE 2019
