Sitemap

A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.

Pages

Posts

Portfolio

Publications

Pipelined implementation of high radix adaptive CORDIC as a coprocessor

Published in 2015 International Conference on Computing and Network Communications (CoCoNet)

The Coordinate Rotation Digital Computer (CORDIC) algorithm allows computation of trigonometric, hyperbolic, natural logarithm, and square root functions. This iterative algorithm uses only shift and add operations to converge. Multiple fixed-radix variants of the algorithm have been implemented in hardware; these have demonstrated faster convergence at the expense of reduced accuracy. High-radix adaptive variants of CORDIC also exist in the literature. These allow for faster convergence at the expense of hardware multipliers in the datapath, without compromising the accuracy of the results. This paper proposes a 12-stage-deep pipeline architecture to implement a high-radix adaptive CORDIC algorithm. It employs floating-point multipliers in place of the conventional shift-and-add architecture of fixed-radix CORDIC. The design has been synthesised on an FPGA board to act as a coprocessor. The paper also studies the power, latency, and accuracy of this implementation.

Citation: S. S. Oza, A. P. Shah, T. Thokala and S. David, "Pipelined implementation of high radix adaptive CORDIC as a coprocessor," 2015 International Conference on Computing and Network Communications (CoCoNet), Trivandrum, 2015, pp. 333-342. [Link/PDF]
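To make the shift-and-add iteration concrete, here is a minimal Python sketch of the conventional fixed-radix (radix-2) CORDIC rotation mode that the paper's high-radix variant builds on; the high-radix adaptive algorithm replaces these single-bit shifts with multiplications to converge in fewer iterations. The iteration count and test angle below are illustrative, not the paper's configuration.

```python
import math

def cordic_sin_cos(theta, iterations=16):
    """Compute (cos(theta), sin(theta)) with radix-2 CORDIC in rotation mode.

    Each iteration rotates the working vector by +/-atan(2**-i) using only
    a shift (written here as multiplication by 2.0**-i) and adds, then the
    accumulated gain is divided out at the end. Valid for |theta| <= ~1.74 rad.
    """
    # Precomputed elementary rotation angles atan(2**-i).
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    # CORDIC gain: product of sqrt(1 + 2**(-2i)) over all iterations.
    gain = math.prod(math.sqrt(1.0 + 2.0 ** (-2 * i)) for i in range(iterations))

    x, y, z = 1.0, 0.0, theta
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0                    # steer residual angle to 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x / gain, y / gain                          # (cos, sin)

print(cordic_sin_cos(math.pi / 6))                     # approx (0.8660, 0.5000)
```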

Experiments on DCASE Challenge 2016 Acoustic Scene Classification and Sound Event Detection in Real Life Recordings

Published in IEEE AASP Challenge: Detection and Classification of Acoustic Scenes and Events.

In this paper we present our work on Task 1, Acoustic Scene Classification, and Task 3, Sound Event Detection in Real Life Recordings. Our experiments cover low-level and high-level features, classifier optimization, and other heuristics specific to each task. Our performance on both tasks improved on the DCASE baseline: for Task 1 we achieved an overall accuracy of 78.9% compared to the baseline of 72.6%, and for Task 3 we achieved a segment-based error rate of 0.76 compared to the baseline of 0.91.

Citation: Elizalde, Benjamin, Anurag Kumar, Ankit Shah, Rohan Badlani, Emmanuel Vincent, Bhiksha Raj, and Ian Lane. "Experimentation on the DCASE challenge 2016: Task 1—Acoustic scene classification and task 3—Sound event detection in real life audio." IEEE AASP Challenge: Detection and Classification of Acoustic Scenes and Events (2016). [Link/PDF]
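For reference, the segment-based error rate quoted above is the standard sound event detection metric used by DCASE: within each segment, misses and false alarms are paired off as substitutions, and the leftovers count as deletions and insertions, all normalized by the number of active reference events. A minimal sketch with made-up label sets:

```python
def segment_error_rate(reference, estimated):
    """Segment-based error rate over per-segment label sets.

    `reference` and `estimated` are lists of sets of active event labels,
    one set per fixed-length segment, e.g. [{"car", "speech"}, set(), ...].
    """
    S = D = I = N = 0
    for ref, est in zip(reference, estimated):
        fn = len(ref - est)            # reference events the system missed
        fp = len(est - ref)            # system events with no reference match
        s = min(fn, fp)                # pair misses with false alarms: substitutions
        S += s
        D += fn - s                    # unpaired misses are deletions
        I += fp - s                    # unpaired false alarms are insertions
        N += len(ref)                  # active reference events in this segment
    return (S + D + I) / N if N else 0.0

ref = [{"car"}, {"car", "speech"}, set()]
est = [{"car"}, {"speech", "dog"}, {"dog"}]
print(segment_error_rate(ref, est))    # (1 + 0 + 1) / 3 = 0.667
```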

An Approach for Self-Training Audio Event Detectors Using Web Data

Published in 25th European Signal Processing Conference (EUSIPCO)

Audio Event Detection (AED) aims to recognize sounds within audio and video recordings. AED employs machine learning algorithms commonly trained and tested on annotated datasets. However, available datasets are limited in their number of samples, which makes it difficult to model acoustic diversity. We therefore propose combining labeled audio from a dataset with unlabeled audio from the web to improve the sound models. The audio event detectors are trained on the labeled audio and run on unlabeled audio downloaded from YouTube. Whenever the detectors recognized any of the known sounds with high confidence, the unlabeled audio was used to re-train the detectors. The performance of the re-trained detectors is compared to that of the original detectors on the annotated test set. Results showed an improvement in AED and uncovered challenges of using web audio from videos.

Citation: Ankit Shah, Rohan Badlani, Anurag Kumar, Benjamin Elizalde, Bhiksha Raj, "An Approach for Self-Training Audio Event Detectors Using Web Data," in 25th European Signal Processing Conference (EUSIPCO), 2017. [Link/PDF]
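A minimal sketch of the self-training loop the abstract describes, with a generic scikit-learn classifier standing in for the paper's event detectors; feature extraction is assumed to have happened upstream, and the confidence threshold and number of rounds are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_labeled, y_labeled, X_unlabeled, threshold=0.95, rounds=3):
    """Train on labeled audio features, run on unlabeled web audio, and fold
    high-confidence predictions back in as new training data."""
    X_train, y_train = X_labeled.copy(), y_labeled.copy()
    pool = X_unlabeled.copy()
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    for _ in range(rounds):
        if len(pool) == 0:
            break
        probs = clf.predict_proba(pool)
        conf, pred = probs.max(axis=1), probs.argmax(axis=1)
        keep = conf >= threshold                     # only trust confident detections
        if not keep.any():
            break
        X_train = np.vstack([X_train, pool[keep]])   # adopt web clips as labeled data
        y_train = np.concatenate([y_train, pred[keep]])
        pool = pool[~keep]
        clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # re-train
    return clf
```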

DCASE 2017 challenge setup: tasks, datasets and baseline system

Published in Detection and Classification of Acoustic Scenes and Events 2017 Workshop

DCASE 2017 Challenge consists of four tasks: acoustic scene classification, detection of rare sound events, sound event detection in real-life audio, and large-scale weakly supervised sound event detection for smart cars. This paper presents the setup of these tasks: task definition, dataset, experimental setup, and baseline system results on the development dataset. The baseline systems for all tasks rely on the same implementation, using a multilayer perceptron and log mel-energies, but differ in the structure of the output layer and the decision-making process, as well as in the evaluation of system output using task-specific metrics.

Citation: A. Mesaros, T. Heittola, A. Diment, B. Elizalde, A. Shah, E. Vincent, B. Raj, and T. Virtanen, “DCASE 2017 challenge setup: Tasks, datasets and baseline system,” in Proceedings of the Detection and Classification of Acoustic Scenes and Events 2017 Workshop (DCASE2017), November 2017. [Link/PDF]
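As a rough illustration of the shared baseline design, here is a PyTorch sketch of a small multilayer perceptron over log mel-energy frames; the layer sizes, context length, and class count are illustrative assumptions rather than the challenge's exact configuration.

```python
import torch
import torch.nn as nn

class BaselineMLP(nn.Module):
    """Sketch of a DCASE-2017-style baseline: a small MLP on log mel-energies.

    Only the output layer and decision rule change between tasks: scene
    classification takes a softmax over classes per clip, while event
    detection thresholds per-class sigmoids frame by frame.
    """
    def __init__(self, n_mels=40, context=5, hidden=50, n_classes=15):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                        # (batch, context, mels) -> vector
            nn.Linear(n_mels * context, hidden), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x)                       # raw class scores (logits)

model = BaselineMLP()
frames = torch.randn(8, 5, 40)                   # batch of 5-frame log mel contexts
print(model(frames).shape)                       # torch.Size([8, 15])
```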

Content-based Representations of audio using Siamese neural networks

Published in IEEE International Conference on Acoustics, Speech and Signal Processing, 2018

In this paper, we focus on the problem of content-based retrieval for audio, which aims to retrieve all semantically similar audio recordings for a given audio clip query. We propose a novel approach which encodes the audio into a vector representation using Siamese neural networks. The goal is to obtain similar encodings for files belonging to the same audio class, thus allowing retrieval of semantically similar audio. We used two similarity measures, cosine similarity and Euclidean distance, to show that our method is effective in retrieving files similar in audio content. Our results indicate that our neural-network-based approach is able to retrieve files similar in content and semantics.

Citation: Manocha, Pranay, Rohan Badlani, Anurag Kumar, Ankit Shah, Benjamin Elizalde, and Bhiksha Raj. "Content-based Representations of audio using Siamese neural networks." arXiv preprint arXiv:1710.10974 (2017). [Link/PDF]
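A minimal PyTorch sketch of the idea: twin encoders with shared weights are trained with a contrastive loss so that same-class clips land close together, and retrieval then ranks a database by cosine similarity to the query embedding. The encoder architecture and feature dimensions here are illustrative assumptions, not the paper's network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseEncoder(nn.Module):
    """Twin encoders with shared weights mapping audio features to embeddings."""
    def __init__(self, n_features=128, emb_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, emb_dim),
        )

    def forward(self, a, b):
        return self.encoder(a), self.encoder(b)    # one encoder, shared weights

def contrastive_loss(za, zb, same_class, margin=1.0):
    """Pull same-class pairs together, push different-class pairs apart."""
    dist = F.pairwise_distance(za, zb)
    return torch.mean(same_class * dist.pow(2) +
                      (1 - same_class) * F.relu(margin - dist).pow(2))

def retrieve(model, query, database, k=5):
    """Rank database clips by cosine similarity to the query embedding."""
    with torch.no_grad():
        zq = model.encoder(query)                  # (1, emb_dim)
        zdb = model.encoder(database)              # (N, emb_dim)
        sims = F.cosine_similarity(zq, zdb)        # one score per database clip
    return sims.topk(min(k, len(sims))).indices
```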

Framework for evaluation of sound event detection in web videos

Published in IEEE International Conference on Acoustics, Speech and Signal Processing, 2018

The largest source of sound events is web videos. Most videos lack sound event labels at the segment level; however, a significant number of them do respond to text queries, through matches between their metadata and the search engine. In this paper we explore the extent to which a search query can be used as the true label for the presence of sound events in videos. To this end, we developed a framework for large-scale sound event recognition on web videos. The framework crawls videos using search queries corresponding to 78 sound event labels drawn from three datasets. The datasets are used to train three classifiers, which were then run on 3.7 million video segments. We evaluated performance using the search query as the true label and compared it (on a subset) with human labeling. Both labeling types exhibited close performance, to within 10%, and similar performance trends as the number of evaluated segments increased. Hence, our experiments show the potential of using the search query as a preliminary true label for sound events in web videos.

Citation: Badlani, Rohan, Ankit Shah, Benjamin Elizalde, Anurag Kumar, and Bhiksha Raj. "Framework for evaluation of sound event detection in web videos." arXiv preprint arXiv:1711.00804 (2017). [Link/PDF]
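A toy sketch of the evaluation idea: score the same detector output once against query-derived labels and once against human labels, then compare. The helper function and the numbers below are illustrative, not the paper's data.

```python
def precision_at_threshold(scores, labels, threshold=0.5):
    """Fraction of segments the detector fires on that carry the label.

    `scores` are detector confidences per video segment; `labels` mark
    whether the segment counts as positive under a given labeling scheme,
    either "the search query named this sound" or a human annotation.
    """
    fired = [label for s, label in zip(scores, labels) if s >= threshold]
    return sum(fired) / len(fired) if fired else 0.0

scores      = [0.9, 0.8, 0.7, 0.6, 0.2, 0.1]
query_label = [1,   1,   0,   1,   0,   0]   # query matched the video's metadata
human_label = [1,   1,   1,   0,   0,   0]   # an annotator heard the sound

p_query = precision_at_threshold(scores, query_label)
p_human = precision_at_threshold(scores, human_label)
print(p_query, p_human, abs(p_query - p_human))  # agreement gap between schemes
```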

NELS-Never-Ending Learner of Sounds

Published in Neural Information Processing Systems (NIPS 2017)

Sounds are essential to how humans perceive and interact with the world. These sounds are captured in recordings and shared on the Internet on a minute-by-minute basis. These recordings, which are predominantly videos, constitute the largest archive of sounds we’ve ever seen. However, most of these recordings have undescribed content, making methods for automatic audio content analysis, indexing, and retrieval necessary. These methods have to address multiple challenges, such as the relation between sounds and language, numerous and diverse sound classes, and large-scale evaluation. We propose a system that continuously learns from the web the relations between sounds and language, improves its sound recognition models over time, and evaluates its learning competency at large scale without references. We introduce the Never-Ending Learner of Sounds (NELS), a project for continuous learning of sounds and their associated knowledge, available online at nels.cs.cmu.edu.

Citation: Elizalde, Benjamin, Rohan Badlani, Ankit Shah, Anurag Kumar, and Bhiksha Raj. "NELS-Never-Ending Learner of Sounds." [Link/PDF]

A Closer Look at Weak Label Learning for Audio Events

Published in Preprint

Audio content analysis in terms of sound events is an important research problem for a variety of applications. Recently, the development of weak-labeling approaches for audio or sound event detection (AED) and the availability of large-scale weakly labeled datasets have finally opened up the possibility of large-scale AED. However, a deeper understanding of how weak labels affect the learning of sound events is still missing from the literature. In this work, we first describe a CNN-based approach for weakly supervised training of audio events. The approach follows some basic design principles desirable in a learning method relying on weakly labeled audio. We then describe important characteristics which naturally arise in weakly supervised learning of sound events, and show how these aspects of weak labels affect the generalization of models. More specifically, we study how characteristics such as label density and label corruption affect weakly supervised training for audio events. We also study the feasibility of directly obtaining weakly labeled data from the web without any manual labeling, and compare it with a dataset which has been manually labeled. The analysis and understanding of these factors should be taken into account in the development of future weak-label learning methods. AudioSet, a large-scale weakly labeled dataset for sound events, is used in our experiments.

Citation: Ankit Shah, Anurag Kumar, Alexander Hauptmann, Bhiksha Raj, "A Closer Look at Weak Label Learning for Audio Events," arXiv e-prints, 2018. [Link/PDF]
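A minimal PyTorch sketch of the weakly supervised setup the paper describes: a CNN scores short segments of a recording, and those segment scores are pooled (here by max over time) into a single clip-level prediction trained against the weak label. Filter counts, pooling choice, and class count are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class WeakLabelCNN(nn.Module):
    """CNN producing segment-level scores pooled into a clip-level prediction.

    Training uses only the clip-level output against the weak (recording-level)
    label, e.g. with binary cross-entropy; the segment-level output falls out
    for free and localizes events in time.
    """
    def __init__(self, n_mels=64, n_classes=10):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((4, 1)),                        # pool over mel bands only
            nn.Conv2d(16, n_classes, kernel_size=3, padding=1),
        )

    def forward(self, x):                                # x: (batch, 1, mels, time)
        seg = self.conv(x)                               # (batch, classes, mels/4, time)
        seg = seg.mean(dim=2)                            # (batch, classes, time)
        clip, _ = seg.max(dim=2)                         # max over time -> weak label
        return torch.sigmoid(clip), torch.sigmoid(seg)   # clip- and segment-level

model = WeakLabelCNN()
spec = torch.randn(4, 1, 64, 128)                        # batch of log mel spectrograms
clip_pred, seg_pred = model(spec)
print(clip_pred.shape, seg_pred.shape)                   # (4, 10) and (4, 10, 128)
```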

Talks

A Framework towards Large scale Learning of Sound Events

Published:

A significant portion of the Internet’s multimedia data is video, and videos contain sounds that often carry meaning. Automatic analysis of audio content for sound events is therefore crucial. The current literature consists of small-scale, audio-only datasets, with no audio from the web apart from AudioSet, since annotating audio events is time-consuming. Videos have no tags or labels for sound events at the segment level, adding to the challenges of evaluating sound recognition at large scale. We introduce a framework for continuous large-scale sound event recognition on web videos consisting of three modules: Crawl, Hear, and Feedback. The modular design allows our framework to scale and evolve as required. The framework has processed 3.5 million video segments, and humans inspected a subset of the segments to evaluate the performance on web audio. Poster

Volunteer

Teaching Assistant

Elements of Electronics and Communication - EC 110, National Institute of Technology Karnataka Surathkal, Department of Electronics and Communication, 2013

Teaching assistant for the course as part of the peer mentoring program at NITK. Taught Elements of Electronics and Communication to peers from 2013 to 2014.

Teaching Assistant

Data Structures and Algorithms - EC 232, National Institute of Technology Karnataka Surathkal, Department of Electronics and Communication, 2014

Teaching assistant for the course as part of the peer mentoring program at NITK. Taught Data Structures and Algorithms to peers from 2014 to 2015.

Mentor at Junior Academy

Global STEM Alliance, ARM, 2017

  • A unique opportunity to participate in a fast-paced programme developing research-driven solutions to pressing challenges at a global scale.
  • Mentored a young team of students on the wearables challenge, implementing an innovative water filtration system.
  • Demonstrated and presented the idea with a working prototype, eventually winning the innovation challenge.

Contributor/Organizer for Task 4 DCASE 2017 Challenge

IEEE-DCASE 2017 challenge - Task 4 - Large-scale weakly supervised sound event detection for smart cars, Carnegie Mellon University, 2017

Organizer of Task 4, “Large-scale weakly supervised sound event detection for smart cars”. Responsible for code development, audio annotation, evaluation of papers and system submissions, as well as providing technical support to participants via email and the DCASE forum.

Organizer for Task 4 DCASE 2018 Challenge

IEEE-DCASE 2018 challenge - Task 4 - Large-scale weakly labeled semi-supervised sound event detection in domestic environments, Carnegie Mellon University, 2018

Organizer of Task 4, “Large-scale weakly labeled semi-supervised sound event detection in domestic environments”. Responsible for code development, audio annotation, evaluation of papers and system submissions, as well as providing technical support to participants via email and the DCASE forum. Code: DCASE Challenge Code, Task 4 DCASE Challenge.