UrbanSound8K classification, denoising & generation

About this project

Final project for the Cognition & Computation course (UniPD).

Key idea: use Mel-Frequency Cepstral Coefficients (MFCCs) to turn each audio clip into a compact, image-like spatial representation (sketched below).

Dataset: UrbanSound8K (8732 labeled urban sound excerpts of at most 4 s, spanning 10 classes).
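
As a rough illustration of this feature-extraction step, the sketch below computes fixed-size MFCC matrices with librosa; the library choice, sample rate, clip length and number of coefficients are assumptions, not values taken from the notebooks.

```python
# Hedged sketch of MFCC extraction (librosa assumed; parameters illustrative).
# UrbanSound8K clips are at most 4 s long, so each file is padded/truncated
# to a fixed length to obtain equally sized MFCC matrices.
import librosa

def extract_mfcc(path, sr=22050, duration=4.0, n_mfcc=40):
    y, _ = librosa.load(path, sr=sr, duration=duration)
    y = librosa.util.fix_length(y, size=int(sr * duration))  # pad/trim to 4 s
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # shape: (n_mfcc, frames)

# mfcc = extract_mfcc("path/to/fold/clip.wav")  # placeholder path
```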

Audio classification on the UrbanSound8K dataset

A convolutional neural network (CNN) classifies the 10 UrbanSound8K sound classes from MFCC features with fairly high accuracy.
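
The actual architecture is defined in the notebooks; the following is only a minimal Keras sketch of a CNN classifier over MFCC "images" (the framework, layer sizes and the 40×173 input shape are assumptions).

```python
# Minimal CNN sketch (Keras assumed; architecture and input shape illustrative).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(input_shape=(40, 173, 1), n_classes=10):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```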

Audio denoising & generation using CVAE

A Convolutional Variational Autoencoder (CVAE) is used to denoise MFCC vectors and to generate new data samples.
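
Again as a hedged sketch rather than the project's actual implementation, a convolutional VAE over MFCC inputs can be written in Keras roughly as follows (the latent size, layer widths and the padded 40×176 input shape are assumptions). Trained on (noisy, clean) MFCC pairs it acts as a denoiser; decoding latent vectors drawn from a standard normal produces new samples.

```python
# Hedged CVAE sketch (Keras assumed; sizes illustrative, not the project's config).
import tensorflow as tf
from tensorflow.keras import layers, Model

LATENT_DIM = 16

def build_encoder(input_shape=(40, 176, 1)):
    x_in = layers.Input(shape=input_shape)
    x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(x_in)
    x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    z_mean = layers.Dense(LATENT_DIM)(x)
    z_log_var = layers.Dense(LATENT_DIM)(x)
    return Model(x_in, [z_mean, z_log_var], name="encoder")

def build_decoder(output_shape=(40, 176, 1)):
    h, w = output_shape[0] // 4, output_shape[1] // 4
    z_in = layers.Input(shape=(LATENT_DIM,))
    x = layers.Dense(h * w * 64, activation="relu")(z_in)
    x = layers.Reshape((h, w, 64))(x)
    x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
    x_out = layers.Conv2DTranspose(1, 3, padding="same")(x)
    return Model(z_in, x_out, name="decoder")

class CVAE(Model):
    def __init__(self, encoder, decoder, **kwargs):
        super().__init__(**kwargs)
        self.encoder = encoder
        self.decoder = decoder

    def call(self, x):
        z_mean, z_log_var = self.encoder(x)
        eps = tf.random.normal(tf.shape(z_mean))
        z = z_mean + tf.exp(0.5 * z_log_var) * eps  # reparameterization trick
        x_rec = self.decoder(z)
        # KL divergence between q(z|x) and the standard normal prior
        kl = -0.5 * tf.reduce_mean(tf.reduce_sum(
            1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=1))
        self.add_loss(kl)
        return x_rec
```

Under these assumptions, training would look like `vae.compile(optimizer="adam", loss="mse")` followed by `vae.fit(noisy_mfcc, clean_mfcc, ...)`; the KL term added in `call` is combined with the reconstruction loss automatically.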

Dependencies and setup

The notebooks run in Jupyter, either locally (e.g. through Anaconda) or on Google Colab. If you run them on a local machine, install the necessary modules/dependencies first.
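
A quick way to check a local environment from a notebook cell (the package list below is an assumption about typical dependencies for this kind of project, not the repository's actual requirements):

```python
# Hypothetical dependency check -- the package list is an assumption,
# not the repository's actual requirements.
import importlib

for pkg in ["numpy", "pandas", "librosa", "matplotlib", "tensorflow", "sklearn"]:
    try:
        importlib.import_module(pkg)
        print(f"{pkg}: available")
    except ImportError:
        print(f"{pkg}: missing (install it, e.g. with pip)")
```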
