Pyroomacoustics is a Python package for audio signal processing in indoor environments. It was developed as a fast prototyping platform for beamforming algorithms in such scenarios.

Overview

Pyroomacoustics logo

Summary

Pyroomacoustics is a software package aimed at the rapid development and testing of audio array processing algorithms. The content of the package can be divided into three main components:

  1. Intuitive Python object-oriented interface to quickly construct different simulation scenarios involving multiple sound sources and microphones in 2D and 3D rooms;
  2. Fast C++ implementation of the image source model and ray tracing for general polyhedral rooms to efficiently generate room impulse responses and simulate the propagation between sources and receivers;
  3. Reference implementations of popular algorithms for STFT, beamforming, direction finding, adaptive filtering, source separation, and single channel denoising.

Together, these components form a package with the potential to speed up the time to market of new algorithms by significantly reducing the implementation overhead in the performance evaluation step. Please refer to this notebook for a demonstration of the different components of this package.

Room Acoustics Simulation

Consider the following scenario.

Suppose, for example, you wanted to produce a radio crime drama, and it so happens that, according to the scriptwriter, the story line absolutely must culminate in a satanic mass that quickly degenerates into a violent shootout, all taking place right around the altar of the highly reverberant acoustic environment of Oxford's Christ Church cathedral. To ensure that it sounds authentic, you asked the Dean of Christ Church for permission to record the final scene inside the cathedral, but somehow he fails to be convinced of the artistic merit of your production, and declines to give you permission. But recorded in a conventional studio, the scene sounds flat. So what do you do?

—Schnupp, Nelken, and King, Auditory Neuroscience, 2010

Faced with this difficult situation, pyroomacoustics can save the day by simulating the environment of the Christ Church cathedral!

At the core of the package is a room impulse response (RIR) generator based on the image source model that can handle

  • Convex and non-convex rooms
  • 2D/3D rooms

The core image source model and ray tracing modules are written in C++ for better performance.
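
For instance, a minimal sketch of simulating a non-convex room could look as follows (the floor plan, reflection order, and source and microphone positions are arbitrary illustration values, and the default wall materials are used):

import numpy as np
import pyroomacoustics as pra

# L-shaped (non-convex) floor plan given by its corners, in metres
corners = np.array([[0, 0], [6, 0], [6, 4], [3, 4], [3, 6], [0, 6]]).T

# build a 2D room from the corners, then extrude it into a 3D room
room = pra.Room.from_corners(corners, fs=16000, max_order=8)
room.extrude(2.5)  # ceiling height in metres

# one source and one microphone
room.add_source([1.0, 1.0, 1.2])
room.add_microphone_array(pra.MicrophoneArray(np.array([[4.0], [2.0], [1.5]]), room.fs))

# run the image source model and fetch the impulse response
room.compute_rir()
rir = room.rir[0][0]  # RIR from source 0 to microphone 0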

The philosophy of the package is to abstract all necessary elements of an experiment using an object-oriented programming approach. Each of these elements is represented using a class and an experiment can be designed by combining these elements just as one would do in a real experiment.

Let's imagine we want to simulate a delay-and-sum beamformer that uses a linear array with four microphones in a shoe box shaped room that contains only one source of sound. First, we create a room object, to which we add a microphone array object, and a sound source object. Then, the room object has methods to compute the RIR between source and receiver. The beamformer object then extends the microphone array class and has different methods to compute the weights, for example delay-and-sum weights. See the example below to get an idea of what the code looks like.

The Room class also allows one to process sound samples emitted by the sources, effectively simulating the propagation of sound between sources and microphones. At the input of the microphones composing the beamformer, an STFT (short time Fourier transform) engine makes it possible to quickly process the signals through the beamformer and evaluate the output.
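
For example, a minimal sketch of this workflow could look as follows (the room dimensions, microphone positions, source signal, and STFT parameters are placeholder values):

import numpy as np
import pyroomacoustics as pra

# shoebox room with one source emitting one second of noise as its signal
fs = 16000
room = pra.ShoeBox([4, 6], fs=fs)
room.add_source([2.5, 4.5], signal=np.random.randn(fs))

# two microphones, one position per column
R = np.array([[2.0, 2.1], [1.5, 1.5]])
room.add_microphone_array(pra.MicrophoneArray(R, fs))

# convolve the source signal with the RIRs and sum the contributions at each microphone
room.simulate()
mic_signals = room.mic_array.signals  # shape (n_mics, n_samples)

# go to the time-frequency domain and back with the STFT engine (non-overlapping frames)
block = 512
X = pra.transform.stft.analysis(mic_signals[0], block, block)
# ... frequency-domain processing would go here ...
x_rec = pra.transform.stft.synthesis(X, block, block)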

Reference Implementations

In addition to its core image source model simulation, pyroomacoustics also contains a number of reference implementations of popular audio processing algorithms for STFT analysis and synthesis, beamforming, direction finding, adaptive filtering, source separation, and single-channel denoising.

We use an object-oriented approach to abstract the details of specific algorithms, making them easy to compare. Each algorithm can be tuned through optional parameters, and we have tried to choose default values so that a run out of the box will generally produce reasonable results.
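
For instance, a minimal direction finding sketch could look as follows (the circular array geometry and the random placeholder signals are assumptions; in practice the STFT frames would come from real or simulated recordings):

import numpy as np
import pyroomacoustics as pra

fs, nfft = 16000, 256

# three microphones on a circle of radius 5 cm, in 2D
mic_locs = pra.circular_2D_array(center=[2.0, 2.0], M=3, phi0=0.0, radius=0.05)

# STFT frames of the microphone signals, shape (n_mics, nfft // 2 + 1, n_frames)
signals = np.random.randn(3, fs)  # placeholder recordings
X = np.array([pra.transform.stft.analysis(s, nfft, nfft // 2).T for s in signals])

# instantiate one of the reference DOA algorithms and run it on the frames
doa = pra.doa.algorithms["MUSIC"](mic_locs, fs, nfft, num_src=1)
doa.locate_sources(X, freq_range=[500.0, 4000.0])
print(doa.azimuth_recon)  # estimated azimuth(s), in radians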

Datasets

In an effort to simplify the use of datasets, we provide a few wrappers that make it quick to load and sort through some popular speech corpora. The list of currently supported corpora and more details are available in the documentation.
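
For instance, a minimal sketch of loading one of the corpora could look as follows (it assumes the CMU ARCTIC wrapper and its speaker filter keyword; the download flag fetches the data on first use):

import pyroomacoustics as pra

# download (if needed) the sentences of one speaker of the CMU ARCTIC corpus
corpus = pra.datasets.CMUArcticCorpus(download=True, speaker=["bdl"])

# each entry holds the audio samples and the sampling frequency
sentence = corpus[0]
print(sentence.fs, sentence.data.shape)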

Quick Install

Install the package with pip:

pip install pyroomacoustics

A cookiecutter is available that generates a working simulation script for a few 2D/3D scenarios:

# if necessary install cookiecutter
pip install cookiecutter

# create the simulation script
cookiecutter gh:fakufaku/cookiecutter-pyroomacoustics-sim

# run the newly created script
python <chosen_script_name>.py

We also provide a minimal Dockerfile example to install and run the package within a Docker container. Note that you should increase the memory available to your containers to about 4 GB; less may also be sufficient, but extra memory is needed to build the C++ extension. You can build the container with:

docker build -t pyroom_container .

And enter the container with:

docker run -it pyroom_container:latest /bin/bash

Dependencies

The minimal dependencies are:

numpy
scipy>=0.18.0
Cython
pybind11

where Cython is only needed to build the compiled, accelerated simulator. The simulator also has a pure Python counterpart, so this requirement can be skipped, but the pure Python version is much slower.

On top of that, some functionalities of the package depend on extra packages:

samplerate   # for resampling signals
matplotlib   # to create graphs and plots
sounddevice  # to play sound samples
mir_eval     # to evaluate performance of source separation in examples

The requirements.txt file lists all packages necessary to run all of the scripts in the examples folder.

This package is mainly developed under Python 3.5. We try as much as possible to keep things compatible with Python 2.7 and run tests and builds under both. However, the test code coverage is far from 100%, and we might occasionally break something in Python 2.7. We apologize in advance for that.

Under Linux and Mac OS, the compiled accelerators require a valid compiler to be installed, typically GCC. When no compiler is present, the package will still install but will default to the pure Python implementation, which is much slower. On Windows, we provide pre-compiled Python wheels for Python 3.5 and 3.6.

Example

Here is a quick example of how to create and visualize the response of a beamformer in a room.

import numpy as np
import matplotlib.pyplot as plt
import pyroomacoustics as pra

# Create a 4 by 6 metres shoe box room
room = pra.ShoeBox([4,6])

# Add a source somewhere in the room
room.add_source([2.5, 4.5])

# Create a linear array beamformer with 4 microphones
# with angle 0 degrees and inter mic distance 10 cm
R = pra.linear_2D_array([2, 1.5], 4, 0, 0.1)
room.add_microphone_array(pra.Beamformer(R, room.fs))

# Now compute the delay and sum weights for the beamformer
room.mic_array.rake_delay_and_sum_weights(room.sources[0][:1])

# plot the room and resulting beamformer
room.plot(freq=[1000, 2000, 4000, 8000], img_order=0)
plt.show()

More examples

A couple of detailed demos with illustrations are available.

A comprehensive set of examples covering most of the functionalities of the package can be found in the examples folder of the GitHub repository.

Authors

  • Robin Scheibler
  • Ivan Dokmanić
  • Sidney Barthe
  • Eric Bezzam
  • Hanjie Pan

How to contribute

If you would like to contribute, please clone the repository and send a pull request.

For more details, see our CONTRIBUTING page.

Academic publications

This package was developed to support academic publications. The package contains implementations for DOA algorithms and acoustic beamformers introduced in the following papers.

  • H. Pan, R. Scheibler, I. Dokmanic, E. Bezzam and M. Vetterli. FRIDA: FRI-based DOA estimation for arbitrary array layout, ICASSP 2017, New Orleans, USA, 2017.
  • I. Dokmanić, R. Scheibler and M. Vetterli. Raking the Cocktail Party, in IEEE Journal of Selected Topics in Signal Processing, vol. 9, num. 5, p. 825 - 836, 2015.
  • R. Scheibler, I. Dokmanić and M. Vetterli. Raking Echoes in the Time Domain, ICASSP 2015, Brisbane, Australia, 2015.

If you use this package in your own research, please cite our paper describing it.

R. Scheibler, E. Bezzam, I. Dokmanić, Pyroomacoustics: A Python package for audio room simulations and array processing algorithms, Proc. IEEE ICASSP, Calgary, CA, 2018.

License

Copyright (c) 2014-2018 EPFL-LCAV

Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

Comments
  • Advice for inter-microphone distance when the wavelength is too small

    Do you happen to know how to choose the distance between the microphones in the array when the signals of interest have a relatively small wavelength? For example, a 22kHz signal would have an approximate wavelength of 1.6cm (343 / 22kHz) and since it is generally advised to space the microphones with a distance of less than the wavelength of the signal of interest this would mean a distance between each microphone of 0.8cm.

    However, this distance is realistically not feasible when, for example, the physical microphones alone have a diameter of 2.5cm, which implies that the minimum distance between such microphones is 1.25cm (the radius) plus some offset so that the microphones do not influence each other when the membrane oscillates. So, is there a way to still get good results from the DOA algorithms when the inter-microphone distance is not ideal?

    opened by L0g4n 18
  • Same API for all analysis/process/synthesis use cases of STFT class

    Different cases include:

    1. real-time: we know the size of our input and memory allocation is important, so the user should specify the 'num_frames' parameter of the STFT constructor: a) single frame STFT --> 'num_frames' = 1, b) multiple frame STFT --> 'num_frames' =
    2. non-real time: maybe we don't know the number of frames in our input and memory allocation is not critical, so we can simply create the STFT object without the 'num_frames' parameter and let the STFT object allocate memory "on the fly" when we call analysis/synthesis/process
    opened by ebezzam 17
  • Singular matrix error

    Hi, while running with a linear array, I often get a singular matrix error in almost all of the algorithms. What care should we take while recording?

    opened by pathasreedhar 15
  • Missing wheels for Windows x86_64

    Hi

    The following error has occurred.

    File "", line 7, in File "pyroomacoustics/init.py", line 120, in from . import libroom as libroom ImportError: cannot import name libroom

    I think the package download succeeded, because a folder named "pyroomacoustics" is created. Do you have a solution for, or an explanation of, this problem?

    # Environment

    version: python 3.7.6 (64 bit)
    package: pyroomacoustics-0.4.2-cp37-cp37m-win_amd64.whl
    OS: Windows
    Other: I'm using Anaconda.

    If any information is missing, please ask me.

    Sincerely,

    opened by tyauyade 13
  • Subspace noise removal

    This PR includes work from this mini-project by @GimenaSegrelles and Mathieu Lecoq for the EPFL audio course.


    Thanks for sending a pull request (PR), we really appreciate that! Before hitting the submit button, we'd be really glad if you could make sure all the following points have been cleared.

    Please also refer to the doc on contributing for more details. Even if you can't check all the boxes below, do not hesitate to go ahead with the PR and ask for help there.

    • [x] Are there docstrings ? Do they follow the numpydoc style ?
    • [ ] Have you run the tests by doing nosetests or py.test at the root of the repo ?
    • [x] Have you checked that the doc builds properly and that any new file has been added to the repo ? How to do that is covered in the documentation.
    • [x] Is there a unit test for the proposed code modification ? If the PR addresses an issue, the test should make sure the issue is fixed.
    • [x] Last but not least, did you document the proposed change in the CHANGELOG file ? It should go under "Unreleased".

    Happy PR :smiley:

    I ran into an error when running nosetests on the DFT unit tests... @fakufaku have you had this?

    malloc: *** error for object 0x7fddad8f9748: incorrect checksum for freed object - object was probably modified after being freed.
    
    opened by ebezzam 13
  • Segfault in `simul_ray()`

    The following snippet throws a segfault (sometimes) on a few linux machines that I've tried:

    import pyroomacoustics as pra
    
    room_dim = [30, 30]
    source = [2, 3]
    mic_array = [[8], [8]]
    
    room = pra.ShoeBox(
        room_dim,
        ray_tracing=True,
        materials=pra.Material(energy_absorption=.1, scattering=.2),
        air_absorption=False,
        max_order=0,
    )
    room.add_microphone_array(mic_array)
    room.add_source(source)
    room.set_ray_tracing(n_rays=10_000)
    room.compute_rir()
    
    

    Strangely, I'm getting a segfault perhaps 1/3 of the time. It works and runs fine otherwise. I tried building from source (a few days ago) and observed the same segfault.

    Perhaps my snippet is doing something wrong? Thanks!

    CC @nateanl


    Env: Just ran conda create -n pra python=3.9 and pip install pyroomacoustics

    (pra) ➜  ~ pip freeze
    certifi @ file:///croot/certifi_1665076670883/work/certifi
    Cython==0.29.32
    numpy==1.23.5
    pybind11==2.10.1
    pyroomacoustics==0.7.2
    scipy==1.9.3
    (pra) ➜  ~ python --version
    Python 3.9.15
    
    

    gdb backtrace (not super useful as-is, all it says is that the segfault happened in Room<2ul>::simul_ray):

    (pra) ➜  ~ gdb python
    GNU gdb (GDB) Fedora 11.1-5.fc34
    Copyright (C) 2021 Free Software Foundation, Inc.
    License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
    This is free software: you are free to change and redistribute it.
    There is NO WARRANTY, to the extent permitted by law.
    Type "show copying" and "show warranty" for details.
    This GDB was configured as "x86_64-redhat-linux-gnu".
    Type "show configuration" for configuration details.
    For bug reporting instructions, please see:
    <https://www.gnu.org/software/gdb/bugs/>.
    Find the GDB manual and other documentation resources online at:
        <http://www.gnu.org/software/gdb/documentation/>.
    
    For help, type "help".
    Type "apropos word" to search for commands related to "word"...
    Reading symbols from python...
    
    
    (gdb) run pra.py
    Starting program: /home/nicolashug/.miniconda3/envs/pra/bin/python pra.py
    [Thread debugging using libthread_db enabled]
    Using host libthread_db library "/lib64/libthread_db.so.1".
    Missing separate debuginfo for /home/nicolashug/.miniconda3/envs/pra/lib/python3.9/site-packages/numpy/core/../../numpy.libs/libgfortran-040039e1.so.5.0.0
    Try: dnf --enablerepo='*debug*' install /usr/lib/debug/.build-id/5b/be74eb6855e0a2c043c0bec2f484bf3e9f14c0.debug
    Missing separate debuginfo for /home/nicolashug/.miniconda3/envs/pra/lib/python3.9/site-packages/numpy/core/../../numpy.libs/libquadmath-96973f99.so.0.0.0
    Try: dnf --enablerepo='*debug*' install /usr/lib/debug/.build-id/54/9b4c82347785459571c79239872ad31509dcf4.debug
    [New Thread 0x7fffe77e6640 (LWP 141388)]
    [New Thread 0x7fffe6fe5640 (LWP 141389)]
    [New Thread 0x7fffe27e4640 (LWP 141390)]
    [New Thread 0x7fffdffe3640 (LWP 141391)]
    [New Thread 0x7fffdd7e2640 (LWP 141392)]
    [New Thread 0x7fffdafe1640 (LWP 141393)]
    [New Thread 0x7fffd87e0640 (LWP 141394)]
    [New Thread 0x7fffd313d640 (LWP 141395)]
    [New Thread 0x7fffd293c640 (LWP 141396)]
    [New Thread 0x7fffce13b640 (LWP 141397)]
    [New Thread 0x7fffcb93a640 (LWP 141398)]
    [New Thread 0x7fffc9139640 (LWP 141399)]
    [New Thread 0x7fffc6938640 (LWP 141400)]
    [New Thread 0x7fffc4137640 (LWP 141401)]
    
    Thread 1 "python" received signal SIGSEGV, Segmentation fault.
    0x00007fffbe4146be in Room<2ul>::simul_ray(float, float, Eigen::Matrix<float, 2, 1, 0, 2, 1>, float) () from /home/nicolashug/.miniconda3/envs/pra/lib/python3.9/site-packages/pyroomacoustics/libroom.cpython-39-x86_64-linux-gnu.so
    Missing separate debuginfos, use: dnf debuginfo-install glibc-2.33-21.fc34.x86_64
    
    
    (gdb) backtrace
    #0  0x00007fffbe4146be in Room<2ul>::simul_ray(float, float, Eigen::Matrix<float, 2, 1, 0, 2, 1>, float) ()
       from /home/nicolashug/.miniconda3/envs/pra/lib/python3.9/site-packages/pyroomacoustics/libroom.cpython-39-x86_64-linux-gnu.so
    #1  0x00007fffbe414c08 in Room<2ul>::ray_tracing(unsigned long, Eigen::Matrix<float, 2, 1, 0, 2, 1>) ()
       from /home/nicolashug/.miniconda3/envs/pra/lib/python3.9/site-packages/pyroomacoustics/libroom.cpython-39-x86_64-linux-gnu.so
    #2  0x00007fffbe427937 in pybind11::cpp_function::initialize<pybind11::cpp_function::initialize<void, Room<2ul>, unsigned long, Eigen::Matrix<float, 2, 1, 0, 2, 1>, pybind11::name, pybind11::is_method, pybind11::sibling>(void (Room<2ul>::*)(unsigned long, Eigen::Matrix<float, 2, 1, 0, 2, 1>), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&)::{lambda(Room<2ul>*, unsigned long, Eigen::Matrix<float, 2, 1, 0, 2, 1>)#1}, void, Room<2ul>*, unsigned long, Eigen::Matrix<float, 2, 1, 0, 2, 1>, pybind11::name, pybind11::is_method, pybind11::sibling>(pybind11::cpp_function::initialize<void, Room<2ul>, unsigned long, Eigen::Matrix<float, 2, 1, 0, 2, 1>, pybind11::name, pybind11::is_method, pybind11::sibling>(void (Room<2ul>::*)(unsigned long, Eigen::Matrix<float, 2, 1, 0, 2, 1>), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&)::{lambda(Room<2ul>*, unsigned long, Eigen::Matrix<float, 2, 1, 0, 2, 1>)#1}&&, void (*)(Room<2ul>*, unsigned long, Eigen::Matrix<float, 2, 1, 0, 2, 1>), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&)::{lambda(pybind11::detail::function_call&)#3}::_FUN(pybind11::detail::function_call&) ()
       from /home/nicolashug/.miniconda3/envs/pra/lib/python3.9/site-packages/pyroomacoustics/libroom.cpython-39-x86_64-linux-gnu.so
    #3  0x00007fffbe3fc238 in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) ()
       from /home/nicolashug/.miniconda3/envs/pra/lib/python3.9/site-packages/pyroomacoustics/libroom.cpython-39-x86_64-linux-gnu.so
    #4  0x0000000000507457 in cfunction_call (func=0x7fffbe489590, args=<optimized out>, kwargs=<optimized out>)
        at /usr/local/src/conda/python-3.9.15/Objects/methodobject.c:543
    #5  0x00000000004f068c in _PyObject_MakeTpCall (tstate=0x76bc30, callable=0x7fffbe489590, args=<optimized out>, 
        nargs=<optimized out>, keywords=0x0) at /usr/local/src/conda/python-3.9.15/Objects/call.c:191
    #6  0x0000000000505390 in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=<optimized out>, args=0x7fffbe33f1e0, 
        callable=0x7fffbe489590, tstate=0x76bc30) at /usr/local/src/conda/python-3.9.15/Include/cpython/abstract.h:116
    #7  _PyObject_VectorcallTstate (kwnames=0x0, nargsf=<optimized out>, args=0x7fffbe33f1e0, callable=0x7fffbe489590, 
        tstate=0x76bc30) at /usr/local/src/conda/python-3.9.15/Include/cpython/abstract.h:103
    #8  method_vectorcall (method=<optimized out>, args=0x7fffbe33f1e8, nargsf=<optimized out>, kwnames=0x0)
        at /usr/local/src/conda/python-3.9.15/Objects/classobject.c:53
    #9  0x00000000004ec4d4 in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=<optimized out>, args=0x7fffbe33f1e8, 
    --Type <RET> for more, q to quit, c to continue without paging--
        callable=0x7fffbe48aa40, tstate=0x76bc30) at /usr/local/src/conda/python-3.9.15/Include/cpython/abstract.h:118
    #10 PyObject_Vectorcall (kwnames=0x0, nargsf=<optimized out>, args=0x7fffbe33f1e8, callable=0x7fffbe48aa40)
        at /usr/local/src/conda/python-3.9.15/Include/cpython/abstract.h:127
    #11 call_function (kwnames=0x0, oparg=<optimized out>, pp_stack=<synthetic pointer>, tstate=0x76bc30)
        at /usr/local/src/conda/python-3.9.15/Python/ceval.c:5077
    #12 _PyEval_EvalFrameDefault (tstate=<optimized out>, f=0x7fffbe33f040, throwflag=<optimized out>)
        at /usr/local/src/conda/python-3.9.15/Python/ceval.c:3489
    #13 0x00000000004f7f33 in _PyEval_EvalFrame (throwflag=0, f=0x7fffbe33f040, tstate=0x76bc30)
        at /usr/local/src/conda/python-3.9.15/Include/internal/pycore_ceval.h:40
    #14 function_code_fastcall (tstate=0x76bc30, co=<optimized out>, args=<optimized out>, nargs=<optimized out>, 
        globals=0x7fffbe398a40) at /usr/local/src/conda/python-3.9.15/Objects/call.c:330
    #15 0x00000000004e7e29 in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=<optimized out>, args=0x15db1e0, 
        callable=0x7fffbe337c10, tstate=0x76bc30) at /usr/local/src/conda/python-3.9.15/Include/cpython/abstract.h:118
    #16 PyObject_Vectorcall (kwnames=0x0, nargsf=<optimized out>, args=0x15db1e0, callable=0x7fffbe337c10)
        at /usr/local/src/conda/python-3.9.15/Include/cpython/abstract.h:127
    #17 call_function (kwnames=0x0, oparg=<optimized out>, pp_stack=<synthetic pointer>, tstate=0x76bc30)
        at /usr/local/src/conda/python-3.9.15/Python/ceval.c:5077
    #18 _PyEval_EvalFrameDefault (tstate=<optimized out>, f=0x15daf20, throwflag=<optimized out>)
        at /usr/local/src/conda/python-3.9.15/Python/ceval.c:3506
    #19 0x00000000004f7f33 in _PyEval_EvalFrame (throwflag=0, f=0x15daf20, tstate=0x76bc30)
        at /usr/local/src/conda/python-3.9.15/Include/internal/pycore_ceval.h:40
    #20 function_code_fastcall (tstate=0x76bc30, co=<optimized out>, args=<optimized out>, nargs=<optimized out>, 
        globals=0x7fffbe398a40) at /usr/local/src/conda/python-3.9.15/Objects/call.c:330
    #21 0x00000000004e7e29 in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=<optimized out>, args=0x7c9230, 
        callable=0x7fffbe337ca0, tstate=0x76bc30) at /usr/local/src/conda/python-3.9.15/Include/cpython/abstract.h:118
    #22 PyObject_Vectorcall (kwnames=0x0, nargsf=<optimized out>, args=0x7c9230, callable=0x7fffbe337ca0)
        at /usr/local/src/conda/python-3.9.15/Include/cpython/abstract.h:127
    #23 call_function (kwnames=0x0, oparg=<optimized out>, pp_stack=<synthetic pointer>, tstate=0x76bc30)
        at /usr/local/src/conda/python-3.9.15/Python/ceval.c:5077
    #24 _PyEval_EvalFrameDefault (tstate=<optimized out>, f=0x7c90c0, throwflag=<optimized out>)
        at /usr/local/src/conda/python-3.9.15/Python/ceval.c:3506
    #25 0x00000000004e689a in _PyEval_EvalFrame (throwflag=0, f=0x7c90c0, tstate=0x76bc30)
        at /usr/local/src/conda/python-3.9.15/Include/internal/pycore_ceval.h:40
    --Type <RET> for more, q to quit, c to continue without paging--
    #26 _PyEval_EvalCode (tstate=<optimized out>, _co=<optimized out>, globals=<optimized out>, locals=<optimized out>, 
        args=<optimized out>, argcount=<optimized out>, kwnames=0x0, kwargs=0x0, kwcount=<optimized out>, kwstep=2, 
        defs=0x0, defcount=<optimized out>, kwdefs=0x0, closure=0x0, name=0x0, qualname=0x0)
        at /usr/local/src/conda/python-3.9.15/Python/ceval.c:4329
    #27 0x00000000004e6527 in _PyEval_EvalCodeWithName (_co=<optimized out>, globals=<optimized out>, 
        locals=<optimized out>, args=<optimized out>, argcount=<optimized out>, kwnames=<optimized out>, kwargs=0x0, 
        kwcount=0, kwstep=2, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0, name=0x0, qualname=0x0)
        at /usr/local/src/conda/python-3.9.15/Python/ceval.c:4361
    #28 0x00000000004e64d9 in PyEval_EvalCodeEx (_co=<optimized out>, globals=<optimized out>, locals=<optimized out>, 
        args=<optimized out>, argcount=<optimized out>, kws=<optimized out>, kwcount=0, defs=0x0, defcount=0, kwdefs=0x0, 
        closure=0x0) at /usr/local/src/conda/python-3.9.15/Python/ceval.c:4377
    #29 0x000000000059329b in PyEval_EvalCode ([email protected]=0x7fffea5fb5b0, [email protected]=0x7fffea66e040, 
        [email protected]=0x7fffea66e040) at /usr/local/src/conda/python-3.9.15/Python/ceval.c:828
    #30 0x00000000005c0ad7 in run_eval_code_obj (tstate=0x76bc30, co=0x7fffea5fb5b0, globals=0x7fffea66e040, 
        locals=0x7fffea66e040) at /usr/local/src/conda/python-3.9.15/Python/pythonrun.c:1221
    #31 0x00000000005bcb00 in run_mod (mod=<optimized out>, filename=<optimized out>, globals=0x7fffea66e040, 
        locals=0x7fffea66e040, flags=<optimized out>, arena=<optimized out>)
        at /usr/local/src/conda/python-3.9.15/Python/pythonrun.c:1242
    #32 0x00000000004566f4 in pyrun_file (fp=<optimized out>, filename=<optimized out>, start=<optimized out>, 
        globals=<optimized out>, locals=<optimized out>, closeit=<optimized out>, flags=0x7fffffffdba8)
        at /usr/local/src/conda/python-3.9.15/Python/pythonrun.c:1140
    #33 0x00000000005b67e2 in pyrun_simple_file (flags=0x7fffffffdba8, closeit=1, filename=0x7fffea5a1df0, fp=0x76b8a0)
        at /usr/local/src/conda/python-3.9.15/Python/pythonrun.c:450
    #34 PyRun_SimpleFileExFlags (fp=0x76b8a0, filename=<optimized out>, closeit=1, flags=0x7fffffffdba8)
        at /usr/local/src/conda/python-3.9.15/Python/pythonrun.c:483
    #35 0x00000000005b3d5e in pymain_run_file (cf=0x7fffffffdba8, config=0x76c7a0)
        at /usr/local/src/conda/python-3.9.15/Modules/main.c:379
    #36 pymain_run_python (exitcode=0x7fffffffdba0) at /usr/local/src/conda/python-3.9.15/Modules/main.c:604
    #37 Py_RunMain () at /usr/local/src/conda/python-3.9.15/Modules/main.c:683
    #38 0x0000000000587349 in Py_BytesMain (argc=<optimized out>, argv=<optimized out>)
        at /usr/local/src/conda/python-3.9.15/Modules/main.c:1129
    #39 0x00007ffff7c91b75 in __libc_start_main () from /lib64/libc.so.6
    #40 0x00000000005871fe in _start ()
    (gdb) 
    
    opened by NicolasHug 10
  • no file 'libroom.cp37-win_amd64.pyx'?

    I see that you use build_rir and libroom for the C compilation, but I can only find 'build_rir.pyx' and no 'libroom.pyx'. Is the file missing, or did I misunderstand how they are mixed?

    opened by pokjnb 10
  • [WIP] Support source directivities

    Work in Progress

    This branch contains work towards supporting source directivities in pyroomacoustics.

    TODO

    • [ ] Python API to provide directivities to sources
    • [ ] Compute source emission angle in ISM shoebox
    • [ ] Compute source emission angle in ISM polyhedra
    • [X] Spherical distributions: uniform pyroomacoustics.random.uniform_spherical
    • [X] Spherical distributions: power pyroomacoustics.random.power_spherical
    • [ ] Spherical distributions: mixtures
    • [X] Usual patterns (figure-of-eight, cardioid, etc)
    • [ ] File based (e.g. CLF: common loudspeaker format)
    • [ ] Match power between ISM and ray tracing

    Final checks

    • [ ] Are there docstrings ? Do they follow the numpydoc style ?
    • [ ] Have you run the tests by doing nosetests or py.test at the root of the repo ?
    • [ ] Have you checked that the doc builds properly and that any new file has been added to the repo ? How to do that is covered in the documentation.
    • [ ] Is there a unit test for the proposed code modification ? If the PR addresses an issue, the test should make sure the issue is fixed.
    • [ ] Last but not least, did you document the proposed change in the CHANGELOG file ? It should go under "Unreleased".

    Happy PR :smiley:

    enhancement work-in-progress 
    opened by fakufaku 10
  • Scattering fix

    Thanks for sending a pull request (PR), we really appreciate that! Before hitting the submit button, we'd be really glad if you could make sure all the following points have been cleared.

    Please also refer to the doc on contributing for more details. Even if you can't check all the boxes below, do not hesitate to go ahead with the PR and ask for help there.

    • [ ] Are there docstrings ? Do they follow the numpydoc style ?
    • [ ] Have you run the tests by doing nosetests or py.test at the root of the repo ?
    • [ ] Have you checked that the doc builds properly and that any new file has been added to the repo ? How to do that is covered in the documentation.
    • [ ] Is there a unit test for the proposed code modification ? If the PR addresses an issue, the test should make sure the issue is fixed.
    • [ ] Last but not least, did you document the proposed change in the CHANGELOG file ? It should go under "Unreleased".

    Happy PR :smiley:

    opened by ebezzam 10
  • DOA using real data

    Hello,

    I am attempting to utilize the DOA algorithms for real world data sets from a mic array, and am curious as to what changes I would need to make or what I need to keep in mind when I am doing this.

    opened by Merlin7864 10
  • Issue with generate_rirs()

    Hi, I'm having an issue with the generate_rirs() function. It throws the following error:

    generate_rirs(room)
    Traceback (most recent call last):

      File "", line 1, in
        room.generate_rirs()

      File "", line 6, in generate_rirs
        self.compute_rir()

      File "C:\Users\abc\Documents\wham_room.py", line 44, in compute_rir
        h.append(source.get_rir(mic, self.visibility[s][m], self.fs, self.t0)[:self.max_rir_len])

      File "C:\Users\abc\Anaconda3\lib\site-packages\pyroomacoustics\soundsource.py", line 254, in get_rir
        fast_rir_builder(ir, time, alpha, visibility.astype(np.int32), Fs, fdl)

      File "pyroomacoustics\build_rir.pyx", line 53, in pyroomacoustics.build_rir.fast_rir_builder

    AssertionError


    Any help would be appreciated!

    opened by suhasbn 9
  • Colatitude angle usage 360 degree(colatitude <= np.pi and colatitude >= 0)

    Hi, I'm trying to generate audio over 360 degrees of azimuth and elevation, but pyroomacoustics supports at most 180 degrees of elevation. Is there any way to change the elevation (colatitude) angle to support 360 degrees? Example: dir_obj = CardioidFamily(orientation=DirectionVector(azimuth=-90, colatitude=-90, degrees=True), pattern_enum=DirectivityPattern.HYPERCARDIOID)

    Thanks

    opened by ClearVoiceM 0
  • Demo code changes

    Hi, I would like to report that a piece of code might be misleading in the notebooks/stft.ipynb.

    In the code here (https://github.com/LCAV/pyroomacoustics/blob/master/notebooks/stft.ipynb), we have:

    frame_len = 512
    X = pra.stft(audio, L=frame_len, hop=frame_len, transform=np.fft.rfft)
    

    But actually I have to modify it into the following to yield the same result:

    frame_len = 512
    analyzer = pra.stft.STFT(N=frame_len, hop=frame_len, transform=np.fft.rfft)
    X = analyzer.analysis(audio)
    

    or equivalently

    X = pra.stft.analysis(audio, L=frame_len, hop=frame_len)
    

    Thank you for your time.

    opened by IandRover 0
  • Add noise objects to create different types of noise

    The idea is to have a simple interface to define background noise. We follow the API of SoundSource and MicrophoneArray where objects are added to the room via the add method.

    We add an abstract object type Noise with a unified interface for adding all types of noise onto the microphone signals after the propagation has been simulated.

    Here is an example of the type of code we want to have

    room = ShoeBox(...)
    room.add_source(...)
    room.add_microphone(...)
    room.add(WhiteNoise(snr=10.0))
    
    noisy_mix, premix, noise = room.simulate(full_output=True)
    
    # noisy_mix == premix.sum(axis=0) + noise with SNR = 10 dB
    assert np.allclose(noisy_mix, premix.sum(axis=0) + noise)
    

    The current implementation (for white noise only) is an snr parameter to the simulate method. That implementation is not very good and applies unrequested scaling to the source signals. It would be better to have a modular implementation that is extensible by sub-classing the Noise class. For that, we will need to change the interface slightly.

    TODO

    • [X] Noise abstract class interface
    • [X] White noise
    • [X] Diffuse noise spherical
    • [ ] Diffuse noise cylindrical
    • [ ] Warning/error when diffuse noise is used with directional microphones
    • [ ] Wind noise
    • [ ] Modify simulate method (arguments, return values)
    • [ ] Add deprecation warnings for the current parameters of simulate method

    Checks

    I am still working on this and will update the status

    • [ ] Are there docstrings ? Do they follow the numpydoc style ?
    • [ ] Have you run the tests by doing nosetests or py.test at the root of the repo ?
    • [ ] Have you checked that the doc builds properly and that any new file has been added to the repo ? How to do that is covered in the documentation.
    • [X] Is there a unit test for the proposed code modification ? If the PR addresses an issue, the test should make sure the issue is fixed.
    • [ ] Last but not least, did you document the proposed change in the CHANGELOG file ? It should go under "Unreleased".

    Happy PR :smiley:

    opened by fakufaku 0
  • Obtaining the source positions from implemented raytracing algorithm

    Hi! Thanks for your contribution!

    I'm wondering whether there's a way to find out the sources' directions of arrival that were generated by the ray tracing algorithm. For the image source model it seems quite simple, as the image source positions are available; however, there seems to be no such metadata available for ray tracing.

    I guess one of the options is to use implemented DOA estimation algorithms, but I wonder if there is any way to obtain the original metadata for each source.

    Thanks!

    opened by alanpawlak 0
  • Sensor noise renormalizes the signals

    Hi,

    when calling room.simulate(..., snr=VALUE), the following code is called:

    elif snr is not None:
        # Normalize all signals so that
        denom = np.std(premix_signals[:, reference_mic, :], axis=1)
        premix_signals /= denom[:, None, None]
        signals = np.sum(premix_signals, axis=0)

        # Compute the variance of the microphone noise
        self.sigma2_awgn = 10 ** (-snr / 10) * S
    

    This code normalizes each source signal by its standard deviation. Is this the desired behaviour? If I wanted one source to have a lower volume than another, wouldn't this undo their different volumes? Why is this normalization necessary?

    Thank you and sorry for bothering,

    Kind regards,

    Eric

    opened by egrinstein 1
  • How to add diffuse noise with desired SNR in a room?

    I'm trying to create a room with several speakers and a stationary background noise. In this case, I would like to add diffuse noise to the room rather than adding a point noise source at one position. Could you please tell me how to implement this? Thank you very much!

    opened by WangRui-debug 2
Releases (v0.7.2)
  • v0.7.2 (Nov 15, 2022)

    Added

    • Added the AnechoicRoom class. @duembgen
    • Added FastMNMF2 (Fast Multichannel Nonnegative Matrix Factorization 2) to bss subpackage. @sekiguchi92
    • Randomized image source method for removing sweeping echoes in shoebox rooms. @orchidas
    • Adds the cart2spher method in pyroomacoustics.doa.utils to convert from cartesian to spherical coordinates.
    • Example room_complex_wall_materials.py
    • CI for python 3.10
    • Appveyor builds for compiled wheels for win32/win64 x86

    Changed

    • Cleans up the plot_rir function in Room so that the labels are neater. It also adds an extra option kind that can take values "ir", "tf", or "spec" to plot the impulse responses, transfer functions, or spectrograms of the RIR.
    • Refactored the implementation of FastMNMF. @sekiguchi92
    • Modified the documentation of __init__.py in the doa subpackage.
    • End of Python 3.6 support.
    • Removed the deprecated realtime sub-module.
    • Removed the deprecated functions pyroomacoustics.transform.analysis, pyroomacoustics.transform.synthesis, pyroomacoustics.transform.compute_synthesis_window. They are replaced by the equivalent functions in pyroomacoustics.transform.stft sub-module.
    • The minimum required version of numpy was changed to 1.13.0 (use of np.linalg.multi_dot in doa sub-package see #271) @maldil

    Bugfix

    • Fixes missing import statement in room.plot for 3D rooms (PR #286) @usernamenoahfoster
    • On win64, bss.fastmnmf would fail due to some singular matrix. 1) protect solve with try/except and switch to pseudo-inverse if necessary, 2) change eps 1e-7 -> 1e-6
    • Fixed pypi upload for windows wheels
    • Fixed most warnings in the tests
    • Fixed bug in examples/adaptive_filter_stft_domain.py

    Thanks to

    • @sekiguchi92 for the implementation of fastmnmf2
    • @orchidas for the implementation of the randomized image source method
    • @duembgen for the implementation of the AnechoicRoom class
    • @usernamenoahfoster @maldil @hutauf @conkeur @popcornell for bug fixes, reports, and improvements
  • v0.6.0 (Dec 1, 2021)

    Added

    • New DOA method: MUSIC with pseudo-spectra normalization. Thanks @4bian! Normalizes MUSIC pseudo spectra before averaging across frequency axis.

    Bugfix

    • Issue #235 : fails when set_ray_tracing is called, but no mic_array is set
    • Issue #236 : general ISM produces the wrong transmission coefficients
    • Removes an unnecessary warning for some rooms when ray tracing is not needed

    Misc

    • Unify code format by using Black
    • Add code linting in continuous integration
    • Drop CI support for python 3.5
Owner
Audiovisual Communications Laboratory
The mission of the Audiovisual Communications laboratory is to perform basic and applied research in signal processing for communications.