PFFDTD is an open-source FDTD simulator for 3D room acoustics


PFFDTD (pretty fast FDTD)

[PFFDTD screenshot]

PFFDTD is an implementation of finite-difference time-domain (FDTD) simulation for 3D room acoustics, with an accompanying set of tools for simulation setup and processing of input/output signals. This software is intended for research use with powerful workstations or single-node remote servers with one or more Nvidia GPUs (using CUDA). PFFDTD was designed to be "pretty fast" when run on GPUs – at least for FDTD simulations (the name is mostly intended as a pun).

Features

  • Multi-GPU-accelerated execution (with CUDA-enabled Nvidia GPUs)
  • Multi-threaded CPU execution
  • Python-based voxelization with CPU multiprocessing
  • Energy conservation to machine precision (numerical stability)
  • 7-point Cartesian and 13-point face-centered cubic (FCC) schemes
  • Frequency-dependent impedance boundaries
  • Works with non-watertight models
  • Stability safeguards for single-precision operation
  • A novel staircase boundary-surface-area correction scheme
  • 3D visualization

Installation

PFFDTD is designed to run on a Linux system (e.g., Ubuntu/CentOS/Arch).

Installation (Python)

PFFDTD requires at least Python 3.9 to run, with additional required packages in pip_requirements.txt (for pip) or conda_pffdtd.yml (for conda). Conda (or miniconda) is recommended to get started with a PFFDTD-specific conda environment (see .yml file).

Installation (C/CUDA)

To compile, run 'make all' in the c_cuda folder.

You will need the CUDA toolkit, which you can install from Nvidia's developer website.

You will also need HDF5 runtime and development files. On Ubuntu it suffices to install the libhdf5-dev package; on CentOS, install hdf5-devel. You can also link to the shared libraries available from the HDF5 Group website. Check the Makefile for environment variables to set.

Examples

Two models are provided with the code, which you can run via the four test scripts in the root folder.

Starting a project from scratch

To start a project (a model) from scratch, follow the provided examples, but essentially you will need to:

  1. Build a Sketchup model, set source/receiver locations in CSV files, and export to a JSON file.
  2. Fit absorption/impedance data to the passive boundary impedance model used in [BHBS16]. Only a simple routine is provided, which fits to 11 octave-band absorption coefficients (16Hz to 16kHz) – see the sketch after this list.
  3. Fill out a setup script to configure the model and simulation.
  4. Run your setup script, which in turn runs a 'voxelization' routine and sets up input/output grid positions and signals.
  5. Simulate your model from the exported .h5 data files with the CPU-based Python engine (which includes energy calculations and visualization), the C/CPU-based engine, or the C/CUDA/GPU-based engine.
  6. Post-process the output .h5 files (unless the run was for visualization purposes only).
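
To make step 2 concrete, here is a minimal sketch of turning octave-band absorption coefficients into boundary impedances under simplifying assumptions (normal incidence, real-valued impedance); the actual routine fits to the passive impedance model of [BHBS16], and the coefficient values below are placeholders:

```python
import numpy as np

# 11 octave-band centres, 16 Hz to 16 kHz
fc = 16.0 * 2.0 ** np.arange(11)

alpha = np.linspace(0.1, 0.5, 11)   # example absorption coefficients per band
R = np.sqrt(1.0 - alpha)            # pressure reflection magnitude per band
Z0 = 1.2 * 343.0                    # characteristic impedance of air (rayl)
Z = Z0 * (1.0 + R) / (1.0 - R)      # normal-incidence impedance per band
```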

The setup phase (signals + voxelization) can be carried out on a different machine to the simulation itself. This is useful if you're using GPUs on a remote server. There are GZIP compression options in the HDF5 exporting if network uploading is a bottleneck (you can also repack HDF5 files with 'h5repack'; see the sketch below). Note, however, that the voxelization phase is compute-intensive. It is best to have a many-core CPU server or workstation for this, or at least a powerful laptop.
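
For instance, a quick way to repack an exported file with GZIP compression using h5py (filenames here are hypothetical; 'h5repack -f GZIP=9' achieves the same from the command line):

```python
import h5py

with h5py.File('vox_out.h5', 'r') as src, \
     h5py.File('vox_out_gz.h5', 'w') as dst:

    def copy_compressed(name, obj):
        # copy each dataset across with GZIP compression applied
        if isinstance(obj, h5py.Dataset):
            dst.create_dataset(name, data=obj[...],
                               compression='gzip', compression_opts=9)

    src.visititems(copy_compressed)
```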

Sketchup

After building a Sketchup model of your room/scene, you can export it using the provided Sketchup plugin (.rbz file under ruby_SU folder), which exports the model and source/receiver positions (defined in separate CSV files – see examples) to a JSON file. Walls should be labelled with Sketchup Materials (which you can rename as necessary), and you should pay attention to the orientation of faces. Unlabelled materials are taken to be rigid. It is important to only label the 'active' side of a surface in order to save on computation time and memory in the FDTD scheme (non-rigid boundary nodes require extra state for internal ODEs). It is possible to have two-sided materials if needed, but both sides must be the same material. The model does not need to be closed (watertight) but it is good practice to have that. The exported model is expected to have at least four triangles. It only exports visible entities, and only Face entities (not Groups or Components – explode them first). Layers (Tags) are not taken into account.

The rest of the code works from the JSON export, so it's possible to export to this JSON format from other CAD software (in theory – you would need to develop plugins for that). There is a Three.js-based viewer in a separate repository to double-check the export (and the orientation of faces).
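
A quick way to sanity-check an export before running the setup script is to inspect the JSON's top-level structure; this sketch assumes nothing about the format beyond it being valid JSON (the filename is hypothetical):

```python
import json

# print the top-level structure of the exported model
with open('model_export.json') as f:
    model = json.load(f)

for key, val in model.items():
    desc = f'{len(val)} entries' if isinstance(val, (list, dict)) else repr(val)
    print(f'{key}: {desc}')
```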

Python

Follow the test script examples to set up your simulation. You need to choose input-signal types, link up materials to impedance data files (.h5 HDF5 format files), and choose your grid resolution. It helps to know some basics of Python/Numpy and FDTD here. When you run the script, it will print various information about the scheme, give you an estimate of the simulation's memory requirements, and save the necessary data in .h5 files.
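
As a rough guide to what the memory estimate involves, the following back-of-the-envelope sketch relates grid resolution to node count and state memory; all input values are assumptions, and PFFDTD's own reported estimate is the one to trust:

```python
c = 343.0        # speed of sound (m/s)
volume = 500.0   # room volume (m^3), assumed
fmax = 4000.0    # highest usable frequency (Hz), assumed
ppw = 10.0       # points per wavelength at fmax, assumed

h = c / (fmax * ppw)             # grid spacing (m)
n_nodes = volume / h ** 3        # approximate node count
gib = n_nodes * 2 * 4 / 2 ** 30  # two single-precision state arrays
print(f'h = {h * 1e3:.1f} mm, ~{n_nodes / 1e6:.0f}M nodes, ~{gib:.1f} GiB')
```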

Simulation engine

Once you have your .h5 files ready in some folder, you can run the Python FDTD engine (pointing to the folder), or run the compiled C binaries from that folder. You will of course need Nvidia GPUs to run the CUDA version of the FDTD engine. The different engine implementations produce identical results (to within machine accuracy). Generally, the Python version is for visualization purposes and correctness checking (comparing outputs or calculating system energy), whereas the CUDA code is meant for larger simulations. Single-precision GPU execution will generally be the fastest (and cheapest) to run.
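For correctness checking it can help to see what the interior update looks like; below is a minimal NumPy sketch of a 7-point Cartesian scheme at the 3D stability limit – illustrative only, not the PFFDTD engine:

```python
import numpy as np

# zero (Dirichlet) values on the outer boundary; Courant number at the
# 3-D stability limit, lambda^2 = 1/3
shape = (64, 64, 64)
u0, u1, u2 = (np.zeros(shape) for _ in range(3))  # u^{n+1}, u^n, u^{n-1}
u1[32, 32, 32] = 1.0                              # impulsive initial condition
lam2 = 1.0 / 3.0

for n in range(100):
    u0[1:-1, 1:-1, 1:-1] = (
        2.0 * u1[1:-1, 1:-1, 1:-1] - u2[1:-1, 1:-1, 1:-1]
        + lam2 * (u1[2:, 1:-1, 1:-1] + u1[:-2, 1:-1, 1:-1]
                  + u1[1:-1, 2:, 1:-1] + u1[1:-1, :-2, 1:-1]
                  + u1[1:-1, 1:-1, 2:] + u1[1:-1, 1:-1, :-2]
                  - 6.0 * u1[1:-1, 1:-1, 1:-1]))
    u0, u1, u2 = u2, u0, u1                       # rotate state arrays
```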

Post-processing

When the simulation has completed, there will be a 'sim_outs.h5' file containing the raw signals read from grid locations. You can process these signals to get final output files; they do need some cleanup, e.g., to remove high frequencies with too much error (numerical dispersion) or to resample to a standard audio rate like 48kHz. There are also filters to apply air absorption to the output responses [Ham21a,Ham21b]. PFFDTD is only designed to output monaural RIRs, but you can build arrays of outputs and feed those into frequency-domain microphone-array processing tools for spatial audio, or encode Ambisonics directly in the time domain [BPH19] (a similar approach can be used for directive sources [BAH19]).
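
The following sketch shows the kind of cleanup described above using SciPy; the dataset name and sample rates are assumptions, not PFFDTD's actual layout:

```python
import h5py
from scipy.signal import butter, sosfiltfilt, resample_poly

fs_sim = 140000          # simulation sample rate (Hz), assumed
fs_out = 48000           # target audio rate (Hz)
fcut = 0.8 * fs_out / 2  # keep a margin below the output Nyquist rate

with h5py.File('sim_outs.h5', 'r') as f:
    rirs = f['u_out'][...]  # hypothetical dataset name for raw grid reads

# low-pass away the dispersive top end, then resample to 48 kHz
sos = butter(8, fcut / (fs_sim / 2), btype='low', output='sos')
rirs = sosfiltfilt(sos, rirs, axis=-1)
rirs = resample_poly(rirs, up=fs_out, down=fs_sim, axis=-1)
```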

Enhancements

This code largely implements algorithms that have already been published (see the key references below), but it does feature some enhancements that have not appeared in articles and are worth mentioning here.

Operation in single precision

Instabilities can occur with FDTD simulations in finite precision if you run them long enough; how long depends on the precision chosen and the particular simulation. You should not experience instabilities in double precision (indeed, the code conserves energy to machine precision). Single precision is generally faster and uses less memory, but it can give rise to instabilities due to rounding errors (see, e.g., [HBW14]).

If you choose to use single precision, this code has a few safeguards to mitigate/delay long-time instabilities. First, the eigenvalues of the underlying finite-difference Laplacian operator are perturbed to prevent DC-mode instabilities: viewed as a matrix operator, off-diagonal elements are rounded towards zero (RTZ), while diagonal elements are shifted (on the order of machine epsilon) such that the finite-difference Laplacian operator remains negative semi-definite (a condition for stability). These DC-mode safeguards are only active in the single-precision CUDA version (they aren't needed in double precision). Secondly, the input signal to the scheme (e.g., an impulse) must be differentiated (using the 'diff_source' option in sim_setup.py) to remove any DC input. At post-processing, the output is integrated with a combined integrator/high-pass Butterworth filter [HPB20], which integrates while suppressing any leftover problematic DC modes.
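
A conceptual sketch of this signal chain (not the exact integrator/high-pass design of [HPB20]; the cutoff and filter order here are assumptions):

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 48000
src = np.zeros(fs)
src[0] = 1.0                          # impulse input
src_diff = np.diff(src, prepend=0.0)  # differentiated source: no DC

out_diff = src_diff.copy()            # stand-in for a simulated output

out = np.cumsum(out_diff)             # time-domain integration
sos = butter(2, 20.0 / (fs / 2), btype='high', output='sos')
out = sosfilt(sos, out)               # suppress residual DC drift
```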

Staircasing in FDTD

Staircase effects in regular-grid FDTD schemes are a known issue [BHBS16]. In essence, the surface areas of boundaries are incorrectly estimated, leading to an overestimation of decay times. A novel correction scheme is provided in this code, based on the idea of an effective surface area. The concept is simple – it amounts to a weighting factor on the boundary surface, based on the inner product between the target boundary surface and a voxel boundary surface. Surface-area errors before and after correction are tabulated in the voxelizer; for refined grids the correction brings errors down from as high as 50% to generally below 1% (going to zero in the limit of small cells). This helps provide more consistent estimates of decay times, and makes the simulation more robust to changes in grid resolution or scene rotation, while keeping the boundary updates efficient to implement and carry out on GPUs.
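
The following sketch illustrates the effective-surface-area idea for a tilted plane (illustrative only, not the voxelizer itself):

```python
import numpy as np

# A staircased approximation of a plane with unit normal n exposes
# axis-aligned voxel faces; per unit of true surface area, the exposed face
# area in direction i is |n_i|, so the total staircase area overshoots the
# true area by a factor of sum(|n_i|) >= 1.
n = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)  # worst case: diagonal plane

face_areas = np.abs(n)          # exposed area per face direction
uncorrected = face_areas.sum()  # ~1.73, i.e. a +73% surface-area error

# weight each voxel face by the inner product between the target surface
# normal and the voxel-face normal; the weighted area recovers the truth
weights = np.abs(n)
corrected = (weights * face_areas).sum()  # = |n|^2 = 1, the true area
print(uncorrected, corrected)
```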

FCC scheme

For efficiency, the 13-point FCC scheme is recommended over the 7-point Cartesian scheme, as it typically requires ~5x less memory than the Cartesian scheme for 1–2% levels of dispersion error [HW13,Ham16]. However, the FCC scheme is tricky to implement due to its setting on a non-Cartesian grid. One solution has been to compress the FCC grid like an accordion so that it fits on a Cartesian grid, but there's a better solution implemented in this code: the FCC subgrid is folded onto itself across one dimension such that the stencil operation is uniform throughout (the old solution involves some branching). Only the C and C/CUDA versions have this; the Python engine uses a straightforward, yet redundant, Cartesian grid (aka the CCP scheme).
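
For reference, the FCC subgrid can be picked out on a Cartesian grid by index parity, which is one way to think of the CCP representation; a small sketch (consistent with the description above, though not PFFDTD's actual indexing):

```python
import numpy as np

# keep the Cartesian nodes whose integer indices sum to an even number
# (half the grid); each such node has 12 nearest neighbours on the FCC
# lattice, hence the 13-point stencil
i, j, k = np.meshgrid(*(np.arange(8),) * 3, indexing='ij')
fcc_mask = (i + j + k) % 2 == 0

offsets = [(a, b, 0) for a in (-1, 1) for b in (-1, 1)] \
        + [(a, 0, b) for a in (-1, 1) for b in (-1, 1)] \
        + [(0, a, b) for a in (-1, 1) for b in (-1, 1)]
assert len(offsets) == 12   # 12 neighbours + centre = 13 points
```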

Performance benchmarks

See (TODO:) for some performance benchmark results using single-node Nvidia GPU servers, with GPU architectures ranging from Kepler to Ampere. This software has been tested with up to 30 billion nodes on the FCC grid (~250GB of state). For even larger, multi-node (MPI-based) FDTD simulations, see ParallelFDTD and [SCM18].
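
A quick sanity check of that figure, assuming two single-precision state arrays per node (an assumption, not PFFDTD's exact memory layout):

```python
n_nodes = 30e9
print(f'{n_nodes * 2 * 4 / 1e9:.0f} GB')  # ~240 GB, consistent with ~250GB
```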

License

This software is released under the MIT license. See the LICENSE file for details. The Sketchup models found in the data/models folder are released under different licenses; see the README files in their corresponding folders.

Credits

The development of this code was not funded by any body or institution, but some credit can be given to grants ERC-StG-2011-279068-NESS and ERC-PoC-2016-WRAM, which funded many of the underlying (published) simulation algorithms developed within the Acoustics & Audio Group at the University of Edinburgh (see the list of references below). Some of the performance benchmarks of this code were carried out on GPUs (K80 / GTX1080Ti) paid for by those grants. Also, the Sketchup models provided were initially created by Heather Lai [LH20] and Nathaniel Fletcher [HWFB16].

Citing this work

If PFFDTD contributes to an academic publication, please cite it as:

  @misc{hamilton2021pffdtd,
    title = {PFFDTD Software},
    author = {Brian Hamilton},
    note = {https://github.com/bsxfun/pffdtd},
    year = {2021}
  }

Third-party credits

PFFDTD relies on a number of open-source projects to work properly, including HDF5, the CUDA toolkit, and the Python packages listed in pip_requirements.txt.

This list is non-exhaustive. Use of the third-party software, libraries or code referred to above may be governed by separate licenses.

Some background references

[HW13] B. Hamilton and C. J. Webb. Room acoustics modelling using GPU-accelerated finite difference and finite volume methods on a face-centered cubic grid. In Proc. Digital Audio Effects (DAFx), pages 336–343, Maynooth, Ireland, September 2013.

[HBW14] B. Hamilton, S. Bilbao, and C. J. Webb. Revisiting implicit finite difference schemes for 3-D room acoustics simulations on GPU. In Proc. Digital Audio Effects (DAFx), pages 41–48, Erlangen, Germany, September 2014.

[BHBS16] S. Bilbao, B. Hamilton, J. Botts, and L. Savioja. Finite volume time domain room acoustics simulation under general impedance boundary conditions. IEEE/ACM Trans. Audio, Speech, Lang. Process., 24(1):161–173, 2016.

[Ham16] B. Hamilton. Finite Difference and Finite Volume Methods for Wave-based Modelling of Room Acoustics. Ph.D. thesis, University of Edinburgh, 2016.

[HWFB16] B. Hamilton, C. J. Webb, N. D. Fletcher, and S. Bilbao. Finite difference room acoustics simulation with general impedance boundaries and viscothermal losses in air: Parallel implementation on multiple GPUs. In Proc. Int. Symp. Music & Room Acoust., La Plata, Argentina, September 2016.

[SCM18] J. Saarelma, J. Califa, and R. Mehra. Challenges of distributed real-time finite-difference time-domain room acoustic simulation for auralization. In AES Int. Conf. Spatial Reproduction, July 2018.

[BPH19] S. Bilbao, A. Politis, and B. Hamilton. Local time-domain spherical harmonic spatial encoding for wave-based acoustic simulation. IEEE Signal Process. Lett., 26(4):617–621, 2019.

[BAH19] S. Bilbao, J. Ahrens, and B. Hamilton. Incorporating source directivity in wave-based virtual acoustics: Time-domain models and fitting to measured data. J. Acoust. Soc. Am., 146(4):2692–2703, 2019.

[LH20] H. Lai and B. Hamilton. Computer modeling of barrel-vaulted sanctuary exhibiting flutter echo with comparison to measurements. Acoustics, 2(1):87–109, 2020.

[HPB20] I. Henderson, A. Politis, and S. Bilbao. Filter design for real-time ambisonics encoding during wave-based acoustic simulations. In Proc. e-Forum Acusticum, Lyon, France, December 2020.

[Ham21a] B. Hamilton. Adding air attenuation to simulated room impulse responses: A modal approach. In Proc. Int. Conf. Immersive & 3D Audio, Bologna, Italy, September 2021.

[Ham21b] B. Hamilton. Air absorption filtering method based on approximate Green’s function for Stokes’ equation. In Proc. Digital Audio Effects (DAFx), Vienna, Austria, September 2021.
