Miller Puckette obtained a B.S. in Mathematics from MIT (1980) and a PhD in Mathematics from Harvard (1986), where he was a Putnam Fellow. He was a member of MIT's Media Lab from its inception until 1987, and then a researcher at IRCAM, founded by composer and conductor Pierre Boulez. At IRCAM he wrote Max, a widely used computer music software environment, released commercially by Opcode Systems in 1990 and now available from Cycling74.com. Puckette joined the music department of the University of California, San Diego in 1994, where he is now a professor. From 2000 to 2011 he was Associate Director of UCSD's Center for Research in Computing and the Arts (CRCA). He is currently developing Pure Data ("Pd"), an open-source real-time multimedia arts programming environment. Puckette has collaborated with many artists and musicians, including Philippe Manoury (whose Sonus ex Machina cycle was the first major work to use Max), Rand Steiger, Vibeke Sorensen, and Juliana Snapper. Since 2004 he has performed with the Convolution Brothers. In 2008 Puckette received the SEAMUS Lifetime Achievement Award.
Julius O. Smith teaches a music signal-processing course sequence and supervises related research at the Center for Computer Research in Music and Acoustics (CCRMA). He is formally a professor of music and (by courtesy) electrical engineering at Stanford University. In 1975, he received his BS/EE degree from Rice University, where he received a solid grounding in digital signal processing and modeling for control. In 1983, he received the PhD/EE degree from Stanford University, specializing in techniques for digital filter design and system identification, with application to violin modeling. His work history includes the Signal Processing Department at Electromagnetic Systems Laboratories, Inc., working on systems for digital communications; the Adaptive Systems Department at Systems Control Technology, Inc., working on research problems in adaptive filtering and spectral estimation; and NeXT Computer, Inc., where he was responsible for sound, music, and signal processing software for the NeXT computer workstation. Prof. Smith is a Fellow of the Audio Engineering Society and the Acoustical Society of America. He is the author of four online books and numerous research publications in his field.
Avery Wang is co-founder and Chief Scientist at Shazam Entertainment, and principal inventor of the Shazam search algorithm. He holds BS and MS degrees in Mathematics and MS and PhD degrees in Electrical Engineering, all from Stanford University. As a graduate student he received an NSF Graduate Fellowship to study computational neuroscience. He also received a Fulbright Scholarship to study at the Institut für Neuroinformatik at the Ruhr-Universität Bochum under Christoph von der Malsburg, focusing on auditory perception and the cocktail party effect. Upon returning to Stanford, he studied under Julius O. Smith, III at CCRMA, with a thesis titled "Instantaneous and Frequency-Warped Signal Processing Techniques for Auditory Source Separation". He was about to begin a post-doc at UCSF in auditory neuroscience when he was recruited by Chromatic Research, where he worked on high-performance multimedia DSP algorithms and hardware. He has over 40 issued patents.
David Berners is Chief Scientist of Universal Audio Inc., a hardware and software manufacturer for the professional audio market. At UA, Dr. Berners leads research and development efforts in audio effects processing, including dynamic range compression, equalization, distortion and delay effects, and specializing in modeling of vintage analog equipment. He is also an adjunct professor at CCRMA at Stanford University, where he teaches a graduate class in audio effects processing. Dr. Berners has held positions at the Lawrence Berkeley National Laboratory, NASA Jet Propulsion Laboratory, and Allied Signal. He received his Ph.D. from Stanford University, M.S. from Caltech, and his S.B. from MIT, all in electrical engineering.
Brian Hamilton is a Postdoctoral Research Fellow in the Acoustics and Audio group at the University of Edinburgh. His research focusses on numerical methods for large-scale 3-D room acoustics simulations and spatial audio. He received B.Eng. (Hons) and M.Eng. degrees in Electrical Engineering from McGill University in Montréal, QC, Canada, in 2009 and 2012, respectively, and his Ph.D. from the University of Edinburgh in 2016.
Jean-Marc Jot is a Distinguished Fellow at Magic Leap. Previously, at Creative Labs, he led the design and development of SoundBlaster audio processing algorithms and architectures, including OpenAL/EAX technologies for 3D game audio authoring and rendering. Before relocating to California in the late 90s, he conducted research at IRCAM in Paris, where he designed the Spat software suite for immersive audio creation and performance. He is a Fellow of the AES and has authored numerous patents and papers on spatial audio signal processing and coding. His current research interests include immersive audio for virtual and augmented reality in wearable devices and domestic or automotive environments.
Julian Parker is a researcher and designer working in the area of musical signal processing. He started his academic career studying Natural Sciences at the University of Cambridge, before moving on to study for the MSc in Acoustics & Music Technology at the University of Edinburgh. In 2013, he completed his doctoral degree at Aalto University, Finland, concentrating on methods for modelling the audio-range behaviour of mechanical springs used for early artificial reverberation. Since graduating he has been employed at Native Instruments GmbH, where he now heads up DSP development and research. He has published on a variety of topics including reverberation, physical modelling of both mechanical and electrical systems, and digital filter design.