
Electronic Engineering and Computer Science: Digital Music Technology for Audio and Music Synthesis

Instructor
UNITAR-GSLDC

Course 20: Electronic Engineering and Computer Science: Digital Music Technology for Audio and Music Synthesis

I. Course Description

This course explores the fundamentals of sound propagation and the physics behind it. We will cover the concepts of waves and signals, such as amplitude, frequency, period, and phase, and the relationship between wavelength, frequency, and the speed of sound. In addition, the course introduces the basic electrical quantities (e.g., voltage, current, resistance/impedance) and their analogues in acoustic and mechanical systems, as well as the concept of the decibel.
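To give a flavor of the relationships covered here, the wavelength–frequency relation and the decibel scale can be sketched in a few lines of Python. This is an illustrative sketch, not course code; the 343 m/s speed of sound and the 20 µPa hearing-threshold reference are standard textbook values, not taken from the course materials:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C (standard textbook value)

def wavelength(frequency_hz: float) -> float:
    """Wavelength (m) from the relation: wavelength = speed of sound / frequency."""
    return SPEED_OF_SOUND / frequency_hz

def db_spl(pressure_pa: float, reference_pa: float = 20e-6) -> float:
    """Sound pressure level in decibels, relative to the 20 µPa hearing threshold."""
    return 20.0 * math.log10(pressure_pa / reference_pa)

print(round(wavelength(440.0), 3))  # A4 (440 Hz) has a wavelength of about 0.78 m
print(round(db_spl(1.0), 1))        # a pressure of 1 Pa is about 94 dB SPL
```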

We will delve into the basics of the human auditory system and perception, as well as the types and operating principles of transducers, including microphones and electromagnetic loudspeakers. The course also covers sound localization, binaural microphone techniques, and more. For signal processing, we will learn the time-domain and frequency-domain representations of sound, the concepts of periodic waves and timbre, and the basics of filtering and modulation. Finally, the course covers speech synthesis, music synthesis, and related topics, including expressive speech synthesis, voice cloning, personalization techniques, and evaluation methods. Through this course, students will gain a deep understanding of, and practical experience with, sound-related technologies.
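To illustrate the time-domain versus frequency-domain representations mentioned above, here is a minimal, pure-Python discrete Fourier transform (an illustrative sketch, not course code): a sampled sine wave that completes a whole number of cycles in the analysis window concentrates its energy in a single frequency bin.

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform: time-domain samples -> frequency bins."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

# 64 time-domain samples of a sine completing 5 cycles over the window
n = 64
signal = [math.sin(2 * math.pi * 5 * t / n) for t in range(n)]

# Magnitude spectrum: the energy lands in bin 5 (and its mirror image)
spectrum = [abs(c) for c in dft(signal)]
peak_bin = spectrum.index(max(spectrum))
print(peak_bin)  # 5
```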

II. Professor Introduction

Thomas Sullivan – Tenured professor at Carnegie Mellon University

Professor Thomas Sullivan is a professor in the Department of Electrical and Computer Engineering and a guest lecturer in bachelor's and master's courses in Music Technology at the School of Music. He is interested in signal processing for audio and music systems and in the creation of new musical instruments. He teaches courses in electronic circuits, signal processing, and electroacoustics, and participates in the development of audio engineering courses at Carnegie Mellon University. He also supervises students' independent research projects in these areas. He has taught recording courses in the Recording Industry Department at Middle Tennessee State University and co-taught them at Carnegie Mellon University during the summer.

An amateur rock and jazz guitarist, Professor Sullivan is particularly interested in electric guitar effects, especially hexaphonic processing of the instrument's signal. He received his master's degree from the Music and Cognition Group at the MIT Media Lab and his PhD from Carnegie Mellon University; his doctoral thesis addressed microphone array processing for front-end signal processing in automatic speech recognition systems.

III. Syllabus

  1. Basic principles of sound propagation; waves and signals
  2. Basic electrical quantities and their counterparts in acoustic and mechanical systems
  3. Basic knowledge of the human auditory system and perception; sensors
  4. Foundation of signal processing; the time domain and frequency domain of sound
  5. Analog-to-digital conversion; quantization; digital-to-analog conversion
  6. Signal flow in the recording / production system; sound input / output
  7. Basic sound synthesis algorithms and techniques
  8. Speech synthesis: expressive synthesis, voice cloning, and personalization
  9. Music synthesis I: Overview of synthesis technology, sampling-based synthesis
  10. Music synthesis II: synthesizer programming
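As a hint of what the synthesis topics in items 5–7 involve, the following sketch (illustrative only, not course material) renders a sampled sine tone with a simple linear fade-out envelope — the most basic building block of the oscillator-and-envelope synthesis techniques the syllabus covers:

```python
import math

def sine_tone(freq_hz, duration_s, sample_rate=44100, amplitude=0.5):
    """Render a sine oscillator with a linear fade-out amplitude envelope.

    Returns a list of floating-point samples in [-amplitude, amplitude].
    """
    n = int(duration_s * sample_rate)
    samples = []
    for t in range(n):
        env = 1.0 - t / n                               # linear decay envelope
        phase = 2 * math.pi * freq_hz * t / sample_rate  # oscillator phase
        samples.append(amplitude * env * math.sin(phase))
    return samples

tone = sine_tone(440.0, 0.1)
print(len(tone))  # 4410 samples: 0.1 s at a 44.1 kHz sample rate
```

A real synthesizer replaces the linear decay with a full ADSR envelope and mixes many such oscillators, but the signal flow — oscillator, envelope, output samples — is the same.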
Enroll Program

Enroll in our program to unlock expert knowledge, hands-on training, and personalized support.