The Radio Phonics Laboratory

Fastradioburst23 here to let you know about a brand new book from Imaginary Stations contributor Justin Patrick Moore. The Radio Phonics Laboratory: Telecommunications, Speech Synthesis, and the Birth of Electronic Music is a radiocentric look at the origins of electronica. Radioheads will find much to enjoy in the pages of this tome, including:

  • Elisha Gray’s Musical Telegraph, arguably the world’s first synthesizer, which sent music down telegraph wires to distant listeners.
  • Lee De Forest’s Audion Piano. The radio pioneer used his invention of the triode vacuum tube, or audion, to make an electronic musical instrument, perhaps his least contentious invention!
  • The radio work and espionage activities of Leon Theremin, who worked as an engineer at a distant station deep within the Soviet Union, where he discovered the principles behind his famous antenna-based instrument.
  • The avant-garde antics of the Lost Generation composer George Antheil and his collaboration with actress Hedy Lamarr that led to the development of the spread spectrum suite of transmission techniques that now permeate our everyday life wherever there is WiFi.

But that’s not all! At the heart of this narrative is the evolution of speech synthesis. The story spans the groundbreaking work of Homer Dudley on the Voder and vocoder at Bell Laboratories and the dual discovery of Linear Predictive Coding: the research of Fumitada Itakura at Nippon Telegraph and Telephone in Japan and the parallel work of Manfred Schroeder and Bishnu S. Atal at Bell Labs. Linear Predictive Coding gets put to work whenever someone picks up a cell phone to make a call, or gets on a DMR radio to join a net with fellow hams across the world, and it was later put to work in the compositions of early computer music pioneer Paul Lansky at Princeton.
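
For readers who want to see the idea in miniature, here is a rough sketch of what Linear Predictive Coding does (an illustration only, not something drawn from the book): each sample in a frame of speech is predicted as a weighted sum of the previous few samples, so only those weights and a small residual need to be stored or sent. The frame length, prediction order and synthetic test signal below are arbitrary choices.

    # Minimal LPC analysis sketch: fit predictor coefficients to one frame by
    # solving the autocorrelation (Toeplitz) normal equations.
    import numpy as np
    from scipy.linalg import solve_toeplitz

    def lpc_coefficients(frame, order=10):
        """Return LPC coefficients a[1..order] for one speech frame."""
        windowed = frame * np.hamming(len(frame))            # taper the frame edges
        r = np.correlate(windowed, windowed, mode="full")    # autocorrelation
        r = r[len(frame) - 1 : len(frame) + order]           # keep lags 0..order
        return solve_toeplitz((r[:-1], r[:-1]), r[1:])       # solve R a = r

    def predict(frame, a):
        """Predict each sample from the previous len(a) samples."""
        p, pred = len(a), np.zeros_like(frame)
        for n in range(p, len(frame)):
            pred[n] = np.dot(a, frame[n - p:n][::-1])
        return pred

    fs = 8000                                                # telephone-rate sampling
    t = np.arange(400) / fs
    frame = sum(np.sin(2 * np.pi * 100 * k * t) / k for k in range(1, 6))  # crude "voiced" frame
    a = lpc_coefficients(frame)
    residual = frame - predict(frame, a)
    print("residual energy / signal energy:", np.sum(residual ** 2) / np.sum(frame ** 2))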

Tracing the early use of the vocoder in enciphered radio transmissions between Churchill and Roosevelt in World War II to its use by Robert Moog and Wendy Carlos, this is the story of how investigations into the nature of speech generated a tool to be used by the music makers who merged their voices with the voice of the machine.

But wait, there’s more! The creative use of these phonic frequencies really took hold when radio stations and radio companies spearheaded the creation of the first electronic music studios. These laboratories include:

  • Halim El-Dabh’s use of wire recorders borrowed from Radio Cairo to create the first pieces of what was later called musique concrète, in which raw sounds were manipulated to create a new kind of music.
  • Pierre Schaeffer’s creation of the Groupe de Recherches Musicales (GRM) under the auspices of Radiodiffusion Nationale in France, and the subsequent spread of musique concrète.
  • The genesis of the Studio for Electronic Music of the West German Radio (Westdeutscher Rundfunk), born out of early developments in elektrische Musik made by the country’s experimental instrument builders.
  • The subsequent building of an electronic music studio at NHK in Japan.
  • The story behind the “sound-houses” of the legendary BBC Radiophonic Workshop, and its pioneers Daphne Oram and Delia Derbyshire.
  • The development of the Columbia-Princeton Electronic Music Center, made in conjunction with RCA, and the building of its gargantuan instrument, the RCA Mark II Synthesizer.
  • And further explorations at Bell Labs, where computers made music under the guidance of Max Mathews, leading to creative breakthroughs from composers Don Slepian and Laurie Spiegel.

Of particular interest in this realm to the radio buff is the work of John Chowning, a composer who worked out the principles of FM synthesis, essentially figuring out how to do frequency modulation in the audio domain. He went on to create the Center for Computer Research in Music and Acoustics at Stanford, which became a model for the kind of sound laboratory later implemented in France at IRCAM, the Institute for Research and Coordination in Acoustics/Music.
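
To make Chowning’s core idea concrete, here is a minimal sketch (an illustration only, not drawn from the book or from Chowning’s papers) of two-operator FM synthesis: one audio-rate oscillator modulates the phase of another. The carrier frequency, modulator frequency and modulation index below are arbitrary values.

    # Two-operator FM synthesis: the modulator's output is added to the
    # carrier's phase, all at audio rates.
    import numpy as np
    import wave

    fs = 44100                            # sample rate in Hz
    t = np.arange(2 * fs) / fs            # two seconds of time
    fc, fm = 440.0, 220.0                 # carrier and modulator frequencies (Hz)
    index = 5.0 * np.exp(-2.0 * t)        # modulation index, decaying over time

    y = np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))

    # Write a 16-bit mono WAV file so the result can be auditioned.
    with wave.open("fm_tone.wav", "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(fs)
        f.writeframes((y * 32000).astype(np.int16).tobytes())

Because the modulation happens at audio rates, the sidebands it generates fall inside the audible band, which is what gives FM synthesis its rich, bell-like spectra.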

This is the story of how electronic music came to be, told through the lens of the telecommunications scientists and composers who transformed the dits and dahs of Morse code into the bleeps and blips that have captured the imagination of musicians and dedicated listeners around the world.

The Radio Phonics Laboratory is available directly from Velocity Press in the UK and Europe. North American readers can find it on Bookshop.org, on Amazon.com, and at fine bookstores everywhere.

Spread the radio love

9 thoughts on “The Radio Phonics Laboratory”

  1. mangosman

    The 1964 Moog analog synthesiser produced the LP ‘Switched-On Bach’, which was a best seller. Wendy Carlos creating music: https://youtu.be/4SBDH5uhs4Q. An interview between Wendy and Bob Moog: https://youtu.be/wTaZxI1x3rI
    Another of the first uses of this synthesiser was to produce heart sounds to train doctors and cardiologists!
    In 1979 the Fairlight Computer Musical Instrument was the first digital synthesiser. It was produced in the days before the mouse was in common use, so they used a light pen to pick menu items on the screen. Computer memory was tiny and speeds were slow compared to now, and the digital-to-analog converters which produce the signal for the speaker have also drastically improved. I wonder how it would sound now.
    https://en.wikipedia.org/wiki/Fairlight_CMI has more of the technical details as well as where it has been used.
    The best doco on the Fairlight CMI history: https://youtu.be/jkiYy0i8FtA
    A ‘Tutorial’ on how to use the Fairlight CMI: https://petervogelinstruments.com.au/ios/video-reviews/

    The competition: the Japanese company Roland (https://youtu.be/JcbpRMZIQ8g). The main difference is that their synthesisers cannot use an original live sound; instead they use oscillators with sine, triangle and square wave outputs, which are mixed and filtered. This video is 1 h 12 min long and very detailed, but it contains plenty of sound examples. It must have been made after 2017.
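
    As a rough illustration of that oscillators-plus-filter (subtractive) approach, and nothing more than a sketch with arbitrary frequencies and an arbitrary cutoff:

        # Subtractive synthesis in miniature: simple waveform oscillators are
        # mixed, then a low-pass filter removes the upper harmonics.
        import numpy as np
        from scipy import signal

        fs = 44100
        t = np.arange(fs) / fs                        # one second of samples

        square = signal.square(2 * np.pi * 110 * t)   # square-wave oscillator
        saw = signal.sawtooth(2 * np.pi * 110.5 * t)  # slightly detuned sawtooth
        mix = 0.5 * square + 0.5 * saw                # oscillator mixer

        b, a = signal.butter(4, 1200 / (fs / 2))      # 1.2 kHz low-pass filter
        tone = signal.lfilter(b, a, mix)              # the "subtractive" step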

    The Musical Instrument Digital Interface (MIDI) now enables almost all digital musical instruments to talk to each other using a standardised set of commands; MIDI 1.0 was published in 1983. It also allows a computer to be programmed to work like a musical ‘word processor’, with a synthesiser then playing back the sound. The reverse is also true: a musical keyboard can be played and the computer will display the notes on screen as a score.
    The specification organisation is https://midi.org/
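
    To give a feel for what those standardised commands look like on the wire, here is a tiny sketch (illustrative only, with arbitrary note and velocity values) of the three-byte MIDI 1.0 ‘note on’ and ‘note off’ channel messages:

        # MIDI 1.0 channel voice messages are three bytes: a status byte
        # (upper nibble = message type, lower nibble = channel) plus two
        # 7-bit data bytes.
        def note_on(channel, note, velocity):
            """0x9n: start a note (channel 0-15, note and velocity 0-127)."""
            return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

        def note_off(channel, note):
            """0x8n: stop a note."""
            return bytes([0x80 | (channel & 0x0F), note & 0x7F, 0])

        print(note_on(0, 60, 64).hex(" "))   # middle C on channel 1 -> 90 3c 40
        print(note_off(0, 60).hex(" "))      # -> 80 3c 00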

    1. Richard Langley

      From https://en.wikipedia.org/wiki/Fraunhofer_Society:

      “The Fraunhofer Society was founded in Munich on March 26, 1949, by representatives of industry and academia, the government of Bavaria, and the nascent Federal Republic.

      “In 1952, the Federal Ministry for Economic Affairs declared the Fraunhofer Society to be the third part of the non-university German research landscape (alongside the German Research Foundation (DFG) and the Max Planck Institutes).”

      A research institute is not necessarily a university or part of a university. It could, however, be affiliated with a university. A prime requirement for being recognized as a university is that the outfit has students and offers degrees.

  2. Robert Gulley

    Simply a fascinating read. Justin has done exemplary work, meticulous and engaging. The book includes first-hand information from some of the pioneers of the field through his interviews with them. I am proud to call Justin a close friend, but that does not detract in any way from my respect for his excellent history of this engaging topic.

    One cannot help but be impressed by the vast contributions of so many amazing people, whose inventions, experiments, and explorations have made our modern communication capabilities what they are today. I think any radio hobbyist will be amply rewarded for the time spent reading this book.
    Cheers! Robert

  3. mangosman

    More modern history
    https://www.nfsa.gov.au/latest/fairlight-instrument-invented-sampling#:~:text=It%20was%20invented%20by%20two,Ryrie's%20grandmother's%20Point%20Piper%20garage.

    Current research into sound is being driven by the need to reduce data rates when digitising sound for distribution and storage:
    https://www.iis.fraunhofer.de/en/ff/amm/broadcast-streaming/xheaac.html. Download ‘EXTENDED HE-AAC – BRIDGING THE GAP BETWEEN SPEECH AND AUDIO CODING’ and read the last page, ‘About Fraunhofer IIS’, which is a technical university in Germany.

    1. Justin Patrick Moore

      Thanks for this, Mangosman… I didn’t get into the Fairlight, but I do cover the other first digital synthesizer, the Hal Alles machine created at Bell Labs.

      My book covers roughly the 1880s to the 1980s and doesn’t go much past that in terms of sound research.

      Hope you are well!

    2. Justin Patrick Moore

      Hi Mangosman,

      Thanks for your interest. The Fairlight is a neat synth, not one I covered in my book, but I did cover the Hal Alles machine and you might like that.

      I don’t know much about the work being done with speech synthesis beyond the 1980s; my book roughly covers the 1880s to the 1980s. In any case, thanks for the lead.

