Sound Synthesis
Tom R. Halfhill, Features Editor
Synthesized computer music is a recent development, but inventors have been working on "synthesizers" for decades. Today's home computers and microchips are now starting to open a new world of music and sound for everyone.
Hal Chamberlin, a leading authority on computerized music, remembers the days when adventuresome programmers used transistor radios and even line printers to squeeze music from their early computers.
"People used to tune a little AM radio to an open frequency and hold it next to their spacebars and listen to the sound of [program] loops," recalls Chamberlin, vice president of engineering for Micro Technology Unlimited in Raleigh, North Carolina.
The method worked because pulses flowing through the computer's logic circuits would emit radio frequencies which "leaked" from the computer into the radio's receiver. The programmers on these early IBMs – fiddling around when the boss wasn't looking – soon learned they could play different notes and tones by writing little machine language programs with carefully timed loops.
"They even used to make music by ‘playing’ the printer," says Chamberlin. "They found out they could control the little hammers in the printhead with a machine language program. So they wrote programs to fire the printhead hammers in a certain pattern to create rhythms.
"Of course," he adds, "it wasn't so great on the printheads."
Such experiments seem crude, even quaint, in this day of computerized music synthesis and home computers with built-in, multiple-voice sound synthesizers on a chip. But these early efforts illustrate that today's "modern" sound devices are really the result of years of research, inventing, and just plain fooling around.
In fact, people have been working on sound synthesizers since the 19th century. And although today's computerized synthesizers seem incredibly advanced in comparison, the leading experimenters in the field believe electronic music is only starting to make itself heard.
Telharmoniums, Theremins, And Rhythmicons
The first music "synthesizer" was built between 1896 and 1906 by American inventor Thaddeus Cahill. He called it a "Telharmonium." The Telharmonium is to modern synthesizers what ENIAC is to modern computers. It weighed more than 200 tons, and moving it to New York from Cahill's lab required several railroad flatcars.
Since the Telharmonium was a pre-electronic instrument, it functioned by means of electric drive motors, pulleys, belts, and gears. Yet it was similar in basic concept to today's synthesizers. It was polyphonic (as opposed to monophonic), meaning it could play more than one note at a time and thus create chords. It was equipped with a standard music keyboard, but the controls were so complicated that it took two people to play the thing.
The loudspeakers worked mechanically, and the machinery required to generate enough current to drive the speakers was so noisy that part of the Telharmonium had to be housed separately from the listening room. Unfortunately, after ten years of Cahill's work, the Telharmonium was a commercial failure.
For one thing, it was obsolete soon after it was finished. The diode tube was invented in 1904, followed by the triode in 1906, which made electronic amplifiers possible. It wasn't long before tube-powered electronic instruments began appearing.
The most successful of these was an instrument invented between 1920 and 1924 by Leon Theremin, originally called an "Etherophone" or "Thereminovox" but now known simply as a "Theremin." This odd instrument was played without being touched – the musician passed his or her hands through the air near two antennas which controlled the pitch and volume. To say the least, this made a Theremin very hard to play, since there were no pre-defined notes like the keys on a piano or the frets on a guitar. Still, Theremins became popular in the late 1920s.
Leon Theremin invented another electronic instrument in 1931 – the "Rhythmicon," the first electronic rhythm instrument. The Rhythmicon was quite sophisticated with features which have appeared on rhythm synthesizers only recently.
The most popular electronic instrument of the past half-century was invented in 1935 by Laurens Hammond – the Hammond organ, still widely used.
But although these devices were electronic instruments, music historians trace the origin of electronic music back to Paris in 1947-48. Acoustical engineer Pierre Schaeffer and composer Pierre Henry began experimenting with new sounds by using electronic filtering, speed changes on tape recorders, and other manipulation tricks done in studios. Their technique became known as musique concrète, and was quickly picked up by tinkerers elsewhere. By 1952, the first concert of electronic music was sponsored by Columbia University at the New York Museum of Modern Art.
The problem with these techniques was that it took many hours of tedious tape splicing and other tricks to produce only a few brief minutes of sound. And musicians couldn't even hear the results until they were done. That's why there was a lot of interest during the late '50s and early '60s in instruments which could produce electronic music directly. Even the old Theremins from the '20s – updated with transistors – were resurrected.
Toward A New Form Of Music
Robert A. Moog – whose name is virtually synonymous with sound synthesis – was selling kits for transistorized Theremins in the early '60s when he was inspired to invent his own electronic instrument. The result was the Moog Synthesizer, first built in the summer of 1964.
Although recognized by electronic musicians as an important development, the Moog Synthesizer was practically unknown to the general public until a few years later, when it was featured on a record album entitled Switched-On Bach. The album was a collection of Bach compositions performed entirely on a Moog Synthesizer by musician Walter Carlos. Almost instantly, Switched-On Bach catapulted up the charts like a pop record, and became the biggest-selling classical record of all time. It was especially popular with teen-agers, who astounded their parents by playing electronic Bach along with their Beatles and Rolling Stones records.
However, a few classical music devotees, stunned by the album's popularity, dismissed the electronic interpretations as "artificial." Some critics, although they are decreasing in number, argue that music which is synthesized by purely electronic means is somehow artificial or unnatural when compared to conventional instruments.
Today, Moog counters these arguments with: "The fact is, you don't find musical instruments in nature. The only 'natural' musical instrument is a human voice. The fact that a synthesizer produces its music by electronic means doesn't mean it's 'artificial' in any sense. It's no more artificial than taking a bunch of wood and gluing it together into a box and stretching some strings over it to produce sounds."
Electronic musicians, of course, never had any doubt that their instruments deserved equal billing with violins and woodwinds. In fact, years ago they recognized synthesizers as a rare historical opportunity to open a new world in music. Although synthesizers are often used to mimic "conventional" instruments, the most exciting electronic music takes advantage of the synthesizer's power to create totally new sounds. This provides the possibility of entirely new forms of music.
For example, would rock 'n' roll have happened without electric guitars? The invention of an instrument with a totally fresh sound spurred the rise of a whole new genre of music, and for an entire generation, rock has become the dominant musical style. Synthesizers are now used in virtually every form of music, but even Moog isn't sure whether they will "liberate" themselves and spark a new form which could replace rock. "Musicians are moving in so many different directions these days that it's hard to say if a new musical form will emerge."
It may be too early yet for the birth of a dominant musical form based on synthesizers, since the instruments themselves are changing so rapidly. Not only are they advancing technologically almost day by day, but the rising use of microchips is just beginning to make them affordable for everyone. To return to the rock 'n' roll analogy, it would have been difficult for the teen-age groups of the '50s and '60s to arise if electric guitars had cost thousands of dollars. Or if radical new advances were constantly rendering three-year-old guitars obsolete.
Synthesizers, on the other hand, are still passing through important phases in their development. Moog foresees a trend away from analog sound synthesis to digital, or at least to digitally controlled analog instruments. "There's so much more you can do with digital sound synthesis, especially in small keyboard instruments like the little Casios or Yamahas you can buy very inexpensively."
About a year ago, Moog set up a new company – Big Briar, Inc. – and relocated to a small town in rural North Carolina to work on such developments. Among his frequently used tools, he says, is an Apple II microcomputer. Mindful of the baffling array of controls on modern synthesizers, he's experimenting with new types of control devices aimed at making synthesizers easier to play. But he warns that the complex instruments will never be a cinch.
"Musical instruments will never be easy to play," he says. "If it's too easy to play, most musicians would say it's not a musical instrument, because you usually can't do much with an instrument that's ‘easy to play.’ It has too many limitations."
Still, as synthesizers get easier to manage and less expensive, they become more accessible to the average musician – and thereby more widely heard and appreciated by music listeners. Pop music historians may recognize this trend as the same sort of breeding ground for rock created by 45 rpm records in the '50s.
SID: Synthesizer On A Chip
One important way in which synthesizers are becoming accessible is through home computers. Virtually every new model introduced in recent years has featured more sophisticated sound capabilities.
Unfortunately, up to now, the sound capabilities have attracted less attention than the often more glamorous feature: graphics.
"Well, in terms of the human senses, sound inherently takes a backseat to sight," notes Frank Covitz, a New Jersey research scientist whose sideline is computer music. "Sight is the more important sense, so computer graphics naturally gets more attention."
For instance, very little has been written on the Atari computer's sound capabilities, although the built-in four-voice sound chip has represented the state-of-the-art in home computers for the last couple of years. Almost all the attention has been focused on the Atari's graphics. Perhaps this will change now that a computer with even more advanced sound has appeared on the market – the Commodore 64 with its SID (Sound Interface Device) chip.
The new SID chip is generating lots of interest among computer music enthusiasts. It may well be a herald of the sound capabilities of tomorrow's home computers. "I think machines of that class [home computers such as the 64] in the future will be expected to have sound chips, just as they are expected to have the BASIC built into them now," says Chamberlin, the MTU engineer. "For one thing, the sound chips are relatively cheap in large quantities, so there's no real reason not to."
SID is a hybrid digital/analog device with programmable attack, decay, sustain, and release for each of its three voices, a master volume control, a choice of four waveforms, 16-bit frequency resolution over a nine-octave range, and programmable low-pass, band-pass, high-pass, and notch filters.
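That 16-bit frequency resolution means a pitch is set by writing a pair of register bytes rather than by name. A small sketch of the conversion, assuming the SID's published formula (output frequency = register value × clock ÷ 2^24) and the NTSC Commodore 64 clock; the PAL clock would give slightly different values:

```python
# Sketch: convert a pitch in Hz to SID's 16-bit frequency register value.
# The NTSC C64 clock (1,022,727 Hz) is assumed here; PAL machines run at
# 985,248 Hz and would need slightly different values.

NTSC_CLOCK = 1_022_727  # Hz

def sid_freq_register(pitch_hz: float, clock: int = NTSC_CLOCK) -> int:
    """Return the 16-bit value for a voice's FREQ LO/HI register pair."""
    fn = round(pitch_hz * 16_777_216 / clock)  # 16777216 = 2^24
    if not 0 <= fn <= 0xFFFF:
        raise ValueError("pitch out of SID's 16-bit range")
    return fn

def lo_hi_bytes(fn: int) -> tuple[int, int]:
    """Split a 16-bit register value into its low and high bytes."""
    return fn & 0xFF, (fn >> 8) & 0xFF
```

Concert A at 440 Hz, for instance, works out to a register value of roughly 7218 on an NTSC machine.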
"The SID chip is basically a synthesizer on a chip," says its designer, Bob Yannes. "I played with synthesizers for years, so I'm quite familiar with them. I tried to put it all on a chip with the SID chip."
Yannes designed SID while an engineer for MOS Technology, which is owned by Commodore. He recently left Commodore to form his own company, Peripheral Visions, Inc. Although he won't say for sure what new products his company will introduce, it seems likely that computerized sound devices will be among them. He says chips such as SID are the key.
"There's no reason we can't take music systems being sold now for $4000 and bring them out for consumers for around $400 or $500 – a ten to one cost reduction. I consider the [Commodore] 64 to be only the first step. In the future I'd like to see something totally digital. I think that's the way to go.... I pretty much got the features that I wanted out of the SID chip in the 64, but not the performance I wanted. But now that I've done it once, I think I have a better idea about how to go about it next time."
Yannes says Commodore's specifications called only for a "sound chip"; it was his own decision to make it as much like a synthesizer as possible. But he had to work within the limits of marketing considerations. For example, although SID allows each voice's envelope to be individually programmed, all three voices share the same volume control.
"I had to put separate envelope controls for each oscillator [voice] into the SID chip in order to satisfy the video game/sound effects marketing demands. If I had my way, the three oscillators would work in unison to create one voice. Anyway, that's why there're separate envelope controls for each oscillator but only one peak amplitude [volume] control – it was designed to function as one voice. You could vary the attack of the different oscillators, for example, to get a brassy sort of sound that way."
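The "one voice" idea Yannes describes, several oscillators running in near-unison and summed into a single fat sound, can be sketched in a few lines. This is an illustration of the technique, not of SID's internals; the sample rate and detune amounts are arbitrary:

```python
import math

# Sketch: several slightly detuned sine oscillators summed into one voice,
# the unison/"brassy" effect Yannes describes. Sample rate and detune
# offsets are illustrative assumptions, not SID-specific values.

SAMPLE_RATE = 8000  # Hz, arbitrary for this sketch

def unison_voice(pitch_hz, detune_hz=(0.0, 1.5, -1.5), n_samples=400):
    """Sum slightly detuned oscillators into one normalized voice."""
    samples = []
    for i in range(n_samples):
        t = i / SAMPLE_RATE
        s = sum(math.sin(2 * math.pi * (pitch_hz + d) * t) for d in detune_hz)
        samples.append(s / len(detune_hz))  # keep output roughly in [-1, 1]
    return samples
```

The slow beating between the detuned oscillators is what thickens the tone; varying each oscillator's attack, as Yannes suggests, would add the brassy bite.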
But Yannes bestowed SID with yet another feature to compensate for this limitation – an input line. It's possible to feed an outside sound source into a computer equipped with SID, process it through the chip's filters and volume controls, and output it as a "fourth voice" in accompaniment with SID's regular three voices. In the case of the Commodore 64, for instance, the outside source would be routed through the RF modulator to the TV speaker – or to a stereo system.
What kind of outside sources can be fed into SID? "You name it," says Yannes. "Tape recorders, radios, electric guitars, even another SID chip."
Note that last item: another SID chip. "One thing I thought you might be able to do is chain a bunch of SID chips together to get even better sound, without having to use external hardware," explains Yannes. "I designed the SID chip as a standard 6502 peripheral chip with all the proper bus signals. You could put some SID chips in a cartridge and plug it into the 64, or the VIC-20, or the Atari – any 6502, 6809, or even 68000 system, even the Radio Shack Color Computer. It only requires 32 address locations, and the chips are pretty cheap, so there's not much to stop you."
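Since each SID claims only 32 address locations, a bank of chained chips maps naturally into memory at 32-byte strides. A sketch of that arithmetic, taking the stock Commodore 64 base address of $D400 as a starting point (where a cartridge full of extra chips would actually appear is an assumption):

```python
# Sketch: addressing a bank of chained SID chips at 32-byte strides,
# as Yannes suggests. $D400 is the SID's base address in a stock
# Commodore 64; the placement of additional chips is hypothetical.

SID_BASE = 0xD400
SID_STRIDE = 32        # each SID occupies 32 address locations
SID_REGISTERS = 29     # only 29 of the 32 locations are real registers

def sid_register_address(chip: int, register: int) -> int:
    """Address of a given register on the nth chained SID chip."""
    if not 0 <= register < SID_REGISTERS:
        raise ValueError("SID has only 29 registers")
    return SID_BASE + chip * SID_STRIDE + register
```

Eight chips in this scheme span just 256 bytes of address space – a trivially small footprint, which is Yannes's point.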
It's an exciting prospect, but Commodore controls the SID chip, not Yannes. And for now, Commodore needs virtually all the SID chips it can make to meet demand for the new 64, plus the upcoming Max Machine, P Series, B Series, and BX Series computers soon to hit the market.
Still, a few SID chips have reached private hands, and the results are fulfilling their creator's hopes.
The Synthesizers Of Tomorrow
Chamberlin, the MTU engineer, got four SID chips from his friend, Yannes. Chamberlin used them to make a prototype sound board for the MTU-130, a high-end personal computer for which he designed most of the circuitry. He then passed the board and SID chips along to another friend, Frank Covitz, the New Jersey research scientist. Covitz added four more SID chips to the board, for a total of 24 individually programmable voices. The board is plugged into an MTU-130 equipped with an organ keyboard which, in turn, is controlled by its own 6502 microprocessor.
The instrument made its first public appearance recently when Covitz's son, Philip, gave a performance at the Personal Computer in the Arts Festival in Philadelphia.
Ironically, Covitz says he didn't play his own invention at the festival because he's not a good enough musician. But he's working on software which will not only exploit the instrument's souped-up capabilities, but also make it playable by mediocre musicians. This is called non-realtime playing.
Musical instruments are usually played in what's known as realtime: the music is heard instantaneously as the musician plays the instrument. When an instrument is played in non-realtime, the keying of notes is a separate event from the playing of the music. Notes are entered (the computer instrument is programmed), and then played back (the program is run).
An example of this on home computers is the Atari Music Composer cartridge. Essentially, it does for music composition what word processing does for writing. Notes are entered on the computer keyboard and plotted on staffs drawn on the screen. The notes, measures, and phrases can be edited and arranged at will, then played back at the touch of a key. Similar composition programs are available for other personal computers.
"One of the things that computers can do is change music from a physical endeavor to a programming endeavor," says Chamberlin. "That's one of the reasons why I got into computer music – my total lack of dexterity. Even if you're a total butterfingers like me, you can experiment with computer music."
Covitz is striving to push the concept even further. He's added four special keys to his prototype board: Record, Play, Fast Forward, and Rewind. But don't mistake it for a conventional tape recorder – the keys are similar in function, but not in method.
When the Record button is pressed, the computer will "remember" whatever music is played. But no recording tape is involved. Instead, each keypress on the organ keyboard generates information coded in four bytes: which key was pressed, the velocity (how hard the note was played), and the exact moment the key was pressed, accurate to a split-second. Another four bytes of information are generated when the key is released, for a total of eight bytes per note. All this information is stored in memory so the music can be reconstructed later.
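The article gives the budget – four bytes per key press, four per release – but not the byte layout, so the packing below is an assumption: one byte each for key number and velocity, plus a 16-bit tick count for the timestamp.

```python
import struct

# Sketch of the 4-bytes-per-event recording scheme described above.
# The layout (key byte, velocity byte, 16-bit timestamp in ticks) is an
# assumption; the article specifies only the 4-byte size per event.

EVENT = struct.Struct("<BBH")  # key, velocity, ticks

def encode_event(key: int, velocity: int, ticks: int) -> bytes:
    return EVENT.pack(key, velocity, ticks)

def decode_event(data: bytes) -> tuple[int, int, int]:
    return EVENT.unpack(data)

def encode_note(key, velocity, on_ticks, off_ticks):
    """A full note: a 4-byte press plus a 4-byte release = 8 bytes."""
    return encode_event(key, velocity, on_ticks) + encode_event(key, 0, off_ticks)
```

At eight bytes per note, even a modest home computer's memory holds thousands of notes – which is what makes the tapeless "memory recorder" practical.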
After a musical part is "recorded," the Play button can be used to play it back – in accompaniment with a matching musical part played by the musician on the organ keyboard. And this duet, in turn, can be "recorded" in memory by a second unit. Using just two of these "memory recorders," the process can be repeated again and again, layering sound upon sound.
While the same thing can be done with conventional tape recorders, the sound would deteriorate with each generation of re-recording. Tape hiss and other defects would soon overcome the music. But since Covitz's instrument "records" the sound digitally, there is no degradation whatsoever. Beyond that, the music can be "edited." If a note is missed, the musician can correct it by rewriting the correct values into memory.
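Because a "recording" here is just a list of numbers, overdubbing is merely merging two event lists by timestamp, and fixing a wrong note is rewriting one stored value. A sketch of both operations, using an assumed (ticks, key, velocity) event shape:

```python
# Sketch of a digital "overdub": each recorded part is a list of
# (ticks, key, velocity) events. Merging copies numbers, so nothing
# degrades from generation to generation, and a missed note can be
# corrected by rewriting its event in place.

def overdub(part_a, part_b):
    """Merge two recorded parts into one, ordered by time."""
    return sorted(part_a + part_b, key=lambda ev: ev[0])

def fix_note(part, index, new_key):
    """Edit a recorded note by rewriting its stored key value."""
    ticks, _, velocity = part[index]
    part[index] = (ticks, new_key, velocity)
    return part
```

Repeating the merge layers part upon part with no equivalent of tape hiss – the contrast with analog re-recording that the article draws.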
"This is what I see as the ultimate system," says Covitz. "Right now, this software doesn't exist anywhere except in my mind. I'm in the process of working on this now, and it's all being done in machine language."
The brief history of home computing – and indeed, home computing itself – indicates that advanced technology eventually works its way down to the personal level. It's not hard to envision the day when plug-in organ keyboards and cartridges with add-on synthesizer chips will transform home computers into the kind of instruments Covitz is experimenting with now.
"Seeing what the SID chip can do, and do digitally, I expect you'll see an explosion of that sort of thing," says Covitz. "There has to be. It doesn't require very much hardware. There definitely will be an explosion in complexity."