The different tonality of a note on different instruments stems from the different mixes of amplitudes in the harmonic frequencies that each instrument produces. To be concrete, and keeping to a slightly simplified view: when you play the A note at 440 Hz, you also excite the harmonic frequencies 880 Hz, 1320 Hz, 1760 Hz, and so on. Each of these frequencies has an amplitude, or "volume", contributing to the sound produced by the instrument.
Thus a particular set of amplitudes gives the instrument its tonality. This is closely related to the concept of Fourier analysis used in many areas of physics: it is not just a pure single frequency of sound that an instrument transmits. Just as with light, if you ask for the frequency of the sun's emission, the answer is a whole broad spectrum (hence its ability to produce a rainbow, or to let objects reflect colours other than yellow), though its peak is at yellow. You can ask for the distribution of colours, or light frequencies, that it transmits, and you will get a plot of intensity versus light frequency; this is the Fourier transform of the actual amplitude of the light waves travelling from the sun. You will see something similar for a musical note.
If you look at the Fourier transform of middle C played on a piano string (approximately 262 Hz), you will see a plot with a series of hills and valleys: the tallest hill peaking at about 262 Hz, the second tallest at about 523 Hz, the third at about 785 Hz, and so on. Note that these are integer multiples of the note itself. The hills also have some width to them, meaning that frequencies outside the peaks are represented in the note as well.
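The idea can be sketched numerically. The snippet below is a toy illustration (synthetic, not a real piano recording): it builds one second of a middle-C-like note from four harmonics with made-up amplitudes, then recovers those harmonics with a Fourier transform.

```python
import numpy as np

fs = 44100                        # sample rate, Hz
f0 = 262.0                        # fundamental; middle C is ~261.63 Hz
t = np.arange(fs) / fs            # one second of samples
amps = [1.0, 0.5, 0.3, 0.2]       # hypothetical relative amplitudes
note = sum(a * np.sin(2 * np.pi * (k + 1) * f0 * t)
           for k, a in enumerate(amps))

spectrum = np.abs(np.fft.rfft(note))
freqs = np.fft.rfftfreq(len(note), d=1 / fs)

# The four tallest peaks sit at integer multiples of the fundamental.
peaks = freqs[np.argsort(spectrum)[-4:]]
print(sorted(int(p) for p in peaks))   # [262, 524, 786, 1048]
```

Changing the list of amplitudes changes the "tonality" of the synthesized note while leaving the peak positions, and hence the perceived pitch, unchanged.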
It's the shape of those hills, as well as the ratio of each peak to the following integer-multiple peaks, that determines the character of the sound.
However, a contrabass six times as large as a violin would be over 3 m tall. So, in the construction of the new family of violins, the body sizes and string lengths were scaled to fit human proportions, and the resonance scales (RSs) and pulse rates (PRs) required for the lower registers were obtained by adjusting the thickness of the body plates and the mass and tension of the strings (Benade). Similarly, the dimensions of the f-holes were adjusted to attain the required air-resonance frequencies.
So for the string family, the law of similarity is actually a law of similarity of shape; the spatial scale factors are smaller than would be required by a strict law of similarity; they have to be augmented by mass and thickness scaling to produce the formant ratios characteristic of the string family in an instrument with a large RS. The law of similarity also applies to brass instruments, and with similar constraints. Luce and Clark analyzed acoustic spectra from a variety of brass instruments and showed that the spectral envelopes of the trumpet, trombone, open French horn and tuba were essentially scaled versions of one another, and Fletcher and Rossing report that the size of the cup scales roughly with the size of the instrument.
However, the instrument makers adjust the shape of the bell beyond what would be indicated by strict spatial scaling to produce a series of harmonic resonances and to improve tone quality. So the notes of brass instruments would be expected to differ mainly in PR and RS as dictated by the law of similarity, with differences in bell shape having a smaller effect. In summary, scaling the spatial dimensions of an instrument would shift the frequencies of the resonances in a way that would preserve formant frequency ratios and produce a family of instruments with the same timbre in a range of registers.
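The scaling argument above can be made concrete with a small numerical sketch. The formant values below are invented for illustration, not measured instrument data; the point is only that dividing all resonance frequencies by a common spatial scale factor leaves the formant ratios, and hence the family timbre, intact.

```python
# Hypothetical formant peaks (Hz), for illustration only.
violin_formants = [300.0, 700.0, 1300.0]

def scale_resonances(formants, s):
    # Under a strict law of similarity, scaling every spatial
    # dimension by s divides every resonance frequency by s.
    return [f / s for f in formants]

cello_like = scale_resonances(violin_formants, 3.0)

def ratios(formants):
    # Formant frequencies relative to the lowest formant.
    return [round(f / formants[0], 2) for f in formants]

print(ratios(violin_formants))   # [1.0, 2.33, 4.33]
print(ratios(cello_like))        # identical ratios: family timbre preserved
```

In practice, as the text explains, instrument makers achieve part of this frequency division by adjusting plate thickness and string mass rather than by pure spatial scaling, but the target, preserved formant ratios at a lower RS, is the same.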
For practical reasons, instrument makers achieve the desired RS for the extreme members of a family with a combination of spatial dimension scaling and scaling of other properties like mass and thickness. Thus, if listeners were asked to estimate the spatial size of instruments from sounds scaled by STRAIGHT, we might expect, given their experience with natural instruments, that they would produce estimates that are less extreme than the resonance scaling would produce if it were entirely achieved by increasing the spatial dimensions of the instrument.
This means that the experiments in this paper are strictly speaking about the perception of acoustic scale in musical instruments. However, listeners do not have a distinct concept of scale separate from size, and they associate changes in acoustic scale with changes in spatial size, and so the experiments are about source size in the sense that people experience it. We will draw attention to the distinction between acoustic scale and size at points where it is important.
The purpose of this experiment was to determine the just-noticeable difference (JND) for a change in the resonance scale of an instrument over a large range of PRs and RSs. The experiment is limited to relative judgments about RS, and so the distinction between acoustic scale and source size does not arise; there is a one-to-one mapping between acoustic scale and source size in this experiment.
The musical notes for the experiments were taken from an extensive, high-fidelity database of musical sounds from 50 instruments recorded by Real World Computing (RWC; Goto et al.). This database provided individual sustained notes for four families of instruments (strings, woodwind, brass and voice) and for several members within each family. We chose these instrument families partly because they produce sustained notes, and so there is little to distinguish the instruments in their temporal envelopes.
In the database, individual notes were played at semitone intervals over the entire range of the instrument. For the stringed instruments, the full range of notes was recorded for each string. The notes were also recorded at three sound levels (forte, mezzo, piano); the current experiments used the mezzo level. The first experiment focused on the baritone member of each instrument family: for the string family, the cello; for the woodwind family, the tenor saxophone; for the brass family, the French horn; and for the human voice, the baritone.
Each note was extracted with its initial onset and a total duration of ms.
The onset of the recorded instrument was included to preserve the dynamic timbre cues of the instrument. A cosine-squared amplitude function was applied over the final 50 ms of the waveform to avoid offset clicks. STRAIGHT, the package used to manipulate the notes, is actually a sophisticated speech-processing package designed to dissect and analyze an utterance at the level of individual glottal cycles.
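The 50 ms cosine-squared offset ramp described above can be sketched as follows; here random noise stands in for a recorded note, and the sample rate is an assumption.

```python
import numpy as np

fs = 44100                         # assumed sample rate
n_ramp = int(fs * 50 / 1000)       # 50 ms of samples

# cos^2 ramp falling smoothly from 1 to 0
ramp = np.cos(np.linspace(0, np.pi / 2, n_ramp)) ** 2

note = np.random.randn(fs)         # stand-in for a recorded note
note[-n_ramp:] *= ramp             # fade the final 50 ms to silence
```

Because the ramp starts at exactly 1 and falls to (numerically) 0 with zero slope at both ends, the waveform is unchanged before the fade and reaches silence without a discontinuity, which is what prevents the audible click.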
It segregates the glottal-pulse rate from the spectral-envelope information (vocal-tract shape and vocal-tract length), and stores them separately, so that the utterance can later be resynthesized with arbitrary shifts in glottal-pulse rate and vocal-tract length. Utterances recorded from a man can be transformed to sound like a woman or a child. The advantage of STRAIGHT is that the spectral envelope of the speech that carries the vocal-tract information is smoothed as it is extracted, to remove both the harmonic structure associated with the original glottal-pulse rate and the harmonic structure associated with the frame rate of the Fourier analysis window.
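The segregation of pulse rate from resonance can be illustrated with a toy source-filter model. This is not the STRAIGHT algorithm, just a sketch of the underlying idea: a pulse source whose rate can be changed while the resonance stays fixed. All frequencies and bandwidths below are invented for the illustration.

```python
import numpy as np

fs = 16000

def pulse_train(pr_hz, dur=0.5):
    # The "source": one pulse per glottal cycle at the pulse rate (PR).
    src = np.zeros(int(fs * dur))
    src[::int(fs / pr_hz)] = 1.0
    return src

def resonator(x, f_res=500.0, bw=100.0):
    # The "filter": a single damped resonance standing in for the
    # vocal tract or instrument body (second-order IIR).
    r = np.exp(-np.pi * bw / fs)
    theta = 2 * np.pi * f_res / fs
    a1, a2 = -2 * r * np.cos(theta), r * r
    y = np.zeros_like(x)
    for i in range(len(x)):
        y[i] = x[i] - a1 * (y[i - 1] if i > 0 else 0.0) \
                    - a2 * (y[i - 2] if i > 1 else 0.0)
    return y

low = resonator(pulse_train(100.0))    # PR = 100 Hz
high = resonator(pulse_train(200.0))   # PR doubled, resonance unchanged
```

Both notes share the same 500 Hz resonance, and so the same spectral envelope; only the harmonic spacing (the PR) differs. That independence is exactly what lets STRAIGHT resynthesize an utterance with an arbitrary shift in one property without touching the other.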
For speech, the resynthesized utterances are of extremely high quality, even when the speech is resynthesized with PRs and vocal-tract lengths beyond the normal range of human speech. Assmann and Katz compared recognition performance for vowels vocoded by STRAIGHT with performance for natural vowels and for vowels from a cascade formant synthesizer. Audio examples are available on our website to demonstrate the naturalness of the vocoded notes. The experiment was performed with short melodies instead of single notes, to preclude listeners performing the task on the basis of a shift in a single spectral peak.
The notes shown in this table indicate the octave and key of the tonal melodies presented to the listeners. The stimuli were presented over headphones at a level of approximately 60 dB SPL to listeners seated in a sound-attenuated booth. The abscissa is pulse rate in musical notation; the ordinate is the factor by which the resonance scale was modified. The arrows show the direction in which the JNDs were measured. Demonstrations of the five standard sounds are presented on our website for the four instruments in the experiment (cello, saxophone, French horn and baritone voice).
The spectral and temporal profiles of the images provide separate summaries of the PR information and the RS information. Figure 5(b) shows the auditory image for the corresponding French horn note. The auditory image is constructed from the sound in four stages. First, a loudness contour is applied to the input signal to simulate the transfer function from the sound field to the oval window of the cochlea (Glasberg and Moore). Then a spectral analysis is performed with a dynamic, compressive gammachirp auditory filterbank (Irino and Patterson) to simulate the filtering properties of the basilar partition.
Then each of the filtered waves is converted into a neural activity pattern (NAP) that simulates the aggregate firing of all of the primary auditory nerve fibres associated with that region of the basilar membrane (Patterson). Finally, strobed temporal integration converts the NAP in each channel into a time-interval histogram; the array of time-interval histograms, one for each channel of the filterbank, is the auditory image; see Patterson et al. The auditory image is similar to an autocorrelogram (Meddis and Hewitt), but strobed temporal integration involves far less computation, and it preserves the temporal asymmetry of pulse-resonance sounds, which autocorrelation does not (Patterson and Irino). Auditory images of the sustained portion of the original note are shown for the baritone voice (left panel) and the French horn (right panel).
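The four processing stages above can be sketched in a highly simplified form. Real implementations use a frequency-dependent loudness contour and a dynamic compressive gammachirp filterbank; in this sketch the loudness stage is a pass-through and each "auditory filter" is just a second-order resonator, which is enough to show the structure of the computation. All centre frequencies, thresholds and window lengths are arbitrary choices for the illustration.

```python
import numpy as np

fs = 16000

def outer_middle_ear(x):
    # Stage 1 (placeholder): a real model applies a frequency-dependent
    # loudness contour here; we pass the signal through unchanged.
    return x

def filterbank(x, centre_freqs):
    # Stage 2 (crude stand-in for the gammachirp filterbank):
    # one second-order resonator per channel.
    out = []
    for fc in centre_freqs:
        r = np.exp(-np.pi * (fc / 8) / fs)        # bandwidth grows with fc
        a1, a2 = -2 * r * np.cos(2 * np.pi * fc / fs), r * r
        y = np.zeros_like(x)
        y[0] = x[0]
        y[1] = x[1] - a1 * y[0]
        for i in range(2, len(x)):
            y[i] = x[i] - a1 * y[i - 1] - a2 * y[i - 2]
        out.append(y)
    return np.array(out)

def neural_activity(bm):
    # Stage 3: half-wave rectification plus compression approximates
    # the conversion to a neural activity pattern (NAP).
    return np.maximum(bm, 0.0) ** 0.5

def strobed_integration(nap, window=400):
    # Stage 4: in each channel, strobe on the largest peaks and average
    # the NAP segments following each strobe into a time-interval histogram.
    image = np.zeros((nap.shape[0], window))
    for ch, row in enumerate(nap):
        strobes = np.where(row > 0.9 * row.max())[0]
        strobes = strobes[strobes < len(row) - window]
        if len(strobes):
            for s in strobes:
                image[ch] += row[s:s + window]
            image[ch] /= len(strobes)
    return image

# A 100 Hz pulse train as input: the source of a pulse-resonance sound.
src = np.zeros(int(0.2 * fs))
src[::160] = 1.0
cfs = [200, 400, 800, 1600]                       # arbitrary channel centres
image = strobed_integration(neural_activity(filterbank(outer_middle_ear(src), cfs)))
spectral_profile = image.mean(axis=1)             # activity per channel
temporal_profile = image.mean(axis=0)             # summary time-interval profile
```

The two profiles at the end correspond to the profiles described in the text: averaging across time intervals gives the tonotopic (spectral) profile that carries the RS information, and averaging across channels gives the time-interval profile that carries the PR information.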
The four panels show how the auditory image changes when the pulse rate and resonance scale are changed to the combinations presented by the outer four points in the figure. The profile to the right of each auditory image is the average activity across time intervals; it simulates the tonotopic distribution of activity in the cochlea or the auditory nerve, and it is similar to an excitation pattern.
The peaks in the spectral profile of the voice show the formants of the vowel.
The profile below each auditory image shows the activity averaged across channels, and it is like a summary autocorrelogram (Yost et al.). Comparison of the time-interval profiles for the two auditory images shows that they have the same PR, and thus the same temporal pitch, G2. Comparison of the spectral profiles shows that the voice is characterized by three distinct peaks, or formants, whereas the horn is characterized by one broad region of activity. Comparison with the auditory image of the original baritone note shows the effect of changing PR: the panels in the right-hand column show the images when the PR has been increased by an octave; the main ridge and the main peak now occur at 5 rather than 10 ms.
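The pitch cue described here, the main off-zero peak of a time-interval profile moving from 10 ms to 5 ms when the PR rises an octave, can be reproduced with plain autocorrelation (a related but computationally heavier relative of strobed temporal integration). The note synthesis below is a minimal pulse-resonance sound with invented parameters, not one of the experimental stimuli.

```python
import numpy as np

fs = 16000

def pulse_note(pr_hz, dur=0.3):
    # A pulse train at rate pr_hz convolved with a damped 600 Hz ring:
    # a minimal pulse-resonance sound (parameters arbitrary).
    t = np.arange(int(fs * dur)) / fs
    src = np.zeros(len(t))
    src[::int(fs / pr_hz)] = 1.0
    ir = np.exp(-200 * t[:200]) * np.sin(2 * np.pi * 600 * t[:200])
    return np.convolve(src, ir)[:len(t)]

def period_samples(x, min_lag=20):
    # The largest off-zero autocorrelation peak sits at the pulse period.
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    return int(np.argmax(ac[min_lag:]) + min_lag)

note100 = pulse_note(100.0)
note200 = pulse_note(200.0)
print(period_samples(note100) * 1000 / fs)   # ~10 ms
print(period_samples(note200) * 1000 / fs)   # ~5 ms
```

Doubling the pulse rate halves the position of the main time-interval peak, while the 600 Hz resonance, the RS information, is untouched.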
Comparison of the time-interval profile for the original French horn note with that of the octave-shifted note shows the same change. Together the figures illustrate that the pitch of pulse-resonance sounds is represented by the position of the main vertical ridge of activity in the auditory image itself, and by the main peak in the time-interval profile. In the lower row, the RS has been increased, with the result that the pattern of activity in the image, and the spectral profile, move down in frequency; the second formant, for example, shifts down accordingly.
Comparison of the original French horn note with its rescaled version shows the same downward shift. Moreover, a detailed examination shows that the patterns move by the same amount for the two instruments.
Together the figures illustrate that the RS information provided by the body resonances is represented by the vertical position of the pattern in the auditory image. The auditory images and spectral profiles of the baritone voice notes make the same point. That is, the notes in each column of each figure have the same pitch, so if the notes were equated for loudness, then the remaining perceptual differences would be timbre differences, according to the usual definition.
There are two components to the timbre in the current example: instrument family, which distinguishes the voice notes from the horn notes, and instrument size, which distinguishes the note in the upper row from the note in the bottom row in each case.