
Bartlett School of Architecture, UCL


Quantum Quantz

Musicians and makers are always experimenting with ways to extend their instruments. With additive manufacturing technology, it is now possible to extend the sounding range of a smallpipe without disturbing its original playing style. As a result of this extension, new musical compositions are required for smallpipe performance. Composers have played with algorithmic compositional techniques for centuries. Can machine learning algorithms generate new musical scores for these newly extended instruments? Using MusicVAE, a machine learning model for generating musical scores, Mrs MacLeod of Raasay from the Niel Gow collection is artificially reconstructed. Musical pieces are most often written in a handful of keys, because musicians and composers prefer them for playability and the instrument’s capabilities. Without any of the instrument’s limitations influencing the creation of the score, could the generated compositions challenge the players to play differently? This project explores a range of historically informed musical compositions for a range of newly extended small-pipes.
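A minimal sketch of that reconstruction step is given below, assuming Magenta’s pre-trained 16-bar melody checkpoint (hierdec-mel_16bar) and a hypothetical MIDI transcription of the tune (mrs_macleod.mid); the file names and checkpoint path are placeholders, not the project’s own pipeline.

```python
# Sketch: pass a tune through MusicVAE's latent space and decode it back.
# Assumes the magenta and note-seq packages and a downloaded checkpoint;
# file names and the checkpoint path are placeholders.
import note_seq
from magenta.models.music_vae import configs
from magenta.models.music_vae.trained_model import TrainedModel

CHECKPOINT = 'checkpoints/hierdec-mel_16bar.ckpt'  # placeholder path
config = configs.CONFIG_MAP['hierdec-mel_16bar']
model = TrainedModel(config, batch_size=4,
                     checkpoint_dir_or_path=CHECKPOINT)

# Load a MIDI transcription of the source tune (hypothetical file).
# The melody may need trimming to 16 bars to match the model's data converter.
tune = note_seq.midi_file_to_note_sequence('mrs_macleod.mid')

# Encode into the latent space, then decode back into a 16-bar melody.
z, mu, sigma = model.encode([tune])
reconstruction = model.decode(z, length=256, temperature=0.5)[0]

note_seq.sequence_proto_to_midi_file(reconstruction, 'mrs_macleod_vae.mid')
```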

The first part concerns the instrument itself: while makers extend its range, the tone is another major factor.

The results of acoustic analysis of historical musical instruments depend on a number of significant factors. These include structure, material and playing technique. In the case of reed-driven instruments, the choice of reeds is also hugely influential.

The instrument chosen as a case study is probably the most important set of small-pipes of the Lowland and Border revival: the set known as the Montgomery Scottish small-pipes. These pipes bear an inscription noting the date (1757) and the occasion of the election of Archibald Montgomery, the Earl of Eglinton, as a Lieutenant Colonel. The history of the owner and the instrument is very well documented, and they are the earliest surviving historical Scottish small-pipes we know of. The chanter of the instrument was chosen for the investigation described in this paper. The aim was to assess the factors already mentioned, applying and recording variables in a very controlled way. Additive manufacturing methods were chosen to reduce unintended variables: such instruments are conventionally made by individual makers in small workshops, where absolute consistency of circumstances is virtually impossible.

The data used as the basis of these experiments was found in a small-pipes plan published in 1991 in Common Stock, the international journal of the Lowland and Border Pipers’ Society. The instrument had been examined and documented by the respected maker Julian Goodacre. He has already used this plan to produce a number of successful copies now in use by many professional musicians in different contexts, which demonstrates the reliability of the data.

 

The author converted the 2D hand drawings into engineering 3D models in CAD software, Autodesk Fusion 360. The models were sliced in different CAM programs: Magics 24, Cura and PrusaSlicer. The resulting G-code files were loaded onto a portable digital data storage device. The final physical objects were fabricated from these G-code files using the additive manufacturing methods SLS (selective laser sintering) and FDM (fused deposition modelling, also known as fused filament fabrication).
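Before slicing, the exported models can also be sanity-checked programmatically. The sketch below uses the open-source trimesh library with a placeholder STL file name (chanter.stl); it is an illustrative check, not part of the project’s Magics/Cura/PrusaSlicer workflow.

```python
# Sketch: basic geometry checks on an exported STL before it goes to a slicer.
# 'chanter.stl' is a placeholder for a model exported from Fusion 360.
import trimesh

mesh = trimesh.load_mesh('chanter.stl')

# A watertight (closed, manifold) mesh is needed for reliable slicing.
print('watertight:', mesh.is_watertight)

# Overall dimensions in the STL's units (assumed mm), useful for confirming
# that the extended chanter still fits the printer's build volume.
print('extents (mm):', mesh.extents)
print('volume (mm^3):', mesh.volume)
```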

Invitations to participate in the survey were posted on specialised Facebook groups for those interested in these instruments, whose members include many specialist players and makers.

The Google Form contained requests for both objective and subjective responses. The objective information came from single-choice questions about preferences. The subjective information was based on observers describing the sound in their own vocabulary.

Once participants’ responses were collated and analysed, a further important task began: correlating the responses with the acoustic analyses produced in MATLAB and Adobe Audition.
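A minimal sketch of that correlation step in Python, as an analogue of the MATLAB/Adobe Audition workflow: the file names, the survey column names and the choice of spectral centroid as the acoustic measure are all assumptions for illustration.

```python
# Sketch: relate listener preference scores to a simple acoustic measure.
# survey.csv (columns 'sample', 'preference_votes') and the WAV file names
# are hypothetical; the project's own analysis used MATLAB and Adobe Audition.
import librosa
import pandas as pd
from scipy.stats import pearsonr

survey = pd.read_csv('survey.csv')

centroids = []
for sample_id in survey['sample']:
    y, sr = librosa.load(f'recordings/{sample_id}.wav', sr=None)
    # Spectral centroid as a rough 'brightness' proxy for the tone.
    centroids.append(librosa.feature.spectral_centroid(y=y, sr=sr).mean())

survey['centroid_hz'] = centroids
r, p = pearsonr(survey['centroid_hz'], survey['preference_votes'])
print(f'Pearson r = {r:.2f}, p = {p:.3f}')
```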

Subjective feedback from the observers about their preferences.


Results from different drone stock shapes

The second part is about composing new content for the extended instrument.

Traditional dance tunes usually have more than two parts. When the variations come in, the parts are arranged like the following:

Due to the structure of the algorithm, the machine learning result would look like this:

So the tune is manually re-arranged to make it logical to a musician or listener.
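A minimal sketch of that re-arrangement, assuming the generated output is a note-seq NoteSequence in 4/4 at 120 BPM with four-bar phrases; the phrase length, tempo and the target A A B B order are illustrative, not the project’s exact procedure.

```python
# Sketch: cut a generated NoteSequence into phrases and re-order them into
# a conventional A A B B dance-tune layout. Tempo, phrase length and the
# target order are assumptions for illustration.
import note_seq
from note_seq import sequences_lib

generated = note_seq.midi_file_to_note_sequence('generated.mid')  # placeholder

BARS_PER_PHRASE = 4
SECONDS_PER_BAR = 2.0               # 4/4 at 120 BPM
phrase_len = BARS_PER_PHRASE * SECONDS_PER_BAR

# Split the generated material into consecutive phrases: A, B, ...
phrases = [
    sequences_lib.extract_subsequence(generated, start, start + phrase_len)
    for start in (0.0, phrase_len)
]

# Re-order into the familiar A A B B structure.
order = [0, 0, 1, 1]
rearranged = sequences_lib.concatenate_sequences([phrases[i] for i in order])

note_seq.sequence_proto_to_midi_file(rearranged, 'rearranged.mid')
```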

The directions of exploration:

1. Temperature (the randomness of the variation); see the sketch after this list

2. Bias (sample bias due to the training data)

3. Rhythm (the time signature is fixed at 4/4 in the algorithm)
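A minimal sketch of the temperature direction, sampling the same MusicVAE model at several temperatures; the checkpoint name and output paths are placeholders.

```python
# Sketch: sample MusicVAE at increasing temperatures to hear how much
# randomness enters the generated variations. Checkpoint path is a placeholder.
import note_seq
from magenta.models.music_vae import configs
from magenta.models.music_vae.trained_model import TrainedModel

config = configs.CONFIG_MAP['hierdec-mel_16bar']
model = TrainedModel(config, batch_size=4,
                     checkpoint_dir_or_path='checkpoints/hierdec-mel_16bar.ckpt')

for temperature in (0.2, 0.5, 1.0, 1.5):
    # Higher temperature = flatter output distribution = wilder variations.
    sample = model.sample(n=1, length=256, temperature=temperature)[0]
    note_seq.sequence_proto_to_midi_file(
        sample, f'samples/sample_t{temperature}.mid')
```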

Machine learning bias is introduced by the designer, through the algorithm or the training sample. Composers and musicians have their preferred keys and phrases, whether for ease of performance or for the mood they want to express. Whilst machine learning music is not embodied, and the performance is therefore unfamiliar to the player, what ML music can bring is a path to investigate the possibility of extending the music without a pre-built mindset.

Analysis by data scientist Kenny Ning of the most popular keys on Spotify, collected from 30 million tracks.