
Featured in chapter 11 of my book “Intuitive Machine Learning and Explainable AI”, available here.
It is common these days to read stories about the sound of black holes, deep space, or the abyss. But what if you could turn your data into music? There are a few reasons one might want to do this. First, it adds extra dimensions, on top of those displayed in a scatterplot or a video of your data. Each observation in the soundtrack may have its own frequency, duration, and volume. That’s three more dimensions. With stereo sound, that’s six dimensions. Add sound texture, and the possibilities are limitless.
Then, sound may allow the human brain to identify new patterns in your data set that are not noticeable in scatterplots and other visualizations. This is similar to scatterplots letting you see patterns (say, clusters) that tabular data is unable to render, or to data videos letting you see patterns that static visualizations are unable to render. Also, people with vision problems may find sounds more useful than images for interpreting data.
Finally, another purpose of this article is to introduce you to sound processing in Python, and to teach you how to generate sound and music. This basic introduction features some of the fundamental elements: hopefully, enough to get you started if you are interested in exploring this topic further.
From Data Visualizations to Data Videos to Sounds
We are all familiar with static data visualizations. Animated GIFs such as this one bring a new dimension, but they are not new. Data represented as videos is something rather new, discussed in some of my recent articles, here and here. However, I am not aware of any dataset represented as a melody; this article may very well feature the first example.
As in data videos, time is a main component, and the concept is well suited to time series. Here I generated two time series, each with n = 300 observations equally spaced in time. They represent pure, uncorrelated noise: the first one is Gaussian and is mapped to the sound frequencies; the second one is uniform and is mapped to the durations of the musical notes. Each note corresponds to one observation. I used the most standard musical scale and avoided half-tones [Wiki], the black keys on a piano, to produce a pleasant melody. To listen to it, click on the box below. Make sure your speakers are on. You may even play it in your office; it is work-related, after all.
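For reference, the scale in question is based on twelve-tone equal temperament. Here is a minimal sketch of the key-to-frequency formula; the key numbering and the 440 Hz reference note match the conventions used in the code at the end of this article:

```python
# Twelve-tone equal temperament: piano key k maps to 440 * 2**((k - 49) / 12) Hz,
# with key 49 serving as the 440 Hz reference note
def key_to_freq(k):
    return 440 * 2 ** ((k - 49) / 12)

print(round(key_to_freq(49)))  # 440, the reference note
print(round(key_to_freq(61)))  # 880: one octave (12 keys) up, the frequency doubles
```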
Since it represents noise, the melody never repeats itself and has no memory. Yet it seems to exhibit patterns: the patterns of randomness. Random data is actually the most pattern-rich data, since, if large enough, it contains all the patterns that exist. If you plot random points in a square, some will appear clustered, some areas will look sparse, and some points will look aligned. The same is true of random musical notes. This will be the topic of a future article, entitled “The Patterns of Randomness”.
The next step is to create melodies for real-life data sets, exhibiting autocorrelations and other peculiarities. The bivariate time series used here is pictured below: the red curve is the scaled Gaussian noise mapped to the note frequencies in the audio; the blue curve is the scaled uniform noise mapped to the note durations. As for myself, I plan to create melodies for famous functions in number theory (the Riemann zeta function) and blend the sound with the silent videos that I have produced so far, for instance here.
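As an illustration of what such an autocorrelated series could look like, here is one simple way to generate one, using an AR(1) process. The parameter phi = 0.9 is an arbitrary choice of mine for illustration, not something used elsewhere in this article:

```python
import numpy as np

np.random.seed(101)
n, phi = 300, 0.9                 # phi controls the strength of the autocorrelation
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + np.random.normal(0, 1)
# Scale to [0, 1) so each value can be turned into an index in a musical scale,
# the same trick as the 0.999 * M scaling in the code at the end of this article
y = 0.999 * (y - y.min()) / (y.max() - y.min())
```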

References
The musical scale used in my Python code is described in Wikipedia, here. An introduction to sound generation in Python can be found on Stack Overflow, here. For stereo sound in Python, see here. A more comprehensive article, featuring known melodies with all the bells and whistles, can be found here (part 1) and here (part 2); however, I was not able to make that code work. See also here if you are familiar with Python classes.
I think my very short code (see next section) offers the best bang for the buck. In particular, it assumes no music knowledge and does not use any library other than NumPy and SciPy (plus Matplotlib, to plot the data).
Python Code
In a WAV file, sounds are typically recorded as waves. These waves are produced by the get_sine_wave function, one wave per musical note. The base note has a frequency of 440 Hz. Each octave contains 12 notes, including five half-tones; I skipped those to avoid dissonances. The frequencies double from one octave to the next. I only included audible notes that can be rendered by a standard laptop, thus the instruction range(40,65) in the code.
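As a sanity check, reproducing the scale construction shows that keys 40 to 64 span 25 candidate keys, of which 10 are half-tones, leaving 15 notes available to encode the data:

```python
# Rebuild the scale exactly as in the code below and count the notes kept
scale = [440 * 2 ** ((k - 49) / 12)
         for k in range(40, 65)
         if k % 12 not in (0, 2, 5, 7, 10)]
print(len(scale))  # 15 notes: 25 candidate keys minus 10 half-tones
```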
The last line of code turns the wave values into integers and saves the whole melody as sound.wav. Now you can write your own code to listen to your data! Or you can use the code to test large sequences of random notes, to find out whether some short extracts are good and original enough to integrate into your own music. You may also try non-sinusoidal waves, for instance a mixture of waves to emulate harmonic pitches (two or more notes at the same time) and instruments other than piano.
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile

def get_sine_wave(frequency, duration, sample_rate=44100, amplitude=4096):
    t = np.linspace(0, duration, int(sample_rate*duration))
    wave = amplitude*np.sin(2*np.pi*frequency*t)
    return wave

# Create the list of musical notes
scale = []
for k in range(40, 65):
    note = 440*2**((k-49)/12)
    if k%12 != 0 and k%12 != 2 and k%12 != 5 and k%12 != 7 and k%12 != 10:
        scale.append(note)  # add musical note (skip half-tones)
M = len(scale)  # number of musical notes

# Generate the data
n = 300
np.random.seed(101)
x = np.arange(n)
y = np.random.normal(0, 1, size=n)            # Gaussian noise, mapped to frequencies
z = np.random.uniform(0.100, 0.300, size=n)   # uniform noise, mapped to durations (sec)
ymin = min(y)
ymax = max(y)
y = 0.999*M*(y - ymin)/(ymax - ymin)  # scale y to [0, M) to index the scale
plt.plot(x, y, color='red', linewidth=0.6)
plt.plot(x, 15*z, color='blue', linewidth=0.6)
plt.show()

# Turn the data into music
wave = []
for t in x:  # loop over dataset observations, create one note per observation
    note = int(y[t])
    duration = z[t]
    frequency = scale[note]
    new_wave = get_sine_wave(frequency, duration=duration, amplitude=2048)
    wave = np.concatenate((wave, new_wave))
wavfile.write('sound.wav', rate=44100, data=wave.astype(np.int16))
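To follow up on the non-sinusoidal waves mentioned above, here is one possible sketch: a sum of the fundamental and a few harmonics with decaying weights, shaped by an exponential envelope to mimic a plucked or struck instrument. The weights and the decay rate are arbitrary choices for illustration, not measured piano partials; the function is a drop-in replacement for get_sine_wave.

```python
import numpy as np

def get_harmonic_wave(frequency, duration, sample_rate=44100, amplitude=2048):
    # Sum the fundamental and three overtones with decaying weights (arbitrary values)
    weights = [1.0, 0.5, 0.25, 0.125]
    t = np.linspace(0, duration, int(sample_rate * duration))
    wave = np.zeros_like(t)
    for h, w in enumerate(weights, start=1):
        wave += w * np.sin(2 * np.pi * h * frequency * t)
    envelope = np.exp(-3 * t)  # exponential decay, like a struck string
    # Normalize by the total weight so the output stays within [-amplitude, amplitude]
    return amplitude * envelope * wave / sum(weights)

wave = get_harmonic_wave(440, duration=0.25)
```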
To not miss future articles, sign up for our newsletter, here.
About the Author
Vincent Granville is a pioneering data scientist and machine learning expert, co-founder of Data Science Central (acquired by TechTarget in 2020), former VC-funded executive, author and patent owner. Vincent’s past corporate experience includes Visa, Wells Fargo, eBay, NBC, Microsoft, CNET, InfoSpace. Vincent is also a former post-doc at Cambridge University, and the National Institute of Statistical Sciences (NISS).
Vincent has published in the Journal of Number Theory, the Journal of the Royal Statistical Society (Series B), and IEEE Transactions on Pattern Analysis and Machine Intelligence. He is also the author of multiple books, available here. He lives in Washington state, and enjoys doing research on stochastic processes, dynamical systems, experimental math and probabilistic number theory.