
AI-generated music inferior to human-composed works, according to study

Posted on 4 April 2023

Researchers at the University of York have found that current AI-generated music is inferior to human-composed music.

The researchers have produced guidelines for comparative evaluation of machine learning systems

They also showed that there are faults with the algorithms used in AI music generation that could infringe on copyright, and developed guidelines to help others evaluate the systems they are using.

In the study, 50 participants with a high level of musical knowledge were played excerpts of music - some from real human-composed works, and others generated either by deep learning (DL), a type of artificial neural network, or by non-DL algorithms.

The study recruited participants with experience in analysing note content and stylistic success, so that the results were not focused solely on expression in music.

Musical criteria

The listeners were asked to rate the excerpts along six musical criteria: stylistic success, aesthetic pleasure, repetition or self-reference, melody, harmony, and rhythm. They were not told whether the excerpts they were listening to were human-composed or computer-generated.

Co-author, Dr Tom Collins, from the School of Arts and Creative Technologies at the University of York, said: “On analysis, the ratings for human-composed excerpts are significantly higher and stylistically more successful than those for any of the systems responsible for computer-generated excerpts.”
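
To give a sense of the kind of comparison involved, the short Python sketch below compares listener ratings for human-composed and computer-generated excerpts using a standard non-parametric test. It is an illustration only, not the analysis reported in the paper: the rating values, rating scale, system labels, and choice of test are all assumptions.

# Illustration only: comparing hypothetical listener ratings for
# human-composed excerpts against two generative systems.
from scipy.stats import mannwhitneyu

ratings = {
    "human":       [6, 7, 6, 5, 7, 6],   # made-up 1-7 stylistic-success ratings
    "transformer": [4, 3, 5, 4, 3, 4],
    "maia_markov": [5, 4, 5, 4, 5, 4],
}

# Test whether the human-composed excerpts are rated higher than each system.
for system in ("transformer", "maia_markov"):
    stat, p = mannwhitneyu(ratings["human"], ratings[system], alternative="greater")
    print(f"human vs {system}: U={stat:.1f}, p={p:.3f}")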

The study also provided findings that raise concerns about the potential ethical violations of direct copying with deep learning methods. A popular type of DL architecture called transformer (the same type of architecture that underpins OpenAI's ChatGPT) was shown to copy large chunks of training data in its output.
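
One simple way to probe for this kind of copying is to check whether long runs of notes in a generated piece appear verbatim in the training data. The Python sketch below is a hypothetical illustration of that idea, not the detection method used in the study; the n-gram length, the note representation (MIDI pitch numbers), and the example sequences are all assumptions.

# Hypothetical sketch: flag generated passages that reproduce long
# note n-grams found verbatim in the training data.

def ngrams(seq, n):
    return {tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)}

def copied_fraction(generated, training_pieces, n=8):
    """Fraction of length-n note windows in `generated` that also
    appear verbatim in some training piece (illustrative measure)."""
    train_grams = set()
    for piece in training_pieces:
        train_grams |= ngrams(piece, n)
    gen_windows = [tuple(generated[i:i + n]) for i in range(len(generated) - n + 1)]
    if not gen_windows:
        return 0.0
    return sum(w in train_grams for w in gen_windows) / len(gen_windows)

# Toy example with made-up MIDI pitch numbers:
training = [[60, 62, 64, 65, 67, 69, 71, 72, 74, 76]]
generated = [60, 62, 64, 65, 67, 69, 71, 72, 55, 57]
print(copied_fraction(generated, training, n=8))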

Legal and ethical

Dr Collins explained: “If Artist X uses an AI-generated excerpt, the algorithm that generates the excerpt may happen to copy a chunk of a song in the training (input) data by Artist Y. Unwittingly, if Artist X releases their song, they are infringing the copyright of Artist Y. 

“It is a concerning finding and perhaps suggests that organisations who develop the algorithms should be being policed in some way or should be policing themselves. They know there are issues with these algorithms, so the focus should be on rectifying this so that AI-generated content can continue to be produced, but in an ethical and legal way."

The researchers in the study have provided seven guidelines for conducting a comparative evaluation of machine learning systems. The findings could help to improve the development of AI-generated music, address current ethical issues, and avoid future legal dilemmas around copyright infringement.

Further information:

Example of human-composed music: Joseph Haydn's String quartet in F minor op.20 no.5 (Hob.III:35), 1st movement bars 35-48.

Example of music generated by Deep Learning algorithm: Music Transformer (Huang et al., 2018).

Example of music generated by non-Deep Learning algorithm: MAIA Markov algorithm (Collins & Laney, 2017).

Example of music generated by another Deep Learning algorithm: Listen to Transformer, which constitutes a near-exact copy of the well-known piece "Carol of the Bells."


Media enquiries

Caitlin Hazell
Press officer (maternity cover)

Tel: +44 (0)1904 323918

About this research

The research article, called "Deep learning's shallow gains: A comparative evaluation of algorithms for automatic music generation", is published in the journal Machine Learning.

