HAI 2022
Robotic Arm Generative Painting Through Real-time Analysis of Music Performance
Richard Savery
MCCALL, Macquarie University, Macquarie Park, NSW, Australia
Anna Savery
University of Technology Sydney, NSW, Australia
Justin Baird
Tesseract.art, Singapore
ABSTRACT
This paper describes a prototype audio-visual performance by a Ufactory Uarm Swift and a live musician. In this setting, the robotic arm acted as an AI agent, creating a visual representation of a musical work in real time. An A4 white canvas was gradually filled with a mixture of black, blue, red, and yellow paints over approximately eight minutes. The musician, performing on an acoustic violin fitted with a custom-built audio interface, played multiple versions of an improvisatory work developed specifically for the prototype performance. The following sections discuss our technical approach to programming and implementing the Ufactory Uarm Swift as a painting arm, offer reflections on the musical process, and propose future directions for this project.
CCS CONCEPTS
• Computer systems organization → Robotic control; • Human-centered computing → Collaborative interaction; • Applied computing → Sound and music computing.
KEYWORDS
music, painting, artwork, audio, sound, human-robot interaction
HAI ’22, December 5–8, 2022, Christchurch, New Zealand. ACM, New York, NY, USA, 3 pages.
https://doi.org/10.1145/3527188.3563913
1 INTRODUCTION
The use of robotics for creative tasks has been explored in many domains, such as musical instrument performance [10], rapping robots [9], dance [6], and painting [5]. These projects can focus on a range of ideas, from robot painting for art therapy [3] to collaboration with a robotic agent for Chinese landscape paintings [2]. In this work we propose a new area for creative robotics, focusing on cross-domain collaboration between a painting robot and a musician.
Figure 1: Robot Painting
The vast majority of research into painting and robotics focuses on industrial applications, which have been explored since the early 1970s [4], primarily in factory settings with spray guns. Nevertheless, multiple robots have been developed to paint creative work, such as painting from existing images [7] or observational portraits [11]. Far less work has been done that incorporates robotic generation of the artwork itself, and to our knowledge none that incorporates live music into a new artwork generated in real time. The links between music and painting have been broadly explored in the past [1]; however, there is a lack of defining research in the area. One existing intersection between music and painting is the Music Paint Machine, which translates a musician’s movements onto a digital canvas [8].
In this paper we describe a new system for painting by an AI agent in real time, based on the input of a collaborating musician, and describe the first prototype performance. Our focus is on collaborative creation, where the final artwork is directly inspired by the musician’s performance. This poses considerable challenges, particularly because music is a temporal art form moving through time while painting is a fixed medium, creating a tension between short-term art style and the overall generation of an artwork. A demo video is available online.
2 IMPLEMENTATION
To develop our painting robot we used a Ufactory Uarm Swift (see Figure 1). The Uarm includes a Python SDK that allows for relatively simple serial control. The painting canvas has preset coordinates, locked into place through the use of a 3D-printed frame. In this way the robot has a set knowledge of the x and y boundaries of the canvas, as well as the position of each paint brush. To paint each gesture the robot moves into position and then modifies the z position of the arm. For the prototype version described in this paper we used red, blue, yellow, and black paints, with the robot able to switch between colours at any time. Additionally, the robot automatically applies more paint to the brush after three brush strokes.
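This control scheme can be sketched as follows. The sketch below is purely illustrative: the canvas bounds, brush-pot coordinates, and z heights are our assumed values, not the calibrated coordinates of the prototype, and the commented `SwiftAPI` call shows only the general shape of the SDK usage.

```python
from dataclasses import dataclass

# Illustrative geometry (mm), not the prototype's calibrated values.
CANVAS_X = (150.0, 250.0)       # reachable x range over the A4 canvas
CANVAS_Y = (-70.0, 70.0)        # reachable y range
Z_TRAVEL, Z_PAINT = 60.0, 18.0  # safe travel height vs. brush-on-canvas height

# Assumed fixed positions of the four paint pots beside the canvas.
BRUSH_POTS = {"black": (120.0, -90.0), "blue": (120.0, -60.0),
              "red": (120.0, 90.0), "yellow": (120.0, 60.0)}

@dataclass
class StrokePlanner:
    strokes_since_dip: int = 0

    def clamp(self, x, y):
        """Keep a target point inside the preset canvas boundaries."""
        x = min(max(x, CANVAS_X[0]), CANVAS_X[1])
        y = min(max(y, CANVAS_Y[0]), CANVAS_Y[1])
        return x, y

    def plan_stroke(self, start, end, colour):
        """Return (x, y, z) waypoints: optionally re-dip the brush,
        then travel, lower, drag across the canvas, and lift."""
        waypoints = []
        # Re-apply paint every three strokes, as in the prototype.
        if self.strokes_since_dip >= 3:
            px, py = BRUSH_POTS[colour]
            waypoints += [(px, py, Z_TRAVEL), (px, py, Z_PAINT), (px, py, Z_TRAVEL)]
            self.strokes_since_dip = 0
        x0, y0 = self.clamp(*start)
        x1, y1 = self.clamp(*end)
        waypoints += [(x0, y0, Z_TRAVEL), (x0, y0, Z_PAINT),
                      (x1, y1, Z_PAINT), (x1, y1, Z_TRAVEL)]
        self.strokes_since_dip += 1
        return waypoints

# Each waypoint would then be sent over serial via the Uarm Python SDK,
# along the lines of:
#   swift = SwiftAPI()
#   swift.set_position(x, y, z, speed=10000, wait=True)
```

Separating stroke planning from serial transmission in this way also makes the gesture logic testable without the arm attached.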
The system is driven by the audio from the violin, with strokes matching the beginnings and endings of phrases. The audio first goes into Max/MSP, which analyzes the pitches, the onsets and offsets (to give note lengths), and the density of notes per five-second interval. These values are then used both to choose the next stroke and to decide when to start a new stroke. The stroke itself is generated in a Python script that sends it to the Uarm through serial communication. Figure 2 shows an overview of the system. In our prototype recording, a custom-built audio interface was also used to expand the sonic possibilities of the solo acoustic violin. The audio processing was structured to complement the physical gestures of the robotic arm and to emphasise the use of different colours and the layering of strokes on the canvas. A dual harmonizer created harmonic expansion, whilst fragmented live sampling added a type of accompaniment to the solo playing, bringing out the playfulness and variety of the multitude of different paint strokes used by the robotic painting arm.
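The paper does not specify the exact feature-to-stroke mapping, but a decision of this general shape can be sketched as follows. Every threshold here, the pitch-to-position rule, and the phrase-gap heuristic are our assumptions for illustration only.

```python
# Hypothetical mapping from the Max/MSP analysis (pitch as a MIDI note
# number, note length in seconds, note density per five-second window)
# to stroke parameters on an A4 canvas. All constants are illustrative.

def choose_stroke(pitch, note_length, density, canvas_w=210.0, canvas_h=297.0):
    """Map one analyzed note onto a stroke position and length (mm)."""
    # Higher pitches paint nearer the top of the canvas.
    y = (max(36, min(pitch, 96)) - 36) / 60.0 * canvas_h
    # Denser passages paint nearer the right edge.
    x = min(density, 20) / 20.0 * canvas_w
    # Longer notes become longer strokes, capped at two seconds.
    length = 10.0 + 40.0 * min(note_length, 2.0) / 2.0
    return {"x": x, "y": y, "length": length}

def should_start_stroke(onset, prev_offset, gap_threshold=0.25):
    """Treat a silence longer than gap_threshold seconds between the
    previous offset and this onset as a phrase boundary, i.e. the cue
    to begin a new stroke."""
    return (onset - prev_offset) > gap_threshold
```

In the actual system these decisions are driven by values streamed from Max/MSP into the Python script, which then converts the chosen stroke into serial commands for the arm.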
3 MUSICAL DIRECTION
The composition for the prototype was primarily built around a structured improvisation. There were a number of self-imposed constraints designed to best allow for the creation of an artwork, coupled with a large degree of creative freedom. In order for the robotic arm to fill the canvas in a meaningful way, incorporating all four colours available to it, the piece had to be approximately eight minutes in length. A small collection of distinct sections was developed to create a logical compositional and narrative arc: starting with solo violin, then adding some audio processing to signify a change from black to blue paint, then building layers of different textures to correspond to the layering of different colours on the canvas, and so forth. This loose scaffolding created a basis for experimentation and improvisation. Having specific anchor points throughout the piece provided a balance between structure, control, and improvisatory freedom.
From the perspective of the authors, the overall experience felt like a group improvisation with three distinct performers: the robotic painting arm with the canvas as its instrument, the solo violinist improviser, and the audio processing. Performing with two inanimate partners proved both surprisingly rewarding and somewhat challenging. After playing through multiple iterations of the work, the timing of the performer’s gestures in response to the robotic arm’s movements became more natural. Initially, the deliberate synchronizations between the robotic arm and the violinist appeared somewhat forced, but after a period of extensive playing, both the music and gestures came together into one cohesive, vibrant entity.
Receiving live visual feedback in the form of a painting affected the harmonic and rhythmic palette of the improvisation. The initial long, reflective strokes and sounds allowed for a slow build-up of rhythmic and harmonic complexity. On the other hand, the use of bright primary colours on the canvas, with their vivid mixes, led to the choice of consonant harmonies and syncopated double-time rhythmic passages: the robot thus inspired changes in the music, even as the painting itself was generated by the music.
4 DISCUSSION
In this paper we presented a first iteration of a robotic agent able to paint new artwork based on input from a human violinist. There are many possible expansions of this work; currently the robot does not consider timbre variations, the inclusion of which would offer a wide range of additional control to the musician. For this performance, the restrictions of the Uarm in degrees of freedom, speed, and control allowed us to focus on certain areas and not become overwhelmed with possibilities. In the future, a higher-quality arm with better gripper control would allow many more stroke types and subtle variations in the application of the brush itself, which were missing from the current version. We believe the relationship between music, long-term form, and painting over time also requires further research. Ultimately, we believe this work shows significant future potential for cross-modal interaction between AI agents and human collaborators, and opportunities to incorporate creative robots into new styles of creative work.
ACKNOWLEDGMENTS
This research is supported by an Australian Government Research Training Program Scholarship.
REFERENCES
[1] Theodor W Adorno and Susan Gillespie. 1995. On some relationships between music and painting. The Musical Quarterly 79, 1 (1995), 66–79.
[2] Rong Chang and Yiyuan Huang. 2021. Towards AI Aesthetics: Human-AI Collaboration in Creating Chinese Landscape Painting. In International Conference on Human-Computer Interaction. Springer, 213–224.
[3] Martin Daniel Cooney and Maria Luiza Recena Menezes. 2018. Design for an art therapy robot: An explorative review of the theoretical foundations for engaging in emotional and creative painting with a robot. Multimodal Technologies and Interaction 2, 3 (2018), 52.
[4] JRV Sai Kiran and S Prabhu. 2020. Robot Nano Spray Painting - A Review. In IOP Conference Series: Materials Science and Engineering, Vol. 912. IOP Publishing, 032044.
[5] Shunsuke Kudoh, Koichi Ogawara, Miti Ruchanurucks, and Katsushi Ikeuchi. 2009. Painting robot with multi-fingered hands and stereo vision. Robotics and Autonomous Systems 57, 3 (2009), 279–288.
[6] Amy Laviers and Cat Maguire. 2022. The BESST System: Explicating a New Component of Time in Laban/Bartenieff Movement Studies Through Work With Robots. In Proceedings of the 8th International Conference on Movement and Computing. 1–3.
[7] Thomas Lindemeier, Jens Metzner, Lena Pollak, and Oliver Deussen. 2015. Hardware-Based Non-Photorealistic Rendering Using a Painting Robot. In Computer Graphics Forum, Vol. 34. Wiley Online Library, 311–323.
[8] Luc Nijs. 2018. Dalcroze meets technology: integrating music, movement and visuals with the Music Paint Machine. Music Education Research 20, 2 (2018), 163–183.
[9] Richard Savery, Lisa Zahray, and Gil Weinberg. 2020. Shimon the Rapper: A Real-Time System for Human-Robot Interactive Rap Battles. In International Conference on Computational Creativity, ICCC20.
[10] Richard Savery, Lisa Zahray, and Gil Weinberg. 2021. Before, Between, and After: Enriching Robot Communication Surrounding Collaborative Creative Activities. Frontiers in Robotics and AI 8 (2021), 116.
[11] Patrick Tresset and Frederic Fol Leymarie. 2013. Portrait drawing by Paul the robot. Computers & Graphics 37, 5 (2013), 348–363.