Artificial Intelligence (AI) is already embedded in the digital products and services we use every day, steadily improving the way we shop, travel, bank, learn and care for ourselves. But this is just the beginning.
AI is poised for massive societal and economic impact, prompting serious questions about what role we actually want AI to play in our lives. How can machines enhance rather than replace what humans do? Where should the human end and the machine begin? What are the limits?
These questions are now roiling the world of art, music and culture – and this trend was reflected in the recent 3rd annual AI for Good Global Summit, which for the first time assembled a range of performance artists to explore and demonstrate how AI can enhance their work.
Indeed, culture was a guiding undercurrent of this year’s Summit, informing discussions on a broad array of topics – from robotics to storytelling and even cybersecurity.
Summit participants were inspired by groundbreaking artists who showcased their explorations into how human and machine can come together to push the boundaries of human creativity.
During a Summit cultural event, inventor and BBC Click Presenter LJ Rich showed that AI has great potential for music composition.
Rich had the audience listen to samples of AI-created music, and then demonstrated her ability to improvise on the keyboard. The difference was stark.
“So how do we make machines understand music?” she asked. “The building blocks of melody are repeatable sounds, which are combined in different ways. Music is data, and it evolves just like AI.”
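Rich's point that music can be treated as data is easy to illustrate. As a minimal sketch (a toy illustration, not her actual method or tooling), a melody can be encoded as a sequence of note/duration pairs and its building blocks recombined programmatically:

```python
import random

# A melody encoded as data: (note, duration) pairs.
# This phrase is invented for illustration, not taken from the performance.
melody = [("C4", 1), ("E4", 1), ("G4", 2), ("E4", 1), ("C4", 1)]

def vary(phrase, seed=0):
    """Recombine the phrase's repeatable building blocks into a new phrase.

    Keeps the rhythmic skeleton (durations) and reshuffles the pitches,
    mirroring the idea that melody is repeatable sounds combined in
    different ways.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    notes = [note for note, _ in phrase]
    durations = [dur for _, dur in phrase]
    rng.shuffle(notes)
    return list(zip(notes, durations))

new_phrase = vary(melody)
print(new_phrase)
```

Generative music systems build on the same premise at far greater scale, learning which combinations of building blocks tend to follow one another rather than shuffling at random.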
After Rich’s performance, award-winning vocal and visual artist Harry Yeff, also known as Reeps One, demonstrated his use of AI to create a digital sculpture of his voice. Yeff also showed how this AI-powered 3D image looks entirely different for different voices – a unique fingerprint of our individuality.
Champion beatboxer Yeff then showed how he beatboxes to an AI simulation of his voice that responds in near real-time to his input, and an AI-generated 3D model that dances to his sounds. It was an impressive display of human and machine collaborating to create new artistic expression.
One of the goals of working with AI, said Yeff, is to gain fresh inspiration and new material that he as a human artist could take to the next level.
“The artist’s responsibility is the narrative,” Yeff explained. “The stories around these new ideas are what leads to engagement, to what leads to embracing.”
Also during the Summit, Jojo Mayer, musician & educator, and drummer of Nerve, gave a performance and talk that addressed the question of the role of human drummers in the digital age.
As drum machines began to replace and outperform drummers, listeners embraced synthesized drumming, Mayer explained. Drummers had to adapt or get left behind.
By testing his own limits, Mayer realized that he cannot play like a machine. “But with this defeat came an important artistic breakthrough,” he said. “I understood that it was not necessary for me to actually play like a machine if I could create the illusion that I do.”
For Mayer, our human ability to improvise just might be the one thing that keeps us ahead of the machines.
“What makes us [humans] so special is that we believe that we are able to think about ourselves, which generates a choice from within,” said Christian “Mio” Loclair, a new media artist, computer scientist, choreographer and Creative Director at the Berlin art studio Waltz Binaire, during his performance. “And sometimes when this choice deviates from an expected choice, then we consider this to be ‘creative’.”
In his talk, Loclair described his artistic installation called ‘Narciss’ where he asked an AI-powered machine to describe the world around it – except that the machine was placed in front of a mirror, so it could only see and analyze itself.
“Maybe you always look at yourself,” he said, “and yet you don’t fully understand what you see. We wanted to see whether machines wonder who they are, just like us.” The ‘Narciss’ machine continually explores itself through its lens, making and recording thousands of guesses of its own identity.
At one point during the Summit, Dr. S. Ama Wray, Associate Professor of Dance at the University of California, Irvine, demonstrated her creation, Embodiology – an African approach to improvised performance.
Wray’s talk was art in motion, her movements accompanying her words in perfect and precise synchronisation.
“A fractal is a design that comes from the word Seselelame, of the West African Ewe language,” said Wray. “This means knowing the world through the body. The mind and the body are one.”
Wray invited the audience to stand and clap along with a rhythm she led.
“This is all mathematics,” she said.
“The AI world needs rhythm,” Wray said, in a metaphor for the Summit and its aims to connect all relevant stakeholders in the building of AI. “In order for this network to get it right, you’ve got to work together.”