Written by Grace Browne
Can we Code the Next Michelangelo?
The field of artificial intelligence has exploded in recent years, and it’s escalating at a pace that many find unnerving. It seems like anything we can do, a machine can do better. This unease is fuelled by fear of the unknown, concern about potential job displacement, and, even more ominous, the possibility of a robot revolt. Elon Musk has labelled AI “our biggest existential threat”.
But some attributes are still considered uniquely human, and one of those is creativity. Creative types are told they can rest easy: their professions, supposedly, are safe from automation. But is that true?
The creative process drives every scientific discovery, every technological innovation, and the production of every piece of art, music or literature ever made. A Pollock painting, a Shakespearean sonnet and a Mozart concerto all arise from the same fundamental process.
AI researchers don’t seem to think creativity is limited only to humans, however.
The amalgamation of AI and creativity is referred to as ‘computational creativity’. The field has struggled to land on a strict definition of its speciality; the term reads almost as an oxymoron, paradoxical in nature. Colton and Wiggins (2012) defined it as the “philosophy, science and engineering of computational systems which, by taking on particular responsibilities, exhibit behaviours that unbiased observers would deem to be creative.”
The company leading the field of computational creativity is IBM. Its first foray into this area was the development of new culinary recipes, pioneered by Chef Watson. Watson then moved into the music world, scoring a writing credit on a hit rock song. Watson also tried its hand at fashion, designing a dress for the brand Marchesa, and partnered with 20th Century Fox to create the first-ever “cognitive movie trailer”. Other companies are joining the playing field: the Sony CSL Research Laboratory created the AI music system Flow Machines, which released an album under the name SKYGGE, and Google’s Magenta programme is in the midst of attempts to employ machine learning to create art and music.
But is this true creativity - or is it simply mimicry?
What is creativity - or can it even be defined?
It’s difficult to study creativity in any scientific capacity. It is, after all, a subjective experience, personal to each individual.
For centuries, attempts to demystify creativity were avoided; it was brushed off as a mysterious and impenetrable human faculty.
However, in recent decades, behavioural and cognitive psychologists have begun to unravel the creative process. In academia, there are two defining characteristics of creativity: “the ability to produce work that is both novel (i.e., original, unexpected) and appropriate (i.e., useful, adaptive concerning task constraints)” (Sternberg & Lubart, 1999). Another way of describing this is as a train of divergent thinking (departing from the norm) followed by a train of convergent thinking (choosing the best/most suitable option). This boils creativity down to two central and successive elements: originality and usefulness.
Defining and breaking creativity down into steps implies that it can be automated. It begins with a generative step, in which ideas are produced, followed by a selective step, in which one of these ideas is selected as the most apt for the situation. The selective step is where machines fail, their so-called Achilles’ heel; they can’t decide by themselves what’s relevant and what’s not.
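The generate-then-select loop described above can be sketched as a toy program. Everything here is illustrative, not any real system: candidates are random word pairs, and the scoring function is a stand-in for the hard part, since “novelty” and “usefulness” are reduced to trivial checks a few lines long. The gap between this toy scorer and genuine human judgement is exactly the selective-step weakness described above.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

WORDS = ["silver", "moon", "river", "glass", "ember", "night", "stone", "light"]

def generate(n_candidates=50):
    """Divergent step: produce many candidate 'ideas' (here, random word pairs)."""
    return [(random.choice(WORDS), random.choice(WORDS)) for _ in range(n_candidates)]

def score(candidate):
    """Convergent step: rate a candidate. A toy proxy for the two academic
    criteria -- novelty (the words differ) and usefulness (here, a made-up
    constraint rewarding alliteration)."""
    a, b = candidate
    novelty = 1.0 if a != b else 0.0
    usefulness = 1.0 if a[0] == b[0] else 0.0
    return novelty + usefulness

def create():
    """Generate many ideas, then select the highest-scoring one."""
    return max(generate(), key=score)

best = create()
print(best)
```

The generative step is easy to mechanise; all the difficulty hides inside `score`, which is precisely the part this article argues machines cannot yet supply for themselves.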
The Lovelace 2.0 test
Before Lovelace, there was the Turing test, devised by English mathematician and computer science pioneer Alan Turing in 1950. For decades, this test was considered the definitive benchmark of machine intelligence. In the test, an artificial agent must convince human judges that they are talking to a human and not a computer. However, the inadequacies of the Turing test have long been cited, and were underlined in 2014, when a chatbot called Eugene Goostman, posing as a 13-year-old Ukrainian boy, was declared to have passed it.
In the aftermath of its defeat, many have suggested the Lovelace test take its place. Named after the world’s first computer programmer, Ada Lovelace, and revised by Georgia Institute of Technology professor Mark Riedl, the test differs in a key way: it isn’t strictly about conversation, but about creation more generally (see all the things Watson’s doing). In the test, the AI must create an artifact of a certain type from a subset of artistic genres (for example, a poem) that conforms to a predetermined criterion (say, about nature). If the creation meets those standards, a human judge must then decide whether the artifact could plausibly have been created by a human. However, like the Turing test, the vagueness and subjectiveness of its criteria have attracted criticism from experts, and the test has yet to be successfully put into practice.
Can creativity be programmed?
Many argue that creativity is a fundamental human expression, one that cannot be coded or simplified into an algorithm. They hold that AI will never evolve past emulation, and that the steps required to generate something completely novel – ideation, inspiration, intuition – are beyond the scope of a machine. More technologically optimistic types think it’s only a matter of time until human creativity can be fully codified and replicated.
Rather than authentically exhibiting creativity, machines may simply streamline the creative process for humans. AI may never autonomously eclipse human talent; instead, it may augment the process rather than automate it. AI systems, at the end of the day, are robots – a term derived from the Czech word robota, which literally translates to forced labour. Without mankind, AI pretty much ceases to function.
However, only time will tell, and AI continues to advance in previously unimaginable leaps and bounds every day. Machines could soon join the ranks of, and even overtake, history’s most famed and renowned creators.
Riedl, M. O. (2014). The Lovelace 2.0 Test of Artificial Creativity and Intelligence. arXiv preprint arXiv:1410.6142.
Colton, S., & Wiggins, G. A. (2012). Computational creativity: The final frontier? In Proceedings of the 20th European Conference on Artificial Intelligence, August 27-31, 2012, Montpellier, France. doi:10.3233/978-1-61499-098-7-21
Sternberg, R. J., & Lubart, T. I. (1999). The concept of creativity: Prospects and paradigms. In R. J. Sternberg (Ed.), Handbook of creativity (pp. 3-15). Cambridge: Cambridge University Press.