AI Creativity: AI Can Create with Flair
Philosophy of art meets generative AI
The possibility of AI Creativity is hotly contested. With the proliferation of AI slop and generic ChatGPT text everywhere, we might be more convinced than ever that AI cannot be truly creative. However, when we break down the components of creativity, this might not be so certain. In a recent paper, I argue that AI can meet at least one of these conditions: flair. Here I will outline the core of that argument.
There are two ‘standard’ components of many definitions of creativity: novelty (or originality) and value. One philosopher, Berys Gaut, adds a third condition to this standard account: what he calls “flair”. We might guess that flair relates to something like “style”. Instead, Gaut defines flair through the exclusion of four cases that the standard account of creativity (novelty and value) cannot rule out:
Accidents: spilling a pot of paint so that it lands on a canvas in a new and gorgeous design is not creative
Mechanical search: searching exhaustively through options to find something new and valuable is not creative
Rule-following: following rules, as in a paint-by-numbers, is not creative, even if we’re the first to do it and the result is attractive
No evaluation: very young children, or even animals, that paint and paint and never know when to stop are not creative, even if (when we take the picture away) they have made something new and valuable
Gaut proposes that flair will serve to exclude these cases. We can summarize Gaut’s account of flair as requiring the following:
a relevant purpose (not accidental, or by pure luck);
some degree of understanding or skill (not merely using mechanical search procedures);
a degree of judgement (in how to apply a rule if a rule is involved);
an evaluative ability directed to the task at hand.
Flair is a very human-focussed condition of creativity. This is unsurprising, since human creativity is the paradigm case of creativity. But because of this anthropocentrism, we might ask whether flair is a barrier to creativity in machines. I argue that AI can meet the requirements of flair. Let’s take them one by one.
1: A relevant purpose (not accidental, or by pure luck)
Whilst AI systems like Generative Adversarial Networks (GANs) do include some randomness (or, strictly, pseudo-randomness), they are also based on machine learning. As the system is trained, it learns the distribution of the training data, and this learned distribution shapes the system’s output. The random input built into the system still influences the final image, but once a system has been trained on a dataset, it is no longer producing images through pure luck: the output is a product of some chance and (as I argue below) some skill.
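To make the division of labour between chance and training concrete, here is a minimal sketch of a GAN-style generator (the architecture and layer sizes are illustrative assumptions, not any particular published model). The random latent vector supplies the chance; the trained weights supply everything the system has learned about the training distribution.

```python
import torch
import torch.nn as nn

# Illustrative generator; the sizes here are arbitrary assumptions.
generator = nn.Sequential(
    nn.Linear(100, 256),     # 100-dimensional latent vector in
    nn.ReLU(),
    nn.Linear(256, 28 * 28),
    nn.Tanh(),               # pixel values scaled to [-1, 1]
)

# The 'chance' component: a pseudo-random latent vector z.
z = torch.randn(1, 100)

# The 'skill' component: the weights. Untrained, this yields noise;
# after training, the same z is mapped through the learned
# distribution of the training data to produce an image.
image = generator(z).reshape(28, 28)
```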
2: With understanding or skill (not by mechanical search)
One of the key cases Gaut wishes to exclude from creativity is the mechanical search procedure. This may present a problem for some kinds of AI system. A classic case which Gaut discusses here is IBM’s Deep Blue chess computer: the computer beat Garry Kasparov, but did so by brute-force search, evaluating vast numbers of possible moves and choosing the best. Gaut’s concern is that this does not constitute creativity because it lacks flair. However (as he was writing in 2003), AI has moved on since then. Even AlphaGo, the “Deep Blue” of the 2010s, does not search through all the options, as that would simply be too computationally demanding. Still, I expect some would not consider AlphaGo to be creative, due to its (modified) search procedure, despite its ‘God move’, move 37.
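For a concrete sense of what ‘mechanical search’ means, here is a toy sketch of the kind of procedure Gaut excludes (the options and the scoring function are invented purely for illustration): enumerate every candidate, score each with a fixed evaluation function, and keep the best. No judgement is exercised anywhere.

```python
from itertools import permutations

def evaluate(sequence):
    # A fixed, hand-written scoring rule; no judgement involved.
    return sum(i * v for i, v in enumerate(sequence))

# Every possible ordering of the options: exhaustive, mechanical search.
candidates = permutations([3, 1, 4, 1, 5])
best = max(candidates, key=evaluate)
print(best)  # the highest-scoring option, found by pure enumeration
```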
Instead, let’s consider whether we could say AI systems can be skilled. Berys Gaut again offers an account of skills, on which a capacity is a skill if:
The capacity is special (i.e. it is not universally shared).
It is a kind of accomplishment.
It can be practised.
It can be learnt.
i) Not universally shared: The ability to generate images (for example) is not universally shared by algorithms. The ability to generate good images (whether judged as images that could pass for members of the training set, as in GANs and CANs, or judged by humans) is also not universal. Only some systems, and (perhaps more pertinently) only some iterations of systems, can produce images judged to be good.
ii) Accomplishment: We might not necessarily consider an AI system itself to be accomplished, but this could be down to our unwillingness to praise a machine. The outputs of many AI systems are, after all, praised widely, as we have seen in the news, online, and in exhibitions and auctions. If we are able to attribute responsibility for the work to the AI (which will require at least some level of autonomy), then we may consider the AI to be accomplished.
iii) Practice: I will take practice to mean the repeated exercise of an ability. Many AI systems do indeed repeatedly exercise their abilities, either continually generating (and, for GANs and CANs, assessing and improving upon) their images, or (in the case of DALL-E) generating multiple images in response to prompts. We might also require that the repetition results in improvement. In the case of adversarial networks, these repeated exercises do result in improvement.
iv) Something that one learns: It seems easy to take at face value that machine learning involves learning. One could object that deep learning systems like GANs or diffusion models do not learn to make images, since they are programmed to make images; thus, they do not learn the skill of ‘image-making’. We could respond, however, that these systems learn to make images of a certain kind or quality. This is not something they are programmed to do; it is something that must be learned from the training data through an iterative process.
3: A degree of judgement in how to apply a rule (not mere rule-following)
If our AI system is to have some level of autonomy, this should ensure that it is not merely following rules. ‘Not following specific rules’, for an AI, could be interpreted as not only following predetermined rules: that is, not only following rules assigned to it from an external source. Some might object that an AI system, being made up of algorithms, is composed solely of rules and is thus, by its very nature, unable to exercise judgement in applying them. However, it is not clear that current AI can be reduced to rule-following in this way. There is a key distinction to be drawn between ‘GOFAI’ (Good Old-Fashioned AI) and contemporary AI systems made up of deep neural networks. GOFAI systems are programmed with explicit rules in mind. In contrast, AI built on neural networks is trained, and it is this training which determines the system’s outputs, not preprogrammed rules. As others have argued, this distinction is significant for considering whether AI systems can be said to be following rules. Due to the ‘black box’ problem, developers may not even be able to predict the system’s outputs, and they have not ‘programmed’ those outputs into the system: there is a separation between the designer and the AI. If the system has a level of autonomy, there will be behaviours that are not determined by a pre-existing set of rules. They may, however, be determined by the AI itself through deep learning and self-evaluation. In that case, we could claim that judgement in the application of any rule has been involved in the system’s self-alteration of its own parameters.
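The GOFAI/training distinction can be made vivid with a toy sketch (all numbers and names below are invented for illustration). In the first function, the rule is written down by the designer; in the second, a comparable ‘rule’ emerges from data that the designer never wrote down explicitly.

```python
import numpy as np

# GOFAI-style: the behaviour is an explicit rule the designer wrote.
def gofai_label(x):
    return "high" if x > 0.5 else "low"   # 0.5 is a hand-coded threshold

# Learning-style: the behaviour is fixed by parameters fitted to data.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)          # toy training inputs
y = (x > 0.62).astype(float)        # labels; 0.62 is hidden in the data

w, b = 0.0, 0.0                     # no rule is hand-coded here
for _ in range(2000):
    pred = 1 / (1 + np.exp(-(w * x + b)))   # logistic model
    w -= 5.0 * np.mean((pred - y) * x)      # gradient descent steps
    b -= 5.0 * np.mean(pred - y)

# The learned decision boundary approximates 0.62: a 'rule' the
# designer never programmed, extracted from the training data.
print(-b / w)
```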
Under this condition, we don’t need it to be the case that no rules are involved at all (in fact, many human creative processes involve some rule-following). We merely need to ensure that the system is not only following rules determined by others. As long as the system determines its own rules for creating works (and does not merely follow those predetermined by a human designer, for example), this condition of flair should be satisfied.
4: An evaluative ability directed to the task at hand
In the case of some AI systems (notably GANs), evaluation is central to their architecture. GANs are generative deep learning algorithms made up of two neural networks: a generator and a discriminator. The discriminator is trained on a set of training images and learns to distinguish images drawn from that training set from those produced by the generator. There is thus an explicit evaluative process involved in generating images: the discriminator evaluates the outputs of the generator, scoring each against its success criterion (does this look like a training image?), and that score is then used to improve future iterations of image generation. Consequently, we could certainly say that there is an evaluative ability at play in (at least some) AI systems.
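As a minimal sketch of this evaluative loop (the architecture, sizes, and stand-in ‘dataset’ below are illustrative assumptions, not any particular published model): the discriminator scores the generator’s outputs, and those scores drive the updates that improve future generations.

```python
import torch
import torch.nn as nn

# Illustrative networks; sizes are arbitrary assumptions.
generator = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 28 * 28), nn.Tanh())
discriminator = nn.Sequential(
    nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 1))

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_images = torch.rand(32, 28 * 28) * 2 - 1   # stand-in for a real dataset

for step in range(100):
    # Discriminator: learn to score training images high, fakes low.
    fake_images = generator(torch.randn(32, 16))
    d_loss = (loss_fn(discriminator(real_images), torch.ones(32, 1)) +
              loss_fn(discriminator(fake_images.detach()), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: the discriminator's score is the evaluation that
    # drives improvement in future iterations of image generation.
    fake_images = generator(torch.randn(32, 16))
    g_loss = loss_fn(discriminator(fake_images), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```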
Given the assessment outlined above, it seems that AI systems may be able to achieve flair. That AI can meet the requirements of flair doesn’t mean that AI is, or even can be, creative. There may be other barriers to AI creativity, such as an inability to produce anything genuinely original, or a lack of full agency. But the issue is not flair.
For the full paper (with Open Access), see here: https://academic.oup.com/bjaesthetics/advance-article/doi/10.1093/aesthj/ayaf057/8539602