Can machines make art history?

From the December 2021 issue of Apollo.

At the end of September, Art Recognition, a Swiss company offering authentication of works of art using artificial intelligence (AI), announced that it had found, with 92 per cent certainty, that Samson and Delilah at the National Gallery had been wrongly attributed to Rubens. The story had no trouble making headlines, although some journalists were quick to note the absence of a peer-reviewed paper. As for the museum: “The gallery always takes note of new research,” replied its spokesperson, adding, “We await its publication in full so that any evidence can be properly assessed.” Eye-catching AI applications like this tend to attract media attention, but perhaps the biggest effect of unsubstantiated claims has been to heighten skepticism about machine-learning applications in the history of art.

Before a machine can hope to do something as sophisticated as authenticating a picture, it must first be able to see. The algorithmic architecture required for what is called computer vision is known as a convolutional neural network (CNN). This allows the computer to make sense of the images fed to it, detecting the outlines of a composition or isolating objects by grouping similar pixels together in a process called image segmentation.
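
As a rough illustration of the idea, the sketch below groups the pixels of a digitized painting into clusters of similar color – a crude form of image segmentation. The file name, cluster count and use of k-means are assumptions made for the example; commercial systems such as Art Recognition’s rely on far more elaborate trained networks.

```python
# A minimal sketch of image segmentation by grouping similar pixels:
# each pixel is assigned to one of a handful of color clusters.
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

def segment_by_color(path: str, n_segments: int = 5) -> np.ndarray:
    """Return a label map assigning every pixel of the image at `path`
    to one of `n_segments` clusters of similar color."""
    image = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)
    h, w, _ = image.shape
    pixels = image.reshape(-1, 3) / 255.0            # flatten to (N, 3)
    labels = KMeans(n_clusters=n_segments, n_init=10).fit_predict(pixels)
    return labels.reshape(h, w)                      # one segment id per pixel

# e.g. segments = segment_by_color("red_cabbages_and_onions.jpg")  # hypothetical file
```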

Red Cabbages and Onions (1887), Vincent van Gogh.

Results of the brushstroke extraction on Van Gogh’s “Red Cabbages and Onions” undertaken in 2012 by the James Z. Wang research group of Pennsylvania State University. Photo: © James Z. Wang Research Group at Pennsylvania State University

Using such methods last year, Art Recognition announced that it had verified a self-portrait attributed to Van Gogh at the National Museum in Oslo. That announcement received little fanfare: the painting had already been authenticated by specialists through more traditional connoisseurial analysis of style, material and provenance. As a guinea pig for new techniques, however, Van Gogh surely makes more sense than Rubens, whose reliance on assistants has complicated authentication efforts before. Not only did Van Gogh work alone, but he has one of the most distinctive hands in the history of art. In 2012, researchers at Penn State University used existing edge-detection and edge-linking algorithms to automatically isolate and extract his brushstrokes and those of a few of his contemporaries. In the article “Rhythmic Brushstrokes Distinguish van Gogh from His Contemporaries,” they describe how they used statistical analysis to compare individual brushstrokes and show how these set Van Gogh apart from other artists – and how his brushwork changed over time. Among the researchers was the computer scientist James Wang, who is the first to admit that such seemingly obvious findings might prompt a Van Gogh scholar to respond: “So what?”
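
By way of illustration, the snippet below performs the kind of edge detection such a pipeline might start from, running an off-the-shelf Canny filter over a grayscale reproduction. The file name and smoothing parameter are assumptions, and the published method’s edge linking and stroke-level statistics are not reproduced here.

```python
# A hedged sketch of a first step toward brushstroke extraction:
# find edge pixels in a grayscale reproduction of a painting.
import numpy as np
from PIL import Image
from skimage.feature import canny

def detect_stroke_edges(path: str, sigma: float = 2.0) -> np.ndarray:
    """Return a boolean edge map for the painting reproduction at `path`."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float32) / 255.0
    return canny(gray, sigma=sigma)   # True where an edge pixel is detected

# e.g. edges = detect_stroke_edges("van_gogh_self_portrait.jpg")  # hypothetical file
```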

We should be encouraged, however, by the modesty of the claims, as the same article also highlights the complexity of digital image analysis. Where, for example, Art Recognition did not specify which aspects of Van Gogh’s style it relied on to train its algorithm, Wang and his team ruled out the use of color and texture at that point, to avoid bias caused by variations in the scanning process. After all, computers read photographs of artworks, not the artworks themselves, and there aren’t always enough high-quality images to create sufficient training data sets, especially in the case of artists with smaller oeuvres. Like the other scientists I have spoken to who work in this field, Wang openly acknowledges the technology’s current limitations. “Attribution, especially when we’re talking about forgeries, is one area I don’t think machine learning is ready for,” he says.

Computers, of course, have certain advantages over us. Their massive memories allow them to study huge numbers of images at once without losing a single detail, which gives them a far superior aptitude for pattern recognition. This ability has begun to come in handy in some art-historical debates, and Wang’s most recent project, a collaboration between several members of the faculty at Penn State, is a case in point.

The art historian Elizabeth Mansfield wondered how John Constable, in his cloud studies, captured a constantly moving and changing subject with such precision. She decided to test the hypothesis, first proposed by Kurt Badt, that Constable was referring to Luke Howard’s system of cloud nomenclature, created in 1802. George Young, a meteorologist at Penn State, was called in to categorize Constable’s studies, and those of other landscape painters including Eugène Boudin and David Cox, according to four major cloud types. An algorithm trained by Wang then learned to classify a large dataset of cloud photographs according to those types with great accuracy. When the team fed images of the paintings to the algorithm, they could gauge how reliably it confirmed Young’s labels, and from that deduce how true to life each artist was relative to the photographs.
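
A minimal sketch of that kind of workflow might look like the following: fine-tune an off-the-shelf convolutional network on labeled cloud photographs, then ask it to label reproductions of the paintings. The class names, file names and choice of a ResNet backbone are assumptions made for the example; the Penn State team’s actual model and data are not described in that detail here.

```python
# Sketch: a pretrained CNN repurposed as a four-way cloud-type classifier.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

CLOUD_TYPES = ["cirrus", "cumulus", "stratus", "stratocumulus"]  # assumed labels

# Pretrained backbone with a new head for the four cloud types.
model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, len(CLOUD_TYPES))
model.eval()  # assumed to have been fine-tuned on the photograph dataset first

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def classify_cloud(path: str) -> str:
    """Predict the dominant cloud type in a photograph or painted study."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(x)
    return CLOUD_TYPES[int(logits.argmax())]

# e.g. classify_cloud("constable_cloud_study_1822.jpg")  # hypothetical file
```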

Cloud Study (1822), John Constable. Yale Center for British Art.

Constable’s studies scored particularly high, apparently supporting Badt’s hypothesis, but other findings came as a surprise. After Constable, the most realistic studies turned out to be those by the French artist Pierre-Henri de Valenciennes, works completed before Howard had even published his typology. Valenciennes, then, must have relied on his own observations.

Also revealing was how the machine changed Mansfield’s assumptions as an art historian. “I became more aware of my own biases and abilities as a viewer,” she says. “I see computer vision as an interlocutor for the art historian.” Wang suggests that it can also function as a more objective “third eye” that helps art historians better explain what they see. “Art historians may have a hunch,” he says, “but they can never show you a massive amount of evidence. We provide that evidence.”

Bringing to the arts and humanities more of the empiricism demanded by the sciences is the goal of Armand Marie Leroi, an evolutionary biologist at Imperial College London. “I admire art historians, but find it frustrating that so much of their detailed knowledge is in their heads,” he says. “I have no way of testing their claims.” A formative experience for Leroi was working with the Beazley Archive in Oxford. For Leroi, John Beazley’s descriptions of the stylistic traits that led him to identify the hands of different Greek vase painters – however precise – are not supported by sufficient evidence, and yet they have since been used, he says, to weave complex accounts of the history of Greek vases. The problem, for Leroi, is that “it was all in Beazley’s head”.

It is difficult to analyze exactly how AI compares with the subjectivity of human judgment, plagued as it is by our biases (often in the form of imperfect data) as well as by its own (the limits of its architecture). Deep learning can produce a “black box,” meaning that the way an algorithm reaches its conclusions cannot always be explained. “There is no magic in machines,” says Leroi. “You shouldn’t take their opinions uncritically. The whole point is precisely that you can question them, test them and reproduce the results.”

Leroi favors the kind of formal analysis practiced by 20th-century art historians such as Heinrich Wölfflin. At the World Musea Forum at Asia Art Hong Kong, he recently presented a machine-learning model for dating Iznik tiles by extracting and classifying three typical motifs: the tulip, the carnation and the saz leaf. The project shows how such algorithms could learn forms of human expertise – in this case that of the Iznik specialist Melanie Gibson – and apply them more widely.

“In some ways we are going backwards. We are returning to close visual analysis of works of art rather than to high-level theory,” explains David Stork. Something of a father of the field, Stork lectures on computer vision and the image analysis of art at Stanford and is the author of the forthcoming book Pixels and Paintings: Foundations of Computer-Assisted Connoisseurship. His suggestion seems to be that we might expect the return of a connoisseurial art history that faded away once the human mind had exhausted its capacity to practice it.

According to Stork, the next frontier is mining images for meaning, even though meaning differs by time, place and style. For now, the goal is to achieve this first with highly symbolic Dutch vanitas paintings. “I don’t expect that in my lifetime we will get a computer capable of analyzing Las Meninas, a deep and subtle masterpiece that is perhaps the most analyzed painting ever,” says Stork, but challenges like these will not only benefit art history. “I think that fine-art images pose new classes of problems that will advance AI,” he adds.

All the scientists I speak to stress that it is crucial to work closely with specialists in art. “Cultural context guides the use of computational techniques,” says Stork, who maintains that art experts are needed to interpret machine-learning results. An instructive example of how this might work is “Deep Discoveries,” a project to develop a prototype computer-vision search engine as part of the Towards a National Collection program, which aims to create a virtual home for the collections of British museums, archives and libraries that breaks down barriers between institutions. The tool would allow users to explore the database visually by matching images of cultural artifacts. “We are dealing with abstract notions such as visual similarity and pattern, which have very specific meanings for different users,” explains lead researcher Lora Angelova. The focus has therefore been on producing an “explainable AI,” which uses heat maps to show areas of similarity between two images and allows users to give feedback to the AI by marking their areas of interest.
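
To give a flavor of what an “explainable” similarity heat map can look like, here is a speculative sketch: compare the local features of a query image, as seen by a standard pretrained network, with a global descriptor of a second image, and plot where the two agree. This is only an illustration of the general idea, not Deep Discoveries’ method or code; the model choice and inputs are assumptions.

```python
# Sketch: a coarse heat map of where two images look similar to a CNN.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

backbone = models.resnet18(weights="IMAGENET1K_V1")
features = torch.nn.Sequential(*list(backbone.children())[:-2]).eval()  # conv maps only

prep = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def similarity_heatmap(query_path: str, match_path: str) -> torch.Tensor:
    """Return a 7x7 map of cosine similarity between local features of the
    query image and the averaged features of the matched image."""
    with torch.no_grad():
        q = features(prep(Image.open(query_path).convert("RGB")).unsqueeze(0))
        m = features(prep(Image.open(match_path).convert("RGB")).unsqueeze(0))
    m_global = m.mean(dim=(2, 3), keepdim=True)         # (1, C, 1, 1) descriptor
    return F.cosine_similarity(q, m_global, dim=1)[0]   # (7, 7) heat map
```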

Much of what has been achieved with machine learning so far mimics what we already know, but now is the time to broaden those horizons. Borrowing the jargon of data science, Leroi sees art as existing in a “high-dimensional hyperspace” filled with as yet unknown connections and correlations. “It is into this space, barely grasped by the human mind,” he ventures, “that we are now sending our machines, awaiting their return and asking them what they have found.”
