In all the descriptions of neural style transfer that I've seen so far, there is a single style image and a single content image, and the task is to produce a new image with the style of one image and the content of the other. For example, we can make the Mona Lisa in the style of van Gogh's Starry Night.
However, this approach seems limiting to me because it reproduces the style of a single image rather than the overall style of the artist. If van Gogh had painted a portrait like the Mona Lisa, chances are he would not have used the same colours and motifs as Starry Night. For example, in the Starry Night Mona Lisa linked above, the skin is bluish with swirls and there are a few star-like artefacts. These features are not present in the original Mona Lisa, and they don't appear in van Gogh's portraits like this one either.
Has much work been done on reproducing an artist's overall style, rather than the style of a single image?
One solution I am imagining: you could have a database of images in the same style, and to stylise a particular content image, find the style image that is "most similar" to the content image (for some appropriate definition of "similar") and use that as the style input to the transfer algorithm. I haven't tried this, and maybe there are better solutions. A rough sketch of what I mean is below.
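To make the idea concrete, here is a minimal sketch of the retrieval step, assuming PyTorch/torchvision and using cosine similarity of pooled VGG-16 features as one possible definition of "similar" (other feature extractors or similarity measures could easily be substituted). The paths and helper names are placeholders, not an existing tool:

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from pathlib import Path

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pretrained VGG-16 conv layers as a generic image feature extractor.
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.to(device).eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(path: Path) -> torch.Tensor:
    """Global-average-pooled VGG features as a single descriptor vector."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)
    feats = vgg(x)                             # shape (1, 512, 7, 7)
    return feats.mean(dim=(2, 3)).squeeze(0)   # shape (512,)

def most_similar_style(content_path: Path, style_dir: Path) -> Path:
    """Return the style image whose descriptor is closest to the content's."""
    target = embed(content_path)
    candidates = sorted(style_dir.glob("*.jpg"))
    sims = [torch.cosine_similarity(target, embed(p), dim=0) for p in candidates]
    return candidates[int(torch.stack(sims).argmax())]

# Hypothetical usage: the retrieved image then serves as the style input
# to whichever off-the-shelf style transfer method you prefer.
# best = most_similar_style(Path("mona_lisa.jpg"), Path("van_gogh_paintings/"))
```

The intuition is that a portrait content image would retrieve one of the artist's portraits rather than a landscape, so the transferred colours and motifs would be the ones the artist actually used for that kind of subject.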