Can graphic novel illustrators be classified using convolutional neural network (CNN) features learned for classifying concepts in photographs? Assuming that basic features at lower network levels generically represent invariants of our environment, they should be reusable. But at what level of abstraction are features characteristic of illustrator style? We tested transfer learning by classifying roughly 50,000 digitized pages from about 200 comic books of the Graphic Narrative Corpus (GNC, [6]) by illustrator. For comparison, we also classified Manga109 [18] by book. We tested the predictability of visual features by experimentally varying which of the mixed layers of Inception V3 [29] was used to train classifiers. Overall, the top-1 test-set classification accuracy in the artist attribution analysis increased from 92% for mixed-layer 0 to over 97% when adding mixed layers higher in the hierarchy. Above mixed-layer 5, there were signs of overfitting, suggesting that texture-like mid-level vision features were sufficient. Experiments varying the input material show that page layout and coloring scheme are important contributors. Thus, stylistic classification of comics artists is possible by reusing pre-trained CNN features, given only a limited amount of additional training material. We propose that CNN features are general enough to provide the foundation of a visual stylometry, potentially useful for comparative art history.
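The layer-wise transfer-learning protocol described above (freeze a pre-trained backbone, pool the activations of one chosen mixed layer, train a lightweight classifier head per artist) can be sketched as follows. This is a minimal, self-contained illustration, not the authors' code: the synthetic feature vectors stand in for pooled Inception V3 mixed-layer activations, and the class count, dimensionality, and separation parameter are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def fake_layer_features(n_pages, n_artists, dim, separation):
    """Placeholder for pooled mixed-layer activations of a frozen CNN.

    In the actual study the features would come from a pre-trained
    Inception V3; here we synthesize class-separable vectors purely to
    illustrate the evaluation protocol.
    """
    labels = rng.integers(0, n_artists, n_pages)
    centers = rng.normal(size=(n_artists, dim))
    feats = centers[labels] * separation + rng.normal(size=(n_pages, dim))
    return feats, labels

def train_softmax(X, y, n_classes, epochs=200, lr=0.1):
    """Multinomial logistic-regression head trained by full-batch gradient descent."""
    W = np.zeros((X.shape[1], n_classes))
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[y]  # one-hot targets
    for _ in range(epochs):
        logits = X @ W + b
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        P = np.exp(logits)
        P /= P.sum(axis=1, keepdims=True)
        grad = (P - Y) / len(X)
        W -= lr * X.T @ grad
        b -= lr * grad.sum(axis=0)
    return W, b

def accuracy(W, b, X, y):
    """Top-1 accuracy of the trained head."""
    return float(np.mean((X @ W + b).argmax(axis=1) == y))

# Simulate one "layer" of features, hold out a test split, and score it.
n_artists = 10
X, y = fake_layer_features(1000, n_artists, dim=64, separation=2.0)
X_tr, y_tr, X_te, y_te = X[:800], y[:800], X[800:], y[800:]
W, b = train_softmax(X_tr, y_tr, n_artists)
print(f"top-1 test accuracy: {accuracy(W, b, X_te, y_te):.2f}")
```

In the study itself, repeating this train/evaluate step once per mixed layer (0 through 10) yields the layer-wise accuracy curve the abstract summarizes; with Keras, the per-layer features would be obtained by cutting the pre-trained model at the corresponding `mixed` layer and pooling its output.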