This paper investigates the potential of Large Language Models (LLMs) for computational analysis of intertextual relationships in literary texts, focusing specifically on the methodological challenges of operationalizing complex literary concepts. We propose a two-stage approach that combines semantic similarity search with prompt-based analysis to examine intertextual connections between Virginia Woolf’s Mrs. Dalloway and Homer’s Odyssey. Through systematic evaluation of both expert-informed and naive prompting strategies, we demonstrate that while LLMs show promise in detecting sophisticated literary relationships, their performance depends critically on the effective operationalization of domain knowledge. Our results indicate that expert-informed prompts achieve higher theoretical alignment (+16.23%) but also reveal a tendency toward over-interpretation, with 90% of analyses claiming classical transformations.
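The two-stage approach described above can be sketched in miniature. The following is an illustrative stand-in, not the paper's implementation: Stage 1 uses a bag-of-words cosine similarity as a placeholder for the paper's semantic (embedding-based) similarity search, and Stage 2 assembles a hypothetical expert-informed prompt; the function names, the prompt wording, and the reference to Genette's typology are all assumptions for demonstration.

```python
from collections import Counter
import math


def bow(text: str) -> Counter:
    """Bag-of-words term counts (a toy stand-in for sentence embeddings)."""
    return Counter(text.lower().split())


def cosine_sim(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve_candidates(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Stage 1: rank source-text passages by similarity to the query passage."""
    q = bow(query)
    ranked = sorted(corpus, key=lambda p: cosine_sim(q, bow(p)), reverse=True)
    return ranked[:k]


def build_expert_prompt(woolf_passage: str, homer_passages: list[str]) -> str:
    """Stage 2: an expert-informed prompt (hypothetical wording) for LLM analysis."""
    refs = "\n".join(f"- {p}" for p in homer_passages)
    return (
        "You are a scholar of classical reception. Assess whether the Woolf "
        "passage transforms the Homeric material, and justify any claimed "
        "allusion rather than asserting one.\n\n"
        f"Woolf passage: {woolf_passage}\n\nHomeric candidates:\n{refs}"
    )
```

In this sketch, a Woolf passage is used as the query, the top-k Homeric passages are retrieved, and the prompt is then sent to an LLM (the API call is omitted); a naive-prompt baseline would differ only in the instruction text of Stage 2.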
