Exploring the Quirks of Large Language Models Through a Fashion Assistant

May 14, 2025

When I started building my GPT-powered fashion assistant, I expected elegant style advice and a smooth experience. Instead, unexpected challenges, like memory slips and hallucinations, turned this project into a lively exploration of prompting mechanics and the quirky behaviour of large language models (LLMs). Think of it less as building a perfect tool and more as managing a wild creature with its own instincts.

In an earlier piece, I introduced Glitter, my flamboyant GPT-based stylist, as a playful proof-of-concept. What began as a straightforward idea to create a stylish companion quickly evolved into a vibrant lab for probing LLM quirks, vulnerabilities, and emotional nuances. As a product leader with a keen passion for fashion, I wanted Glitter to be more than just logic; I envisioned a stylist with flavour. That’s why I customised GPT-4 to be bold, affirming, and firmly rule-bound—no mixed metals, clashing prints, or dubious navy-and-black pairings.
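
The post doesn't reproduce Glitter's actual custom instructions, and Glitter lives as a customised GPT rather than as API code, but here is a rough sketch of how that bold, affirming, rule-bound persona could be expressed programmatically. The prompt wording, the example question, and the use of the OpenAI Python client are my own assumptions:

```python
# A minimal sketch of Glitter's persona and hard styling rules as a system prompt.
# The wording is illustrative, not the instructions actually used for Glitter.
from openai import OpenAI

GLITTER_SYSTEM_PROMPT = """\
You are Glitter, a flamboyant, warmly affirming personal stylist.
Hard rules you must never break:
- Never mix metals (e.g. gold and silver) in one outfit.
- Never combine clashing prints.
- Never pair navy with black.
Only recommend items from the wardrobe the user has provided.
"""

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": GLITTER_SYSTEM_PROMPT},
        {"role": "user", "content": "What should I wear to a gallery opening tonight?"},
    ],
)
print(response.choices[0].message.content)
```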

With a wardrobe structured in a JSON-like file, Glitter had all the details needed to make informed style choices. But as it turned out, this smart assistant wasn't purely deterministic: its responses were shaped by probabilistic sampling and even a tinge of memory leakage. If you've ever wished your tech could be as spirited as it is smart, you'll appreciate how these quirks reshaped my approach to prompting.
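
The wardrobe file itself isn't shown in the post, so the schema below is a hypothetical guess at what a JSON-like inventory might contain; field names such as metal_accents are my own invention:

```python
import json

# Hypothetical wardrobe entries: the fields and values are illustrative guesses
# at the kind of structure a "JSON-like" wardrobe file could use, not Glitter's
# actual schema.
wardrobe = [
    {
        "id": "top-014",
        "category": "blouse",
        "colour": "ivory",
        "print": "none",
        "metal_accents": "gold",
        "seasons": ["spring", "autumn"],
    },
    {
        "id": "shoe-007",
        "category": "heels",
        "colour": "black",
        "print": "none",
        "metal_accents": "none",
        "seasons": ["all"],
    },
]

# Serialised so it can be attached to the conversation, keeping recommendations
# grounded in items the model has actually been given.
wardrobe_json = json.dumps(wardrobe, indent=2)
print(wardrobe_json)
```

An explicit inventory like this is also what makes fabricated items, such as heels that never existed in the file, easy to spot.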

This exploration builds on previous insights into personal styling with GPT-4 and dives deeper into the oddities that arise when trying to craft a stylist with character. From fabricated high heels to newly devised prompting rituals, every misstep offered a learning opportunity, a reminder that while you can simulate personality convincingly, giving it an authentic soul remains a challenge.
