I vaguely knew that adding more examples helps when you're getting language model help with writing. And despite being very familiar with the idea that if some of something is good, lots might be much better, I hadn't actually sat down to scrape together loads of examples for my own LLM writing.
But then, after one too many nudges from sensible people I should have listened to earlier, I put in the time to collect 30 emails I had written and record short voice clips explaining what I was trying to do with each email. I dumped all of that text into a Claude project and it has been extremely helpful for drafting ugh-y emails.
That somehow got me over the hump, and I've been adding more examples and building different Claude projects for different tasks. I've also been experimenting with the o1- and o3-series models, which don't yet have good project functionality (and, very annoyingly, can't read PDFs), but there's a simple workaround: paste all the context you want into a Google Doc and then copy-paste that into the prompt window. And just as with the emails, I'm increasingly convinced that more context is better. Take everything you think is relevant, every example of your style you can find, every bit of background information, dump it all in, and let the models work through it.
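If you end up shuttling lots of example files into a prompt box by hand, a tiny script can do the gluing for you. This is just an illustrative sketch, not part of any model's tooling; the folder layout, file extensions, and function name are my own assumptions:

```python
from pathlib import Path

def build_context(folder: str, extensions=(".txt", ".md")) -> str:
    """Concatenate every example file in `folder` into one labeled blob
    that can be pasted straight into a prompt window.

    (Hypothetical helper for illustration; adjust extensions to taste.)
    """
    parts = []
    for path in sorted(Path(folder).iterdir()):
        if path.suffix.lower() in extensions:
            # Label each example so the model can tell them apart.
            parts.append(f"=== {path.name} ===\n{path.read_text()}")
    return "\n\n".join(parts)
```

Point it at a folder of saved emails or writing samples and paste the result wherever the model will take it.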
A significant advantage language models have over people is how quickly they can read and how much they can hold in working memory at once. The way to get the most out of this is to push the amount of context you provide as high as you can reasonably get it.
That's it, that's the take.