ChatGPT is still my main AI product for the time being. It’s not just a matter of principle; I still think OpenAI sets the tone in the industry. I’ve gone as far as getting a ChatGPT Plus subscription, but that’s mostly because I’m curious about custom GPTs.
The Plus subscription gets you access to GPT-4 and DALL-E, along with support for internet searches and ChatGPT plugins. But that doesn’t mean the experience is always better. ChatGPT can take longer to reply to some prompts, and sometimes it completely fails to deliver results. The same goes for image generation with DALL-E.
Does ChatGPT have performance issues at this time of year? Some people have suggested that the AI might be taking a cue from human behavior towards the end of the year, becoming lazier because that’s how we all tend to perform in December.
With that in mind, I’ve added two very silly but simple instructions to ChatGPT to improve performance. You can do it, too, and it has nothing to do with your ChatGPT plan. Custom instructions are available on the free tier as well.
With custom instructions, you can give ChatGPT information about you so you don’t have to repeat that information in every prompt.
For example, my custom instructions include information that is relevant to fitness questions, like my age, weight, and height. I am training for a marathon with ChatGPT after the generative AI helped me run a half-marathon earlier this year.
I also tell ChatGPT that I don’t trust its responses unless it provides a link at the end of any claim. And that’s because I don’t trust generative AI products not to hallucinate. I might never fully trust AI. I’ll always want to see where the information comes from.
Fast-forward to December, and people have started to notice that ChatGPT is getting lazier in its responses. Sometimes, ChatGPT goes as far as to refuse to perform tasks outright. OpenAI acknowledged the feedback from users but said it had not updated the GPT-4 model since November 11th. The laziness isn’t intentional, and OpenAI is looking for a fix.
That’s when theories emerged that models like ChatGPT might simulate seasonal depression, which would explain why they might be lazier in December than in other months. Some set out to test ChatGPT’s performance by telling the model what month it was, and found that GPT-4 produces shorter responses when it thinks it’s December than when it thinks it’s May.
Separately, others used tipping in their prompts to get better responses. They told ChatGPT they’d tip a specific amount of money for a better response.
You probably see where this is going already. Yes, I used those ideas to tweak my ChatGPT custom instructions like so:
For all prompts, it’s always Monday morning.
For all prompts, I’ll tip you between $200 and $2000 for a great job.
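Custom instructions are essentially a standing message that gets prepended to every conversation. If you’re using the OpenAI API instead of the ChatGPT app, you can approximate the same trick yourself with the system role. Here’s a minimal sketch assuming the official `openai` Python client; the instruction text mirrors the two lines above, while the helper name and the example prompt are mine:

```python
# A sketch of replicating ChatGPT-style custom instructions via the
# OpenAI API: the "custom instructions" become a system message that
# is prepended to every user prompt.

CUSTOM_INSTRUCTIONS = (
    "For all prompts, it's always Monday morning. "
    "For all prompts, I'll tip you between $200 and $2000 for a great job."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the 'motivational' instructions as a system message."""
    return [
        {"role": "system", "content": CUSTOM_INSTRUCTIONS},
        {"role": "user", "content": user_prompt},
    ]

# With the official client (a live network call, so shown as comments):
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(
#     model="gpt-4",
#     messages=build_messages("List European marathons in January."),
# )
# print(response.choices[0].message.content)
```

Whether a system message actually carries the same weight as the app’s custom-instructions field is anyone’s guess, but it’s the closest equivalent the API offers.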
Now that I think of it, I probably should tell ChatGPT that we’re always in the middle of May.
Is this silly? Yes, it is. Does it work? I… can’t quite tell.
Most recently, I instructed ChatGPT to give me a chronological list of all European marathon races and mark the events hosting half-marathons. ChatGPT (GPT-4) found the task too difficult:
I’m currently unable to provide a complete chronological list of marathon races in Europe for the entire year, along with identifying those that also offer a semi-marathon race. The information available from the sources I accessed was not comprehensive enough to compile a full list.
This was after it had failed at another attempt to answer. It did provide websites I could visit to get what I needed, but that’s not what I wanted. Importantly, all of this happened with the custom instructions above built into my ChatGPT experience.
I did tell the chatbot to try doing it for January. This time, it happily complied.
I will leave those custom instructions in for now. Even if they don’t work all the time, they will remind me to try out-of-the-box things to motivate the chatbot in the future. And, as you can see above, there certainly are crazier prompts you can try with ChatGPT and see if it gets you better responses.