This blog post is ChatGPT's rewrite of my own blog post. You can read the original here.
Lately, I’ve been exploring the potential of Large Language Models (LLMs) like GPT-4 to enhance productivity, particularly in the realm of blog post writing. While SEO spam is commonly regarded as the first major commercial application for LLMs, I decided to experiment with generating blog posts using ChatGPT without sacrificing quality.
My approach involved drafting ideas by speaking into a microphone and having them transcribed by the speech recognition model, Whisper. I created a small Python CLI wrapper to capture audio and send it to OpenAI. This process resulted in a transcription of my notes, which GPT-4 then summarized. Speaking to Whisper in my native language provided the added benefit of reducing cognitive overhead.
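The pipeline above can be sketched in a few lines of Python. This is an illustrative reconstruction, not my actual wrapper: the function names and the use of ffmpeg for audio capture are assumptions, and the Whisper call uses the current `openai` client's transcription endpoint.

```python
# Hypothetical sketch of the dictation wrapper: record microphone audio,
# then send it to OpenAI's Whisper endpoint for transcription.
import subprocess


def ffmpeg_cmd(path: str, seconds: int) -> list[str]:
    """Build the ffmpeg command that captures `seconds` of mic audio to `path`.

    Assumes ffmpeg is installed and ALSA is the audio backend; swap the
    `-f alsa -i default` pair for your platform (e.g. avfoundation on macOS).
    """
    return ["ffmpeg", "-y", "-f", "alsa", "-i", "default",
            "-t", str(seconds), path]


def record_audio(path: str, seconds: int = 60) -> str:
    """Capture audio from the default microphone into a WAV file."""
    subprocess.run(ffmpeg_cmd(path, seconds), check=True)
    return path


def transcribe(path: str) -> str:
    """Send the recording to Whisper and return the transcribed text.

    Requires the `openai` package and an OPENAI_API_KEY in the environment.
    """
    from openai import OpenAI

    client = OpenAI()
    with open(path, "rb") as audio_file:
        result = client.audio.transcriptions.create(
            model="whisper-1", file=audio_file
        )
    return result.text
```

A CLI entry point would then just chain the two: `print(transcribe(record_audio("notes.wav")))`, with the resulting transcript pasted into a GPT-4 prompt for summarization.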
Overall, the outcomes have been as good as expected. ChatGPT accurately represents my notes, and I doubt a human with only the transcription as a reference could do a significantly better job summarizing them. However, I’ve encountered two major issues with this approach.
The Challenge of Substance
Despite my previous struggles with writing blog posts¹, it appears that once the writing process is taken care of, I still face the challenge of generating substantial content without investing time in research and critical thinking. I possess numerous insights worthy of blog posts, but each one requires validation to be more than just a random thought.
This experience has taught me that my primary obstacle in consistently writing blog posts isn’t the form but the substance.
The Blandness Dilemma
Even when providing a sample blog post to GPT, the output feels bland and overly reminiscent of generic copywriting. This doesn’t align with the style I desire for my blog. The issue manifests in several ways:
- Word choice differs from my own. For instance, it refers to “complex commands” instead of my preferred term, “nontrivial commands.”
- Uncharacteristic claims, such as mentioning a “whole new level of convenience and ease in handling various tasks,” which I wouldn’t use to describe a set of hacked-together scripts.
- A rigid structure that makes the generated blog post resemble an SAP training manual.
Determining whether this is due to difficulty emulating a writer’s style, bias in the training data, or insufficient fine-tuning remains an open question.
LLMs, such as ChatGPT, will undoubtedly play a role in my blog post writing, whether as editors, summarizers, or first-draft generators. However, this is unlikely to result in a substantial increase in the release of high-quality blog posts. Realizing that the substance of blog posts is the primary obstacle to regular publishing has been a sobering revelation.
¹ More than sixty blog posts sit in my drafts folder, yet it’s been over a year since my last published post. ↩