I wrote this blog post on my own. I let ChatGPT create an “improved” version of this blog post. You can read the rewritten version here.
I’ve been trying to use LLMs (large language models like GPT-4) for productivity gains lately. One area of interest is writing blog posts. SEO spam seems to be the area everyone expects to see the first wide commercial application of LLMs, so I thought I might give it a try and generate blog posts with ChatGPT while trying not to lower the bar too much.
I’ve been experimenting with drafting my ideas by talking into a mic and having the recording transcribed by Whisper, OpenAI’s speech recognition model.
A small Python CLI wrapper helps me capture the audio and send it off to OpenAI.
The resulting transcript of my notes is then summarized by GPT-4.
One benefit of this setup is that I can speak to Whisper in my native tongue, which lowers the cognitive overhead marginally.
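For the curious, the pipeline can be sketched roughly like this, assuming the official openai Python SDK; the function names, prompt text, and memo.wav path are illustrative placeholders, not my actual script:

```python
# Illustrative sketch of the capture -> transcribe -> summarize pipeline.
from pathlib import Path

# Hypothetical prompt; the wording here is a stand-in.
SUMMARY_PROMPT = (
    "Summarize the following spoken notes into a blog post draft. "
    "Stick to the points I actually make; do not embellish.\n\n"
    "Notes:\n{transcript}"
)


def build_summary_prompt(transcript: str) -> str:
    """Fill the summarization prompt with the raw Whisper transcript."""
    return SUMMARY_PROMPT.format(transcript=transcript)


def transcribe(audio_path: Path) -> str:
    """Send a recorded memo to the Whisper API and return the transcript."""
    from openai import OpenAI  # imported lazily; requires the openai package

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    with audio_path.open("rb") as f:
        result = client.audio.transcriptions.create(model="whisper-1", file=f)
    return result.text


def summarize(transcript: str) -> str:
    """Ask GPT-4 to turn the transcript into a first draft."""
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": build_summary_prompt(transcript)}],
    )
    return response.choices[0].message.content


# Typical use:
#   draft = summarize(transcribe(Path("memo.wav")))
```

The two API calls are independent, which is handy: I can re-run the summarization step with a tweaked prompt without paying for transcription again.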
The outcomes so far are as good as could be expected. My notes are well represented in ChatGPT’s output; I don’t think a human with nothing to go on but the transcription could do a much better job of summarizing them. There are two problems with this approach though.
The lack of details
Despite having been continuously stuck at writing blog posts in the past1, it seems that now that the actual writing part is taken care of, I don’t actually have that much to say without putting in the time to do the research / thinking. While I have quite a lot of insights that are blog-post-worthy, each of them needs to be vetted and validated to be more than just a random thought.
The biggest learning for me really is that my problem with regularly writing blog posts is not so much the form as it is the substance.
The blandness of the result
Despite me providing a sample blog post to GPT, the resulting output feels too bland, too GPTesque. It reads too much like the output of a copywriter, which is not the style I am looking for on my blog. This is true at multiple levels:
- It uses different words than I would. For example, it speaks of “complex commands” where I would have used “nontrivial commands”.
- It makes claims that I wouldn’t make, e.g. it mentions a “whole new level of convenience and ease in handling various tasks”, which, even if it were true, is not wording I would use for a set of hacked-together scripts.
- The structure of the output is too rigid. The generated blog post reads like it might come straight out of an SAP training handbook.
Now whether it is simply tough to emulate a writer’s style, a bias in the training data, or a lack of fine-tuning, I am still trying to figure out.
I will definitely use LLMs for writing my blog posts, be it as an editor that gives feedback, as a summarizer of notes, or as a way to get a first draft that I can then critique and iterate the hell out of. But it probably will not lead to a much faster release cycle of high-quality blog posts. Realizing that it is the substance of my blog posts that is preventing regular publishing is really quite sobering.
None of this blog post (except where quoted) was written by ChatGPT.
I have more than sixty blog posts sitting in my drafts folder, yet it’s been more than a year since my last published post. ↩