Keep Callm and Cairry On
Many companies and individuals today are experimenting with LLMs. Proponents of GenAI suggest that it can improve our work by helping with tasks like authoring quarterly updates, writing emails, or even reading and summarizing emails you receive. Having AI both write and read emails raises the question of whether the AI is actually adding any value, but let's assume that someone is ultimately reading the AI-generated prose.
While this would seem to create value for the author of the text by saving them time in authorship, what are the impact and the cost for the readers of the text? The following questions jump to mind:
What is the cost of the additional time spent reading?
For example, a multi-paragraph email that could have been one or two sentences without GenAI augmentation.
How do we guard against degeneration of the quality and relevancy of written output?
How do we avoid implicitly punishing people who take the time to write meaningful texts "by hand"?
Imagine that it takes thirty minutes to write one detailed, short, and meaningful document, but you can generate a similar but longer and less relevant update using an LLM in just five minutes. Assuming we judge people by "productivity," this penalizes the person who takes the time to do a good job and rewards the one who sends an email filled with junk an LLM barfed up.
How do we ensure important human-written "needles" aren't lost as the "haystack" of text grows?
LLMs will surely increase the volume of text we are expected to read, but the time we can devote to reading is fixed. This is likely to result in more "skimming" and more skipping, i.e. simply not reading things, which increases the risk of readers missing important pieces of information.
How do we ensure we don't use LLMs to create problems we then need LLMs to solve?
"LLMs can summarize email threads and documents for you!" If the reason I need the document summarized is because the author created an overly-long document using an LLM, then GenAI's contribution to that interaction will have been all cost, no value.1
Reflections
As long as we keep the bar for quality high, insist that communications be meaningful and relevant, and have a feedback mechanism to let people know when their communications need improvement so they can correct course, there's no problem with using LLMs. At the moment, however, it's not clear to me that we apply these standards of quality and high signal-to-noise ratio to internal communications: there's generally no consequence for writing excessively long emails or documents, because why would you criticize someone for taking the time to write things out? That calculus changes, though, once producing reams of text no longer requires time or effort.
In my view, what's needed to keep LLM spam from flooding our lives is raising the bar for written communication and cracking down on long-winded, irrelevant fluff in emails and other documents. Ultimately, as long as people are producing high-quality work, it doesn't matter where it came from.
If you have a plan for how we can avoid drowning in LLM slurry, please shoot me a note and I'll include it below!
Footnotes
1 Personally, I am prejudiced against reading prose generated by an LLM. If it wasn't worth your time to write the email, why the heck should I waste my time reading it? ⤴
📝 Comments? Please email them to sequoiam (at) protonmail.com