
Experimenting with posting this on LinkedIn first.

DeepSeek burst the Generative AI hype bubble. A Chinese hedge fund using cheap labor (grad students, not Uyghurs this time), ‘borrowing’ the work of others, has exposed the tech narrative that only a handful of well-funded big players using lots and lots of computing and electric power can run these systems. But it also exposes the deeper question, the one we always start with in UX design: what problem is this solving?

It’s time to wake up and accept that there was never an “AI arms race,” and that the only reason that hyperscalers built so many data centers and bought so many GPUs is because they’re run by people that don’t experience real problems and thus don’t know what problems real people face. Generative AI doesn’t solve any trillion-dollar problems, nor does it create outcomes that are profitable for any particular business. – Ed Zitron

Ed’s trillion-dollar reference is to the June 2024 Goldman Sachs report, e.g. “If this ends up just doing coding and customer service, we’re massively overspending on this,” and:

While the question of whether AI technology will ever deliver on the promise many people are excited about today is certainly debatable, the less debatable point is that AI technology is exceptionally expensive, and to justify those costs, the technology must be able to solve complex problems, which it isn’t designed to do. – Jim Covello

So now that the cost has been reduced 25X, does that still hold? What problem is GenAI solving?

  • Fixing typos
  • Copying stuff from the internet
  • Reorganizing stuff copied from the internet

That’s about it. The rest is theoretical vaporware. Like all shiny new tech (Newton handwriting recognition! Clippy agents! VR metaverse! Web3!), it is tech/engineering looking for a business model.

It is key to understand that GenAI does not equal AI; it is a subset, but the one getting all the attention now. Machine learning, which uses specific data sets to learn patterns and make predictions, has been solving problems and adding value for years. All without scraping copyrighted content and presenting it without attribution.
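
For contrast, here is a minimal sketch of that kind of narrow machine learning: a model trained on one specific, labeled data set that learns patterns and makes predictions. The library (scikit-learn) and the iris data set are my stand-ins for illustration, not anything prescribed above.

```python
# Minimal sketch: "boring" machine learning on a specific data set.
# Assumes scikit-learn is installed; iris stands in for any narrow,
# well-scoped business data set (no scraped internet corpus involved).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)           # learn patterns from labeled examples
print(model.score(X_test, y_test))    # accuracy of predictions on held-out data
```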

The most ubiquitous innovations of GenAI today are:

  1. Internet search – Google and Bing will answer your search query by pulling the answer out of the content of the respective web sources, so you don’t have to dig through each link yourself. Great, but what is the long-term incentive for a (human) content creator who no longer gets the traffic/revenue that enables them to keep adding original content?
  2. Creating content (see above) — i.e. ‘slop’. Shelly Palmer introduces the ‘curation generation boundary’; we are “drowning in AI generated nonsense”.

The key to LLMs is the training data. Is it valuable for the repository to contain every idiotic piece of information on the internet, including this pointless opinion piece you are reading (thank you, by the way)? Is it ethical, or even legal, to contain content obtained without consent, license, or remuneration?

The promise of a ‘large language model’, a repository of all human knowledge, is very real. Quite a while back I read an article in Wired arguing that scientific discovery lags today because of overspecialization; it takes cross-pollination from scientists in unrelated domains to find insight. I can’t find it now; it may have been close to this one, Let’s Bring the Polymath Back, or this example of Stephen Wolfram. Scientists read and publish in their own domain, while the answer may already be waiting in another journal. This started me sketching out a story for a modern-day Faust, where the protagonist invents a brain-machine interface to ‘ingest’ the entirety of human knowledge and becomes wealthy inventing technologies that solve all earthly ills (unlimited clean energy, food production, medicine, etc.). There was a comic aside where the protagonist has to filter out the pop-culture/social media content that was just making him goofy and stupid. This was years ago – Netscape era! – so the tech kind of mooted the concept.

There are plenty of problems AI can solve, particularly machine learning focused on specific data sets, but most apps continue the age-old user experience model of making the human do all the grunt work instead of developers figuring out how to make the computer do the work. The classic example is the date field on a form insisting on exact syntax, chastising you for your ignorance instead of assuming 1/1/25 means 01/01/2025, as sketched below. I frequently spend days in Figma just cleaning up layers and substituting components, operations that should be bulk-selectable but aren’t. Please just fix your app, and then you can add the ‘Generate My Design’ button for Product Managers who fired all the designers.
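
Here is a minimal sketch of what “make the computer do the work” could look like for that date field: a lenient parser that normalizes sloppy-but-unambiguous input instead of rejecting it. The choice of python-dateutil and the function name are my assumptions for illustration.

```python
# Sketch: accept '1/1/25' and normalize it to '01/01/2025' instead of
# chastising the user. Assumes python-dateutil is installed.
from dateutil import parser

def normalize_date(text: str) -> str:
    """Turn loose input like '1/1/25' or 'Jan 1 25' into MM/DD/YYYY."""
    dt = parser.parse(text, dayfirst=False)  # assumption: US month-first dates
    return dt.strftime("%m/%d/%Y")

print(normalize_date("1/1/25"))    # -> 01/01/2025
print(normalize_date("Jan 1 25"))  # -> 01/01/2025
```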

