A while back I did a smarmy post about AI takin’ r jobs, and decided that since it couldn’t solve a critical UI conundrum (i.e., a meaningful icon to replace the obsolete floppy disk for ‘save’), I would continue to work.
However, I do agree with the likes of Jakob Nielsen, and with my employer, who is pushing all sorts of AI training on us (including the NNG workshop): whatever your personal or ethical feelings about AI, if you aren’t keeping current with generative AI tools, you will be left behind in the job hunt. Just like when designers had to be experts in OmniGraffle… which is still a thing, apparently (looking at you, Amazon).
Figma will not be able to design your website from a prompt. They tried. And will try again. And other sites/plugins will get closer.
As a long-time UX Generalist, née human factors engineer, I think about the full range of the UX design process and how GenAI can help.
Strategy
Problem framing: The key to UX design is understanding the problem to be solved, not making logos or wireframes. Interestingly, ChatGPT 4 understands abstraction laddering and the 5 Whys, and can spit out a generic case that could be iterated on with more accurate statements to generate insights. At a minimum, it could be used in a workshop to show ‘this is an example of the 5 Whys’ before the team does the exercise with real, accurate statements as input. GenAI tools are, fittingly, best at ‘generating’ ideas and getting you past the blank-whiteboard stage.
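If you want to scaffold that workshop exercise programmatically, the 5 Whys chain is easy to sketch. This is a minimal, hypothetical helper (the function name and prompt wording are mine, not from any tool); in practice the answers would come from a chat model or, better, from your workshop team:

```python
def five_whys_prompts(problem_statement, n=5):
    """Build the sequence of prompts for a 5 Whys session.

    The first prompt states the problem and asks why; each follow-up
    prompt digs into the previous answer. Feed these to whatever chat
    model (or whiteboard group) you like, one at a time.
    """
    prompts = [f"Problem statement: {problem_statement}\nWhy does this happen?"]
    for i in range(2, n + 1):
        prompts.append(
            f"Take your previous answer and ask again: why is that? (Why #{i})"
        )
    return prompts

# Example: seed the ladder with a concrete (made-up) problem statement.
for p in five_whys_prompts("Users abandon the loan application form halfway through"):
    print(p)
```

The point is less the code than the structure: the model’s value is in drafting plausible rungs of the ladder that the team then corrects with real data.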
Affinity mapping: All the whiteboard tools are doing this now. Not well, but it’s a start.
Customer journey: ChatGPT 4 does a reasonable job of responding to the prompt “describe the customer journey for cashing a check on a bank website” (though it didn’t remind me that unless the website is running on a platform with camera access, it ain’t happening). Again, in a workshop scenario this can create a draft flow that can be iterated on; however, you might lose the insights and discussion you get by stepping the team through a journey one step at a time.
Storyboarding: The hands-down winner in GenAI today is image generation. Tools are getting better at retaining the same characters and settings. Sorry, Shutterstock/Getty Images! Images are getting pretty ‘same-y’, though.
Research
Unmoderated usability testing is already a thing. I think we will get to a point where A/B testing of concepts generated from existing design systems is automated. In my experience, the testing platforms have a built-in bias from their professional test participants (sure, you’re a small-business CEO with veterinary supply sales experience who is in the market for a loan… aren’t we all).
Structured interviews can be automated through prompts, though we aren’t talking GenAI here. Unstructured interviews and contextual inquiry are unlikely to be automated or generated.
Synthesis is a key benefit of GenAI. As someone who had to transcribe micro-cassette interviews, then analyze multiple participant transcriptions, compare and contrast, and so on, this is one set of tasks that can stay gone. I have not had the opportunity to use these tools on real studies, for IP and client-privacy reasons, but GPT’s general ability to summarize is well known.
Design Ops
ChatGPT 4 did a pretty good job with this prompt: “generate a naming scheme for design tokens for a design system with 4 brands and light and dark themes.” Though you would get similar results from a Google search and a click on the Nathan Curtis link. Since GenAI has a lot of utility as a coding assistant, there is likely a lot of support here for code inspection and ensuring design-system compliance, but it’s not where I’m spending a lot of time right now.
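For the curious, the kind of scheme that prompt produces is mechanical enough that you barely need a model for it. Here is a minimal sketch (the brand names and token concepts are placeholders I made up, not from any real design system) of a `{brand}-{theme}-{category}-{concept}` naming convention:

```python
# Hypothetical inputs: 4 placeholder brands, 2 themes, a few color concepts.
BRANDS = ["acme", "globex", "initech", "umbrella"]
THEMES = ["light", "dark"]
CATEGORIES = {
    "color": ["background-default", "text-primary", "border-subtle"],
}

def token_names():
    """Enumerate token names as {brand}-{theme}-{category}-{concept}."""
    names = []
    for brand in BRANDS:
        for theme in THEMES:
            for category, concepts in CATEGORIES.items():
                for concept in concepts:
                    names.append(f"{brand}-{theme}-{category}-{concept}")
    return names

tokens = token_names()
print(len(tokens))   # 4 brands x 2 themes x 3 concepts = 24
print(tokens[0])     # acme-light-color-background-default
```

The interesting design decisions (which concepts exist, where theme sits in the hierarchy) are exactly the parts the generator can’t make for you.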
Icon generation: On the TI-Nspire, I made hundreds of 16×16 icons, in black-and-white, grayscale, and 8-color, for obscure concepts like the Laplace transform, mean, median, mode, etc. Revisiting my original quest to update the ‘save’ icon for a form, I tried Adobe Illustrator’s vector generator:
Maybe I can work from these; I was never good at vectors.
Interaction Design
This is the meat and potatoes of what most people think of as UX design, and the area where GenAI is least useful. NNG said as much. The thing is, once you have a good design system and pattern library, built into a Figma asset library with components and styles, this is the easiest part of the job. It is just plug and chug. You really don’t want to generate screens from scratch with AI for every app you are building; you want consistency. Dashboard, search page, detail page, form entry, content page: there are a handful of patterns that have been around for a while, and plenty of sites and books documenting them.
Where we could use some help (ahem, Figma) is automating prototype building. Somehow I don’t trust you… but I will try it out on a copy of a copy of a working file.
What Else
There is probably more I could add to this as I think through day-to-day and occasional tasks. GenAI is useful for UI content writing, especially for those writing in a non-native language. Personally, I see more promise for UX in machine learning than in generative AI: customizing a site or app, getting proactive usability feedback based on site analytics, and so on. GenAI is just the shiny new thing.
BTW, as an experiment, I used ChatGPT 4 to generate a post, for contrast. Meh. Here is the one insight I have: I learned a lot manually writing a post in 3 hours, and less than nothing (negative information!) by generating one in 15 seconds.