Actually, It's AI That Needs Us

By Guus Baggermans

As generative AI technologies like ChatGPT find their way into every nook and cranny of digital products, it's clear that we are all participants in a grand experiment. AI companies have put the onus on us to collectively explore their capabilities and limitations, and they are paying close attention to which use cases resonate and stick.

Currently, those use cases seem to be focused on the creative field, specifically on training AI to take over tasks now performed by designers. What many of us are discovering, however, is that the current generation of generative AI, while great at general tasks, falters when we need it for more specific purposes in production. Let's explore why.

The challenge of training data

Creating the machine learning models that power tools like ChatGPT requires huge amounts of training data. Many of the large language models (LLMs) start from the same foundational datasets, which aggregate text from sources like Wikipedia, Stack Exchange, and even TV show subtitles.

But curating and vetting this hodgepodge of content presents significant challenges. Not all of it has been screened for biases or checked for usage rights, raising ethical concerns that tech companies are starting to acknowledge (albeit in sometimes questionable ways).

Teaching through fine-tuning

Because LLMs are trained on such wide-ranging data, they struggle with specificity; they are general-purpose by design. To overcome this, companies teach models to excel at particular tasks through a process called fine-tuning.

Like Michelangelo's oft-quoted line about sculpting, fine-tuning is the process of chiseling away the superfluous to home in on the desired output. We, the users, are already doing this when we give a thumbs up or thumbs down on Spotify, or select our favorite Midjourney-generated image. This human feedback method is called Reinforcement Learning from Human Feedback (RLHF): refining the model's outputs to align with human preferences.
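Under the hood, those thumbs-ups and thumbs-downs are typically collected as preference pairs: for the same prompt, a response people liked and one they didn't. A reward model is trained on these pairs, and the base model is then tuned to score higher on it. Here is a minimal Python sketch of that first step; the names and data shapes are illustrative, not any vendor's actual pipeline.

```python
# A minimal, illustrative sketch of turning thumbs-up/down feedback
# into the (prompt, chosen, rejected) preference pairs used to train
# an RLHF reward model. Names here are hypothetical, not a real API.

from dataclasses import dataclass

@dataclass
class Feedback:
    prompt: str
    response: str
    thumbs_up: bool

def build_preference_pairs(feedback: list[Feedback]) -> list[tuple[str, str, str]]:
    """Pair each liked response with each disliked response to the same prompt.

    Each (prompt, chosen, rejected) triple is one training example for a
    reward model, which then scores new outputs so the base model can be
    nudged toward the preferred ones.
    """
    # Group responses by prompt, split into liked and disliked buckets.
    by_prompt: dict[str, dict[bool, list[str]]] = {}
    for fb in feedback:
        by_prompt.setdefault(fb.prompt, {True: [], False: []})[fb.thumbs_up].append(fb.response)

    pairs = []
    for prompt, groups in by_prompt.items():
        for chosen in groups[True]:
            for rejected in groups[False]:
                pairs.append((prompt, chosen, rejected))
    return pairs
```

The point of the sketch is only that human feedback never teaches the model anything new; it just ranks outputs the model could already produce, which is exactly the limitation discussed next.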

The limits of human feedback

The problem is that RLHF only works if the desired output already exists within the model's initial training data. What happens when you need the model to perform a task it was never originally trained on, like designing UI screens? Figma has recently announced a tool that can do just that.

Figma's solution is to draft its users as unwitting teachers for its AI features. By updating its terms to include automatic opt-in, Figma can soon parse user-created designs to train its generative AI tools. Beyond the obvious ethical and legal concerns, it's also important to think about data quality. When aggregating data at this scale, it's impossible to vet everything for quality. That lack of quality checks is how Google's AI ends up suggesting you put glue on your pizza because someone made a joke on Reddit a decade ago. It turns out the internet isn't always a great source of facts.

As Figma recently learned when it had to disable features that appeared to copy Apple's designs, the quality and sourcing of training data are paramount. Even though Figma has offered a plausible explanation of why this isn't a copyright violation in the technical sense, the output still gives you pause.

The designer's process is still key

Luckily, starting from reference material is a natural part of the creative process. Designers are always looking for inspiration and references before developing novel, problem-solving ideas. During the design process, they also vet solutions for legality and ethics before presenting them to clients.

While AI tools can mimic certain design tasks, they are far from replacing the invaluable role of the skilled designer. Sure, generative AI can provide early inspiration, but designers will still need to handle everything else: making the right decisions in context, building systems, and presenting work effectively. argo's own Jarrett Webb shared a great take on this over on our LinkedIn. Rather than a replacement, GenAI will likely serve as a power-up, enhancing and accelerating the design process.

As we continue to collectively teach and shape AI, designers have a critical role to play not just in wielding these new tools, but in thoughtfully guiding their development to align with our creative and ethical aspirations. The great AI drafting is still in its early stages, and we are all, in a sense, teachers shaping its path. We shouldn’t take this responsibility lightly.

About the Author

Guus is a principal designer at argodesign, directing the design of AI-enabled tools and experimenting with the latest technologies in design simulations. During his career he has worked across many business verticals, from inventing the future of public transport and tourism to designing technical software for Hollywood. His passion lies in crafting experiential prototypes that explore, inspire, and communicate experiences designed to delight and leave a lasting impression on people. He was one of the founders of Raft, a strategy and design consultancy in Amsterdam, and previously worked as an interaction designer at frog.