Time appropriated, not saved
The tide has turned significantly against generative AI tools in recent weeks, due in part to the shoddy results that have been cranked out and thrust upon us. Google pulled an Olympics ad promoting its Gemini assistant after the uproar over it, and Procreate recently announced it will never integrate gen AI tools. With school returning, there has been no shortage of consternation around AI's role in education. These are unsurprising developments, really. Any digital tool left to its own devices is guaranteed to spin off the rails. The entire point of technology is to help humans do more, better. Removing the human touch turns it into an exercise in automation.
This was all brought home for me personally when a recent project involving a microcontroller and ChatGPT shed light on the nuanced interplay between these advancements and their practical implementation. Three things became apparent over the course of my experiments: 1) AI affords a wealth of expanded capabilities (almost to a fault); 2) those capabilities sit alongside the inherent limitations and challenges of the tools themselves; and 3) human expertise plays an evolving role in maintaining a balance between the two.
Rapid prototyping on steroids
My task was to integrate a 32-bit microcontroller with a monochrome display to create a UI for an embedded product. The microcontroller, equipped with WiFi and Bluetooth, operates as a tiny, powerful computer that can be remote-controlled from a mobile web app or via serial commands. Since I initially had limited knowledge of these components, I used ChatGPT to quickly gain insights and identify the right tools and techniques. The AI tool's ability to provide visibility into cutting-edge products and align specific project requirements with appropriate hardware and software solutions not only saved me time but also exposed and contextualized a raft of new information that better informed the project.
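To make that setup concrete, here is a minimal sketch of the kind of firmware involved. It assumes an ESP32 board running the Arduino core, an SSD1306-style 128x64 I2C monochrome display driven by the Adafruit_GFX and Adafruit_SSD1306 libraries, and placeholder WiFi credentials; none of those specifics come from the actual product, they simply illustrate the display-plus-remote-control pattern described above.

```cpp
// Minimal illustrative sketch: ESP32 Arduino core, SSD1306 128x64 I2C display.
// WIFI_SSID/WIFI_PASS and the /msg endpoint are placeholders, not the real product.
#include <WiFi.h>
#include <WebServer.h>
#include <Wire.h>
#include <Adafruit_GFX.h>
#include <Adafruit_SSD1306.h>

const char* WIFI_SSID = "your-ssid";       // placeholder credentials
const char* WIFI_PASS = "your-password";

Adafruit_SSD1306 display(128, 64, &Wire, -1);  // 128x64 OLED, no reset pin
WebServer server(80);                          // simple HTTP server for the "web app" path

// Draw a single line of text on the monochrome display.
void showMessage(const String& msg) {
  display.clearDisplay();
  display.setCursor(0, 0);
  display.println(msg);
  display.display();
}

// HTTP handler: GET /msg?text=hello updates the screen remotely.
void handleMsg() {
  showMessage(server.arg("text"));
  server.send(200, "text/plain", "OK");
}

void setup() {
  Serial.begin(115200);
  Wire.begin();
  display.begin(SSD1306_SWITCHCAPVCC, 0x3C);   // common I2C address for these modules
  display.setTextSize(1);
  display.setTextColor(SSD1306_WHITE);

  WiFi.begin(WIFI_SSID, WIFI_PASS);
  while (WiFi.status() != WL_CONNECTED) delay(250);
  showMessage(WiFi.localIP().toString());      // show the address to hit from the web app

  server.on("/msg", handleMsg);
  server.begin();
}

void loop() {
  server.handleClient();                       // service remote (HTTP) commands
  if (Serial.available()) {                    // service serial commands
    showMessage(Serial.readStringUntil('\n'));
  }
}
```

Even a toy version like this touches WiFi provisioning, an HTTP endpoint, a display driver, and a serial protocol at once, which is exactly the kind of breadth where an AI assistant can orient you quickly.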
This ability to "shop globally" and access a vast array of resources through AI represents a paradigm shift in how engineers and developers approach problem-solving. The traditional reliance on slower, manual methods of finding components and solutions is giving way to AI-driven recommendations, streamlining the development process. However, this increased capability comes with a need for an appropriate level of rigor and understanding to fully harness these tools. Too often, real-world applications don't come down to a strict "this is better than that," but rather to an understanding of the nuanced tradeoffs between the choices.
Staying sane in a blizzard of information
While AI significantly accelerated the development process, I ran into several issues where ChatGPT's suggestions were incorrect or incomplete, particularly when dealing with specific variants of the microcontroller. For instance, the ESP32's open-source ecosystem means boards and modules built around it come from many different manufacturers, each with its own quirks. ChatGPT couldn't wrap its 'brain' around this and kept providing the wrong information, often falling back on the more generalized versions. This underscores the importance of human expertise in the process: in practice, it forces closer inspection of the details behind a problem.
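As one hypothetical illustration of why those generalized answers break down, something as basic as the I2C pin assignment can differ between a generic devkit and a vendor-specific module. The USE_VENDOR_BOARD flag and the GPIO numbers below are illustrative assumptions, not taken from any particular board's documentation.

```cpp
// Illustrative only: exact GPIO numbers vary by ESP32 board variant.
#include <Wire.h>

#if defined(USE_VENDOR_BOARD)     // hypothetical build flag for a vendor-specific module
  constexpr int PIN_SDA = 4;      // example of a rerouted I2C bus on a vendor board
  constexpr int PIN_SCL = 5;
#else
  constexpr int PIN_SDA = 21;     // common defaults on many generic ESP32 devkits
  constexpr int PIN_SCL = 22;
#endif

void setup() {
  // The ESP32 Arduino core lets you pass explicit pins to Wire.begin(); this is
  // exactly the board-level detail a generalized answer tends to gloss over.
  Wire.begin(PIN_SDA, PIN_SCL);
}

void loop() {}
```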
I also encountered a critical issue with GPT's handling of memory and its tendency to generate excessive amounts of information. It has a propensity to flood users with data, often without the necessary context or accuracy, creating a "needle in a haystack" problem. This is exacerbated by AI's inability to discern subtlety and context, which forces aggressive and exhaustive interrogation of AI-generated outputs.
For example, on this project GPT almost always responded with exhaustive blocks of code, whether I needed them or not. To mitigate this, I instructed it to provide code only when explicitly requested, and even then it quickly reverted to the old behavior. The balance between leveraging AI for efficiency and managing the resulting complexity is a delicate one that requires careful consideration. We are in the infant stages of treating memory as a contextually nuanced feature, and the current ChatGPT interface for memory management is woefully inadequate.