The first time our AI business advisor recommended Viagra to solve a cash-flow problem, I should have known we were in trouble. At the time, it just felt like part of the fun. This was 2019. Before COVID, before “AI” meant ChatGPT to everyone. Back then, TikTok was still a teen fad, WeWork had just imploded, and the hottest tech IPO was Peloton. Large language models were a fringe research curiosity, not the center of the tech universe.
I’d been experimenting with GPT-2 for a few months with my cofounder Joel. Back then, accessing a language model meant wandering through flat, utilitarian websites clearly built for researchers, not companies. No marketing polish, no onboarding flows. Just sparse pages where you could type in a prompt and watch it conjure words no one had written before.
We’d fine-tuned GPT-2, adjusting the base model so it picked up on the patterns in our dataset, and built a command-line interface. With help from a developer friend named Zach, we turned it into something a non-technical person could use.
We were building an AI business advisor, something to supplement the startup accelerators and founder education that I teach for a living. Our idea was to fine-tune GPT-2 on business books.
We had to dig around sketchy websites, hunting for .txt file versions of business books for the model to learn from. The works we found weren’t always high quality—and somehow, far too many were maritime economics textbooks. That’s how you could ask about product–market fit and get a confident answer involving Norwegian shipping lanes. And occasionally, for reasons we never uncovered, it would suggest Viagra as a business strategy.
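For the curious, the sketch below shows roughly what that kind of workflow looked like with the gpt-2-simple library, one common way to fine-tune GPT-2 back then. It’s an illustration, not our actual code; the library choice, corpus file name, and training steps are placeholders.

```python
# Illustrative GPT-2 fine-tuning with the gpt-2-simple library (not our original code).
import gpt_2_simple as gpt2

gpt2.download_gpt2(model_name="124M")    # fetch the smallest base GPT-2 checkpoint

sess = gpt2.start_tf_sess()
gpt2.finetune(
    sess,
    dataset="business_books.txt",        # one big plain-text file stitched together from the books
    model_name="124M",
    steps=1000,                          # enough passes for the model to pick up the corpus's patterns
)

# Ask the fine-tuned model for business advice
gpt2.generate(sess, prefix="How should I price my SaaS product?", length=100)
```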
But when it worked, it was fascinating. Ask it “How should I price my SaaS product?” and you might get something like “Consider the lighthouse keeper who charges by the storm” or “Think of pricing like layers of an onion—each customer peels back what they can afford.” The suggestions were abstract, metaphorical, often nonsensical—but occasionally they’d spark a genuine line of thought we hadn’t considered.
When I pitched our “AI business advisor” to Janet, who runs innovation programs for pharma leaders at the University of Texas, she was polite but unconvinced. Like most people in 2019, her mental model for AI was “Google, but worse.” Explaining that this wasn’t searching the internet, but generating text from patterns in data, didn’t help. Her reaction should have been a warning sign. Joel and I ignored it. We were hooked on the tech’s potential.
Users Will Always Surprise You
I’d been doing one-on-one demos, showing the prototype to maybe 20 people, but I wanted to see if I could get feedback at scale. So I convinced the executive education branch of the McCombs School of Business at the University of Texas to let me run a session on the history of AI. This took some doing. In 2019, AI hadn’t hit public consciousness yet, and most people thought it was science fiction. The hook that finally worked: their executive MBAs would get hands-on time with my GPT-2 business advisor tool.
Because our system didn’t have a web interface, we had to run the model locally and project it onto a big screen. At one point, I noticed a participant typing something in. I glanced over his shoulder—“How do I get a girlfriend?”—and nudged the student next to me. "Is he seriously using this to try to get a girlfriend?" We both smirked. Then I walked over and quietly told him, "Hey, everyone can see that."
It was funny, but it also planted a lesson I still use when I work with organizations today: people will always find ways to use (or misuse) your AI system in ways you didn’t anticipate. In 2019, those surprises were harmless. In 2025, they can mean compliance failures, reputational damage, or data leaks. Either way, you have to design with the unexpected in mind.
The Steamrolling
Then came the end of November 2022. OpenAI released ChatGPT.
I typed in my first question. The answer came back faster, more coherent, and more relevant than anything our product could produce. My eyes widened. This was the best AI product I’d ever used. I sat back for a second, exhaled. And then my stomach sank. The race was over.
It was also a good reminder of something I’d learned the hard way: the models matter more than anything. You can improve prompts, datasets, and interfaces, but if someone else has a model that’s vastly more capable, your product is at risk. And model risk isn’t just a factor in building AI products. It’s also a risk in using them. If you depend on a model and it changes, disappears, or degrades, your entire workflow can break overnight.
We hadn’t raised VC money or quit our day jobs, so the wipeout wasn’t catastrophic. But seeing that leap in capability up close made it crystal clear what separates a fun prototype from something the whole world actually wants to use.
The Edge I Kept
Even though the product died, the years in the trenches left me with an edge I still use when designing and leading AI education and innovation programs.
I knew how to:
Context-load models. Give them thorough, relevant information about the problem you’re solving so the outputs are specific and applicable. I rarely type long prompts anymore; instead, I hit the microphone and dictate the context, the challenge, and what I’m asking. Speaking is much faster for giving the detailed background that makes AI responses actually useful. SITO (“Speech In, Text Out”) is the best way to use AI.
Use models to generate multiple approaches, not just single answers. Instead of asking "What’s the best marketing strategy?" I learned to ask "What are five different marketing strategies I could try?" Then I’d pick the most promising options rather than accepting whatever the AI suggested first. (There’s a short sketch of this pattern after the list.)
Separate the wheat from the chaff. Quickly discern which AI outputs are actually valuable.
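To make the first two habits concrete, here’s a minimal sketch with the current OpenAI Python SDK: load the prompt with context, then ask for several options instead of one answer. The company details and model name are hypothetical stand-ins, not a record of any real engagement.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Context-load: give the model the background a human advisor would need.
context = (
    "I run a bootstrapped, 10-person SaaS company selling scheduling software "
    "to dental offices. Monthly churn is 4% and the average plan is $99/month."
)

# Ask for multiple approaches, not a single 'best' answer.
question = "Give me five different marketing strategies I could try, with the main risk of each."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": f"{context}\n\n{question}"}],
)
print(response.choices[0].message.content)
```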
That expertise helped me win a $50,000 government contract. While other consultants were still figuring out what prompts were, I already knew how to structure complex requests, anticipate where AI might go wrong, and design safeguards for unpredictable outputs. The same skills I’d developed wrestling with our buggy GPT-2 system—context-loading, output evaluation, user behavior prediction—turned out to be exactly what government agencies needed as they started adopting AI tools.
It also led to other opportunities: running the U.S.–Mexico AI Education Initiative, facilitating AI Innovation Week at IPADE (Latin America’s most prestigious business school), and a reputation as someone who earned their AI credibility by building with the technology before it was fashionable.
Learning AI When It Was Broken
That Viagra recommendation taught me the most important lesson about AI: the technology will always surprise you. The key isn’t building perfect systems. It’s learning to work with imperfect ones before everyone else figures it out. I got that head start in 2019, when AI was still weird, wonderful, and broken. Now that it’s everywhere, that early experience with the messy stuff is what gives me an edge.
Thanks to early readers: Mak Rahman, Joe Ballou, John Sherwin