Three months ago, I left my role as CTO to explore new opportunities and rekindle my passion for hands-on problem-solving. Over the past 15 years, I’ve led technology teams and scaled companies, but stepping back to build from scratch—writing code, testing systems, and addressing real-world challenges—has been deeply rewarding. This journey has brought me to an exciting intersection of AI, data, and automation, a convergence that I believe will transform industries much like cloud computing has done over the past decade. Look around, and it’s clear: AI is reshaping everything.
With over 90% of corporate data in unstructured formats like documents, emails, and web pages, I set out to create a system that could turn this chaotic information into structured, actionable data accessible via API, unlocking new value for companies in any application they envision.
The AI-driven system I built does exactly that, automatically converting unstructured source data into structured, queryable relational databases with an accessible, fully documented API. Using a robust Retrieval-Augmented Generation (RAG) pipeline paired with agentic AI, this system extracts, organizes, and refines information from tangled, unstructured sources, applying sophisticated semantic analysis and quality checks, then establishing precise data relationships before storing everything in a PostgreSQL database. This system is designed to fight data entropy, transforming messy data into organized, insightful structures—seamlessly and autonomously. This goes far beyond simply prompting an LLM.
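To make the shape of that pipeline concrete, here is a minimal sketch of the final extraction-and-load steps. Everything here is illustrative: `CampRecord`, the `camps` table, and the rule-based `extract_record` are hypothetical stand-ins (the real system delegates extraction to an LLM guided by retrieved context, and loads via a database driver rather than raw SQL strings).

```python
from dataclasses import dataclass

@dataclass
class CampRecord:
    name: str
    city: str
    age_range: str

def extract_record(raw_text: str) -> CampRecord:
    # Stand-in for the LLM extraction step: in the real pipeline an LLM,
    # guided by retrieved schema context, emits a structured record.
    fields = dict(part.split(": ", 1) for part in raw_text.split("; "))
    return CampRecord(fields["name"], fields["city"], fields["age_range"])

def to_insert_sql(rec: CampRecord) -> str:
    # Load step, shown as a plain SQL string for clarity; production code
    # would use parameterized queries through a driver such as psycopg.
    return (f"INSERT INTO camps (name, city, age_range) "
            f"VALUES ('{rec.name}', '{rec.city}', '{rec.age_range}');")

raw = "name: Camp Pine; city: Austin; age_range: 6-12"
print(to_insert_sql(extract_record(raw)))
# INSERT INTO camps (name, city, age_range) VALUES ('Camp Pine', 'Austin', '6-12');
```

The point is the shape, not the parsing rule: unstructured text in, typed record out, relational row loaded.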
A note: I experimented with a range of modern tools to tackle this problem. Some exceeded my expectations; others fell short. I’ll share a follow-up article soon with insights on each tool I tested.
Imagine starting with just a list of URLs or a collection of documents—PDFs, presentations, web pages—and ending up with a fully organized, query-ready database and API. That’s the transformation my system enables. Designed for versatility, it dynamically interprets data models, allowing it to adapt seamlessly to different industries and applications with minimal manual setup. I'm already collaborating with companies to tailor this system to their unique needs.
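One way to picture "dynamically interprets data models": derive the database schema from whatever domain model you hand the system, rather than hard-coding it. The sketch below is an assumption about the approach, not the actual implementation; the `Camp` dataclass and the type mapping are hypothetical.

```python
from dataclasses import dataclass, fields

# Hypothetical domain model; the system introspects whatever model it is given.
@dataclass
class Camp:
    name: str
    city: str
    capacity: int

# Assumed mapping from Python annotations to SQL column types.
SQL_TYPES = {str: "TEXT", int: "INTEGER", float: "REAL"}

def ddl_for(model) -> str:
    # Derive a CREATE TABLE statement from the model's fields, so swapping in
    # a different dataclass retargets the pipeline with no manual setup.
    cols = ", ".join(f"{f.name} {SQL_TYPES[f.type]}" for f in fields(model))
    return f"CREATE TABLE {model.__name__.lower()} ({cols});"

print(ddl_for(Camp))
# CREATE TABLE camp (name TEXT, city TEXT, capacity INTEGER);
```

Swap `Camp` for a `Property` or `Supplier` model and the same machinery serves a different industry, which is what makes the minimal-setup claim plausible.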
Building this solution meant tackling fascinating technical challenges: engineering a recursive data model walker capable of handling cycles in object graphs, developing an introspective model analyzer to support arbitrary data structures, and incorporating self- and adaptive-RAG with memory for continuous, automated quality assurance. Each innovation is a step toward a more intelligent approach to converting unstructured data into structured, actionable insights.
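The cycle-handling problem in the model walker is worth a small illustration. A naive recursive traversal loops forever when object A references B and B references A back; tracking visited node identities breaks the cycle. This is a generic sketch of the technique, not the system's actual walker:

```python
def walk(obj, visit, _seen=None):
    """Recursively visit every node in an object graph, tolerating cycles."""
    if _seen is None:
        _seen = set()
    if id(obj) in _seen:   # already visited: we've hit a cycle, stop here
        return
    _seen.add(id(obj))
    visit(obj)
    # Recurse into containers; a real model walker would also introspect
    # domain objects (e.g. via dataclasses.fields) to find child relations.
    if isinstance(obj, dict):
        for v in obj.values():
            walk(v, visit, _seen)
    elif isinstance(obj, (list, tuple, set)):
        for v in obj:
            walk(v, visit, _seen)

# Build a graph with a cycle: the camp lists a session, and the session
# points back at its camp.
camp = {"name": "Camp Pine"}
session = {"week": 1, "camp": camp}
camp["sessions"] = [session]

names = []
walk(camp, lambda node: names.append(type(node).__name__))
print(names)  # ['dict', 'str', 'list', 'dict', 'int']
```

Without the `_seen` set, the `camp -> session -> camp` back-reference would recurse until the stack overflows.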
I believe a new paradigm is emerging. It will be a step beyond data extraction, transformation, and loading (ETL). Something like ET-I-L, where the "I" is an intelligence layer with semantic understanding of the data: technically part of the transformation step, but far more valuable than mere transformation. Maybe EITL, for Extraction, Intelligent Transformation, Loading? Anyhow, I digress.
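As a rough illustration of that three-stage idea, here is a toy EITL pipeline with each stage as a composable function. All of it is hypothetical, and the intelligence layer is stubbed with a trivial rule where the real system would invoke an LLM:

```python
def extract(source: str) -> list[str]:
    # E: pull raw fragments out of an unstructured source (here, lines of text).
    return [line.strip() for line in source.splitlines() if line.strip()]

def intelligent_transform(fragments: list[str]) -> list[dict]:
    # I: the intelligence layer -- semantic interpretation, normalization,
    # quality checks. Stubbed with a simple rule for this sketch.
    records = []
    for frag in fragments:
        key, _, value = frag.partition("=")
        if value:  # quality gate: drop fragments that don't parse
            records.append({"field": key.strip(), "value": value.strip()})
    return records

def load(records: list[dict]) -> int:
    # L: persist the records (stubbed); returns the reported row count.
    return len(records)

source = "camp = Camp Pine\ngarbage line\nages = 6-12"
print(load(intelligent_transform(extract(source))))  # 2
```

The interesting engineering lives in the middle stage: it is the only one that needs semantic understanding, and the only one that can reject bad data before it reaches the database.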
Introducing: Happy Camper
Happy Camper's Landing Page.
Since most folks think looking at database tables is boring, I needed a way to showcase the system. So I built a proof of concept called Happy Camper. (Please note: Happy Camper runs on budget servers, so it might be slow.) It’s a marketplace, built on data created by my system, that helps parents find and manage children’s camps—something millions of families struggle with every year. This is where it gets really interesting.
After I defined the data model for youth camps, the system built the most comprehensive database of children’s camps in Central Texas in just three days, at a total cost of $10.86. Along the way it processed more than 80,000 pieces of semantic content—all for under $11. (I still laugh when I read that.) In the past, this would have cost companies thousands, if not hundreds of thousands, of dollars. I see huge economies of scale being unlocked here.
Screenshot of search in Happy Camper.
Happy Camper makes it effortless for parents to discover, plan, and register for camps, turning what’s usually a stressful experience into a seamless one. The AI system’s adaptability means it can handle inconsistencies between how different camps describe their offerings. No matter how these camps decide to present themselves—whether they change terms or restructure their websites—my system figures it out.
Beyond Camps: The Bigger Picture
Happy Camper is just one compelling application of a broader technology with immense versatility. From creating niche B2B marketplaces to managing property data or building comprehensive, structured databases from scattered information sources, the potential reaches far beyond camps. This toolset I’ve developed automates the transformation of unstructured data into structured, accessible databases—turning complexity into clarity. I’m collaborating with organizations that recognize the power of these capabilities to bring smarter data solutions to industries where they’re needed most.
Let’s Connect
If you’re interested in how adaptive, agentic AI can transform data use in your industry or if you’re passionate about expanding Happy Camper’s impact, I’d love to connect. I’m also open to fractional CTO roles and consulting engagements where I can help companies drive their technology strategy forward.
Together, let’s explore how this technology can open new avenues for innovation. Feel free to reach out directly or connect with me here!