AI Makerspace Cohort 9 (Jan–Mar 2026)

From January through March 2026, I participated in the AI Makerspace “Certified AI Engineering” Bootcamp (Cohort 9)—a 10-week, hands-on program focused on actually building with modern AI systems.

The format was consistent and manageable:
2 nights a week, 2 hours per session, plus homework ranging from 2 to 6 hours per assignment.

But this wasn’t a passive learning experience—it was fast-paced, immersive, and very much about shipping real things.

Leadership & Teaching

A big part of what made the experience strong was the instruction.

Dr. Greg and The Wiz were excellent class leaders—high energy, deeply knowledgeable, and very grounded in real-world application rather than hype.

Dr. Greg’s meme game is very strong; you’ve been warned.

Three-quarters of the way through the course, The Wiz transitioned out and a new presenter, Can, stepped in. The handoff was handled well, though it was still a bit of a loss given how awesome The Wiz is.

What I Learned

The program covered a wide range of modern AI engineering concepts, with a strong emphasis on practical implementation:

  • RAG (Retrieval-Augmented Generation) and vector databases

  • Building agents from the ground up by defining the state machine (DAG)

  • Multi-agent system design

  • Serving open-source models locally or in the cloud

  • Deep learning agents

  • Guardrails and evaluation frameworks (Evals)
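The agents-as-state-machines idea can be sketched in plain Python. This is my own illustration, not the course's actual code—no framework involved, and the node names (`plan`, `act`, `finish`) are made up—but it shows the core pattern that tools like LangGraph formalize: nodes are functions that update shared state, and each node's return value routes to the next node.

```python
# Minimal agent-as-a-graph sketch: nodes are functions that mutate state,
# and each returns the name of the next node (or None to stop).

def plan(state):
    state["steps"] = ["lookup", "answer"]
    return "act"

def act(state):
    step = state["steps"].pop(0)
    state.setdefault("done", []).append(step)
    return "act" if state["steps"] else "finish"

def finish(state):
    state["answer"] = "done: " + ", ".join(state["done"])
    return None  # terminal node

NODES = {"plan": plan, "act": act, "finish": finish}

def run_graph(state, start="plan"):
    node = start
    while node is not None:
        node = NODES[node](state)
    return state

result = run_graph({})
print(result["answer"])  # done: lookup, answer
```

Real agent graphs add conditional edges, retries, and LLM calls inside the nodes, but the execution loop is essentially this.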

One of the biggest takeaways:

Building AI systems is much more about architecture, evaluation, and iteration than just calling a model.

Tools & Stack

We got hands-on experience with a modern AI tooling ecosystem:

  • LangChain / LangGraph (agent orchestration and workflows)

  • LangSmith (tracing, debugging, and evals)

  • Qdrant (vector database)

  • Fireworks.ai and Ollama (model serving)

  • Guardrails AI (structured outputs and safety constraints)

Every session included Python notebooks, which made it easy to follow along and experiment in real time.

Notably, the notebooks actually worked with minimal issues, which is not trivial.
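As a rough sketch of what the vector-database piece does, here is a pure-Python stand-in for Qdrant—the documents and embedding vectors are made up, and a real setup would use an embedding model, but the retrieval step RAG builds on is just nearest-neighbor search:

```python
import math

# Toy vector store: "embed" documents as fixed vectors, then retrieve
# the top-k most similar to a query vector by cosine similarity.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

docs = {
    "rag":    [0.9, 0.1, 0.0],
    "agents": [0.1, 0.8, 0.2],
    "evals":  [0.0, 0.2, 0.9],
}

def top_k(query_vec, k=1):
    scored = sorted(docs.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [name for name, _ in scored[:k]]

print(top_k([0.85, 0.15, 0.05]))  # ['rag']
```

Qdrant does this at scale with indexing (HNSW) instead of a brute-force sort, but conceptually the query path is the same.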

Format & Workload

The course was:

  • Very hands-on

  • Fast-paced and engaging

  • Structured around building, not just learning

Each week included:

  • Guided notebook work

  • Practical assignments (2–6 hours each)

  • Iteration on concepts introduced in class

There were also two larger projects, culminating in a final presentation.
(Public Demo video coming soon.)

Community & Collaboration

One of the underrated strengths of the program was the community.

  • Students were placed into “journey groups” for breakout sessions

  • Each group had a leader (mine was Mark—great discussions)

  • Active Discord community with both current and past cohorts

This created a strong environment for:

  • Sharing ideas

  • Debugging together

  • Learning from different perspectives

  • People posting random stuff they were doing with AI

What Stood Out

A few things the program did particularly well:

  • Content evolves continuously — material changed as the AI landscape changed, and the presenters addressed it in lecture as we went (including a Wednesday non-class lecture series covering OpenClaw and other emerging topics)

  • Strong sequencing — concepts build logically over time

  • Emphasis on shipping — you’re encouraged to build, demo, and share your work

  • High energy — it never felt stale or overly academic

That last point is important:

Many engineers don’t ship enough. This program pushes you to.

Stuff I’d love to see next time

1. Pre-Read Materials - Earlier

It would be helpful to have:

  • Assignments

  • Notebooks

  • Supporting materials

available further ahead of class, to allow for better preparation and deeper understanding during lectures. I know they have reasons for the timing; I just like to know what's coming walking in, with my notebook already rockin'.

2. Cohort Retention

A number of participants dropped out over the course. Understandable—life happens—but stick with it, man. It did impact:

  • Journey group continuity

  • Depth of collaboration

Not an easy problem to solve, but worth noting.

3. Materials Organization

At times, navigating materials felt a bit clunky:

  • GitHub pulls

  • Finding the correct folders/notebooks

It wasn’t a blocker, but a more streamlined structure would improve the experience. There were quicklinks pages and README files, but I still found myself scrambling through old browser tabs before each session.

Final Thoughts

Overall, I highly recommend the AI Makerspace bootcamp.

It strikes a strong balance between:

  • Theory and practice

  • Structure and flexibility

  • Individual work and collaboration

We had to present a final project to our peers, and we spent the last two sessions building it, doing AV checks, and delivering. At first this was a total eye-roll—I wanted to get back to the tech—but it turned out to be really valuable. Presenting your work, getting organized, slowing down, and thinking about business outcomes is absolutely important. I’ve presented and given talks before, but this turned out to be sneaky value.

Another piece of hidden value: the class builds the habit of actually sharing about AI systems—not just building them.

If you’re serious about AI engineering—especially in areas like RAG, agents, and evaluation—this is a high-value program.

Derek

Startup CTO, Software Hacker
