BLAKE WERLINGER

Self-taught since 2015. Early AutoGPT contributor. Building with C#, Python, and AI agents.

Selected Work

Projects


AutoGPT

Contributed to AutoGPT, a 180k+ starred project that was one of the first to use LLMs for agentic workflows. Worked directly with creator Toran Bruce Richards on agent architecture and features.

  • Video demo featured in AutoGPT's official README
  • 180k+ GitHub stars
  • Worked on a recursive context compression system
Python
OpenAI API
Selenium
Shipped

Imagine App

An agentic AI assistant for Best Buy product search. Built a complete tool-calling architecture with a fluent API client and OpenRouter OAuth integration.

  • Full agentic loop with tool calling
  • 600+ LLM models via OpenRouter
  • Fluent builder API design
Flutter
Dart
OpenRouter
Best Buy API
Shipped

Godot GOAP Demo

A high-performance goal-oriented action planner in Godot, with a custom ECS, a sprite renderer that bypasses the scene tree, and backward dependency pruning.

  • 76,500 plans/sec throughput
  • Custom ECS architecture
C# / .NET
Godot
Shipped

Rebang

A fast, modern bang redirect service combining DuckDuckGo and Kagi bangs into one optimized database that supports custom bang creation.

  • 35% smaller optimized database
  • DDG + Kagi bangs combined
TypeScript
React
Tailwind CSS
Vite

More projects on GitHub

Self-taught
AI-obsessed

Developer Journey

Scratch (2015)

I got my start with Scratch around age 11. Nothing fancy, just dragging blocks around trying to make a clone of "Cookie Clicker" that didn't suck. I spent way too many hours on it, but that's where I learned programming fundamentals: variables, loops, and the satisfaction of "Making a Game".

Unity + C# (2018+)

By 14 (2018), I had moved to Unity and C#. Over the next few years, I built 100+ prototypes exploring A* pathfinding, procedural terrain generation, data serialization and saving, a whole host of software design patterns, and both 2D and 3D systems. Most never shipped - they were learning exercises. I'd hit an interesting technical problem, solve it, then move on. Or I'd have a vision for the next GTA V and burn myself out after a few weeks of building.

Eventually, I put those skills into game modding. I used MelonLoader to mod IL2CPP games and Harmony to patch RimWorld and Bloons TD 6 at runtime. My Banana Farmer Tower mod for Bloons was featured by ISAB in a video with 1.4M views and downloaded by 180k players - the first big project I properly shipped and got recognized for. I also made some popular mods for Rivals of Aether (100k downloads), though they were sillier and less technically interesting.
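If you haven't seen Harmony before, the core trick is runtime method patching: you point an attribute at a compiled game method and supply a prefix or postfix that runs around the original. A minimal sketch of the pattern - the TowerBehavior class and the mod ID here are stand-ins for illustration, not actual game or mod code:

```csharp
using HarmonyLib;

// Stand-in for a decompiled game class; real patch targets come from
// the game's assemblies, not your own code.
public class TowerBehavior
{
    public virtual int GetIncome() => 100;
}

[HarmonyPatch(typeof(TowerBehavior), nameof(TowerBehavior.GetIncome))]
public static class IncomePatch
{
    // Harmony calls Postfix after the original method; writing to
    // __result replaces the return value the game sees.
    public static void Postfix(ref int __result) => __result *= 2;
}

public static class ModEntry
{
    public static void Init()
    {
        // Scans this assembly for [HarmonyPatch] classes and applies them.
        new Harmony("com.example.bananafarmer").PatchAll();
    }
}
```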

Python (2021)

In my senior year of high school, I spent around two months teaching myself Python to build a Jarvis-style voice assistant. I used an existing PyTorch library for intent recognition, trained it on custom data, and learned a lot about ML training while trying to make it consistent.

This was before LLMs were small or good enough to run locally, so it was really just a bit of ML to infer intent and pair a request with the closest matching command. It actually did pretty well about 60% of the time. It was a relatively straightforward project, but I learned Python, NLP fundamentals, and how frustrating ML training can be when your dataset is tiny and your laptop has 4 cores and no GPU.

It was my first time working outside of C# and Unity, and it taught me that I could pick up new languages and ecosystems when I had a concrete goal.
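The matching step itself is simple to sketch. The real assistant used a trained PyTorch classifier; the toy version below (in C# to match the other sketches on this page, with made-up commands) just scores each known command phrase by word overlap and picks the best:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Toy version of "pair a request with the closest matching command".
// The real assistant used a trained PyTorch intent classifier; this
// stand-in scores commands by bag-of-words cosine similarity.
public static class IntentMatcher
{
    static readonly Dictionary<string, string> Commands = new()
    {
        ["play music"]      = "spotify.play",   // hypothetical command IDs
        ["what time is it"] = "clock.now",
        ["set a timer"]     = "clock.timer",
    };

    static double Similarity(string a, string b)
    {
        var wordsA = a.ToLower().Split(' ').ToHashSet();
        var wordsB = b.ToLower().Split(' ').ToHashSet();
        int overlap = wordsA.Intersect(wordsB).Count();
        return overlap / Math.Sqrt(wordsA.Count * wordsB.Count);
    }

    // Returns the command whose trigger phrase best matches the request.
    public static string Match(string request) =>
        Commands.OrderByDescending(kv => Similarity(request, kv.Key))
                .First().Value;
}

// IntentMatcher.Match("play some music");  // -> "spotify.play"
```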

Performance obsession (Godot GOAP)

I genuinely enjoy profiling. Finding a hot path, fixing it, and watching the metrics drop is deeply satisfying. It's puzzle-solving with immediate, measurable feedback.

With my Godot GOAP (Goal-Oriented Action Planning) system, I spent two months obsessively optimizing the AI planner for fun. Nobody asked me to. I just couldn't let it be slow. By implementing a two-stage approach with backward dependency analysis, I pruned up to 70% of the search space before even starting the A* search.
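The idea behind that pruning pass, in simplified form: walk backward from the goal, keep only actions whose effects can satisfy a condition that's still outstanding, and fold those actions' preconditions into the outstanding set until nothing changes. Anything left over can never contribute to the goal, so it never enters A*. A sketch of that fixed-point pass (simplified types; the production planner's internals are far leaner than strings and HashSets):

```csharp
using System.Collections.Generic;
using System.Linq;

// Simplified backward dependency pruning for a GOAP planner.
public record PlanAction(string Name, string[] Preconditions, string[] Effects);

public static class Pruner
{
    // Keep an action only if one of its effects satisfies a condition
    // we still need; its own preconditions then become needed too.
    // Iterate to a fixed point, then hand only the survivors to A*.
    public static List<PlanAction> Prune(
        IReadOnlyList<PlanAction> actions, string goal)
    {
        var needed = new HashSet<string> { goal };
        var kept = new List<PlanAction>();
        bool changed = true;
        while (changed)
        {
            changed = false;
            foreach (var action in actions.Except(kept))
            {
                if (action.Effects.Any(needed.Contains))
                {
                    kept.Add(action);
                    foreach (var pre in action.Preconditions)
                        needed.Add(pre);
                    changed = true;
                }
            }
        }
        return kept;
    }
}
```

This regression pass is the first of the two stages; A* then searches over only the surviving actions.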

Combined with zero-allocation patterns and cached state transitions, the results were dramatic: 70% reduction in planning time, 139.6% increase in throughput, and 10,000 concurrent agents each making intelligent decisions in just 0.020ms. That's the kind of performance that keeps rendering and physics budgets healthy.

The graphs compare the Initial Feature-Complete Baseline and the Production-Optimized Head. This wasn't about over-engineering; it was about respecting the performance budget to ensure complex AI never steals from the rendering or physics thread.

AI Obsession

Early curiosity (pre-ChatGPT)

I've been obsessed with AI since before it was mainstream. Back in high school, I was experimenting with GPT-3 Davinci through the API, trying to figure out how to integrate language models into my projects. I built that voice assistant in Python because I wanted AI everywhere, even when the tech wasn't quite ready.

When ChatGPT dropped in late 2022, I was stunned. The jump in capability was massive. I immediately dove in and spent months learning prompt engineering, tokenization, context window management, and all the logistics of working with LLMs. I was thrilled to finally have access to models powerful enough to build the kinds of systems I'd been imagining.

AutoGPT contributor (Apr 2023)

In April 2023, I contributed to AutoGPT during its explosive early growth. This was before function calling, before thinking models. GPT-4 had to "think" by outputting structured JSON tool calls that the system would parse and execute.

I built a recursive summarization system to compress long documents into the 16k token context window, and I added Selenium-based web search. The models were expensive and not very smart, so it took careful trial and error to avoid burning $20 on an agent stuck in an infinite loop.
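The shape of that summarizer is easy to show, even if the prompt tuning wasn't: split the document into chunks that fit the window, summarize each chunk, then recurse on the joined summaries until the whole thing fits. This is not the actual AutoGPT code (which was Python); it's a C# sketch of the idea with the LLM call abstracted behind a delegate:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Sketch of recursive context compression. Assumes each summary is
// shorter than its input; otherwise the recursion never terminates.
public static class Compressor
{
    public static string Compress(
        string text, int maxTokens,
        Func<string, int> countTokens,     // tokenizer of your choice
        Func<string, string> summarize)    // one LLM summarization call
    {
        if (countTokens(text) <= maxTokens)
            return text;

        // Summarize each chunk, then recurse on the joined summaries.
        var chunks = SplitIntoChunks(text, maxTokens, countTokens);
        var joined = string.Join("\n", chunks.Select(summarize));
        return Compress(joined, maxTokens, countTokens, summarize);
    }

    // Greedily pack paragraphs into chunks that fit the window.
    static IEnumerable<string> SplitIntoChunks(
        string text, int maxTokens, Func<string, int> countTokens)
    {
        var current = new List<string>();
        foreach (var para in text.Split("\n\n"))
        {
            current.Add(para);
            if (current.Count > 1 &&
                countTokens(string.Join("\n\n", current)) > maxTokens)
            {
                current.RemoveAt(current.Count - 1);
                yield return string.Join("\n\n", current);
                current = new List<string> { para };
            }
        }
        if (current.Count > 0)
            yield return string.Join("\n\n", current);
    }
}
```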

I got deep into the codebase and saw early versions of what's now the standard agent loop: think, justify, output JSON, execute, observe. I sat in meetings with other contributors and the project creator, Toran Bruce Richards, contributing ideas about the future of AI agents. It was a defining moment. I got to help shape a project that now has 180k stars on GitHub.
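Stripped of everything else, that loop looks like the skeleton below. The JSON shape and the "finish" command are illustrative, not AutoGPT's exact schema:

```csharp
using System;
using System.Text.Json;

// Skeleton of the pre-function-calling agent loop: prompt the model
// to reply with JSON like
//   { "thoughts": "...", "command": { "name": "...", "args": {...} } }
// then parse it, run the tool, and feed the result back.
public static class AgentLoop
{
    public static void Run(
        Func<string, string> llm,                      // observation -> raw model reply
        Func<string, JsonElement, string> executeTool) // (name, args) -> result
    {
        string observation = "Begin.";
        while (true)
        {
            // Think: the model justifies its plan and picks one command.
            using var doc = JsonDocument.Parse(llm(observation));
            var command = doc.RootElement.GetProperty("command");
            string name = command.GetProperty("name").GetString()!;
            if (name == "finish") break;

            // Execute and observe: the tool result becomes the next
            // prompt's observation.
            observation = executeTool(name, command.GetProperty("args"));
        }
    }
}
```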

Below is a demo video I made at the time.

Featured Achievement

My demo video was featured in AutoGPT's official README for several months during development, despite the rough audio from my laptop mic.

AutoGPT ended up becoming a landmark project, and it's now one of the most starred AI agent repositories on GitHub (~180k stars as of late 2025). My contributions weren't massive in terms of lines of code, but being in those early conversations about the future of AI agents shaped how I think about this space. I'm still genuinely passionate about agentic systems like AutoGPT, Claude Code, Cursor, and other tools that are pushing the boundaries of what AI can do.

AI in my daily workflow

I use AI extensively in my development workflow. Cursor, Claude Code, and Windsurf are daily tools.

I'm upfront about this because it's the reality of modern development. I'm not a developer who just prompts until something works: I make sure I can defend every choice and every decision in my codebase, and I know why it's structured the way it is. I only "vibe code" private tools for myself; anything users touch makes me paranoid about letting AI drive. I've been writing code as a hobby since 2015, long before AI was useful, and I use it as an accelerator and a reviewer, not a replacement for understanding and purposeful design.

The upside: AI has turned every project into a learning opportunity. If I encounter a new library, a new concept, or an obscure problem I don't understand, I use AI to explain the underlying principles and show me examples. It has fundamentally shortened my time to learn any new technology. It's "search Stack Overflow" on steroids.

My personal favorite is Cursor: it's still a fully featured IDE with all the tools I need to build anything I want, the pricing is reasonable for a professional tool, and the agent is well designed and relatively bug-free. I've also used Windsurf, Claude Code, Gemini CLI, and OpenAI Codex extensively, but I keep coming back to Cursor.

What I Work With

Specialty

AI Agent Systems

Primary

C# / .NET

Secondary

Python, TypeScript

Philosophy

Iterate to Innovate