AI Insights
Tools, learnings, and insights about building products using AI workflows and tools
On AI Insights, I share practical tools, learnings, and insights from building products using AI workflows and tools. As more organizations embrace AI-native development, understanding what works — and what doesn't — becomes crucial for success.
This page will evolve over time with more insights, tool recommendations, lessons learned, and scenarios of AI implementations and Build With AI workflows.
Cases and Scenarios
🚀 Building Work2gether AI — Solo with a Team of AI Agents
Work2gether AI is more than a product that uses AI — it was built with AI tools at every stage. This isn't just about the final product; it's about demonstrating what's possible when you fully embrace AI in your development process.
Explore Work2gether AI and how it was built
Build With AI Workflows
⚙️ Choosing Your (main) LLM Coding Assistant
There’s never been more choice in AI-powered coding tools — but more choice means more tradeoffs. Each tool balances simplicity vs. control, and the right fit depends on your team’s skill level and what you’re building.
When evaluating options, consider:
- Simplicity - How easy is it to get started? How much setup, prompt engineering, and configuration do you need?
- Control - How much control do you have over what the Agent does on your code, and how easy is it to be the human-in-the-loop?
Below are examples of some of the popular tools, positioned on the simplicity-control spectrum. Keep in mind that this field moves VERY quickly, so the positioning is a moment in time and is only meant to provide you with a framework for your own evaluation:
💡 Tip: Start simple. Many teams overestimate how much complexity they really need early on. As you build confidence, layer in more advanced tools and workflows.
It is easy to switch tools if you keep your code and coding principles synchronized somewhere else, for example in a GitHub repository.
🤝 Agent Mode vs. Assistant Mode
When working with Replit (or similar tools like Cursor or Windsurf), it helps to think about Agent Mode and Assistant Mode as different collaboration styles with your AI coding partner:
Agent Mode is amazing when:
- You want to create a big first version of a new feature or application—even if it rewires other parts of the codebase
- You’re exploring or experimenting, and don’t mind if things get messy
- You’re in freestyle mode and want to see where the AI takes you
Think of Agent Mode as pairing with a very confident junior developer who’s not afraid to take big swings.
Assistant Mode is amazing when:
- You have a clear idea of what you want built, but don’t want to write all the scaffolding yourself
- You need to make small or medium changes that should mostly respect existing structure
- You want something done in minutes, not hours, with fewer surprises
Assistant Mode feels more like delegating targeted tasks to a helper who respects your direction.
👉 Tip: The Builder's skills in debugging, analyzing, and refining will strongly influence which mode feels most productive. If you’re comfortable cleaning up and steering bigger changes, Agent Mode can be a superpower. If you prefer tighter control and predictable outputs, Assistant Mode often works better.
Using AI in Your Product
🎛️ Working with LLMs in Your Product
LLMs can unlock powerful capabilities in your product, but it’s important to understand how they behave. Unlike traditional code, which is deterministic and predictable, large language models are non-deterministic — the same input can sometimes produce slightly different outputs.
This means you need to approach testing and maintenance differently:
- Continuously test outputs against your expected outcomes
- Monitor for drift or unexpected changes in behavior over time
- Adjust prompt strategies, context, or model choice when results fall outside acceptable ranges
- Build clear fallback and error-handling paths
This mindset can feel unfamiliar for teams used to deterministic codebases, but it’s essential for maintaining quality and user trust when deploying LLM-powered features.
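The validate-and-fall-back pattern above can be sketched in a few lines. This is a minimal illustration, not a prescription: `call_llm` is a hypothetical stand-in for your real provider call, and the sentiment-label task is an invented example.

```python
import re

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real provider call; swap in your API client.
    # Returns a canned, slightly chatty answer to mimic real model output.
    return "Sure! I'd say the sentiment here is Positive overall."

def classify_sentiment(review: str) -> str:
    """Request a label, validate the free-form reply, and fall back safely."""
    prompt = (
        "Classify the sentiment of this review as POSITIVE, NEGATIVE, "
        f"or NEUTRAL:\n{review}"
    )
    try:
        raw = call_llm(prompt)
    except Exception:
        return "NEUTRAL"  # fallback path: degrade gracefully rather than crash

    # The model may answer in a full sentence; extract the expected label.
    match = re.search(r"\b(POSITIVE|NEGATIVE|NEUTRAL)\b", raw.upper())
    return match.group(1) if match else "NEUTRAL"  # fallback for off-script replies
```

The key idea is that the non-deterministic output is never trusted directly: it is parsed against an expected format, and every failure mode routes to a safe default that you can log and monitor.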
📚 Layering Training and Context in Your Model
When designing LLM-powered features, it’s helpful to think in layers of instruction and context that shape your outputs. Each layer builds on the last to create consistent, relevant results:
1. Foundation Model: The base model you choose (e.g., GPT-4o) determines core capabilities and constraints.
2. System Instructions: High-level rules and behaviors set at the system level; these define how the assistant should generally act.
3. Dynamic Context: Information you inject based on the specific user, customer, or domain, such as recent interactions, user profile data, or role-based guidance.
4. Scenario Prompt: The immediate prompt that defines what the model should do right now.
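These layers map naturally onto the chat-style message format most LLM APIs use. Here is a hedged sketch, with invented instruction text and profile fields; the role/content schema shown is the common chat-API shape, used purely for illustration.

```python
def build_messages(user_profile: dict, task: str) -> list[dict]:
    """Assemble the instruction layers into a chat-style message list.

    Layer 1 (the foundation model) is chosen at call time, e.g. by passing
    model="gpt-4o" to your provider; the remaining layers become messages.
    """
    system_instructions = (  # layer 2: how the assistant should generally act
        "You are a concise assistant for our product. "
        "Follow the company tone guidelines and never expose internal config."
    )
    dynamic_context = (  # layer 3: injected per user, customer, or domain
        f"User: {user_profile['name']} ({user_profile['role']}). "
        f"Recent activity: {user_profile['recent_activity']}."
    )
    return [
        {"role": "system", "content": system_instructions},
        {"role": "system", "content": dynamic_context},
        {"role": "user", "content": task},  # layer 4: the scenario prompt
    ]
```

Keeping each layer in its own variable (or template) is what makes the debugging described below possible: you can swap or log one layer at a time to isolate where inconsistent outputs come from.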
Structuring your approach this way makes it easier to debug, refine, and maintain consistency over time. For example, if outputs are inconsistent, you can isolate whether the issue is in your system instructions, the dynamic context, or the scenario prompt itself.
Many teams skip this layering and rely solely on single prompts, which often leads to brittle or unpredictable behavior. Clear separation helps you scale complexity without losing control.