Welcome back, Creators! 👋
We've got +537 new builders joining our newsletter community this week.
This week we also finished editing the Create With 2025 recap video. It's packed with soundbites that will get you excited to explore building with AI.
Don't forget all the talks are available for free on our YouTube channel.
Pour yourselves a cuppa and let's jump in...
🔎 Preview: In today's issue
- 🤔 Why does AI hallucinate and what can we do about it
- 📺 Building AI Agents with Bubble.io
- 🚀 A 6-step guide to creating better AI images
Explain Like I'm 5
Hallucinations
If you've used AI at all, you've almost certainly (knowingly or not) experienced an AI hallucination: when it produces an answer that sounds confident but isn't true.
The problem is that it can be so convincing it's very difficult to spot unless you're already a subject-matter expert.
For example:
- Ask it for the author of a book that doesn’t exist → it might invent one.
- Ask it for a court case → it might cite a made-up case that looks real.
Hallucinations happen because LLMs are language machines, not fact machines. They’re designed to predict the most likely next word, not to check whether that word is correct.
Think of it like autocomplete on steroids: if you start typing "Once upon a…", your phone will suggest "time." But if you start a sentence that's commonly written incorrectly, it may give you a bad prediction.
AI prioritises the most likely choice, not the correct one.
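The "autocomplete on steroids" idea can be sketched in a few lines of Python. This is a toy, not a real model: the contexts and probabilities below are entirely made up for illustration, and a real LLM scores every token in a huge vocabulary rather than a hard-coded table.

```python
# Toy next-word predictor (all probabilities are invented for illustration).
# A real LLM learns these scores from billions of words of training text.
next_word_probs = {
    ("once", "upon", "a"): {"time": 0.92, "hill": 0.03, "throne": 0.01},
    ("the", "author", "is"): {"John": 0.20, "Jane": 0.18, "unknown": 0.05},
}

def predict(context):
    """Return the highest-probability next word: likely, not necessarily true."""
    candidates = next_word_probs[tuple(w.lower() for w in context)]
    return max(candidates, key=candidates.get)

print(predict(["Once", "upon", "a"]))    # -> time
print(predict(["The", "author", "is"]))  # -> John (plausible-sounding, not verified!)
```

Notice that the second call confidently produces a name even though the toy model has no idea who the author actually is. That, in miniature, is a hallucination.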
🧠 The problem
- Training – The LLM is trained on billions of words. It learns patterns in language, not facts.
- Prediction – When asked something, it generates the next word with the highest probability of fitting.
- Filling gaps – If the model hasn’t “seen” the answer before, it doesn’t say “I don’t know.” It improvises.
- Confidence illusion – Because the output is fluent and polished, we mistake it for fact.
⚠️ Why it happens
It's pattern-matching, not fact-checking. The model has no built-in way to verify accuracy (and no incentive to tell the truth), and people expect answers, not silence.
💡 What you can do
Hallucinations aren’t going away completely—but you can manage them:
- Verify critical outputs – Always double-check facts, numbers, and citations.
- Ground the model – Connect it to trusted data sources (databases, APIs, search) so it pulls from reality instead of guessing.
- Add rules and guardrails – Instruct the AI not to answer if unsure, or to show sources.
- Human-in-the-loop – For high-stakes work (law, medicine, finance), keep an expert reviewing the output.
- Match task to risk – Use LLMs for brainstorming, drafting, and summarising, but not as the single source of truth.
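The "ground the model" and "add guardrails" ideas above can be combined into one simple pattern: only answer from a trusted source, and refuse when nothing relevant is found. This is a minimal sketch with a hypothetical in-memory knowledge base standing in for a real database, API, or search index.

```python
# Sketch of a grounding + guardrail pattern (the knowledge base is hypothetical).
# Instead of letting the model improvise, we answer only from trusted data
# and fall back to "I don't know" rather than guessing.
TRUSTED_FACTS = {
    "capital of france": "Paris",
    "boiling point of water": "100 °C at sea level",
}

def grounded_answer(question: str) -> str:
    key = question.lower().rstrip("?")
    fact = TRUSTED_FACTS.get(key)
    if fact is None:
        # The guardrail: admitting ignorance beats a confident fabrication.
        return "I don't know: no trusted source found."
    return f"{fact} (source: trusted knowledge base)"

print(grounded_answer("Capital of France?"))
print(grounded_answer("Who wrote The Hobbit?"))
```

In production tools this same shape shows up as retrieval-augmented generation: fetch trusted documents first, pass them to the model, and instruct it to cite them or decline.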
The takeaway: treat LLMs as creative collaborators, not final authorities. They can generate great first drafts, ideas, and insights, but responsibility for accuracy still lies with you.
📺 VIDEO: Building AI Agents in Bubble
Recently we've been updating the Create With website, which is built on Bubble. It's one of the most powerful visual development tools out there.
This is a fascinating workshop where Tom Wesolowski shows us how to build AI agents right inside Bubble - no extra platforms needed!
Tom breaks down everything from scratch, showing us the difference between regular chatbots and true AI agents while walking through the OpenAI API setup. You'll see exactly how to make your Bubble app handle function calls, remember conversations, and execute actions like web searches and sending emails.
Tom doesn't just talk theory - he builds the whole thing live, showing all the workflows and tricks (including some awesome Bubble hacks for creating loops!). Whether you want to soup up your existing Bubble apps with AI powers or build a dedicated agent from scratch, this workshop has you covered.
▶️ Watch Video
🔊 On your radar
Keep your finger on the pulse of what we think is hot in the space right now 🔥
⭐️ Founders as Content Creators
👉 Why it matters: Software is easier to build than ever, so the real value is shifting to trust and brand. People are choosing products from founders they follow and trust, rather than faceless companies. If you’re building something, sharing your story genuinely matters.
⭐️ ChatGPT Adds Conversation Branching
👉 Why it matters: You can now branch off from any point in a ChatGPT conversation, letting you explore different ideas without losing your original chat. This is handy for testing new prompts or following side topics, especially if you use AI for brainstorming or research.
⭐️ Six Simple Steps to Better AI Images
👉 Why it matters: If you use AI for image creation, consider these basics: subject, context, style, modifiers, lighting, and camera control. Getting these right helps produce clearer, more professional results—even if you’re not a photographer. Here's a guide to Nano Banana.
🤖 Extra byte
Bubble.io has made some upgrades to its platform, including faster database queries (90% faster) and a public experts program if you need help.
Member Office Hours
Talk through your AI, agent and app building questions at our office hours
Join our Create With member office hours calls to talk through your AI questions with our expert coaches.
Last time we talked about getting the most from ChatGPT, using Claude Code, and our experiences with Lovable.
The next call is next Thursday at 2pm BST.
🗓️ Events to Take Note Of
🌎 10th Sep: Figma makeathon with $100K in prizes (closing date tomorrow)
🇩🇪 11th Sep: Generative AI Summit Berlin
🇨🇦 11th Sep: CreateWith Meetup with Benoit
🖱️ Discover more events on the new Create With events directory!
Subscriber Only Links
🔒 Make.com now works across all of Monday's products
🔒 How to build a Shopify app that solves real problems. One to watch
🔒 Try Google’s Stitch for free AI design
🔒 Claude Code: More Than Coding. It's an operating system
🔒 Google is making serious progress with AI. Don't sleep on it
🔒 Grok video now has speech. Kinda fun
💭 Final Thoughts
OpenAI CEO Sam Altman said in a recent interview "I think [in the future] there will be a premium on human, in person, fantastic experiences."
Humans will always need humans
We completely agree with Sam. Humans are social animals and IRL interaction is irreplaceable. That's why we're doubling down on in-person events. Find some near you in our events directory!
(Literally, true story: as I write this newsletter on the train, there is a man sitting across the aisle conversing with ChatGPT like it's his therapist. 😬 ~ Kieran)
With good vibes ✨
{% poll 28123 %}
{{ poll.question }}
{% for option in poll.options %}
[{{ option.value }}]({{ option.url }})
{% endfor %}
{% endpoll %}
{{ address }}