Berlin AI, ML and Computer Vision Meetup - April 24, 2026
Berlin’s in-person AI, ML and Computer Vision meetup returns on April 24, 2026, for a focused afternoon of talks, demos, and conversation at MotionLab in Berlin. This is a hands-on, production-oriented event designed for practitioners who want real-world insights into AI systems, computer vision challenges, and data-centric approaches. If you’re building AI agents, visual AI tools, or no-code/low-code pipelines that rely on perception and reasoning, this meetup offers practical takeaways you can apply the next day.
What to expect:
- In-person talks from domain experts covering cutting-edge topics across AI, ML, and computer vision, with a strong emphasis on data, robustness, and real-world deployment.
- A curated program that balances foundational insights with provocative, production-focused guidance, well suited to researchers, engineers, and makers building AI agents or visual software.
- A venue designed for collaboration and informal networking, so you can swap notes, demos, and lessons learned with peers working on similar problems.
Talks scheduled include:
- Kaputt: A Large-Scale Dataset for Visual Defect Detection — A new benchmark for anomaly detection in retail logistics, with over 230,000 images and more than 29,000 defective instances. Learn why existing industrial datasets struggle with pose and appearance variation, and how a dataset 40x larger reshapes evaluation. The talk covers AUROC performance of state-of-the-art anomaly detectors, qualitative analyses, and how this dataset opens paths for future research in retail logistics anomaly detection. The dataset is available for download at kaputt-dataset.com.
- A Spot Pattern is Like a Fingerprint: Jaguar Identification Kaggle Challenge — A citizen science initiative that uses fine-grained visual classification to support conservation in Porto Jofre, Brazil. The talk discusses ongoing Kaggle competition dynamics, dataset curation strategies, and effective representation learning for wildlife identification.
- Data Foundations for Vision-Language-Action Models — A data-centric exploration of VLAs, including how data design and evaluation benchmarks differ for robots versus standard image or video tasks. Topics include Open X-Embodiment, LeRobot, RLDS, and the unique challenges of temporal structure, proprioception, and embodiment heterogeneity.
- Most AI Agents Are Broken. Let’s Fix That — A critical look at agentic systems in practice, and a practical blueprint for building modular, observable AI agents.