AI-Native Engineering Training with Claude Code

A dedicated programme for enterprises and individuals to move from personal, ad-hoc AI usage and vibe coding to Agentic Engineering at team and organisation level.

Available on demand

Dedicated training for your organisation, starting with at least 20 people

  • 20+ practical tasks, tailored to your product/platform or provided by us
  • Identifying and building your AI Champions Network
  • Team Homework analysis provided by our experts
  • Conducted in your ecosystem: recordings and supplementary materials stay with you
  • Designed for Engineering organisations: Software Engineers, Architects, Quality Engineers, Data Engineers and other roles
Starts early June

Public training for individuals and smaller teams of under 20 people

  • 20+ practical tasks, bring your own repository
  • Recordings are available for recap; supplementary materials stay with you
  • Designed for Software Engineers, Architects, Quality Engineers, Data Engineers and other roles
  • Plus Week 0 preparation
  • Weekly time commitment
  • Homework crafted by our Practitioners

From Individual Usage to
Team-Scale AI-Native Engineering

Ad-Hoc (you are here)

Structured

Spec-driven workflows, test-first discipline, safety nets in place

Integrated

CI/CD pipelines, oversight frameworks, team-wide conventions

Agentic

Multi-Claude orchestration, measurable velocity gains, continuous improvement

What You Get

Live facilitated sessions

4-5 hours per week of hands-on labs, group discussions, and practice with expert guidance.

Self-paced video lessons

2-3 hours of pre-session content covering core concepts and theory.

Real code, not toy demos

From Week 2, apply everything to your own production codebase. Bring Your Own Brownfield.

Trained by Practitioners

Our Trainers are ENDGAME AI-Native Practitioners whose mastery comes from tens of successful engagements.

Hand-crafted materials

All shared materials are hand-picked and crafted by ENDGAME Practitioners.

Homework evaluation

For Dedicated training, our Practitioners provide a weekly summary of your team's work.

The Curriculum

Week 0: Preparation

Core Concepts

Prerequisites check

Git, terminal, code reading, and development fundamentals. Self-assessment quiz determines your track.

Tool setup

Node.js 18+, Claude Code, Git with worktree support, GitHub with Actions. Verify everything works on a sample repo.

Foundation videos

Four 30-minute modules: agentic AI concepts, how Claude Code works, AI across the SDLC, and ethics and responsibility.

Choose your codebase

Pick a production codebase for Weeks 2-4. Active development, reasonable complexity, some technical debt.

Labs & Practice

  • Complete self-assessment quiz
  • Install and verify all tools
  • Run Claude Code on a sample repo
  • Submit your brownfield codebase for approval
  • Write a 500-word reflection on your current AI usage

Week 1: Working with Claude Code

Core Concepts

The agentic mindset

Context, reason, act, verify, repeat. Context engineering matters more than prompt wording. Explore-plan-code-commit workflow.

CLAUDE.md configuration

Project-level configuration files that tell Claude how to work. What to include, what to exclude, and where to put them.
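As a rough illustration only (the CLAUDE.md file name and its project-level role come from Claude Code; the contents below are hypothetical, for an imaginary Node.js repo), a minimal file might look like:

```markdown
# Project notes for Claude

## Commands
- Build: `npm run build`
- Test: `npm test` (run before every commit)

## Conventions
- TypeScript strict mode; avoid `any`
- Never edit files under `generated/`

## Context
- Core domain logic lives in `src/domain/`
```

Short, command-oriented entries like these tend to steer the agent better than long prose.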

Prompt patterns & token economics

Imperative, exploratory, constrained, verification, and decomposition patterns. Managing context size and cost.

Systematic exploration

Architecture first, then flows, then edge cases. Generate documentation as you explore.

MCP servers

Connect Claude to external systems: databases, APIs, browsers, documentation. The screenshot-iterate loop for UI development.
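One common way to wire this up is a project-level `.mcp.json` file; this is a sketch, so verify the exact schema and package names against the current Claude Code and Playwright MCP documentation:

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```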

Custom skills & hooks

Skills in .claude/skills/, hooks for lifecycle events. Automate formatting, linting, and dangerous command blocking.
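For example, a PreToolUse hook can vet shell commands before they run. The shape below follows the Claude Code hooks settings format as we understand it (check the current docs before relying on it), and `block_dangerous.py` is a hypothetical guard script: the hook command receives the tool input as JSON on stdin, and a blocking exit code rejects the call.

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "python3 .claude/hooks/block_dangerous.py"
          }
        ]
      }
    ]
  }
}
```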

Labs & Practice

  • Explore a sample repo and document three non-obvious insights
  • Create and refine a CLAUDE.md file, test effectiveness
  • Apply prompt patterns to 10 tasks
  • Create architecture documentation for a legacy codebase
  • Configure MCP servers (filesystem, Playwright)
  • Create a custom skill with documentation
  • Draft a CLAUDE.md for your brownfield codebase

Week 2: Spec-Driven Development

Core Concepts

Why specifications matter

Vibe coding fails at scale. Specs are the single source of truth for both you and the AI. Functional requirements, acceptance criteria, constraints.

Three intensity levels

Spec-first (complete before coding), spec-anchored (living document), spec-as-source (humans edit specs, AI generates code). Match intensity to task size.

The four-phase workflow

Specify (what, not how), Plan (stack, architecture, constraints), Tasks (small, reviewable chunks), Implement (one task at a time, commit often).

Human review gates

Add judgment between phases. The compound error problem: 100 steps at 1% error rate = 63% failure probability. Gates interrupt the cascade.
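The 63% figure falls straight out of the arithmetic, assuming independent errors:

```python
# Compound error: a chain of 100 steps, each with a 1% chance of going wrong.
p_step_error = 0.01
steps = 100

# Probability that at least one step in the chain fails.
p_chain_failure = 1 - (1 - p_step_error) ** steps
print(f"{p_chain_failure:.0%}")  # roughly 63%
```

A review gate every N steps resets the chain, which is why even infrequent human checks cut failure rates dramatically.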

EARS notation

Five patterns for unambiguous requirements: ubiquitous, event-driven, state-driven, unwanted behaviour, and optional.
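For reference, the five EARS templates, each with an illustrative requirement (made up for this page, not taken from any real specification):

```text
Ubiquitous:         The system shall log all authentication attempts.
Event-driven:       When a payment fails, the system shall notify the customer within 60 seconds.
State-driven:       While in maintenance mode, the system shall reject new orders.
Unwanted behaviour: If the database connection is lost, then the system shall retry three times before alerting.
Optional:           Where two-factor authentication is enabled, the system shall prompt for a second factor at login.
```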

SDD tools

Spec-Kit, OpenSpec, BMAD Method, or the manual four-file approach. Pick what fits your context.

Labs & Practice

  • Expand a vague feature request into a full specification
  • Execute the four-phase workflow end to end
  • Convert requirements into EARS notation
  • Practice the iterative workflow: change, verify, commit, repeat
  • Write a specification for a pending feature in your codebase
  • Apply the workflow through Phase 3 on your codebase

Week 3: TDD, Legacy Code & Security

Core Concepts

TDD with AI

Write tests from input/output pairs first. Confirm they fail. Then implement. Verify in a fresh session to catch overfitting.
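The red-green loop can be sketched in miniature (`slugify` is a made-up example function, not part of the course materials):

```python
import re

# Red: the test is written first, from a concrete input/output pair,
# and run once to confirm it fails before any implementation exists.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"

# Green: the smallest implementation that makes the test pass.
def slugify(text: str) -> str:
    # Lowercase, collapse runs of non-alphanumerics to "-", trim the edges.
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

test_slugify()
```

Re-running the test in a fresh session, without the implementation in context, is what catches an AI that has overfitted the code to the test.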

Understanding legacy code

AI gives a head start but lacks domain expertise. Watch for shepherding, drifting, and the illusion of competence.

The 7-step refactor loop

Set the scene, plan first, wrap in tests, propose surgical edits, review diffs, tight loop, land with context. Zero regressions.

PAID framework

Prioritise (high debt + high value), Address (low debt + high value), Investigate (high debt + low value), Document (low debt + low value).

Security & compliance

OWASP Top 10 review. Secret management. Licence compliance. Constitution files in CLAUDE.md. Audit trails.

Diagnosing AI failures

Context pollution, prompt ambiguity, knowledge gaps, pattern mismatch. The Thread Fold technique for recovery.

Labs & Practice

  • Implement a feature with strict TDD, verify in fresh context
  • Generate characterisation tests for undocumented legacy code
  • Conduct a security audit of AI-generated code
  • Refactor a high-complexity module using all 7 steps
  • Categorise technical debt using the PAID framework
  • Add characterisation tests to a low-coverage module in your codebase

Week 4: Integration & Scale

Core Concepts

CI/CD integration

GitHub Actions with claude-code-action for PR reviews. Headless mode for automation. Automated gates: coverage, docs, breaking changes.
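A minimal workflow along these lines might look as follows; this is a sketch only, and the input names and action version should be checked against the claude-code-action README before use:

```yaml
name: Claude PR review
on:
  pull_request:

jobs:
  review:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
      - uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
```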

Multi-Claude patterns

Orchestrator-worker, writer-reviewer, parallel execution with Git worktrees. Team coordination through shared CLAUDE.md conventions.

Oversight & trust calibration

Human-in-the-loop, human-on-the-loop, autonomous with audit. Match oversight level to task risk: HIGH, MEDIUM, LOW.

Advanced customisation

Skills, hooks (14 lifecycle events), custom agents. Combine narrow scope, appropriate model, and minimal permissions.

Session mastery

Stay below 70% of the context window. Strategic model selection: Opus for reasoning, Sonnet for coding, Haiku for exploration. The Checklist Method.

Engineering at scale

Plugins, Claude Code SDK, Agent Teams. From personal CLI to team platform.

Labs & Practice

  • Configure a CI/CD pipeline with PR review and coverage analysis
  • Execute the writer-reviewer pattern on a codebase
  • Build a PreToolUse guard hook
  • Complete the trust calibration exercise
  • Draft a CI/CD integration plan for your codebase
  • Design an oversight framework for your team
Investment

Dedicated

€40,000 to €100,000

For teams of 40 to 100 engineers

UP TO 50% OFF

Public

€2,000 per seat

25% OFF

Seat tiers:

  • 1–24 seats: €1,750
  • 25–49 seats: €1,500
  • 50+ seats: €1,250

Prerequisites

You should be comfortable with the following before starting the programme.

Git

Branching, merging, handling conflicts

Terminal

Comfortable working in a command line

Code

Proficient in at least one programming language

Fundamentals

Basic software development concepts

Ready to upgrade your team or yourself?

Apply