Skills with Claude API

The Claude API supports native skills integration.

You do not pass the full SKILL.md text in every request. Instead, you reference skills in the request's container and let Claude load them on demand.

Skills are not automatically shared across every Claude surface. If you created a skill in Claude.ai or Claude Desktop, do not assume it is available in the Claude API or Claude Code until you register or install it in that environment.

According to Anthropic docs, skill-enabled API requests require:

  1. Code execution tool enabled
  2. A container with skill references
  3. Required beta headers for skills/code execution

Skills are passed under container.skills and selected by skill_id.

As of February 16, 2026, docs list these beta headers:

  • code-execution-2025-08-25
  • skills-2025-10-02
  • files-api-2025-04-14
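When calling the HTTP API directly, these flags go in the anthropic-beta header. A minimal sketch (`build_headers` is a hypothetical helper; joining multiple beta values into one comma-separated header is the documented convention):

```python
import os

# Beta flags for skill-enabled requests, as listed above.
BETAS = [
    "code-execution-2025-08-25",
    "skills-2025-10-02",
    "files-api-2025-04-14",
]

def build_headers(api_key: str) -> dict:
    """Assemble HTTP headers for a skill-enabled Messages request.

    Multiple beta flags are joined into a single comma-separated
    anthropic-beta header value.
    """
    return {
        "x-api-key": api_key,
        "anthropic-version": "2023-06-01",
        "anthropic-beta": ",".join(BETAS),
        "content-type": "application/json",
    }

headers = build_headers(os.environ.get("ANTHROPIC_API_KEY", "sk-test"))
print(headers["anthropic-beta"])
```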

You can use:

  • Anthropic pre-built skills (for example document skills)
  • Custom skills uploaded through the Skills API

For custom skills, package your skill directory (with SKILL.md) and register it via Skills API first, then reference its skill_id in Messages requests.
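Packaging the directory can be done with the standard library. A sketch (`package_skill` is a hypothetical helper, and the exact archive format the Skills API expects should be checked against the current docs):

```python
import io
import zipfile
from pathlib import Path

def package_skill(skill_dir: Path) -> bytes:
    """Zip a skill directory (which must contain SKILL.md) for upload.

    Returns the archive bytes; register them via the Skills API and keep
    the returned skill_id for later Messages requests.
    """
    if not (skill_dir / "SKILL.md").exists():
        raise FileNotFoundError("skill directory must contain SKILL.md")
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        # Preserve the directory layout relative to the skill root.
        for path in sorted(skill_dir.rglob("*")):
            if path.is_file():
                zf.write(path, path.relative_to(skill_dir))
    return buf.getvalue()
```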

For skill workflows that read/write artifacts:

  1. Upload inputs with Files API
  2. Run Messages request with code execution + container skills
  3. Let the skill/scripts generate outputs in the sandbox
  4. Capture returned file_id values
  5. Download generated files via Files API

This is the core pattern for producing markdown/docx/reports programmatically.
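The five-step pattern above can be sketched as a request builder. This is a sketch, not a definitive implementation: the container_upload content block is how the code execution docs describe attaching uploaded files to the sandbox, but verify the shape against current documentation; max_tokens and the prompt wiring are illustrative:

```python
def build_skill_request(skill_id: str, input_file_id: str, prompt: str) -> dict:
    """Assemble a Messages request body for the upload -> run -> download flow.

    input_file_id comes from a prior Files API upload (step 1); the
    container_upload block makes that file visible inside the sandbox.
    """
    return {
        "model": "claude-sonnet-4-5",
        "max_tokens": 4096,
        "betas": [
            "code-execution-2025-08-25",
            "skills-2025-10-02",
            "files-api-2025-04-14",
        ],
        "tools": [{"type": "code_execution_20250825", "name": "code_execution"}],
        "container": {
            "skills": [{"type": "custom", "skill_id": skill_id, "version": "latest"}]
        },
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "container_upload", "file_id": input_file_id},
                ],
            }
        ],
    }
```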

The same workflow as a Mermaid diagram:

flowchart TD
    A["1. Upload input files"] --> B["2. Call Messages API"]
    B --> C["3. Run container (skills + code execution)"]
    C --> D["4. Skill loads SKILL.md and needed resources"]
    D --> E["5. Generate output artifacts in sandbox"]
    E --> F["6. Return file_id values"]
    F --> G["7. Download outputs via Files API"]
    G --> H["8. Save or deliver final artifacts"]
A minimal request body with one custom skill attached looks like this:

{
  "model": "claude-sonnet-4-5",
  "max_tokens": 1024,
  "betas": ["code-execution-2025-08-25", "skills-2025-10-02"],
  "tools": [{ "type": "code_execution_20250825", "name": "code_execution" }],
  "container": {
    "skills": [{ "type": "custom", "skill_id": "sk_...", "version": "latest" }]
  },
  "messages": [{ "role": "user", "content": "Analyze this dataset and summarize trends." }]
}
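On the response side, generated artifacts come back as file references inside tool-result content blocks. A hedged sketch that walks the response generically rather than assuming exact block types (`collect_file_ids` is a hypothetical helper):

```python
def collect_file_ids(blocks) -> list:
    """Walk response content blocks and collect every file_id found.

    Recurses through nested dicts and lists generically, so it does not
    depend on the exact tool-result block types in the response.
    """
    found = []
    if isinstance(blocks, dict):
        if "file_id" in blocks:
            found.append(blocks["file_id"])
        for value in blocks.values():
            found.extend(collect_file_ids(value))
    elif isinstance(blocks, (list, tuple)):
        for item in blocks:
            found.extend(collect_file_ids(item))
    return found
```

Each collected file_id can then be passed to a Files API download to retrieve the artifact.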
Operational notes:

  • Skills are selected from metadata and loaded progressively.
  • You can attach multiple skills in one container (currently up to 8 per request).
  • You can mix custom and built-in skills in the same request.
  • If your skills are on local disk, the API runtime will not auto-discover them; upload and register them first.
  • Code execution runs in an isolated container with environment/resource limits; design skills accordingly.
  • Keep a test harness for representative prompts and expected artifacts.
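A test harness for that last point can be as small as a table of prompts and expected artifact names. Everything here, including the injected `run_skill` callable and the case data, is a hypothetical sketch:

```python
# Each case pairs a representative prompt with the artifact filenames the
# skill is expected to produce. run_skill stands in for your real API call
# (upload -> Messages -> download) and is injected so the harness is testable.

CASES = [
    {"prompt": "Summarize sales.csv as a markdown report", "expect": ["report.md"]},
    {"prompt": "Chart monthly trends", "expect": ["trends.png"]},
]

def check_case(case, run_skill) -> bool:
    """Return True if every expected artifact name appears in the outputs."""
    produced = run_skill(case["prompt"])  # e.g. a list of generated filenames
    return all(name in produced for name in case["expect"])
```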

Common lifecycle operations:

  • Create/upload custom skills and keep returned skill_id
  • List skills (for example filtering custom skills)
  • Pin a specific skill version or use latest
  • Delete old versions before deleting a skill object
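Version pinning is easy to centralize in a helper whose entry shape mirrors the container.skills example above (`skill_ref` is a hypothetical helper):

```python
def skill_ref(skill_id: str, version: str = "latest") -> dict:
    """Build a container.skills entry.

    Pin an explicit version for reproducible deployments, or leave
    "latest" to pick up new uploads automatically.
    """
    return {"type": "custom", "skill_id": skill_id, "version": version}
```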

Treat this like artifact version management, not just prompt management.

Best practices:

  • Keep descriptions precise so skill routing is reliable.
  • Log selected skills and tool calls for auditability.
  • Log file uploads/downloads and final file_id artifacts for traceability.
  • Validate downstream artifacts (tables/files/charts) before final output.
  • Run cross-model checks if you deploy on multiple Claude models.