10 min read · Posted by Gabriel Souza

Using AI to Check Your KMP Readiness

Before migrating an Android project to Kotlin Multiplatform (KMP), you need to answer one question: are your dependencies ready? Here's how to combine deterministic scripts with AI-powered research to check, without wasting time or money.
Credit: https://unsplash.com/photos/someone-is-drawing-on-a-tablet-at-their-desk-vhZ8K5Np9mk

A medium-sized Android app can easily pull in 100+ dependencies. Checking each one for KMP support by hand (visiting GitHub repos, reading changelogs, searching for alternatives) takes forever. AI and scripts can do most of this work for you.

Here’s how we approached it: deterministic scripts check the facts, AI agents research the unknowns, and JSON schemas keep everything validated.

The Strategy: Deterministic First, AI Second

Don’t use an AI agent for things a script can answer.

AI agents are good at research and generating code. They’re bad at checking whether a Maven artifact publishes a common platform variant. That’s a yes/no question with a yes/no answer. A script can check it in milliseconds.

Our approach has three layers:

Layer 1: Scripts   → "Does this dependency already support KMP?" (deterministic)
Layer 2: AI        → "What KMP alternatives exist for this dependency?" (research)
Layer 3: Schemas   → "Validate and structure all findings" (deterministic)

Most dependencies fall into clear buckets that scripts can handle without AI, which sharply reduces agent usage.

Layer 1: Deterministic KMP Detection

Step 1: List All Your Dependencies

First, get a list of every dependency your project uses, with resolved group, artifact, and version coordinates.

There are two ways to do this, and you can use an AI coding agent (like Claude Code) to generate the script for either:

Approach A: Parse the Gradle Version Catalog

If your project uses libs.versions.toml (most modern Android projects do), write a script that parses it and extracts all dependency coordinates. The TOML format is structured and predictable, so ask your AI agent to generate a parser in Python, Kotlin, or whatever you prefer. The script should resolve version references (where a library entry references a version alias) and output the full group:artifact:version for each dependency.

Approach B: Gradle Init Script Injection

For projects with complex dependency setups (dynamic versions, BOMs, platform constraints), parsing TOML alone won’t capture everything. You can use a Gradle init script instead: a special Gradle script that runs before any project build script and hooks into the entire build lifecycle.

Init scripts are a lesser-known Gradle feature. They live outside your project (or can be passed via the -I flag) and get applied to every project in the build. That makes them good for extracting information without touching your actual build files.

How it works:

  1. You create a file like extract-deps.init.gradle.kts.
  2. Inside it, you register a task across all projects that iterates their configurations, resolves them, and collects every dependency coordinate.
  3. You run it with ./gradlew -I extract-deps.init.gradle.kts extractDeps.
  4. The task writes a JSON file with every resolved dependency.

You can ask an AI agent to generate this. Tell it to register a task that walks allprojects, collects implementation and api configurations, filters out build tools and processors, resolves versions, and writes the output as JSON. The AI can also handle edge cases like filtering debug/release-specific configurations or deduplicating transitive dependencies.

This approach is more complete than parsing TOML because Gradle resolves BOM constraints, version catalogs, platform-specific dependencies, and transitive dependency trees. You get the real, final set of coordinates your project depends on.

Either way, the output should follow a consistent schema:

{
  "dependencies": [
    {
      "group": "org.jetbrains.kotlinx",
      "artifact": "kotlinx-coroutines-core",
      "version": "1.8.1"
    },
    {
      "group": "com.squareup.retrofit2",
      "artifact": "retrofit",
      "version": "2.9.0"
    }
  ]
}

Step 2: Check KMP Support via Gradle Module Metadata

Every modern Kotlin library published to Maven Central or Google Maven includes a .module file (the Gradle Module Metadata). This JSON file describes exactly which platforms a library supports.

For a KMP library, the .module file contains at least one variant whose "org.jetbrains.kotlin.platform.type" attribute is "common":

{
  "variants": [
    {
      "name": "iosArm64ApiElements",
      "attributes": {
        "org.jetbrains.kotlin.platform.type": "native"
      }
    },
    {
      "name": "commonMainMetadataElements",
      "attributes": {
        "org.jetbrains.kotlin.platform.type": "common"
      }
    }
  ]
}

Write a script (or ask an AI agent to generate it) that takes the dependency list from Step 1 and, for each dependency:

  1. Fetches the .module file from Maven Central (https://repo1.maven.org/maven2/{group}/{artifact}/{version}/{artifact}-{version}.module) or Google Maven (https://dl.google.com/dl/android/maven2/...).
  2. Parses the JSON and checks if any variant has "org.jetbrains.kotlin.platform.type": "common".
  3. If yes, records which platforms it targets (iOS, JS, WASM, etc.).
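A minimal stdlib sketch of those three steps (the URL pattern is the public Maven Central layout shown above; error handling is reduced to "no .module file means no Gradle metadata"):

```python
# Sketch: check a dependency's Gradle Module Metadata for a KMP "common"
# variant, using only the Python stdlib.
import json
import urllib.error
import urllib.request

KOTLIN_PLATFORM_ATTR = "org.jetbrains.kotlin.platform.type"

def module_url(group: str, artifact: str, version: str) -> str:
    # Maven Central layout: dots in the group become path segments.
    path = group.replace(".", "/")
    return (f"https://repo1.maven.org/maven2/{path}/{artifact}/"
            f"{version}/{artifact}-{version}.module")

def kmp_platforms(module_json: dict) -> list[str]:
    """Return the distinct Kotlin platform types declared in the variants."""
    platforms = set()
    for variant in module_json.get("variants", []):
        platform = variant.get("attributes", {}).get(KOTLIN_PLATFORM_ATTR)
        if platform:
            platforms.add(platform)
    return sorted(platforms)

def check_kmp(group: str, artifact: str, version: str) -> dict:
    try:
        with urllib.request.urlopen(module_url(group, artifact, version)) as resp:
            metadata = json.load(resp)
    except urllib.error.HTTPError:
        # No .module file published: not a Gradle-metadata-aware library.
        return {"kmp": False, "platforms": []}
    platforms = kmp_platforms(metadata)
    return {"kmp": "common" in platforms, "platforms": platforms}
```

The `check_kmp` result maps directly onto the `kmp` and `platforms` fields of the report schema below.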

Step 3: Check for Newer KMP-Ready Versions

Sometimes a library doesn’t support KMP at the version you’re using, but a newer version does. Your script should check for this by fetching maven-metadata.xml, which lists all published versions.

The metadata file lives at a predictable path in the Maven repository:

  • Maven Central: https://repo1.maven.org/maven2/{group/as/path}/{artifact}/maven-metadata.xml
  • Google Maven: https://dl.google.com/dl/android/maven2/{group/as/path}/{artifact}/maven-metadata.xml

For example, for androidx.room:room-runtime:

  • Group path: androidx/room (replace dots with slashes)
  • Full URL: https://dl.google.com/dl/android/maven2/androidx/room/room-runtime/maven-metadata.xml

The XML contains a <release> tag with the latest stable version and a <versions> list with all published versions. Your script should:

  1. Fetch maven-metadata.xml for each non-KMP dependency.
  2. Extract the latest stable version (filtering out alphas, betas, RCs if desired).
  3. If the latest version differs from the project’s current version, fetch that version’s .module file and check for KMP support.
  4. Record the result as kmpAtLatest: true/false and latestVersion in the report.

KMP Report Script Sample Schema

After scanning all dependencies, the script should produce a structured report:

{
  "dependencies": [
    {
      "group": "org.jetbrains.kotlinx",
      "artifact": "kotlinx-coroutines-core",
      "version": "1.8.1",
      "kmp": true,
      "platforms": ["common", "jvm", "native", "js"],
      "mavenRepo": "mavenCentral",
      "klibsUrl": "https://klibs.io/..."
    },
    {
      "group": "com.squareup.retrofit2",
      "artifact": "retrofit",
      "version": "2.9.0",
      "kmp": false,
      "latestVersion": "2.11.0",
      "kmpAtLatest": false,
      "mavenRepo": "mavenCentral",
      "klibsUrl": null
    }
  ]
}
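As a quick triage pass over a report in this shape, you might bucket dependencies by migration effort (field names match the sample schema above; the bucket names are just illustrative):

```python
# Sketch: bucket report entries into "already KMP", "fix with a version
# bump", and "needs alternative research" (the input for Layer 2).
def triage(report: dict) -> dict[str, list[str]]:
    buckets = {"ready": [], "upgrade": [], "needs_research": []}
    for dep in report["dependencies"]:
        coord = f"{dep['group']}:{dep['artifact']}"
        if dep.get("kmp"):
            buckets["ready"].append(coord)
        elif dep.get("kmpAtLatest"):
            buckets["upgrade"].append(coord)  # bump to latestVersion
        else:
            buckets["needs_research"].append(coord)  # hand off to AI research
    return buckets
```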

This report tells you which dependencies are already KMP-ready and which need alternatives. That’s where AI comes in.

Expert KMP Readiness Assessment
Automated scripts are a great start, but production-grade migration requires a deeper look at your architecture and team readiness. Leverage Touchlab’s expertise to identify hidden risks and ensure your project is ready to adopt KMP at scale.

Layer 2: AI-Powered Alternative Research

For every dependency that isn’t KMP-compatible, you need to figure out:

  • Is there a KMP alternative? (e.g., Retrofit -> Ktor, Protobuf -> Wire)
  • Is this dependency truly Android-only? (e.g., android.car.content, Play Services In-App Updates, Android Benchmark, things tied to the Android OS with no cross-platform equivalent)
  • Does a newer version add KMP support? (e.g., Room 2.7+, Coil 3+)

AI agents are good at this. They can pull together information from docs, GitHub issues, conference talks, and community discussions to surface alternatives you wouldn’t find quickly on your own.

Let AI Group Your Dependencies

Before researching alternatives, group related dependencies together. Libraries from the same ecosystem (e.g., all androidx.compose.* artifacts, or retrofit + okhttp + converter-gson) share the same KMP story and should be researched as one unit.

You don’t need to do this by hand, either. Feed your KMP report (the non-KMP dependencies) to an AI agent and ask it to group related libraries into logical migration units. It will cluster things like Hilt + Dagger + Hilt Compiler into “dependency injection,” or Retrofit + OkHttp + Moshi into “networking.”

This grouping becomes the input for parallel research.

Parallel Research with Subagents

AI coding tools like Claude Code support subagents: independent AI processes that research in parallel. Instead of researching one dependency at a time, you launch multiple subagents at once, each responsible for a dependency group.

Make this data-driven, not hardcoded. Your prompt should reference the grouped dependencies JSON file and let the agent figure out what to research:

Read the dependency groups from dependency-groups.json.
For each group, launch a subagent that:

1. Searches for KMP-compatible alternatives
2. Checks klibs.io and GitHub for the alternatives
3. Categorizes the group as:
   - "has_alternative" if a KMP replacement exists
   - "android_only" if inherently platform-specific
   - "no_alternative_found" if no replacement exists
4. Outputs findings in the research JSON schema defined below

You’re not telling the agent “research Compose, then Retrofit, then Hilt.” You’re saying “read the file, figure out what needs researching, and go do it in parallel.” The agent adapts to whatever your project actually uses.

Each subagent can search the web, check GitHub repos, and read docs, all concurrently. With 15 dependency groups, this turns a 30-minute serial research session into about 5 minutes.

Define the Research Output Schema

Define a clear JSON schema for the research output and share it in your prompt so every subagent produces uniform results:

{
  "entries": [
    {
      "original": {
        "group": "com.squareup.retrofit2",
        "artifact": "retrofit"
      },
      "category": "has_alternative",
      "explanation": "Ktor is the standard KMP HTTP client, providing equivalent functionality to Retrofit with native coroutine support.",
      "alternatives": [
        {
          "group": "io.ktor",
          "artifact": "ktor-client-core",
          "klibsUrl": "https://klibs.io/...",
          "githubUrl": "https://github.com/ktorio/ktor",
          "documentationUrl": "https://ktor.io/docs/...",
          "supportsIos": true
        }
      ]
    },
    {
      "original": {
        "group": "androidx.benchmark",
        "artifact": "benchmark-macro-junit4"
      },
      "category": "android_only",
      "explanation": "Android macrobenchmark is tied to the Android runtime and instrumentation framework. There is no cross-platform equivalent."
    },
    {
      "original": {
        "group": "com.example",
        "artifact": "some-library"
      },
      "category": "no_alternative_found",
      "explanation": "No known KMP replacement. Consider writing a thin expect/actual wrapper."
    }
  ]
}

Valid categories are: has_alternative, android_only, no_alternative_found. When the category is has_alternative, the entry must include an alternatives array: one or more KMP-compatible replacements with Maven coordinates, source links, and iOS support status.

Custom Claude Code Skills

If you’re using Claude Code, you can create custom skills that orchestrate this research workflow. Skills are project-scoped prompt templates that live in .claude/skills/ and get invoked with a slash command.

Here’s an example skill for dependency research:

.claude/skills/kmp-dependency-research/SKILL.md
---
name: kmp-dependency-research
description: Research KMP alternatives for project dependencies that don't yet support KMP
argument-hint: [path-to-kmp-report]
---

Research KMP alternatives for all non-KMP dependencies.

## Input

Read the KMP report from $ARGUMENTS (a JSON file with the schema described
in our docs). Filter to dependencies where `kmp` is `false`.

## Steps

1. Group the non-KMP dependencies into logical migration units by library
   ecosystem (e.g., networking, DI, serialization, Compose, testing).

2. For each group, launch a subagent to research alternatives in parallel.
   Each subagent should:
   - Search the web for KMP-compatible alternatives
   - Verify alternatives exist on klibs.io or Maven Central
   - Categorize as: has_alternative, android_only, or no_alternative_found

3. Merge all subagent results into a single JSON file following the
   research output schema.

4. Validate the merged result by running
   `python tools/validate_schema.py kmp-alternatives-research.json --schema schemas/research-output.schema.json`.
   Re-research any entries that failed validation.

## Output

Write the results to `kmp-alternatives-research.json`.

Invoke it with /kmp-dependency-research kmp-report.json. The skill gives Claude your project structure, your scripts, and the expected output format, so results are much more accurate than a generic prompt.

A few things that make skills work well: use $ARGUMENTS so the same skill works across runs, reference your actual JSON schemas so the output format is pinned down, and include a validation step so the AI catches its own mistakes.

Layer 3: Schema-Driven Validation

Don’t blindly trust LLM output. Define JSON schemas upfront and validate every piece of AI-generated research before it enters your pipeline.

For every JSON file the AI produces (research results, alternative mappings, migration notes), define a schema with required fields, valid enum values, and structural constraints. Then write a validation script that checks every entry against it. (Or ask the AI to generate the validator for you.)

Express these rules as a JSON Schema document and validate with any standard validator. When entries fail, re-prompt only the failed ones instead of the whole batch.
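As a minimal sketch of what that validator enforces for the research schema above, here's a hand-rolled stdlib version (a real pipeline would express the same rules as a JSON Schema document and use an off-the-shelf validator):

```python
# Sketch: validate AI research output against the rules from the research
# schema: required fields, valid category enum, and the has_alternative
# constraint. Stdlib only; illustrative, not exhaustive.
VALID_CATEGORIES = {"has_alternative", "android_only", "no_alternative_found"}

def validate_entry(entry: dict) -> list[str]:
    """Return a list of error messages; empty means the entry is valid."""
    errors = []
    original = entry.get("original", {})
    if not original.get("group") or not original.get("artifact"):
        errors.append("missing original.group/original.artifact")
    category = entry.get("category")
    if category not in VALID_CATEGORIES:
        errors.append(f"invalid category: {category!r}")
    if not entry.get("explanation"):
        errors.append("missing explanation")
    if category == "has_alternative" and not entry.get("alternatives"):
        errors.append("has_alternative requires a non-empty alternatives array")
    return errors

def failed_entries(research: dict) -> list[dict]:
    """Entries to re-prompt: only the ones that failed validation."""
    return [e for e in research.get("entries", []) if validate_entry(e)]
```

Feeding only `failed_entries` back to the agent keeps re-prompting cheap: the valid entries never leave the pipeline.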

This works at every stage: validating the dependency list from Layer 1, the research output from Layer 2, and the final report that combines both.

TL;DR

  1. List your dependencies. Parse libs.versions.toml or use a Gradle init script to extract all resolved coordinates.
  2. Check KMP support with scripts. Fetch Gradle Module Metadata (.module files) from Maven repos. No AI needed.
  3. Use AI agents for alternative research. Let the agent group dependencies, launch parallel subagents, and output structured JSON with alternatives and categories.
  4. Validate with JSON schemas. Enforce structure on all AI output before trusting it.
  5. Cache research as JSON files. Build a knowledge base that persists across sessions and feeds future work.

Scripts handle the facts. AI handles the research. Schemas keep it honest.