In recent years, efforts that leverage LLMs to support and construct entire applications, going beyond mere code completion, have drawn significant attention. This session shares insights gained from developing “ComposeFlow,” a desktop app built on Compose Multiplatform.

ComposeFlow is an AI-first visual development environment aimed at lowering the barrier to mobile app development. When users issue instructions in natural language, an LLM interprets them and modifies the app's structure (e.g., adding and removing UI components, changing project settings). To enable this, every editor action is defined as a “tool” that the LLM can call directly.

This session dives into the technical challenges and architectural design of building a development environment intended to be operated by AI. Specifically:

- Building a visual editor for Android/iOS/Web apps with Compose Multiplatform
- Treating app configuration as an intermediate representation (YAML) to enable AI-driven generation and editing
- Defining each GUI action as an API so the LLM can invoke it as a tool
- Allowing the AI to manipulate the GUI directly, instead of relying on code-completion tools such as Cline or Cursor
- Exposing toolsets to an agent, ensuring reliability, and implementing retry strategies after failures

(Brief sketches of the YAML representation, the tool definitions, and the retry loop follow this abstract.)

ComposeFlow is currently closed-source, but there are plans to open-source it in the future. This session is not a product promotion; its goal is to share technical knowledge in the context of “AI × GUI × Compose Multiplatform × agent API design.”

(Translated by the DroidKaigi Committee)
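As a rough illustration of the YAML intermediate representation mentioned above, a project's structure might be captured along these lines. The schema here is purely hypothetical; ComposeFlow is closed-source and its actual format is not public.

```yaml
# Hypothetical intermediate representation of an app's structure.
# Field names are invented for illustration, not ComposeFlow's schema.
project:
  name: HelloApp
  targets: [android, ios, web]
screens:
  - id: home
    components:
      - type: Text
        text: "Welcome"
      - type: Button
        label: "Get started"
        onClick: navigate(details)
  - id: details
    components:
      - type: Text
        text: "Details screen"
```

Because the representation is declarative data rather than Kotlin source, the LLM can generate or edit it without ever producing code that has to compile.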
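Defining each GUI action as an LLM-callable tool might look roughly like the following Kotlin sketch. All names are invented for illustration, not ComposeFlow's actual API: an `EditorTool` carries a name and a natural-language description (surfaced to the LLM so it knows when to call the tool) and executes against the project model.

```kotlin
// Hypothetical sketch: each editor action is exposed as a named tool that
// an LLM agent can discover and invoke. Not ComposeFlow's real API.

sealed interface ToolResult {
    data class Success(val message: String) : ToolResult
    data class Failure(val error: String) : ToolResult
}

interface EditorTool {
    val name: String
    val description: String // sent to the LLM so it knows when to call the tool
    fun execute(args: Map<String, String>): ToolResult
}

// Minimal stand-in for the editor's project state.
class ProjectModel {
    private val screens = mutableMapOf<String, MutableList<String>>()
    fun addComponent(screen: String, component: String) {
        screens.getOrPut(screen) { mutableListOf() }.add(component)
    }
}

// Example tool: add a UI component to a screen in the project model.
class AddUiComponentTool(private val project: ProjectModel) : EditorTool {
    override val name = "add_ui_component"
    override val description =
        "Adds a UI component (e.g. Button, Text) to the given screen."

    override fun execute(args: Map<String, String>): ToolResult {
        val screen = args["screen"]
            ?: return ToolResult.Failure("missing required argument: screen")
        val component = args["component"]
            ?: return ToolResult.Failure("missing required argument: component")
        project.addComponent(screen, component)
        return ToolResult.Success("Added $component to $screen")
    }
}
```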
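Failure handling could then be a bounded loop that feeds a tool's error back to the agent so it can retry with corrected arguments. Again a hypothetical sketch, reusing the `EditorTool` and `ToolResult` types from the previous block; the `retryWith` callback stands in for another round-trip to the LLM.

```kotlin
// Hypothetical sketch of a retry strategy: a failed tool call returns its
// error to the agent, which may retry with corrected arguments, up to a
// fixed attempt budget. Not ComposeFlow's real API.

data class ToolCall(val toolName: String, val args: Map<String, String>)

class ToolDispatcher(tools: List<EditorTool>) {
    private val byName = tools.associateBy { it.name }

    fun dispatch(
        call: ToolCall,
        maxAttempts: Int = 3, // attempt budget is an assumption for this sketch
        retryWith: (error: String) -> ToolCall, // stands in for a round-trip to the LLM
    ): ToolResult {
        var current = call
        repeat(maxAttempts) { attempt ->
            val tool = byName[current.toolName]
                ?: return ToolResult.Failure("unknown tool: ${current.toolName}")
            when (val result = tool.execute(current.args)) {
                is ToolResult.Success -> return result
                is ToolResult.Failure ->
                    if (attempt < maxAttempts - 1) {
                        // Feed the error back so the agent can correct itself.
                        current = retryWith(result.error)
                    } else {
                        return result
                    }
            }
        }
        return ToolResult.Failure("retry budget exhausted") // unreachable when maxAttempts > 0
    }
}
```

Returning structured failures instead of throwing keeps every outcome visible to the agent, which is what makes bounded retries straightforward to implement.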
thagikura (Founder, ComposeFlow)
Recommended for:

- Those interested in what Compose Multiplatform can do
- Anyone curious about operating existing applications via LLMs
- People looking for practical examples of LLM-based applications