In Android apps, sluggish startup and janky scrolling lead directly to user churn and lower conversion rates. As a project grows in complexity, however, common tools such as Firebase Performance Monitoring make it difficult to pinpoint why performance metrics are degrading and where the actual bottlenecks lie. For example, when the time to first frame is slow, you might suspect delays in data-layer execution or image decoding, but the root cause can lie in asynchronous operations outside the app, such as TCP connection establishment or DNS resolution through a CDN. I/O contention can also trigger performance problems, so investigating bottlenecks that cannot be reproduced locally is extremely challenging.

In this session, we will cover the fundamentals of distributed tracing, a technique widely used on the backend, and explain how to apply it to Android apps. We will introduce techniques and monitoring strategies that clarify the causal relationships between in-app processing and system-level operations, making performance issues easier to identify while maintaining the high quality of the Pixiv app, which is used by people worldwide in diverse environments. We will also discuss the design and operation of tracing with OpenTelemetry, highlight the differences from Firebase Performance and Systrace, present an example of a mobile-customized SDK implementation, and share real-world case studies of performance improvements. By the end, you will understand the technologies and mechanisms needed to uncover what "slow loading" really means.

Proposed Session Outline
- Why traditional methods struggle to pinpoint performance issues
- What is distributed tracing?
- Core concepts: trace, span, and context propagation
- OpenTelemetry and its ecosystem
- Comparison with Firebase Performance and Systrace
- Designing and introducing distributed tracing for Android apps
- Granularity of measurement units
- Designing and implementing a lightweight mobile SDK
- Data collection and sampling
- Visualization and operations
- Observability with tools like Grafana
- Best practices for alert design
- Problem-solving flow and results
- Case study: observability → performance metric degradation → investigation → improvement → re-observability
- Reflections and future outlook
- Changes in the development process before and after implementation
- Future potential of distributed tracing (e.g., AI-driven monitoring)

(Translated by the DroidKaigi Committee)
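To make the core concepts named in the outline (trace, span, and context propagation) concrete, here is a minimal Kotlin sketch using the standard OpenTelemetry API. It is illustrative only, not the speaker's SDK: the span names, attributes, and instrumentation scope name are invented for the example, and it assumes an OpenTelemetry SDK has already been configured elsewhere in the app.

```kotlin
import io.opentelemetry.api.GlobalOpenTelemetry
import io.opentelemetry.api.trace.Tracer

// Assumes an OpenTelemetry SDK was initialized earlier (e.g. in Application.onCreate());
// otherwise GlobalOpenTelemetry returns a no-op implementation.
private val tracer: Tracer = GlobalOpenTelemetry.getTracer("com.example.app")

// Hypothetical example: wrap a screen load in a parent span so that child spans
// (network calls, image decoding, etc.) end up in the same trace.
fun loadHomeScreen() {
    val screenSpan = tracer.spanBuilder("HomeScreen.load").startSpan()
    try {
        // Making the span current propagates its context to work on this thread,
        // so the child span below is automatically parented to it.
        screenSpan.makeCurrent().use {
            val fetchSpan = tracer.spanBuilder("fetchFeed").startSpan()
            try {
                // ... perform the network request here; an instrumented HTTP client
                // would also inject the trace context into outgoing request headers.
            } finally {
                fetchSpan.end()
            }
        }
    } finally {
        screenSpan.end()
    }
}
```

When the trace context is injected into outgoing requests (OpenTelemetry uses W3C Trace Context headers by default), backend spans join the same trace, which is what lets you see whether "slow loading" happened inside the app or in DNS, connection setup, or server-side work.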
andousan
Android Engineer @ pixiv.inc
■Android Engineers
- Those interested in observability of Android apps
- Those interested in overall performance improvement of Android apps
- Those who find it difficult to identify the root causes of Android app performance issues
- Those who feel limited by Firebase Performance Monitoring

■Product Managers / Team Leaders
- Those who want to understand their app's current performance status
- Those who want to enhance app quality

■QA Engineers
- Those involved in monitoring performance indicators such as SLO/SLA/SLI for apps
No More Writing Test Code: Automated Design and Generation of Asynchronous Tests Delegated to JetBrains AI Assistant
makun
#@Experimental AI
How Android Uses Data Structures Behind the Scenes
HyunWoo Lee
#Android Framework
"Where Do I Start Reading?" A Practical Guide to Getting Up to Speed with Code and Culture — Onboarding Strategies to Empower New Team Members
richako (risako070310)
#Development Process