See the app a user sees. Ranked, personalized, served by the app itself. No scraping, no public API, just the real in-app experience.
Explore social app workflows

Act on any app, not just read from it. Post, reply, DM, react, follow. Your agent shows up the way a person does, because it is one.
See mobile QA testing

Nothing gets done in one app. Your agent takes a screenshot in one, posts it on another, and shares it with the team in a third. The same things you would do, you just don't have to do them anymore.
See account operations

Apps are the phone's native interface. Now your agents can use them the way you would, enabling the work no API ever could.
Your BRIDGE between AI and the iPhone
Your agent can see what apps are on the phone. If it needs to do something and the app isn't there, it can download a new one from the App Store.
Your agent can use the Messages app directly, or send quick iMessages when a workflow needs a fast reply.
Push prompts, SMS codes, device checks, captchas. Your agent clears them and keeps going.
Your agent can work from the same files, photos, and documents you already keep in iCloud, so it has the context it needs without a new handoff.
Your agent pays when it needs to, using everyone's favorite payment method: Apple Pay.
Your agent unlocks the phone when there's work to do, and locks it when the work is done.
That is the idea behind TapKit: a real iPhone your AI can see, tap, type on, and use across apps. Learn what an iPhone agent is.
TapKit plugs into the tools your agent already uses.
Run it from Claude, Codex, Cursor, OpenCode, or any MCP-compatible client, then give that agent a real iPhone it can see, tap, type on, and use across apps.
Download the Mac app, configure your iPhone, and connect the two.
Use TapKit from Claude, Codex, or any other MCP client that can call tools.
Tell your agent what to do. It uses your iPhone the way you would. Watch it work, or walk away. It'll keep going.
TapKit gives you programmatic control of real, physical iPhones through Apple's accessibility features. You send actions (tap, swipe, type) via our API, and they execute on your iPhone. For framework differences, compare TapKit with Appium.
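As a rough illustration of what sending an action looks like over HTTP — the endpoint path, device ID, and auth header below are assumptions for the sketch, not TapKit's documented API — a single tap might be delivered like this:

```python
import json
import urllib.request

# One tap action at screen coordinates (hypothetical payload shape).
action = {"type": "tap", "x": 200, "y": 540}

def send_action(action, base_url="https://api.example.com", api_key="YOUR_KEY"):
    """Build the POST request that would deliver one action to the phone.

    Everything about the endpoint here is illustrative; check the TapKit
    docs for the real paths, payload fields, and auth scheme.
    """
    return urllib.request.Request(
        f"{base_url}/v1/devices/my-iphone/actions",
        data=json.dumps(action).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = send_action(action)
```

The request is built but not sent, so the sketch only shows the shape of the call: one small JSON action per request, executed on the physical device.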
No. Simulators run virtual iOS environments that can't install apps from the App Store. TapKit runs on real iPhones. See the full TapKit vs iOS simulator comparison.
If it's on the App Store and you can use it, your agent can use it.
Use our MCP server to connect directly to Claude Code or Cursor, our REST API with a Python SDK for custom integrations, or our Mac and Web apps for no-code control. The typical agent loop is: screenshot → send to your vision model → get action → execute on device. See how TapKit compares with iPhone Mirroring MCP.
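The agent loop above can be sketched in a few lines. The three helper functions are stubs standing in for your vision model and for TapKit's screenshot and action calls (their names are assumptions, not TapKit's actual SDK):

```python
def take_screenshot(goal=None):
    """Stub: the real call would fetch the current screen from the iPhone."""
    return b"\x89PNG placeholder"

def ask_vision_model(screenshot, goal):
    """Stub: a real agent sends the screenshot plus the goal to a vision
    model and gets back the next UI action to take."""
    return {"type": "tap", "x": 180, "y": 640}

def execute(action):
    """Stub: the real call would send the action to the device."""
    print(f"executing {action['type']} at ({action['x']}, {action['y']})")

def agent_step(goal):
    # One turn of the loop: screenshot -> model -> action -> execute.
    shot = take_screenshot(goal)
    action = ask_vision_model(shot, goal)
    execute(action)
    return action

agent_step("open the Photos app")
```

In practice you would run `agent_step` in a loop until the model reports the goal is complete; each turn grounds the model in a fresh screenshot of the real device.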
Yes, TapKit is bring-your-own-device. You connect your own iPhone to your Mac and TapKit turns it into an API. We're exploring hosted phone options for teams that don't want to manage their own hardware. If you're evaluating hosted hardware, compare TapKit with device farms.
TapKit starts at $100/month per phone with volume discounts for larger fleets. Get in touch for custom enterprise pricing.
Unlimited usage for teams starting with one phone.
Running more devices, custom workflows, or production support? We will help your team scale the setup and keep it running.