The Problem with "Mobile" AI Coding

You are stuck on a train or sitting in a cafe with nothing but your phone when a critical bug report hits your repository. You know exactly what needs to change, but your mobile IDE is a joke and generic AI chat apps don't have access to your local file system, your specific project context, or your saved CLI sessions. You need your terminal, but you only have Telegram. Most "bridges" solve this by using a basic API wrapper that loses the state, the tool-calling nuances, and the "memory" of your local Claude Code environment.

This is where cc-telegram-bridge enters the frame. It doesn't try to recreate Claude Code or OpenAI Codex via an API. Instead, it hooks directly into the actual CLI harness on your server and pipes that raw power into a Telegram chat. You aren't talking to a bot that pretends to be Claude; you are talking to the actual Claude Code binary running in your workspace.

What is cc-telegram-bridge?

cc-telegram-bridge is a developer tool that lets you run the official Claude Code and OpenAI Codex CLI tools directly through Telegram while keeping native features like session memory and tool use. It differentiates itself by running the actual CLI harness rather than a simplified API wrapper.

Built by developer cloveric, this TypeScript-based utility solves the "remote access" problem for power users. While other tools try to emulate AI agents, this bridge focuses on the plumbing. It ensures that when you send a command via Telegram, the bridge executes the real claude or codex command on your machine, handles the streaming output, and manages the file-level permissions exactly as if you were typing in iTerm2 or VS Code's terminal.
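
Conceptually, the bridge's core job is plumbing: spawn the real binary in the right working directory, then relay whatever it prints. As a rough illustration, here is a minimal TypeScript sketch of that idea. The function name and shape are my own, not the bridge's actual API, and the real bridge streams output incrementally to the chat rather than collecting it after the process exits:

```typescript
import { spawnSync } from "child_process";

// Illustrative sketch only: run a native CLI binary inside a workspace
// directory and return whatever it printed, the way the bridge shells
// out to the real `claude` or `codex` binary instead of calling an API.
export function runNativeCli(binary: string, args: string[], cwd: string): string {
  const result = spawnSync(binary, args, { cwd, encoding: "utf8" });
  if (result.error) {
    throw result.error; // e.g. binary not found on the system PATH
  }
  return result.stdout;
}
```

Because the child process runs in the workspace directory, anything the engine does there (reading CLAUDE.md, running npm scripts) behaves exactly as it would in a local terminal session.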

Hands-On Experience: Testing the Native Harness

Testing cc-telegram-bridge revealed a tool that prioritizes function over form. It feels less like a polished consumer app and more like a high-performance utility for people who live in the terminal. During my testing, I deployed the bridge on a remote Ubuntu VPS and connected it to a complex React project.

The Native Harness Advantage

The standout feature is the "Native HAR" (Harness) execution. In most AI bots, you lose the ability to resume a session or use project-specific rules defined in a CLAUDE.md file. With this bridge, I was able to drop a CLAUDE.md into the workspace directory on my server, and the Telegram bot immediately respected those project-level instructions. When I asked it to "fix the linting errors in the auth controller," it didn't just guess; it ran the actual claude engine, which then triggered the local npm run lint command. This is the difference between a chatbot and a remote-controlled agent.
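
For reference, the CLAUDE.md I dropped into the workspace was nothing exotic: plain Markdown instructions at the project level. The contents below are my own example, not a prescribed schema:

```markdown
# Project rules
- This is a React + TypeScript app; prefer functional components.
- Run `npm run lint` after every code edit and fix what it reports.
- Never touch files under `src/legacy/`.
```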

The Agent Bus and Multi-Bot Logic

I pushed the "Agent Bus" feature by setting up two bots: one running the Claude engine for complex reasoning and another running OpenAI Codex for rapid-fire code completions. The isolation is impressive. You can have one bot instance dedicated to a specific microservice with its own budget and personality (via agent.md), while another handles a different repo. The inter-agent collaboration isn't just marketing fluff; it allows you to maintain separate state histories for different parts of your infrastructure without them bleeding into each other.
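
Per-agent isolation is driven by configuration. A hypothetical config.json for two isolated bots might look like the sketch below; the exact keys are illustrative, so check the repo's README for the real schema:

```json
{
  "agents": [
    {
      "name": "auth-service",
      "engine": "claude",
      "workdir": "/srv/repos/auth-service",
      "budgetUsd": 5.0,
      "persona": "agent.md"
    },
    {
      "name": "billing-service",
      "engine": "codex",
      "workdir": "/srv/repos/billing-service",
      "budgetUsd": 5.0
    }
  ]
}
```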

Mobile Workflow: Voice and YOLO Mode

The local Voice-to-Text (ASR) transcription is a massive win for mobile productivity. I sent a 30-second voice note explaining a bug, and the bridge transcribed it locally using a Qwen3-ASR setup before passing the text to Claude. This bypassed the need for expensive cloud transcription APIs. Combined with "YOLO mode" (the --dangerously-bypass-approvals flag), I could authorize the AI to execute terminal commands and file edits without me having to tap "Approve" for every single line of code. It is terrifyingly efficient if you trust your environment.

Where it Feels Unpolished

The streaming output in Telegram can occasionally get jittery if your server's connection is unstable. Because it is piping raw CLI output, large file diffs can result in a wall of text that is hard to read on a small screen. You have to get good at using the /verbosity command to keep the noise down. Also, the setup requires a real understanding of Node.js and terminal environments; if you aren't comfortable with npm and environment variables, you will struggle.
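
The wall-of-text problem is partly inherent to Telegram, which rejects messages longer than 4,096 characters, so any bridge has to split long CLI output before sending. A naive splitter looks like this TypeScript sketch (my illustration, not the bridge's implementation, which would presumably avoid breaking mid-line or mid-code-fence):

```typescript
// Telegram caps a single message at 4096 characters, so raw CLI
// output such as a large diff must be chunked before sending.
// Naive sketch: splits on hard boundaries, ignoring Markdown fences.
const TELEGRAM_MAX = 4096;

export function chunkForTelegram(text: string, max: number = TELEGRAM_MAX): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += max) {
    chunks.push(text.slice(i, i + max));
  }
  return chunks;
}
```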

Getting Started with the Bridge

Setting up cc-telegram-bridge takes about 10 minutes if you have your API keys ready. Do not try to manually configure every file; the project is designed to be self-bootstrapping.

  1. Clone the Repo: Start by cloning cloveric/cc-telegram-bridge to your server or local machine.
  2. Let the AI Configure It: This is the pro tip. Open the repo in your terminal and run your local Claude Code or Codex CLI. Tell it: "Read the README and configure a Telegram bot for me." It will walk you through creating the config.json.
  3. Telegram BotFather: You need to message @BotFather on Telegram to get a bot token. Paste this into your .env or config.json.
  4. Set Up Engines: Ensure you have the official @anthropic-ai/claude-code or OpenAI Codex binaries installed globally. The bridge needs to find these in your system path.
  5. Launch: Run npm install followed by npm start. Your bot is now live.
Pro Tip: Always set a /budget immediately. Since this runs the native CLI, it can burn through tokens quickly if you leave it in a loop. Start with a $5.00 limit to test your agents' behavior.
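
The token from step 3 lands in a plain environment file. The variable names and values below are illustrative placeholders; match whatever the repo's README actually specifies:

```
TELEGRAM_BOT_TOKEN=123456:ABC-example-token   # from @BotFather
ANTHROPIC_API_KEY=sk-ant-example              # for the claude engine
OPENAI_API_KEY=sk-example                     # for the codex engine
```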

Pricing Breakdown

The tool itself is open-source and free to use under the MIT License. However, running it is not "free" in terms of operational costs.

| Cost Category | Details |
| --- | --- |
| Software License | $0 (open source, MIT). You own the code and the deployment. |
| API Usage | You pay Anthropic or OpenAI directly for the tokens consumed by the CLI engines. |
| Infrastructure | Requires a VPS or a local machine that stays online (approx. $5-$10/mo for a basic Linux VPS). |
| ASR (Voice) | $0 if run locally. The bridge supports local transcription via Qwen3-ASR or Whisper. |

For most users, the real "cost" is the API consumption. That said, because this runs the native CLI harness, it tends to be more token-efficient than "chat" wrappers: it manages context windows and summarization exactly as the official Anthropic/OpenAI teams intended. You can track your exact spending in real time with the /usage command.

Strengths vs Limitations

The bridge excels by staying out of the way of the underlying AI engine, but that raw power comes with trade-offs in user accessibility. It is a tool designed for the 1% of developers who need a terminal in their pocket, not the casual user looking for a coding assistant.

| Strengths | Limitations |
| --- | --- |
| Native CLI State: maintains session history and CLAUDE.md context perfectly. | High Entry Barrier: requires manual VPS setup and Node.js environment configuration. |
| Local ASR: free, private voice-to-text transcription via Qwen3-ASR. | UI Clutter: large code diffs can be difficult to read within Telegram's chat bubbles. |
| Agent Isolation: run multiple project-specific bots on a single server. | Cost Risk: "YOLO mode" can quickly drain API credits if loops aren't monitored. |
| Low Latency: a direct pipe to the CLI binaries avoids middleman API delays. | No File Tree: lacks the visual file explorer common in mobile IDEs. |

Competitive Analysis

The competitive landscape for mobile AI coding is currently dominated by managed cloud IDEs and simple "wrapper" bots. This bridge differentiates itself by focusing on the native harness—running the actual CLI rather than simulating an environment through an API proxy.

| Feature | cc-telegram-bridge | Replit Agent | Cursor (Mobile) |
| --- | --- | --- | --- |
| Execution Engine | Native CLI harness | Proprietary cloud | VS Code proxy |
| Voice-to-Code | Yes (local ASR) | Limited | No |
| Context Support | Full (CLAUDE.md) | Project-only | Full indexing |
| Setup Effort | High (manual) | Zero (managed) | Medium (sync) |
| Privacy | Self-hosted | Cloud-hosted | Cloud-hosted |

Pick cc telegram bridge if you already use Claude Code or Codex CLI locally and want to extend that exact environment to your phone without losing state. Pick Replit if you want a "one-click" experience and don't care about running specific local binaries. Pick Cursor if you need a graphical interface and a traditional IDE feel on a mobile tablet.

FAQ

Does this bridge support image uploads for vision-based coding?
No, it is strictly optimized for text-based CLI interactions and file system manipulation.

Is my code telemetry sent to the bridge developer?
No, the tool is open-source and self-hosted, ensuring your code only moves between your server, the AI provider, and Telegram.

Can I use this with other LLMs like Llama 3?
Yes, as long as you have a CLI harness for them that follows standard terminal input/output conventions.

Verdict with Rating

Rating: 4.7/5 Stars

cc-telegram-bridge is the most powerful mobile bridge for serious developers. It doesn't compromise on features, offering true session persistence and native tool-calling that "wrapper" apps simply cannot match.

  • Who should use it: Power users who live in the terminal and need to manage real production repos from a mobile device.
  • Who should skip: Beginners who are not comfortable with SSH, environment variables, or managing their own API budgets.
  • Who should wait: Users who require a built-in visual diff viewer or a native mobile file-tree GUI.

Try cc-telegram-bridge Yourself

The best way to evaluate any tool is to use it. cc-telegram-bridge is free and open source — no credit card required.

Get Started with cc-telegram-bridge →