How to Capture OAuth Callbacks in CLI and Desktop Apps with Localhost Servers

When building CLI tools or desktop applications that integrate with OAuth providers, you face a unique challenge: how do you capture the authorization code when there's no public-facing server to receive the callback? The answer lies in a clever technique that's been right under our noses — spinning up a temporary localhost server to catch the OAuth redirect.

This tutorial walks through building a production-ready OAuth callback server that works across Node.js, Deno, and Bun. We'll cover everything from the basic HTTP server setup to handling edge cases that trip up most implementations.

Understanding the OAuth Callback Flow

Before diving into code, let's clarify what we're building. In a typical OAuth 2.0 authorization code flow, your application redirects users to an authorization server (like GitHub or Google), where they grant permissions. The authorization server then redirects back to your application with an authorization code.

For web applications, this redirect goes to a public URL. But for CLI tools and desktop apps, we use a localhost URL — typically http://localhost:3000/callback. The OAuth provider redirects to this local address, and our temporary server captures the authorization code from the query parameters.
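
To make the flow concrete, here's roughly what the two URLs involved look like (the client_id and provider endpoint are placeholders):

// The URL your app opens in the user's browser (values illustrative):
const authUrl =
  "https://github.com/login/oauth/authorize" +
  "?client_id=your_client_id" +
  "&redirect_uri=" + encodeURIComponent("http://localhost:3000/callback") +
  "&state=random_csrf_token";

// After the user approves, the provider redirects the browser to:
//   http://localhost:3000/callback?code=abc123&state=random_csrf_token
// and the temporary localhost server reads code and state from the query string.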

This approach is explicitly blessed by OAuth 2.0 for Native Apps (RFC 8252) and is used by major tools like the GitHub CLI and Google's OAuth libraries.

Setting Up the Basic HTTP Server

The first step is creating an HTTP server that can listen on localhost. Modern JavaScript runtimes provide different APIs for this, but we can abstract them behind a common interface using Web Standards Request and Response objects.

interface CallbackServer {
  start(options: ServerOptions): Promise<void>;
  waitForCallback(path: string, timeout: number): Promise<CallbackResult>;
  stop(): Promise<void>;
}

function createCallbackServer(): CallbackServer {
  // Runtime detection
  if (typeof Bun !== "undefined") return new BunCallbackServer();
  if (typeof Deno !== "undefined") return new DenoCallbackServer();
  return new NodeCallbackServer();
}
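
The ServerOptions and CallbackResult types are used throughout but never spelled out; here's a minimal sketch inferred from how the rest of the code in this tutorial uses them (your actual definitions may differ):

// Assumed shapes, inferred from usage later in this tutorial:
interface ServerOptions {
  port: number;
  hostname: string;     // should be "localhost" — see Security Considerations
  successHtml?: string; // optional template for the success page
  errorHtml?: string;   // optional template with {{error}} placeholders
}

// All query parameters from the redirect, keyed by name, e.g.
// { code: "abc123", state: "..." } or { error: "access_denied", ... }
interface CallbackResult {
  code?: string;
  state?: string;
  error?: string;
  error_description?: string;
  [key: string]: string | undefined;
}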

Each runtime implementation follows the same pattern: create a server, listen for requests, and resolve a promise when the callback arrives. Here's the Node.js version that bridges between Node's http module and Web Standards:

import type * as http from "node:http"; // type-only import for the annotations below

class NodeCallbackServer implements CallbackServer {
  private server?: http.Server;
  private callbackPromise?: {
    resolve: (result: CallbackResult) => void;
    reject: (error: Error) => void;
  };

  async start(options: ServerOptions): Promise<void> {
    const { createServer } = await import("node:http");

    return new Promise((resolve, reject) => {
      this.server = createServer(async (req, res) => {
        const request = this.nodeToWebRequest(req, options.port);
        const response = await this.handleRequest(request);

        res.writeHead(
          response.status,
          Object.fromEntries(response.headers.entries()),
        );
        res.end(await response.text());
      });

      this.server.listen(options.port, options.hostname, resolve);
      this.server.on("error", reject);
    });
  }

  private nodeToWebRequest(req: http.IncomingMessage, port: number): Request {
    const url = new URL(req.url!, `http://localhost:${port}`);
    const headers = new Headers();

    for (const [key, value] of Object.entries(req.headers)) {
      if (typeof value === "string") {
        headers.set(key, value);
      }
    }

    return new Request(url.toString(), {
      method: req.method,
      headers,
    });
  }
}

The beauty of this approach is that once we convert to Web Standards, the actual request handling logic is identical across all runtimes.
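
For comparison, the Deno and Bun versions can hand Web Standards Request/Response objects straight through with no conversion layer. A sketch, assuming the same handleRequest method the Node version calls (class and field names here are illustrative):

// Deno: Deno.serve accepts a (Request) => Response | Promise<Response> handler.
class DenoCallbackServer {
  private server?: Deno.HttpServer;

  async start(options: ServerOptions): Promise<void> {
    this.server = Deno.serve(
      { port: options.port, hostname: options.hostname },
      (request) => this.handleRequest(request),
    );
  }
  // handleRequest, waitForCallback, and stop are the shared logic...
}

// Bun: Bun.serve takes the same kind of handler as its fetch option.
class BunCallbackServer {
  private server?: ReturnType<typeof Bun.serve>;

  async start(options: ServerOptions): Promise<void> {
    this.server = Bun.serve({
      port: options.port,
      hostname: options.hostname,
      fetch: (request) => this.handleRequest(request),
    });
  }
  // ...
}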

Capturing the OAuth Callback

The heart of our server is the callback handler. When the OAuth provider redirects back, we need to extract the authorization code (or error) from the query parameters:

private async handleRequest(request: Request): Promise<Response> {
  const url = new URL(request.url);

  if (url.pathname === this.callbackPath) {
    const params: CallbackResult = {};

    // Extract all query parameters
    for (const [key, value] of url.searchParams) {
      params[key] = value;
    }

    // Resolve the waiting promise
    if (this.callbackPromise) {
      this.callbackPromise.resolve(params);
    }

    // Return success page to the browser
    return new Response(this.generateSuccessHTML(), {
      status: 200,
      headers: { "Content-Type": "text/html" },
    });
  }

  return new Response("Not Found", { status: 404 });
}

Notice how we capture all query parameters, not just the authorization code. OAuth providers send additional information like state for CSRF protection, and error responses include error and error_description fields. Our implementation preserves everything for maximum flexibility.

Handling Timeouts and Cancellation

Real-world OAuth flows can fail in numerous ways. Users might close the browser, deny permissions, or simply walk away. Our server needs robust timeout and cancellation handling:

async waitForCallback(path: string, timeout: number): Promise<CallbackResult> {
  this.callbackPath = path;

  return new Promise((resolve, reject) => {
    let isResolved = false;

    // Set up timeout
    const timer = setTimeout(() => {
      if (!isResolved) {
        isResolved = true;
        reject(new Error(`OAuth callback timeout after ${timeout}ms`));
      }
    }, timeout);

    // Wrap resolve/reject to handle cleanup
    const wrappedResolve = (result: CallbackResult) => {
      if (!isResolved) {
        isResolved = true;
        clearTimeout(timer);
        resolve(result);
      }
    };

    this.callbackPromise = {
      resolve: wrappedResolve,
      reject: (error) => {
        if (!isResolved) {
          isResolved = true;
          clearTimeout(timer);
          reject(error);
        }
      },
    };
  });
}

Supporting AbortSignal enables programmatic cancellation, essential for GUI applications where users might close a window mid-flow:

if (signal) {
  if (signal.aborted) {
    throw new Error("Operation aborted");
  }

  const abortHandler = () => {
    this.stop();
    if (this.callbackPromise) {
      this.callbackPromise.reject(new Error("Operation aborted"));
    }
  };

  signal.addEventListener("abort", abortHandler);
}

Providing User Feedback

When users complete the OAuth flow, they see a browser page indicating success or failure. Instead of a blank page or cryptic message, provide clear feedback with custom HTML:

function generateCallbackHTML(
  params: CallbackResult,
  templates: Templates,
): string {
  if (params.error) {
    // OAuth error - show error page
    return templates.errorHtml
      .replace(/{{error}}/g, params.error)
      .replace(/{{error_description}}/g, params.error_description || "");
  }

  // Success - show confirmation
  return (
    templates.successHtml ||
    `
    <html>
      <body style="font-family: system-ui; padding: 2rem; text-align: center;">
        <h1>✅ Authorization successful!</h1>
        <p>You can now close this window and return to your terminal.</p>
      </body>
    </html>
  `
  );
}

For production applications, consider adding CSS animations, auto-close functionality, or deep links back to your desktop application.
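
As one example, the success page can try to close its own tab after a short delay. A sketch — note that browsers only allow window.close() on windows opened by script, so the fallback text still matters:

const autoCloseSuccessHtml = `
  <html>
    <body style="font-family: system-ui; padding: 2rem; text-align: center;">
      <h1>✅ Authorization successful!</h1>
      <p>This window will try to close itself; otherwise you can close it and return to your terminal.</p>
      <script>
        // window.close() only works for script-opened windows,
        // so the message above remains the fallback.
        setTimeout(() => window.close(), 3000);
      </script>
    </body>
  </html>
`;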

Security Considerations

While localhost servers are inherently more secure than public endpoints, several security measures are crucial:

1. Bind to localhost only: Never bind to 0.0.0.0 or public interfaces. This prevents network-based attacks:

this.server.listen(port, "localhost"); // NOT "0.0.0.0"

2. Validate the state parameter: OAuth's state parameter prevents CSRF attacks. Generate it before starting the flow and validate it in the callback:

import { randomBytes } from "node:crypto";

const state = randomBytes(32).toString("base64url");
const authUrl = `${provider}/authorize?state=${state}&...`;

// In the callback handler
if (params.state !== expectedState) {
  throw new Error("State mismatch - possible CSRF attack");
}

3. Close the server immediately: Once you receive the callback, shut down the server to minimize the attack surface:

const result = await server.waitForCallback("/callback", 30000);
await server.stop(); // Always cleanup

4. Use unpredictable ports when possible: If your OAuth provider supports dynamic redirect URIs, use random high ports to prevent port-squatting attacks.
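
In Node, one way to get an unpredictable port is to listen on port 0 and let the OS assign a free ephemeral one — a sketch (the redirect URI you send to the provider must then be built from the assigned port):

import { createServer } from "node:http";
import type { AddressInfo } from "node:net";

const server = createServer(/* handler */);
server.listen(0, "localhost", () => {
  // Port 0 asks the OS for any free ephemeral port.
  const { port } = server.address() as AddressInfo;
  const redirectUri = `http://localhost:${port}/callback`;
  console.log(`Listening on ${redirectUri}`);
});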

Putting It All Together

Here's a complete example that ties everything together:

import { createCallbackServer } from "./server";
import { spawn } from "node:child_process";

export async function getAuthCode(authUrl: string): Promise<string> {
  const server = createCallbackServer();

  try {
    // Start the server
    await server.start({
      port: 3000,
      hostname: "localhost",
      successHtml: "<h1>Success! You can close this window.</h1>",
      errorHtml: "<h1>Error: {{error_description}}</h1>",
    });

    // Open the browser. On Windows, "start" is a cmd built-in rather than
    // an executable, so it must be invoked through cmd /c.
    if (process.platform === "win32") {
      spawn("cmd", ["/c", "start", "", authUrl], { detached: true });
    } else {
      const opener = process.platform === "darwin" ? "open" : "xdg-open";
      spawn(opener, [authUrl], { detached: true });
    }

    // Wait for callback
    const result = await server.waitForCallback("/callback", 30000);

    if (result.error) {
      throw new Error(`OAuth error: ${result.error_description}`);
    }

    return result.code!;
  } finally {
    // Always cleanup
    await server.stop();
  }
}

// Usage
const code = await getAuthCode(
  "https://github.com/login/oauth/authorize?" +
    "client_id=xxx&redirect_uri=http://localhost:3000/callback",
);

Best Practices and Next Steps

Building a robust OAuth callback server requires attention to detail, but the patterns are consistent across implementations. Key takeaways:

  • Use Web Standards APIs for cross-runtime compatibility
  • Handle all error cases including timeouts and user cancellation
  • Provide clear user feedback with custom success/error pages
  • Implement security measures like state validation and localhost binding
  • Clean up resources by always stopping the server after use

This localhost callback approach has become the de facto standard for OAuth in CLI tools. Libraries like oauth-callback provide production-ready implementations with additional features like automatic browser detection, token persistence, and PKCE support.
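
If you roll your own, PKCE is worth adding even without a library. A minimal sketch of generating the RFC 7636 verifier/challenge pair with Node's built-in crypto:

import { randomBytes, createHash } from "node:crypto";

// RFC 7636: code_verifier is a high-entropy random string...
const codeVerifier = randomBytes(32).toString("base64url");

// ...and code_challenge is its SHA-256 hash, base64url-encoded (S256 method).
const codeChallenge = createHash("sha256")
  .update(codeVerifier)
  .digest("base64url");

// Send code_challenge (plus code_challenge_method=S256) in the authorize URL,
// then send code_verifier with the token exchange request.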

Modern OAuth is moving toward even better solutions like Device Code Flow for headless environments and Dynamic Client Registration for eliminating pre-shared secrets. But for now, the localhost callback server remains the most widely supported and user-friendly approach for bringing OAuth to command-line tools.


Ready to implement OAuth in your CLI tool? Check out the complete oauth-callback library for a battle-tested implementation that handles all the edge cases discussed here.

This tutorial is part of a series on modern authentication patterns. Follow @koistya for more insights on building secure, user-friendly developer tools.
