Tutorial
January 15, 2025
8 min read

How to Build a ChatGPT Wrapper Without Exposing Your API Key

Learn how to create a secure ChatGPT-powered application using Conduit Link to protect your OpenAI API key from client-side exposure.

Conduit Link Team

Technical Content Writers

Building a ChatGPT-powered application is exciting, but exposing your OpenAI API key in client-side code is a critical security risk. In this tutorial, we'll show you how to build a secure ChatGPT wrapper using Conduit Link.

The Problem with Client-Side API Keys

When you include API keys directly in your React, Vue, or vanilla JavaScript application, they become visible to anyone who inspects your code. This exposure can lead to:

  • Unauthorized usage and unexpected bills
  • Rate limit abuse affecting your legitimate users
  • Data breaches if the API provides access to sensitive information
  • Service disruption when your key gets revoked
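To see why, consider the naive setup below. Anyone can open DevTools or read your bundled JavaScript and lift the key (the value shown is a placeholder, not a real key):

// DON'T do this: the key ships to every visitor's browser
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: 'sk-your-real-key-here', // readable in the bundle and network tab
  dangerouslyAllowBrowser: true    // required for the SDK to run in a browser at all
});
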
Enter Conduit Link

Conduit Link acts as a secure proxy between your client application and the OpenAI API. Your API key stays safely on our servers, encrypted with AES-256 encryption. The best part? You can continue using the official OpenAI SDK - just point it to your Conduit Link proxy URL.

Step-by-Step Implementation

1. Set Up Your Conduit Link

First, create a new conduit link in your dashboard:

    
    
  • Navigate to your project in the dashboard
  • Click "Create Link"
  • Name it "OpenAI ChatGPT"
  • Set the endpoint URL to: https://api.openai.com/v1
  • Add your OpenAI API key (starts with sk-)
  • Configure security settings:
    - Rate limit: 100 requests/minute
    - Allowed paths: /v1/chat/completions, /v1/models
    - CORS origins: https://yourdomain.com, http://localhost:3000
  • Save and copy your access token (format: cl_xxxxx)

2. Install the OpenAI SDK

    
npm install openai
    

3. Configure OpenAI Client with Your Proxy URL

    
import OpenAI from 'openai';

// Configure the OpenAI client to use your Conduit Link proxy
const openai = new OpenAI({
  baseURL: 'https://proxy.conduit.link/YOUR_LINK_ID/v1',
  apiKey: 'cl_dummy', // Any dummy value works - real key is in Conduit Link
  defaultHeaders: {
    'X-Access-Token': 'cl_xxxxxxxxxxxxxx', // Your access token from dashboard
    // Optional: Add JWT for user-specific rate limiting
    'Authorization': `Bearer ${userJWT}` // Only if JWT verification is enabled
  },
  dangerouslyAllowBrowser: true // Required for client-side usage
});

// Now use the OpenAI SDK as normal
async function chatWithGPT(messages) {
  try {
    const completion = await openai.chat.completions.create({
      model: 'gpt-3.5-turbo',
      messages: messages,
      temperature: 0.7,
      max_tokens: 150
    });

    return completion.choices[0].message;
  } catch (error) {
    console.error('Chat error:', error);
    throw error;
  }
}

// Streaming example
async function streamChat(messages, onChunk) {
  const stream = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo',
    messages: messages,
    stream: true,
    temperature: 0.7
  });

  for await (const chunk of stream) {
    onChunk(chunk.choices[0]?.delta?.content || '');
  }
}
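
As a quick usage sketch, streamChat can feed any sink; here the onChunk callback appends text to a hypothetical #output element (top-level await assumes an ES module):

// Render the assistant's reply incrementally as tokens arrive
const output = document.querySelector('#output'); // hypothetical element

await streamChat(
  [{ role: 'user', content: 'Explain API proxies in one sentence.' }],
  (text) => { output.textContent += text; }
);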

4. Use in Your React Application

    
import React, { useState } from 'react';
import OpenAI from 'openai';

// Initialize OpenAI with your Conduit Link proxy
const openai = new OpenAI({
  baseURL: `https://proxy.conduit.link/${process.env.REACT_APP_LINK_ID}/v1`,
  apiKey: 'cl_dummy', // Any value - real key is secure
  defaultHeaders: {
    'X-Access-Token': process.env.REACT_APP_ACCESS_TOKEN // cl_xxxxx format
  },
  dangerouslyAllowBrowser: true
});

function ChatInterface() {
  const [messages, setMessages] = useState([]);
  const [input, setInput] = useState('');
  const [loading, setLoading] = useState(false);

  const sendMessage = async () => {
    if (!input.trim()) return;

    const userMessage = { role: 'user', content: input };
    const updatedMessages = [...messages, userMessage];
    setMessages(updatedMessages);
    setInput('');
    setLoading(true);

    try {
      const completion = await openai.chat.completions.create({
        model: 'gpt-3.5-turbo',
        messages: updatedMessages
      });

      setMessages([...updatedMessages, completion.choices[0].message]);
    } catch (error) {
      console.error('Failed to send message:', error);
      // Handle rate limit errors
      if (error.status === 429) {
        alert('Rate limit exceeded. Please try again later.');
      }
    } finally {
      setLoading(false);
    }
  };

  return (
    <div className="chat">
      {messages.map((msg, idx) => (
        <div key={idx} className={`message ${msg.role}`}>
          {msg.content}
        </div>
      ))}
      <input
        value={input}
        onChange={(e) => setInput(e.target.value)}
        onKeyPress={(e) => e.key === 'Enter' && sendMessage()}
        placeholder="Type your message..."
        disabled={loading}
      />
    </div>
  );
}
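
Mounting the component is standard React; a minimal entry point, assuming your index.html has a root element:

import { createRoot } from 'react-dom/client';

// Mount the chat UI (assumes <div id="root"></div> in index.html)
createRoot(document.getElementById('root')).render(<ChatInterface />);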

Advanced Features

Rate Limiting Per User

Implement user-specific rate limits using JWT tokens:

    
// Enable JWT verification in your conduit link settings
// Then include the JWT token in your requests
const openai = new OpenAI({
  baseURL: `https://proxy.conduit.link/${LINK_ID}/v1`,
  apiKey: 'cl_dummy',
  defaultHeaders: {
    'X-Access-Token': ACCESS_TOKEN, // cl_xxxxx format
    'Authorization': `Bearer ${userJWT}` // User's JWT token
  },
  dangerouslyAllowBrowser: true
});
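
The userJWT above has to come from somewhere; typically your own backend signs one after login. Here is a minimal sketch using Express and the jsonwebtoken package (the /session route, claim names, and secret handling are illustrative assumptions, not Conduit Link requirements):

// Hypothetical backend route that issues a short-lived JWT per user
import express from 'express';
import jwt from 'jsonwebtoken';

const app = express();

app.get('/session', (req, res) => {
  // In a real app, take the user ID from your authenticated session
  const token = jwt.sign(
    { sub: 'user-123' },      // identifies the user for per-user limits
    process.env.JWT_SECRET,   // must match the secret configured in your link
    { expiresIn: '15m' }      // short lifetime limits replay risk
  );
  res.json({ token });
});

app.listen(3001);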
    

Usage Analytics

Track usage per user or feature by adding custom headers:

    
const completion = await openai.chat.completions.create({
  model: 'gpt-3.5-turbo',
  messages: messages
}, {
  headers: {
    'X-User-ID': userId,
    'X-Feature': 'chat-widget'
  }
});
    

Error Handling

Implement robust error handling for rate limits and API errors:

    
try {
  const completion = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo',
    messages: messages
  });
} catch (error) {
  if (error.status === 429) {
    // Handle rate limit
    const resetTime = error.headers?.['X-RateLimit-Reset'];
    showRateLimitMessage(resetTime);
  } else if (error.status === 401) {
    // Handle authentication error
    refreshAccessToken();
  }
}
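
You can go a step further and retry rate-limited calls automatically. A small sketch of exponential backoff (the retry count and delays are arbitrary choices):

// Retry a request with exponential backoff when the proxy returns 429
async function withBackoff(request, maxRetries = 3) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await request();
    } catch (error) {
      if (error.status !== 429 || attempt === maxRetries) throw error;
      // Wait 1s, 2s, 4s, ... between attempts
      await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** attempt));
    }
  }
}

// Usage
const completion = await withBackoff(() =>
  openai.chat.completions.create({ model: 'gpt-3.5-turbo', messages })
);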
    

Security Best Practices

  • Environment Variables: Store your Conduit Link access token in environment variables (see the example after this list)
  • CORS Configuration: Restrict allowed origins to your domains only
  • Rate Limiting: Set appropriate limits to prevent abuse
  • Path Restrictions: Limit access to only the OpenAI endpoints you need
  • Monitoring: Use Conduit Link's analytics to track usage patterns
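
For the first item, a typical build-time setup looks like this (.env.local is a Create React App convention; the file holds only your Conduit Link token, which is designed for client-side use - your OpenAI key never appears here):

# .env.local
REACT_APP_LINK_ID=your_link_id
REACT_APP_ACCESS_TOKEN=cl_xxxxxxxxxxxxxx
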
Cost Optimization

Monitor and control your OpenAI API costs:

  • Set spending alerts in your dashboard
  • Implement request queuing for rate-limited users
  • Cache common responses when appropriate (see the sketch after this list)
  • Use GPT-3.5-turbo for most requests, GPT-4 only when needed
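
For caching, even a small in-memory map avoids paying twice for identical prompts (a sketch; production code may want TTLs and size bounds):

// Cache completions keyed on the exact conversation
const cache = new Map();

async function cachedChat(messages) {
  const key = JSON.stringify(messages);
  if (cache.has(key)) return cache.get(key);

  const completion = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo',
    messages
  });
  const reply = completion.choices[0].message;
  cache.set(key, reply);
  return reply;
}
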
Conclusion

By using Conduit Link, you've built a ChatGPT wrapper that:

  • Keeps your API key secure
  • Provides rate limiting and access control
  • Offers usage analytics and monitoring
  • Scales with your application

Ready to build your own secure ChatGPT application? Sign up for Conduit Link and get started in minutes.

