Rate Limits

The HeadshotPro API implements rate limiting to ensure fair usage and maintain service reliability.

Rate Limit Tiers

Tier           Requests per Minute   Description
Standard       300                   Default for all organizations
Professional   300                   Standard commercial plans
Enterprise     1,000                 High-volume enterprise plans

Contact sales@headshotpro.com to upgrade your rate limit tier.

Rate Limit Headers

Every API response includes rate limit information in the headers:

Header                   Description
X-RateLimit-Limit        Maximum requests allowed per minute
X-RateLimit-Remaining    Requests remaining in the current window
X-RateLimit-Reset        ISO timestamp when the limit resets

Example Headers

X-RateLimit-Limit: 300
X-RateLimit-Remaining: 295
X-RateLimit-Reset: 2024-01-15T10:31:00.000Z

Rate Limit Exceeded

When you exceed your rate limit, the API returns:

HTTP/1.1 429 Too Many Requests

{
  "success": false,
  "error": "Rate limit exceeded",
  "code": "RATE_LIMIT_EXCEEDED"
}

Handling Rate Limits

1. Monitor Headers

async function apiRequest(url, options) {
  const response = await fetch(url, options);

  // Check remaining requests
  const remaining = parseInt(response.headers.get('X-RateLimit-Remaining'), 10);
  const limit = parseInt(response.headers.get('X-RateLimit-Limit'), 10);

  if (remaining < limit * 0.1) {
    console.warn(`Rate limit warning: ${remaining}/${limit} requests remaining`);
  }

  return response;
}

2. Implement Backoff

async function apiRequestWithRetry(url, options, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await fetch(url, options);

    if (response.status === 429) {
      // Wait until the reported reset time; a small buffer is added below
      const resetTime = new Date(response.headers.get('X-RateLimit-Reset'));
      const waitTime = Math.max(0, resetTime.getTime() - Date.now());

      console.log(`Rate limited. Waiting ${waitTime}ms...`);
      await new Promise(resolve => setTimeout(resolve, waitTime + 100));
      continue;
    }

    return response;
  }

  throw new Error('Rate limit retry attempts exhausted');
}
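
For example, wrapping an ordinary call (the path and Authorization header below are placeholders for illustration, not confirmed values):

const response = await apiRequestWithRetry('/api/v2/organization', {
  headers: { Authorization: 'Bearer YOUR_API_KEY' }, // placeholder credentials
});
const data = await response.json();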

3. Queue Requests

class RateLimitedQueue {
  constructor(requestsPerMinute = 300) {
    this.queue = [];
    this.processing = false;
    this.interval = 60000 / requestsPerMinute; // minimum spacing between requests, in ms
  }

  async add(requestFn) {
    return new Promise((resolve, reject) => {
      this.queue.push({ requestFn, resolve, reject });
      this.process();
    });
  }

  async process() {
    if (this.processing || this.queue.length === 0) return;
    this.processing = true;

    while (this.queue.length > 0) {
      const { requestFn, resolve, reject } = this.queue.shift();

      try {
        const result = await requestFn();
        resolve(result);
      } catch (error) {
        reject(error);
      }

      await new Promise(r => setTimeout(r, this.interval));
    }

    this.processing = false;
  }
}

// Usage
const queue = new RateLimitedQueue(300);

const results = await Promise.all(
  emails.map(email =>
    queue.add(() => createInvite(email))
  )
);

Best Practices

Batch Operations

Instead of making individual requests, use batch endpoints where available:

// Instead of 100 individual requests:
for (const email of emails) {
  await createInvite(email);
}

// Use batch operations where possible:
await addTeamMembers(teamId, emails);

Caching

Cache responses to reduce API calls:

const cache = new Map();
const CACHE_TTL = 5 * 60 * 1000; // 5 minutes

async function getOrganization() {
  const cached = cache.get('organization');
  if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
    return cached.data;
  }

  const response = await fetch('/api/v2/organization', { ... });
  const data = await response.json();

  cache.set('organization', { data, timestamp: Date.now() });
  return data;
}

Efficient Polling

When polling for status updates, use exponential backoff:

async function pollModelStatus(modelId, maxAttempts = 60) {
  let delay = 5000; // Start with 5 seconds

  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const { model } = await getModel(modelId);

    if (model.status === 'active') {
      return model;
    }

    if (model.status === 'failed') {
      throw new Error('Model processing failed');
    }

    await new Promise(r => setTimeout(r, delay));
    delay = Math.min(delay * 1.5, 60000); // Max 1 minute
  }

  throw new Error('Polling timeout');
}

Use Webhooks

Instead of polling, configure webhooks to receive real-time notifications:

// Instead of polling:
let { model } = await getModel(modelId);
while (model.status !== 'active') {
  await sleep(5000);
  ({ model } = await getModel(modelId));
}

// Use webhooks:
// Assumes an Express `app` with JSON body parsing (express.json()) enabled
app.post('/webhooks/headshotpro', (req, res) => {
  const { event, object } = req.body;

  if (event === 'model.photos_ready') {
    // Photos are ready, fetch them
    processCompletedModel(object._id);
  }

  res.status(200).send('OK');
});

Invite Limits

In addition to API rate limits, invites have a separate spam-protection limit.

Pending Invite Cap

Metric                 Formula
Max pending invites    credits × 2

For example, if your organization has 10 credits, you can have up to 20 pending invites at a time. A client-side sketch of this check follows the list below.

Pending invites include:

  • V1 invites that are valid but not yet used
  • Models in onboarding, pending, waiting, or generatingHeadshots status
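
A minimal guard for this cap, assuming you already track your credit balance and pending invite count on your side (the function and parameter names here are illustrative, not part of the API):

// Sketch: check the documented cap (credits × 2) before sending invites.
// `credits` and `pendingInvites` come from your own state; names are illustrative.
function canSendInvites(credits, pendingInvites, newInvites = 1) {
  const maxPendingInvites = credits * 2; // documented formula
  return pendingInvites + newInvites <= maxPendingInvites;
}

canSendInvites(10, 18, 1); // true  (19 <= 20)
canSendInvites(10, 18, 3); // false (21 > 20)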

When Limit Is Exceeded

{
  "success": false,
  "error": "Too many pending invites. You have 20 pending invites and 10 credits. Wait for invites to be completed or revoke unused invites.",
  "code": "INVITE_LIMIT_EXCEEDED",
  "pendingInvites": 20,
  "credits": 10,
  "maxPendingInvites": 20
}

Solutions

  1. Wait for completions - Pending invites clear when users complete their headshots
  2. Revoke unused invites - Use POST /organization/invites/revoke to clear stale invites (see the sketch after this list)
  3. Purchase more credits - A higher credit balance raises your pending invite cap
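
A sketch of revoking invites; the /api/v2 path prefix and the inviteIds body field are assumptions inferred from the other examples in this document, so check the endpoint reference for the exact schema:

// Sketch: clear stale invites via the documented revoke endpoint.
// The /api/v2 prefix and `inviteIds` body field are assumptions.
async function revokeInvites(inviteIds) {
  const response = await fetch('/api/v2/organization/invites/revoke', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: 'Bearer YOUR_API_KEY', // placeholder credentials
    },
    body: JSON.stringify({ inviteIds }),
  });
  return response.json();
}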

Rate Limit by Endpoint

All endpoints share the same rate limit pool. There are no per-endpoint limits.

Burst Handling

The rate limiter uses a sliding window algorithm, so short bursts within the limit are allowed. However, sustained high-volume requests will trigger rate limiting.
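
To build intuition for this behavior, here is a purely illustrative client-side model of a sliding-window counter; the server's actual implementation is not documented and may differ:

// Illustrative only: models why short bursts within the limit pass while
// sustained traffic is throttled. Not the server's actual implementation.
class SlidingWindowLimiter {
  constructor(limit = 300, windowMs = 60000) {
    this.limit = limit;       // max requests per window
    this.windowMs = windowMs; // window length in milliseconds
    this.timestamps = [];     // send times of recent requests
  }

  // Returns true if a request sent now fits within the window.
  tryAcquire(now = Date.now()) {
    // Forget requests that have slid out of the window
    this.timestamps = this.timestamps.filter(t => now - t < this.windowMs);
    if (this.timestamps.length >= this.limit) return false;
    this.timestamps.push(now);
    return true;
  }
}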