
Guide to Self-Hosting PostHog on AWS EC2

Written by Vaibhav Soni | Oct 24, 2025 1:50:18 PM
This document provides a complete, step-by-step guide to setting up PostHog (an open-source product analytics platform) on our own AWS EC2 instance using Docker Compose. It's designed for anyone new to the process or needing a refresher. We start from creating the EC2 instance and follow through to final access.
PostHog's official requirements for self-hosting with Docker Compose are an Ubuntu Linux virtual machine with roughly 4 vCPU, 16GB RAM, and more than 30GB of storage. In our setup we'll start with an instance type that works toward this (e.g., t3.large with 2 vCPU/8GB RAM as a starting point) and upgrade to m5.xlarge or similar for full compliance if production load demands it. We'll configure 50GB of storage as per our needs. The OS will be Ubuntu 22.04 LTS.
No Java or JAR files are involved—PostHog runs on Python/Node.js with Docker containers for services like ClickHouse, Postgres, Redis, etc.
 
Estimated Time: 30-60 minutes.
 
Warnings: This is for self-hosting under PostHog's MIT license. Ensure we have an AWS account and a domain pointed to the EC2 (for SSL). Back up data regularly.
 
 
1. Prerequisites
  • AWS account.
  • SSH client (e.g., OpenSSH on Linux/Mac, PuTTY on Windows).
  • A domain name with an A record ready to point to the EC2 public IP (for HTTPS via Let's Encrypt).
  • Basic terminal knowledge.
Our Server Setup:
  • OS: Ubuntu 22.04 LTS.
  • Instance Type: t3.large (2 vCPU, 8GB RAM) or better to approach PostHog's 4 vCPU/16GB recommendation.
  • Storage: 50GB EBS (General Purpose SSD gp3).
  • Network: Open ports 22 (SSH), 80/443 (HTTP/HTTPS), and 8000 (PostHog web, if not using reverse proxy).
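If we prefer to script the firewall side, the same port rules can be applied with the AWS CLI. This is only a sketch: the security group ID (sg-0123456789abcdef0) and the office IP (203.0.113.10/32) are placeholders to replace with our own values.

    aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 203.0.113.10/32
    aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 0.0.0.0/0
    aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 0.0.0.0/0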
2. Creating the EC2 Instance
Follow these steps in the AWS Management Console (console.aws.amazon.com).
  1. Log in to AWS Console and navigate to EC2 > Launch Instance.
  2. Name and Tags: Enter a name like "PostHog-Server".
  3. Application and OS Images (AMI): Search for "Ubuntu Server 22.04 LTS". Select the 64-bit (x86) version.
  4. Instance Type: Select t3.large (or m5.xlarge to better match PostHog's 4 vCPU/16GB recommendation for heavier use).
  5. Key Pair (Login): Create a new key pair (RSA, .pem format) or use an existing one. Download and store securely (chmod 400 on Linux/Mac).
  6. Network Settings:
    • Allow SSH traffic from our IP (or Anywhere for testing—less secure).
    • Allow HTTP/HTTPS traffic from Anywhere.
  7. Configure Storage:
    • Change the default root volume from 8GB to 50GB.
    • Set the volume type to gp3 (General Purpose SSD).
  8. Launch Instance: Review and click Launch Instance. Wait 2-3 minutes for it to start.
  9. Note the Public IPv4 address (e.g., 3.123.45.67)—this is our server's IP.
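For repeatable setups, roughly the same launch can be done from the AWS CLI. Treat this as a sketch: the AMI ID, key pair name, and security group ID are placeholders, and the Ubuntu 22.04 LTS AMI ID differs per region.

    # Placeholders: substitute our region's Ubuntu 22.04 AMI, our key pair, and our security group.
    aws ec2 run-instances \
      --image-id ami-0123456789abcdef0 \
      --instance-type t3.large \
      --key-name posthog-key \
      --security-group-ids sg-0123456789abcdef0 \
      --block-device-mappings 'DeviceName=/dev/sda1,Ebs={VolumeSize=50,VolumeType=gp3}' \
      --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=PostHog-Server}]'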
3. Connecting to the Instance
  1. Open a terminal and connect via SSH:

    ssh -i /path/to/our-key.pem ubuntu@our-ec2-public-ip
    Example: ssh -i posthog-key.pem ubuntu@3.123.45.67
  2. If prompted, type "yes" to accept the host key. We should now be logged in as the ubuntu user.
Troubleshooting Connection:
  • Ensure security group allows port 22 from our IP.
  • Check key permissions: chmod 400 our-key.pem.
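Optionally, an entry in ~/.ssh/config saves retyping the key path and IP each time (the host alias, IP, and key path below are examples to adjust):

    Host posthog
        HostName 3.123.45.67
        User ubuntu
        IdentityFile ~/.ssh/posthog-key.pem

After that, connecting is simply ssh posthog.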
4. Installing Docker and Docker Compose
PostHog runs entirely in Docker containers, so install Docker first.
  1. Update packages:

    sudo apt update && sudo apt upgrade -y

  2. Install Docker:

    sudo apt install docker.io -y
    sudo systemctl start docker
    sudo systemctl enable docker

  3. Add our user to the Docker group (to run without sudo):

    sudo usermod -aG docker ubuntu
    Log out and back in for changes to take effect.
  4. Install Docker Compose (v2+):

    sudo apt install docker-compose-plugin -y
    (If apt can't find this package in Ubuntu's default repositories, add Docker's official apt repository first.)
    Verify: docker compose version (note the space in "compose").
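As a quick sanity check before moving on, the following should all succeed without sudo once we've logged out and back in:

    docker --version
    docker compose version
    docker run --rm hello-world

hello-world pulls a tiny test image and prints a confirmation message if the daemon and our group membership are working.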
5. Cloning PostHog Repository and Running with Docker Compose
We previously cloned the repo and ran it—here's the full process. PostHog provides a docker-compose.yml in their repo for self-hosting.
 
 
  1. Install Git if needed:

    sudo apt install git -y

  2. Clone the PostHog repository:

    git clone https://github.com/PostHog/posthog.git
    cd posthog

    Note: For production, checkout a stable release tag: git checkout release-XX (check tags on GitHub).

Use the official hobby deployment script for a pre-configured docker-compose.yml:
  1. Run the script below; it fetches the latest compose file and starts everything:

    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/PostHog/posthog/HEAD/bin/deploy-hobby)"

    • Enter the release tag (e.g., release-1.100.0—check Docker Hub for latest).
    • Enter our domain (e.g., analytics.ourdomain.com).

  2. If we prefer the manual route (matching our previous clone):

    • The repo includes a docker-compose.yml (or fetch it: curl -o docker-compose.yml https://raw.githubusercontent.com/PostHog/posthog/master/docker-compose.yml).
    • Edit .env if needed (create from .env.example).

  3. Start the services:

    docker compose up -d

    This pulls images (PostHog web/worker, ClickHouse, Postgres, Redis, Kafka, Zookeeper, MinIO, Caddy) and starts in detached mode. Wait 5-10 minutes for migrations.

  4. Verify containers are running:

    docker ps

    We should see 8+ healthy containers (no recent restarts).

Key Services in docker-compose.yml: Web (PostHog app), Worker (background tasks), ClickHouse (analytics DB), Postgres (main DB), Redis (cache), etc. Volumes persist data in /posthog (our 50GB storage covers this).
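To double-check the stack after it settles, a few read-only commands are handy (the service name web is taken from the compose file; confirm the names there first):

    docker compose ps                                   # service-level status for the compose project
    docker ps --format 'table {{.Names}}\t{{.Status}}'  # container uptime/health at a glance
    docker compose logs --tail=100 web                  # recent logs for the web service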
 
 
6. Configuration
  • Environment Variables: Edit .env for custom settings (e.g., SECRET_KEY=our-secret, DATABASE_URL=...). Full list: PostHog Env Vars.
  • Domain/SSL: The Caddy container auto-handles Let's Encrypt if our domain points to the EC2 IP.
  • Scaling: For larger setups, adjust resources in docker-compose.yml (e.g., more ClickHouse replicas).
  • Restart after changes: docker compose down && docker compose up -d.
Update PostHog: git pull then docker compose down && docker compose up -d --pull always.
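For reference, a minimal .env sketch; the variable names should be confirmed against .env.example and PostHog's environment variable docs, and the values here are placeholders:

    SECRET_KEY=replace-with-a-long-random-string
    SITE_URL=https://analytics.ourdomain.com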
 
7. Troubleshooting Common Docker Issues
Based on our previous Docker running issues (e.g., containers failing to start):
  • Containers Not Starting: Run docker compose logs for errors. Common: Port conflicts (stop ufw: sudo ufw disable), low RAM (upgrade instance to meet PostHog's 16GB recommendation).
  • Migration Errors: Check docker logs posthog_web-1 for DB issues. Restart: docker compose restart web.
  • Storage Full: Monitor with df -h. Prune old images: docker system prune -f.
  • Auto-Start on Reboot: Add to crontab: crontab -e then @reboot cd /home/ubuntu/posthog && docker compose up -d.
  • Network/Domain Issues: Ensure security group allows 80/443. Test: curl http://localhost:8000.
  • Out of Memory: If using below PostHog's 16GB RAM, we may see OOM on heavy queries—monitor with htop (install: sudo apt install htop).
For more, check PostHog community: PostHog Questions.
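When space or log problems come up, this short checklist (standard df, docker, and journalctl commands) usually narrows down the culprit:

    df -h /                            # usage on the root partition
    docker system df                   # space used by images, containers, volumes, build cache
    docker system prune -f             # remove unused containers/images (named volumes are untouched)
    sudo journalctl --vacuum-time=7d   # trim systemd journal entries older than 7 days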
 
Previous Setup Notes: We cloned the repo and hit Docker pull/start issues (likely network or permissions), which were resolved by adding our user to the docker group.
 
 
8. Accessing and Using PostHog
  1. Point our domain A record to the EC2 public IP (update in our DNS provider).
  2. Wait for SSL (2-5 min), then visit https://ourdomain.com.
  3. Complete onboarding: Create account, validate setup (9 checks), install JS snippet on our site.
  4. Dashboard ready! Send events via PostHog SDKs.
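For step 3's JS snippet, a minimal capture sketch with the posthog-js library; the project API key shown is a placeholder copied from onboarding, and api_host is assumed to be our self-hosted domain:

    // Assumes posthog-js is installed in the website project (npm install posthog-js).
    import posthog from 'posthog-js'

    posthog.init('phc_xxxxxxxxxxxxxxxx', { api_host: 'https://analytics.ourdomain.com' })
    posthog.capture('setup_test_event', { source: 'self-hosted-guide' })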
Cleanup: Stop: docker compose down. Terminate EC2 in AWS Console when not in use.
If issues persist, share logs (docker compose logs > logs.txt) for help.



Disk Space Issues and Monitoring
In our setup, we faced repeated disk space issues: started with 30GB, expanded to 50GB, then 70GB, and finally to 120GB. Containers stopped when the disk filled up, causing server downtime. Since this isn't production and we don't need multi-server scaling for load balancing, we implemented a monitoring solution instead of constant upgrades.
We created a Node.js script with cron jobs: one runs every hour to check disk usage on the root partition—if it reaches or exceeds 90%, it sends an urgent email alert to specified addresses (using Nodemailer with Gmail). This reminds us to upgrade storage, delete old logs, or clear caches to free space and avoid instance stops.
Additionally, another cron in the script runs weekly to automatically delete older logs (e.g., logs older than 7 days in common directories like /var/log and PostHog volumes) to reclaim space proactively.
To set this up on the EC2 instance:
 
  1. Install Node.js:

    sudo apt install nodejs npm -y

  2. Create a directory for the script, e.g., mkdir ~/disk-monitor && cd ~/disk-monitor.
  3. Install dependencies:

    npm init -y
    npm install node-cron diskusage nodemailer

    (diskusage is a native addon; if the install fails, add build tools first: sudo apt install build-essential -y.)

  4. Create monitor.js with the following code (replace placeholders like the email credentials; use an app password for Gmail and store it securely, e.g., in env vars):
const cron = require('node-cron');
const diskusage = require('diskusage');
const nodemailer = require('nodemailer');
const fs = require('fs');
const path = require('path');
 
// Email config (use environment variables in production)
const transporter = nodemailer.createTransport({
  service: 'gmail',
  auth: {
    user: 'your-email@gmail.com', // Replace with our Gmail
    pass: 'your-app-password' // App password, not regular password
  }
});
 
const mailOptions = {
  from: 'your-email@gmail.com',
  to: 'recipient1@example.com, recipient2@example.com', // Add our emails
  subject: 'URGENT: Disk Space Alert on PostHog Server'
};
 
// Paths to clean logs (adjust as needed, e.g., PostHog logs in volumes)
const logPaths = [
  '/var/log', // System logs
  '/home/ubuntu/posthog/logs' // Assuming PostHog logs here; check actual path
];
 
// Function to check disk usage
async function checkDisk() {
  try {
    const { total, available } = await diskusage.check('/');
    const usedPercent = ((total - available) / total) * 100;
    if (usedPercent >= 90) {
      mailOptions.text = `Disk usage is at ${usedPercent.toFixed(2)}%! Take action to free space.`;
      await transporter.sendMail(mailOptions);
      console.log('Alert email sent!');
    } else {
      console.log(`Disk usage: ${usedPercent.toFixed(2)}% - OK`);
    }
  } catch (err) {
    console.error('Error checking disk:', err);
  }
}
 
// Function to delete old logs (older than 7 days)
function deleteOldLogs() {
  const sevenDaysAgo = new Date(Date.now() - 7 * 24 * 60 * 60 * 1000);
  logPaths.forEach(dir => {
    if (fs.existsSync(dir)) {
      fs.readdirSync(dir).forEach(file => {
        const filePath = path.join(dir, file);
        try {
          const stat = fs.statSync(filePath);
          if (stat.isFile() && stat.mtime < sevenDaysAgo && file.endsWith('.log')) {
            fs.unlinkSync(filePath); // plain fs delete is enough for single files
            console.log(`Deleted old log: ${filePath}`);
          }
        } catch (err) {
          // Skip files we can't stat or delete (e.g., root-owned logs) without crashing the monitor
          console.error(`Skipping ${filePath}: ${err.message}`);
        }
      });
    }
  });
}
 
// Schedule: Every hour for disk check
cron.schedule('0 * * * *', checkDisk);
 
// Schedule: Every Sunday at midnight for log cleanup
cron.schedule('0 0 * * 0', deleteOldLogs);
 
console.log('Disk monitor running...');

  5. Run the script in the background: node monitor.js & (or use pm2 for persistence: npm install -g pm2 && pm2 start monitor.js && pm2 startup).
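If we go the pm2 route, a slightly fuller sequence looks like this (standard pm2 commands; the process name disk-monitor is our choice):

    sudo npm install -g pm2
    pm2 start monitor.js --name disk-monitor
    pm2 save                  # remember the current process list
    pm2 startup               # prints a command to run once so pm2 starts on boot
    pm2 logs disk-monitor     # tail the script's output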