AWS VPC Endpoints: Cut NAT Costs + Boost Security with Private AWS Access
Most teams don’t set out to expose their workloads to the internet. It happens accidentally.
A typical AWS VPC is designed as a private environment: application servers, workers, and databases communicate over internal IP addresses, isolated from the outside world. That part is usually deliberate and well understood. The distortion begins when those same private workloads need to talk to AWS itself.
Accessing services like Amazon S3, DynamoDB, Secrets Manager, KMS, SQS, or CloudWatch has historically required an unexpected detour: traffic must exit the VPC, traverse internet-facing infrastructure, and re-enter AWS through public service endpoints - even though both sides of the connection live inside the same provider network.
In practice, the path looks like this:
- A private resource sends traffic to an AWS service’s public API endpoint.
- The traffic leaves the VPC through an Internet Gateway.
- If the resource has no public IP, it is translated through a NAT Gateway.
- The packet traverses the public internet.
- The request re-enters AWS at the service’s public edge.
This design works, but it quietly introduces three problems most architectures never account for: unnecessary cost, expanded attack surface, and an architectural dependency on internet-facing components for internal service communication.
VPC endpoints exist to remove that distortion.
The Real Costs of the “Public Path”
- Financial cost: NAT Gateways are billed per hour and per gigabyte processed. Even modest workloads (e.g., a daily ECS/ECR image refresh plus log shipping) can easily exceed $200/month. For workloads that frequently talk to S3, ECR, or Secrets Manager, NAT traffic becomes a silent, permanent tax on internal AWS-to-AWS communication.
- Security exposure: Even though traffic is encrypted, it now traverses the public internet. This increases attack surface, operational scrutiny, and reliance on perimeter controls that shouldn’t be required for internal service calls.
- Architectural complexity and latency: Public-facing infrastructure (IGWs, NATs, elastic IPs) must be designed and maintained just to let private resources communicate with AWS itself. You also inherit the variability and unpredictability of the public internet.
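To see how quickly the "silent tax" accumulates, here is a rough cost model. The rates are assumptions based on typical us-east-1 NAT Gateway pricing (about $0.045 per hour and $0.045 per GB processed); check the current AWS price list for your region before relying on the numbers.

```python
# Rough NAT Gateway cost model. Rates are assumed us-east-1 figures,
# not authoritative pricing.
HOURLY_RATE = 0.045   # USD per NAT Gateway per hour (assumed)
DATA_RATE = 0.045     # USD per GB processed (assumed)
HOURS_PER_MONTH = 730

def monthly_nat_cost(gateways: int, gb_processed: float) -> float:
    """Estimated monthly NAT Gateway spend in USD."""
    uptime = gateways * HOURS_PER_MONTH * HOURLY_RATE
    data = gb_processed * DATA_RATE
    return round(uptime + data, 2)

# Three AZs worth of NAT Gateways plus ~3 TB/month of image pulls and logs:
print(monthly_nat_cost(gateways=3, gb_processed=3000))  # 233.55
```

Note that the per-gateway uptime charge alone (about $33/month per AZ) persists even if the workload sends almost no traffic.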
What Is a VPC Endpoint?
A VPC Endpoint provides private connectivity between your VPC and AWS services. Traffic never leaves the AWS network and does not require an Internet or NAT Gateway.
Conceptually, it is a private utility line between your building and AWS’s internal service network. Operationally, AWS implements this idea in two ways, because not all services have the same networking characteristics.
Gateway vs Interface Endpoints: Two Strategies
VPC Endpoints come in exactly two forms:
- Gateway Endpoints – for S3 and DynamoDB, using route table redirection. Free and infrastructure-light.
- Interface Endpoints – for all other services, using Elastic Network Interfaces (ENIs) and PrivateLink. Paid, highly configurable, and security-conscious.
This split is deliberate: AWS matches architecture to service type - massive storage versus control-plane APIs.
Gateway Endpoints: S3/DynamoDB Routing
Gateway endpoints exist only for S3 and DynamoDB. This is a design choice, not a limitation.
How It Works
- AWS provides a managed prefix list of IP ranges for the service in the region.
- You associate the endpoint with one or more route tables in your VPC.
- The route table redirects traffic for that service through the endpoint.
Traffic Flow
- Your instance sends traffic to a service IP.
- The route table matches the prefix list.
- Traffic flows internally through AWS’s backbone.
- Return traffic follows the same private path.
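The route decision above is ordinary longest-prefix matching: the endpoint's prefix-list entries are more specific than the default route, so service traffic is captured while everything else still falls through to the NAT Gateway. A minimal sketch (the S3 prefix below is illustrative, not the real managed prefix list, which you would fetch from AWS):

```python
# Sketch of the route-table decision for a Gateway Endpoint using
# longest-prefix matching. Prefixes are illustrative assumptions.
from ipaddress import ip_address, ip_network

ROUTE_TABLE = {
    ip_network("0.0.0.0/0"): "nat-gateway",        # default route
    ip_network("52.216.0.0/15"): "vpce-gateway",   # illustrative S3 prefix
    ip_network("10.0.0.0/16"): "local",            # intra-VPC traffic
}

def next_hop(dest: str) -> str:
    """Return the target of the most specific matching route."""
    matches = [(net, tgt) for net, tgt in ROUTE_TABLE.items()
               if ip_address(dest) in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("52.216.8.4"))   # caught by the endpoint prefix
print(next_hop("8.8.8.8"))      # falls through to the NAT Gateway
```

This is why enabling a Gateway Endpoint requires no application changes: the redirection happens entirely at the routing layer.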
Important Considerations
- Gateway endpoints are non-transitive - they only work within the VPC where they are created.
- They cannot be shared across peered VPCs, Transit Gateways, VPNs, or Direct Connect.
Interface Endpoints: PrivateLink and ENIs
All other services use Interface Endpoints, powered by AWS PrivateLink.
What AWS Creates
- One or more Elastic Network Interfaces (ENIs) per Availability Zone.
- Each ENI gets a private IP from your subnet.
- ENIs are attached to security groups.
- ENIs connect via PrivateLink to a Network Load Balancer on the service side, which has no public IPs.
This creates a private entry point into AWS services from within your VPC.
The Single Most Important Detail: DNS
Private DNS is decisive.
Applications connect to hostnames, not endpoints. For example:
secretsmanager.us-east-1.amazonaws.com
- If the hostname resolves to a public IP, traffic will exit the VPC (NAT/IGW).
- If it resolves to a private IP, traffic stays inside AWS and flows through the endpoint.
Without Private DNS enabled, the endpoint exists but traffic bypasses it by default - a common misconfiguration.
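The distinction is easy to check programmatically: an RFC 1918 address means the hostname resolved to the endpoint's ENI, while a public address means traffic will exit through NAT or an IGW. A small classifier (the sample IPs are illustrative):

```python
# Classify where a resolved service IP will send traffic. This is the
# same signal you would read off `nslookup` or `dig` output in the VPC.
from ipaddress import ip_address

def dns_path(resolved_ip: str) -> str:
    """Return 'private' if the IP stays in the VPC, else 'public'."""
    return "private" if ip_address(resolved_ip).is_private else "public"

# With Private DNS enabled, the hostname resolves to the ENI's IP:
print(dns_path("10.0.2.30"))     # private
# Without it, you get the service's public edge (illustrative IP):
print(dns_path("52.119.196.80")) # public
```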
How Traffic Actually Flows: EC2 → Secrets Manager
Outbound
- Application calls secretsmanager.us-east-1.amazonaws.com.
- DNS resolves the hostname to the ENI’s private IP.
- Packet is routed locally to that IP.
- The ENI forwards the connection over AWS PrivateLink to Secrets Manager.
Return
- Response comes back via the same PrivateLink connection.
- ENI delivers the packet to the originating source.
No NAT translation, no dynamic routing, no public internet exposure. The endpoint is a first-class private destination.
Security Is Layered
VPC Endpoints reduce exposure but don’t replace authorization:
- Endpoint Policies – restrict who can access the endpoint and which resources.
- Security Groups – control which workloads can reach Interface Endpoints.
- IAM & Resource Policies – remain authoritative.
Every request must pass all layers to succeed.
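As a sketch of the first layer, here is a minimal endpoint policy that allows only one action from one account. The account ID and secret path are hypothetical placeholders; real policies follow the standard IAM policy grammar and should be scoped to your own principals and resources.

```python
# Minimal endpoint-policy sketch. Account ID and resource ARN are
# hypothetical; substitute your own values.
import json

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
        "Action": "secretsmanager:GetSecretValue",
        "Resource": "arn:aws:secretsmanager:us-east-1:111122223333:secret:prod/*",
    }],
}
print(json.dumps(policy, indent=2))
```

Even with this policy attached, a request still needs a permitting IAM policy (and, where applicable, a resource policy) to succeed - the endpoint policy only narrows what can transit the endpoint.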
Advanced Design Patterns
Multiple Gateway Endpoints
- Separate Gateway Endpoints can be created for S3 with distinct policies and route tables.
- Useful for network-level separation between workloads.
- Limitation: VPC-specific, non-transitive.
Centralized Interface Endpoints (Hub-and-Spoke)
- A shared services VPC hosts all Interface Endpoints.
- Other VPCs connect via Transit Gateway.
- Reduces duplication and cost but increases potential blast radius.
- Requires careful DNS resolution and security group management.
Verification: Trust, but Verify
Packet Inspection
- With an Interface Endpoint: 10.0.1.28 → 10.0.2.30:443 (private path).
- Without an Endpoint: 10.0.1.22 → 52.x.x.x:443 (public path).
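Captured flows like the ones above (from tcpdump or VPC Flow Logs) can be classified mechanically by checking whether the destination is private. A small sketch, assuming flows are formatted as `src -> dst:port`:

```python
# Classify captured flows by whether the destination is a private
# (endpoint) IP. Input format 'src -> dst:port' is an assumption.
from ipaddress import ip_address

def classify_flow(line: str) -> str:
    """'10.0.1.28 -> 10.0.2.30:443' -> 'private path' or 'public path'."""
    dest = line.split("->")[1].strip().rsplit(":", 1)[0]
    return "private path" if ip_address(dest).is_private else "public path"

print(classify_flow("10.0.1.28 -> 10.0.2.30:443"))  # private path
print(classify_flow("10.0.1.22 -> 52.216.8.4:443")) # public path
```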
DNS Resolution
- nslookup s3.amazonaws.com → a private IP means traffic is private.
- A public IP indicates misconfiguration.
Boundaries and Constraints
- Region-scoped: endpoints only reach services in the same region as the VPC.
- IPv6 supported via dualstack subnets.
- Gateway Endpoints non-transitive; Interface Endpoints TCP-only.
- Bandwidth scales per AZ; service-specific quotas apply.
These are intentional design choices to ensure predictable, secure, and scalable behavior.
Conclusion: Designing with Clarity and Control
VPC Endpoints are foundational tools for secure, private, and predictable AWS networking. They:
- Keep traffic inside the AWS backbone.
- Reduce NAT costs and public exposure.
- Provide layered security through endpoint policies, security groups, and IAM.
- Enable architectures that are observable, auditable, and deliberate.
Key takeaways:
- Know the types: Gateway Endpoints for S3/DynamoDB; Interface Endpoints (PrivateLink) for all others.
- DNS drives behavior: Private DNS must be enabled.
- Security is layered: Policies, security groups, IAM all work together.
- Observe, don’t assume: Verify via packet inspection and DNS resolution.
- Respect boundaries: Regional scope, bandwidth limits, and non-transitivity are intentional.
Well-architected VPC endpoints reduce operational complexity, enhance security, and improve cost efficiency. The key is to plan intentionally, configure deliberately, and verify continuously.



