A Brief Guide to AWS Services: Navigating the Cloud Landscape
A practical walkthrough of essential AWS services for building modern web applications, from compute to storage to deployment.

The AWS maze#
I remember my first time logging into the AWS console. 200+ services. I had no idea where to start. EC2? Lambda? ECS? Fargate? Beanstalk?
Three years and countless projects later, I've learned something important: You don't need to know every service. You need to know the 20% that covers 80% of use cases.
Let me give you that map.
The Core Services You Actually Need#
1. Compute: Where Your Code Runs#
EC2 (Elastic Compute Cloud) - Virtual servers
When to use: Traditional applications, persistent workloads, full control
# Launch an EC2 instance with AWS CLI
aws ec2 run-instances \
  --image-id ami-0c55b159cbfafe1f0 \
  --instance-type t3.micro \
  --key-name my-key-pair \
  --security-group-ids sg-0123456789abcdef \
  --subnet-id subnet-0123456789abcdef \
  --user-data file://init-script.sh \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=WebServer}]'

User data script for auto-configuration:
#!/bin/bash
# init-script.sh
# Update system
yum update -y
# Install Node.js
curl -sL https://rpm.nodesource.com/setup_20.x | bash -
yum install -y nodejs
# Install PM2
npm install -g pm2
# Clone app
cd /home/ec2-user
git clone https://github.com/myuser/myapp.git
cd myapp
# Install dependencies
npm install
# Start with PM2
pm2 start npm --name "app" -- start
pm2 startup
pm2 save
# Configure nginx
amazon-linux-extras install nginx1 -y
systemctl start nginx
systemctl enable nginx

Lambda - Serverless functions
When to use: Event-driven tasks, APIs, scheduled jobs, cost optimization
// lambda/hello.ts
import { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

export const handler = async (
  event: APIGatewayProxyEvent,
): Promise<APIGatewayProxyResult> => {
  console.log("Event:", JSON.stringify(event, null, 2));
  const name = event.queryStringParameters?.name || "World";
  return {
    statusCode: 200,
    headers: {
      "Content-Type": "application/json",
      "Access-Control-Allow-Origin": "*",
    },
    body: JSON.stringify({
      message: `Hello, ${name}!`,
      timestamp: new Date().toISOString(),
    }),
  };
};

Lambda with database connection (RDS):
// lambda/db-query.ts
import { Client } from "pg";

// Cached client, reused across warm invocations (a single connection, not a true pool)
let client: Client | null = null;

async function getDbClient() {
  if (!client) {
    client = new Client({
      host: process.env.DB_HOST,
      port: Number(process.env.DB_PORT),
      database: process.env.DB_NAME,
      user: process.env.DB_USER,
      password: process.env.DB_PASSWORD,
    });
    await client.connect();
  }
  return client;
}

export const handler = async (event: any) => {
  const db = await getDbClient();
  const result = await db.query("SELECT * FROM posts WHERE slug = $1", [
    event.pathParameters.slug,
  ]);
  return {
    statusCode: 200,
    body: JSON.stringify(result.rows[0]),
  };
};

Deploy with SAM (Serverless Application Model):
# template.yaml
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31

Globals:
  Function:
    Timeout: 30
    Runtime: nodejs20.x
    Environment:
      Variables:
        DB_HOST: !Ref DBHost
        DB_NAME: !Ref DBName

Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: dist/
      Handler: hello.handler
      Events:
        HelloApi:
          Type: Api
          Properties:
            Path: /hello
            Method: get

  PostsFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: dist/
      Handler: db-query.handler
      VpcConfig:
        SecurityGroupIds:
          - !Ref LambdaSecurityGroup
        SubnetIds:
          - !Ref PrivateSubnet1
          - !Ref PrivateSubnet2
      Events:
        GetPost:
          Type: Api
          Properties:
            Path: /posts/{slug}
            Method: get

Parameters:
  DBHost:
    Type: String
    Description: Database host
  DBName:
    Type: String
    Description: Database name

# Deploy
sam build
sam deploy --guided

Fargate - Containers without managing servers
When to use: Microservices, containerized apps, no server management
# Dockerfile
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
# Install all deps first: the build step needs devDependencies
RUN npm ci
COPY . .
RUN npm run build
# Drop devDependencies from the final image
RUN npm prune --omit=dev
EXPOSE 3000
CMD ["npm", "start"]

# docker-compose.yml (for local testing)
version: "3.8"
services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgresql://user:pass@db:5432/mydb
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
      POSTGRES_DB: mydb
    ports:
      - "5432:5432"

ECS Task Definition:
{
  "family": "my-app",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "app",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
      "portMappings": [
        {
          "containerPort": 3000,
          "protocol": "tcp"
        }
      ],
      "environment": [
        {
          "name": "NODE_ENV",
          "value": "production"
        }
      ],
      "secrets": [
        {
          "name": "DATABASE_URL",
          "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:db-url"
        }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/my-app",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "ecs"
        }
      }
    }
  ]
}

Note the execution role: Fargate needs it to pull the image from ECR, read the Secrets Manager secret, and write to CloudWatch Logs.

2. Storage: Where Your Data Lives#
S3 (Simple Storage Service) - Object storage
When to use: File uploads, static assets, backups, data lakes
// lib/s3.ts
import {
  S3Client,
  PutObjectCommand,
  GetObjectCommand,
} from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3Client = new S3Client({ region: "us-east-1" });

export async function uploadFile(
  file: Buffer,
  key: string,
  contentType: string,
): Promise<string> {
  await s3Client.send(
    new PutObjectCommand({
      Bucket: process.env.S3_BUCKET!,
      Key: key,
      Body: file,
      ContentType: contentType,
      // Make it private by default
      ACL: "private",
      // Cache for 1 year
      CacheControl: "max-age=31536000",
    }),
  );
  return `https://${process.env.S3_BUCKET}.s3.amazonaws.com/${key}`;
}

export async function getSignedDownloadUrl(key: string): Promise<string> {
  const command = new GetObjectCommand({
    Bucket: process.env.S3_BUCKET!,
    Key: key,
  });
  // URL expires in 1 hour
  return getSignedUrl(s3Client, command, { expiresIn: 3600 });
}

export async function getSignedUploadUrl(
  key: string,
  contentType: string,
): Promise<string> {
  const command = new PutObjectCommand({
    Bucket: process.env.S3_BUCKET!,
    Key: key,
    ContentType: contentType,
  });
  // URL expires in 5 minutes
  return getSignedUrl(s3Client, command, { expiresIn: 300 });
}

// Usage in a Next.js API route
// (the route file also needs: import { NextResponse } from "next/server")
export async function POST(request: Request) {
  const formData = await request.formData();
  const file = formData.get("file") as File;
  const buffer = Buffer.from(await file.arrayBuffer());
  const key = `uploads/${Date.now()}-${file.name}`;
  const url = await uploadFile(buffer, key, file.type);
  return NextResponse.json({ url });
}

S3 bucket policy for public read access:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/public/*"
    }
  ]
}

CloudFront + S3 for fast global delivery:
// lib/cloudfront.ts
import {
  CloudFrontClient,
  CreateInvalidationCommand,
} from "@aws-sdk/client-cloudfront";

const cfClient = new CloudFrontClient({ region: "us-east-1" });

export async function invalidateCache(paths: string[]) {
  await cfClient.send(
    new CreateInvalidationCommand({
      DistributionId: process.env.CLOUDFRONT_DISTRIBUTION_ID!,
      InvalidationBatch: {
        CallerReference: Date.now().toString(),
        Paths: {
          Quantity: paths.length,
          Items: paths,
        },
      },
    }),
  );
}

// Usage after updating files
await invalidateCache(["/index.html", "/assets/*"]);

RDS (Relational Database Service) - Managed PostgreSQL/MySQL
When to use: Relational data, ACID transactions
# Create RDS instance with AWS CLI
aws rds create-db-instance \
  --db-instance-identifier mydb \
  --db-instance-class db.t3.micro \
  --engine postgres \
  --engine-version 16.1 \
  --master-username admin \
  --master-user-password SecurePass123! \
  --allocated-storage 20 \
  --storage-type gp3 \
  --vpc-security-group-ids sg-0123456789abcdef \
  --db-subnet-group-name my-subnet-group \
  --backup-retention-period 7 \
  --preferred-backup-window "03:00-04:00" \
  --preferred-maintenance-window "mon:04:00-mon:05:00" \
  --enable-cloudwatch-logs-exports '["postgresql"]' \
  --tags Key=Environment,Value=Production

Connect from Lambda:
// Lambda needs to be in the same VPC as RDS
import { Client } from "pg";
import {
  SecretsManagerClient,
  GetSecretValueCommand,
} from "@aws-sdk/client-secrets-manager";

async function getDbCredentials() {
  const client = new SecretsManagerClient({ region: "us-east-1" });
  const response = await client.send(
    new GetSecretValueCommand({
      SecretId: process.env.DB_SECRET_ARN,
    }),
  );
  return JSON.parse(response.SecretString!);
}

export const handler = async () => {
  const credentials = await getDbCredentials();
  const db = new Client({
    host: credentials.host,
    port: credentials.port,
    database: credentials.dbname,
    user: credentials.username,
    password: credentials.password,
  });
  await db.connect();
  // Use db...
  await db.end();
};

DynamoDB - NoSQL database
When to use: Key-value data, high throughput, serverless
// lib/dynamodb.ts
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import {
  DynamoDBDocumentClient,
  PutCommand,
  GetCommand,
  QueryCommand,
  UpdateCommand,
} from "@aws-sdk/lib-dynamodb";

const client = new DynamoDBClient({ region: "us-east-1" });
const docClient = DynamoDBDocumentClient.from(client);
const TABLE_NAME = process.env.DYNAMODB_TABLE!;

export async function createPost(post: {
  id: string;
  userId: string;
  title: string;
  content: string;
}) {
  await docClient.send(
    new PutCommand({
      TableName: TABLE_NAME,
      Item: {
        PK: `USER#${post.userId}`,
        SK: `POST#${post.id}`,
        id: post.id,
        title: post.title,
        content: post.content,
        createdAt: new Date().toISOString(),
      },
    }),
  );
}

export async function getPost(userId: string, postId: string) {
  const response = await docClient.send(
    new GetCommand({
      TableName: TABLE_NAME,
      Key: {
        PK: `USER#${userId}`,
        SK: `POST#${postId}`,
      },
    }),
  );
  return response.Item;
}

export async function getUserPosts(userId: string) {
  const response = await docClient.send(
    new QueryCommand({
      TableName: TABLE_NAME,
      KeyConditionExpression: "PK = :pk AND begins_with(SK, :sk)",
      ExpressionAttributeValues: {
        ":pk": `USER#${userId}`,
        ":sk": "POST#",
      },
    }),
  );
  return response.Items;
}

export async function incrementPostViews(userId: string, postId: string) {
  await docClient.send(
    new UpdateCommand({
      TableName: TABLE_NAME,
      Key: {
        PK: `USER#${userId}`,
        SK: `POST#${postId}`,
      },
      UpdateExpression: "SET #views = if_not_exists(#views, :zero) + :inc",
      ExpressionAttributeNames: {
        "#views": "views",
      },
      ExpressionAttributeValues: {
        ":zero": 0,
        ":inc": 1,
      },
    }),
  );
}

DynamoDB table definition (CloudFormation):
Resources:
  PostsTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: posts
      BillingMode: PAY_PER_REQUEST # On-demand pricing
      AttributeDefinitions:
        - AttributeName: PK
          AttributeType: S
        - AttributeName: SK
          AttributeType: S
        - AttributeName: GSI1PK
          AttributeType: S
        - AttributeName: GSI1SK
          AttributeType: S
      KeySchema:
        - AttributeName: PK
          KeyType: HASH
        - AttributeName: SK
          KeyType: RANGE
      GlobalSecondaryIndexes:
        - IndexName: GSI1
          KeySchema:
            - AttributeName: GSI1PK
              KeyType: HASH
            - AttributeName: GSI1SK
              KeyType: RANGE
          Projection:
            ProjectionType: ALL
      StreamSpecification:
        StreamViewType: NEW_AND_OLD_IMAGES
      PointInTimeRecoverySpecification:
        PointInTimeRecoveryEnabled: true
      Tags:
        - Key: Environment
          Value: Production

3. Networking: How Everything Connects#
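Before reading the template below, it helps to do the subnet math. The VPC carves a 10.0.0.0/16 network into /24 subnets, and AWS reserves five IP addresses in every subnet (network, VPC router, DNS, future use, broadcast). A quick helper to sanity-check sizing — a sketch, not an AWS API:

```typescript
// Usable IPv4 addresses in a subnet of a given prefix length.
// AWS reserves 5 addresses per subnet, so subtract them from the block size.
export function usableIps(prefixLength: number): number {
  if (prefixLength < 16 || prefixLength > 28) {
    throw new Error("AWS subnets must be between /16 and /28");
  }
  return 2 ** (32 - prefixLength) - 5;
}

// A /24 like 10.0.1.0/24 leaves 251 usable addresses.
console.log(usableIps(24)); // 251
console.log(usableIps(28)); // 11 (the smallest subnet AWS allows)
```

Worth remembering when sizing subnets for services like Fargate or Lambda-in-VPC, which consume an IP per task or ENI.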
VPC (Virtual Private Cloud) - Your private network
# vpc-stack.yaml
Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
      EnableDnsHostnames: true
      EnableDnsSupport: true
      Tags:
        - Key: Name
          Value: my-vpc

  InternetGateway:
    Type: AWS::EC2::InternetGateway

  AttachGateway:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: !Ref VPC
      InternetGatewayId: !Ref InternetGateway

  PublicSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      CidrBlock: 10.0.1.0/24
      AvailabilityZone: !Select [0, !GetAZs ""]
      MapPublicIpOnLaunch: true

  PublicSubnet2:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      CidrBlock: 10.0.2.0/24
      AvailabilityZone: !Select [1, !GetAZs ""]
      MapPublicIpOnLaunch: true

  PrivateSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      CidrBlock: 10.0.11.0/24
      AvailabilityZone: !Select [0, !GetAZs ""]

  PrivateSubnet2:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      CidrBlock: 10.0.12.0/24
      AvailabilityZone: !Select [1, !GetAZs ""]

  PublicRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref VPC

  PublicRoute:
    Type: AWS::EC2::Route
    DependsOn: AttachGateway
    Properties:
      RouteTableId: !Ref PublicRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref InternetGateway

  # Without these associations the public subnets never use the IGW route
  PublicSubnet1RouteAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PublicSubnet1
      RouteTableId: !Ref PublicRouteTable

  PublicSubnet2RouteAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PublicSubnet2
      RouteTableId: !Ref PublicRouteTable

  # Security Groups
  WebServerSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow HTTP/HTTPS
      VpcId: !Ref VPC
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0
        - IpProtocol: tcp
          FromPort: 443
          ToPort: 443
          CidrIp: 0.0.0.0/0

  DatabaseSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow database access
      VpcId: !Ref VPC
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 5432
          ToPort: 5432
          SourceSecurityGroupId: !Ref WebServerSecurityGroup

Application Load Balancer:
Resources:
  LoadBalancer:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Name: my-alb
      Type: application
      Scheme: internet-facing
      SecurityGroups:
        - !Ref LoadBalancerSecurityGroup
      Subnets:
        - !Ref PublicSubnet1
        - !Ref PublicSubnet2

  TargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      Name: my-targets
      Port: 3000
      Protocol: HTTP
      VpcId: !Ref VPC
      HealthCheckPath: /health
      HealthCheckIntervalSeconds: 30
      HealthCheckTimeoutSeconds: 5
      HealthyThresholdCount: 2
      UnhealthyThresholdCount: 3

  Listener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref LoadBalancer
      Port: 443
      Protocol: HTTPS
      Certificates:
        - CertificateArn: !Ref Certificate
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref TargetGroup

4. Security & Identity#
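Two policy types matter in this section: identity policies (what a role is allowed to do, like the one below) and trust policies (who is allowed to assume the role in the first place). A minimal trust policy letting Lambda assume an execution role:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "lambda.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

Attach the identity policy to that role, and your function gets the permissions without any hardcoded credentials.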
IAM (Identity and Access Management) - Who can do what
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-bucket/uploads/*"
    },
    {
      "Effect": "Allow",
      "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/posts"
    }
  ]
}

Secrets Manager:
// lib/secrets.ts
import {
  SecretsManagerClient,
  GetSecretValueCommand,
} from "@aws-sdk/client-secrets-manager";

const client = new SecretsManagerClient({ region: "us-east-1" });

export async function getSecret(secretName: string): Promise<any> {
  const response = await client.send(
    new GetSecretValueCommand({
      SecretId: secretName,
    }),
  );
  return JSON.parse(response.SecretString!);
}

// Usage
const dbConfig = await getSecret("prod/database");

5. Deployment & CI/CD#
CodePipeline + CodeBuild:
# buildspec.yml
version: 0.2
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $ECR_REGISTRY
  build:
    commands:
      - echo Build started on `date`
      - docker build -t $IMAGE_REPO_NAME:$IMAGE_TAG .
      - docker tag $IMAGE_REPO_NAME:$IMAGE_TAG $ECR_REGISTRY/$IMAGE_REPO_NAME:$IMAGE_TAG
  post_build:
    commands:
      - echo Build completed on `date`
      - docker push $ECR_REGISTRY/$IMAGE_REPO_NAME:$IMAGE_TAG
      - echo Writing image definitions file...
      - printf '[{"name":"app","imageUri":"%s"}]' $ECR_REGISTRY/$IMAGE_REPO_NAME:$IMAGE_TAG > imagedefinitions.json
artifacts:
  files:
    - imagedefinitions.json

GitHub Actions + AWS:
# .github/workflows/deploy.yml
name: Deploy to AWS
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - name: Login to ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      - name: Build and push image
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: my-app
          IMAGE_TAG: ${{ github.sha }}
        run: |
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
      - name: Deploy to ECS
        run: |
          aws ecs update-service \
            --cluster my-cluster \
            --service my-service \
            --force-new-deployment

Cost Optimization Tips#
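Lifecycle transitions, the first tip below, are just age thresholds. A tiny pure helper that mirrors the 30/90/365-day rules used in this section makes the behavior concrete (the thresholds are this article's example, not S3 defaults):

```typescript
type StorageState = "STANDARD" | "STANDARD_IA" | "GLACIER" | "EXPIRED";

// Where an object sits under rules that transition at 30 and 90 days
// and expire at 365 days (mirrors the lifecycle config shown below).
export function storageStateAt(ageDays: number): StorageState {
  if (ageDays >= 365) return "EXPIRED";
  if (ageDays >= 90) return "GLACIER";
  if (ageDays >= 30) return "STANDARD_IA";
  return "STANDARD";
}

console.log(storageStateAt(10)); // "STANDARD"
console.log(storageStateAt(45)); // "STANDARD_IA"
console.log(storageStateAt(400)); // "EXPIRED"
```

Each step down trades retrieval latency and fees for a lower per-GB storage price, which is why the transitions go one way only.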
// 1. Use S3 lifecycle policies
// Create the policy in the console or via SDK
const lifecycleConfig = {
  Rules: [
    {
      Id: "MoveOldFiles",
      Status: "Enabled",
      Transitions: [
        {
          Days: 30,
          StorageClass: "STANDARD_IA", // Infrequent Access
        },
        {
          Days: 90,
          StorageClass: "GLACIER", // Archive
        },
      ],
      Expiration: {
        Days: 365, // Delete after 1 year
      },
    },
  ],
};

// 2. Use Lambda provisioned concurrency only when needed
// 3. Enable RDS instance right-sizing recommendations
// 4. Use Auto Scaling for EC2
// 5. Set up CloudWatch billing alarms
// Note: AWS/Billing metrics are published only in us-east-1
const alarm = {
  AlarmName: "BillingAlert",
  ComparisonOperator: "GreaterThanThreshold",
  EvaluationPeriods: 1,
  MetricName: "EstimatedCharges",
  Namespace: "AWS/Billing",
  Period: 21600, // 6 hours
  Statistic: "Maximum",
  Threshold: 100, // $100
};

Monitoring & Logging#
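Much of monitoring boils down to percentiles: averages hide tail latency, p95/p99 expose it. As a mental model for what `pct(duration, 95)` reports in the Logs Insights queries later in this section, here is a naive nearest-rank percentile (CloudWatch's exact algorithm may differ):

```typescript
// Nearest-rank percentile: sort ascending, take the value at rank ceil(p/100 * n).
// A simple approximation of what monitoring backends compute at scale.
export function percentile(values: number[], p: number): number {
  if (values.length === 0) throw new Error("empty sample");
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// One slow outlier barely moves the median but dominates p95.
const durations = [120, 80, 95, 240, 110, 400, 130, 90, 105, 2000];
console.log(percentile(durations, 50)); // 110
console.log(percentile(durations, 95)); // 2000
```

This is why alerting on p95 latency catches problems that an average-latency alarm sleeps through.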
CloudWatch:
// lib/cloudwatch.ts
import {
  CloudWatchClient,
  PutMetricDataCommand,
} from "@aws-sdk/client-cloudwatch";

const client = new CloudWatchClient({ region: "us-east-1" });

export async function recordMetric(
  metricName: string,
  value: number,
  unit: string = "Count",
) {
  await client.send(
    new PutMetricDataCommand({
      Namespace: "MyApp",
      MetricData: [
        {
          MetricName: metricName,
          Value: value,
          Unit: unit,
          Timestamp: new Date(),
        },
      ],
    }),
  );
}

// Usage
await recordMetric("ApiRequests", 1);
await recordMetric("ResponseTime", 245, "Milliseconds");

CloudWatch Logs Insights queries:
# Find errors in the last hour
fields @timestamp, @message
| filter @message like /ERROR/
| sort @timestamp desc
| limit 20

# API latency statistics
fields @timestamp, duration
| stats avg(duration), max(duration), pct(duration, 95)

# Count requests by endpoint
fields @timestamp, endpoint
| stats count() by endpoint
| sort count() desc

Infrastructure as Code#
AWS CDK (TypeScript):
// lib/app-stack.ts
import * as cdk from "aws-cdk-lib";
import * as ec2 from "aws-cdk-lib/aws-ec2";
import * as ecs from "aws-cdk-lib/aws-ecs";
import * as elbv2 from "aws-cdk-lib/aws-elasticloadbalancingv2";

export class AppStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // VPC
    const vpc = new ec2.Vpc(this, "VPC", {
      maxAzs: 2,
    });

    // ECS Cluster
    const cluster = new ecs.Cluster(this, "Cluster", { vpc });

    // Fargate Service
    const taskDefinition = new ecs.FargateTaskDefinition(this, "Task", {
      cpu: 256,
      memoryLimitMiB: 512,
    });
    taskDefinition.addContainer("app", {
      image: ecs.ContainerImage.fromRegistry("my-app:latest"),
      portMappings: [{ containerPort: 3000 }],
      logging: ecs.LogDrivers.awsLogs({ streamPrefix: "app" }),
    });
    const service = new ecs.FargateService(this, "Service", {
      cluster,
      taskDefinition,
      desiredCount: 2,
    });

    // Load Balancer
    const lb = new elbv2.ApplicationLoadBalancer(this, "LB", {
      vpc,
      internetFacing: true,
    });
    const listener = lb.addListener("Listener", {
      port: 80,
    });
    listener.addTargets("ECS", {
      port: 3000,
      targets: [service],
      healthCheck: {
        path: "/health",
        interval: cdk.Duration.seconds(30),
      },
    });

    new cdk.CfnOutput(this, "LoadBalancerDNS", {
      value: lb.loadBalancerDnsName,
    });
  }
}

The AWS Learning Path#
Week 1-2: Fundamentals
- IAM (users, roles, policies)
- EC2 (launch instance, connect via SSH)
- S3 (upload files, set permissions)
Week 3-4: Deeper Dive
- Lambda (create function, trigger from API Gateway)
- RDS (create database, connect from Lambda)
- VPC (understand subnets, security groups)
Week 5-6: Production Skills
- ECS/Fargate (deploy containerized app)
- CloudFormation/CDK (infrastructure as code)
- CloudWatch (monitoring, alarms)
Week 7-8: Advanced
- Auto Scaling
- Multi-region deployments
- Cost optimization
Key Takeaways#
- Start small: Don't over-engineer. Lambda + RDS can get you far.
- Use managed services: Let AWS handle undifferentiated heavy lifting.
- Security first: Use IAM roles, Secrets Manager, VPC.
- Monitor everything: CloudWatch is your friend.
- Automate deployment: CI/CD from day one.
AWS is deep. You don't need to know everything. Master the core services, understand the patterns, and expand from there.
What AWS service are you most curious about? Let's explore the cloud together.
