Introduction
When I first started my systems engineer role, I was introduced to the cloud and to a few CI/CD tools for the first time. To better understand how these tools worked, I decided to build a pentesting lab in AWS that could be spun up and torn down quickly. For this project I used Docker, Terraform, Ansible, and of course AWS. It was a steep learning curve, but there were a few helpful sites along the way, which I have linked at the bottom. The lab was very simple, and I would do a lot of things differently now, but it was a great way to get introduced to these tools.
Lab Setup
Terraform
I started with my Terraform code, adding a providers.tf file to my terraform directory to begin the project. This is where you pin the AWS provider version and configure your AWS credentials. I created a terraform profile in my `~/.aws/config` file to use throughout this project. This helps ensure you are building in the correct environment; I work across two AWS accounts, one of which is my work account, and I did not want to test on that one.
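For reference, here is a minimal sketch of what that setup can look like (the profile name terraform matches what providers.tf expects below; the key values are placeholders):

# ~/.aws/config
[profile terraform]
region = us-east-2

# ~/.aws/credentials
[terraform]
aws_access_key_id     = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>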
providers.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

# Configure the AWS Provider
provider "aws" {
  region                   = "us-east-2"
  shared_credentials_files = ["~/.aws/credentials"]
  profile                  = "terraform"
}
Next, I created the main.tf file to define all of the AWS infrastructure I would need to deploy EC2 instances on.
main.tf
# Create a VPC
resource "aws_vpc" "home-lab" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true

  tags = {
    Name = "home-lab-vpc"
  }
}

# Create a subnet for EC2
resource "aws_subnet" "home-lab-pub-subnet" {
  vpc_id                  = aws_vpc.home-lab.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true
  availability_zone       = "us-east-2c"

  tags = {
    Name = "home-lab-public-subnet"
  }
}

# Create internet gateway
resource "aws_internet_gateway" "home-lab-gw" {
  vpc_id = aws_vpc.home-lab.id

  tags = {
    Name = "home-lab-igw"
  }
}

# Create a route table
resource "aws_route_table" "home-lab-route" {
  vpc_id = aws_vpc.home-lab.id

  tags = {
    Name = "home-lab-route-table"
  }
}

resource "aws_route" "default-route" {
  route_table_id         = aws_route_table.home-lab-route.id
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = aws_internet_gateway.home-lab-gw.id
}

# Create route table association
resource "aws_route_table_association" "home-lab-association" {
  subnet_id      = aws_subnet.home-lab-pub-subnet.id
  route_table_id = aws_route_table.home-lab-route.id
}

# Create security group
resource "aws_security_group" "home-lab-SG" {
  name        = "home-lab-SG"
  description = "home lab rules"
  vpc_id      = aws_vpc.home-lab.id

  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "home-lab-SG"
  }
}
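In hindsight, that security group allows all traffic from anywhere in both directions, which is broader than even a throwaway lab needs. A tighter variant would limit ingress to SSH from your own address; this is just a sketch, with 203.0.113.25 as a placeholder for your actual public IP:

  ingress {
    description = "SSH from my IP only"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["203.0.113.25/32"] # placeholder, substitute your own public IP
  }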
The last of my Terraform code lived in instances.tf. This file contained the code to provision the actual EC2 instances. In this example I only provisioned a single t2.micro to stay within the free tier, but the file could easily be updated to deploy multiple instances with different AMIs and instance types if needed (a sketch of that follows the file).
instances.tf
# Look up the AMI to use
data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["137112412989"] # Amazon

  filter {
    name   = "name"
    values = ["amzn2-ami-kernel-5.10-hvm-*"]
  }
}

# Key pair for instances
resource "aws_key_pair" "home-lab-key" {
  key_name   = "home-lab-key"
  public_key = file("~/.ssh/home-lab.pub")
}

# Create EC2 instance
resource "aws_instance" "home-lab" {
  instance_type          = "t2.micro"
  ami                    = data.aws_ami.amazon_linux.id
  key_name               = aws_key_pair.home-lab-key.id
  vpc_security_group_ids = [aws_security_group.home-lab-SG.id]
  subnet_id              = aws_subnet.home-lab-pub-subnet.id

  root_block_device {
    volume_size = 28
  }

  tags = {
    Name = "home-lab"
  }
}
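As mentioned above, scaling this out is straightforward. One way would be Terraform's count meta-argument; a sketch, where the instance_count variable is my own addition:

variable "instance_count" {
  default = 1
}

resource "aws_instance" "home-lab" {
  count         = var.instance_count
  instance_type = "t2.micro"
  ami           = data.aws_ami.amazon_linux.id
  # ...remaining arguments as above...

  tags = {
    Name = "home-lab-${count.index}"
  }
}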
To run the Terraform code, change into your terraform directory and run terraform init to initialize the working directory, terraform plan to preview the changes, and finally terraform apply --auto-approve to build the infrastructure.
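In full, from the project root (assuming your Terraform files live in a terraform/ directory):

cd terraform/
terraform init                  # download the AWS provider and set up state
terraform plan                  # preview what will be created
terraform apply --auto-approve  # build everything, skipping the confirmation prompt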
Docker
I then moved on to the Dockerfile I used to create a custom Kali Linux image. I created a Kali folder in my project and placed a Dockerfile in it. The Dockerfile pulls the latest Kali Linux image and installs the kali-linux-headless metapackage.
Dockerfile
FROM kalilinux/kali-rolling

ENV DEBIAN_FRONTEND noninteractive
ENV TERM xterm-256color

# Install kali-linux-headless
RUN apt update && apt -y install kali-linux-headless

CMD ["/bin/bash"]
Then I created the docker-compose.yml file in the main directory of my project. In this file I create a Metasploitable container to be the target for my Kali Linux container, build my Kali Linux image from the Dockerfile, and create a network so the two containers can communicate.
docker-compose.yml
version: '2.2'
services:
  metasploitable2:
    image: cyberacademylabs/metasploitable2:latest
    container_name: metasploitable2
    ports:
      - 8080:80
    hostname: metasploitable2
    networks:
      vulhubnet:
        ipv4_address: 10.1.1.5
  kali:
    build: ./Kali
    # Note: If you don't want to build everything, use the image below, then apt install what you want/need
    # Also note: Building everything, even kali-linux-headless, takes a really long time
    #image: kalilinux/kali-rolling
    container_name: kali
    hostname: kali
    tty: true
    networks:
      vulhubnet:
        ipv4_address: 10.1.1.10
networks:
  vulhubnet:
    driver: bridge
    ipam:
      config:
        - subnet: 10.1.1.0/24
          gateway: 10.1.1.1
These files will be placed on the EC2 instance and Ansible will run the Docker commands.
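That said, you can also sanity-check the whole compose setup on a local machine first. A quick sketch:

docker-compose up -d              # start both containers
docker exec -it kali /bin/bash    # drop into the Kali container
nmap 10.1.1.5                     # scan the Metasploitable target across the lab network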
Ansible
I created an ansible directory, and inside it, playbooks and inventory folders. In the inventory folder I placed a hosts file, which contains the IP address of whatever machine you want Ansible to run its tasks against; in my case, the EC2 instance Terraform had just created. There must be a better way to get the host into this file than spinning up the EC2 instance, grabbing its IP, and copying it in by hand; that would not make for a great CI/CD pipeline in a production environment.
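One improvement I would make today: Terraform already knows the instance's public IP, so it can be exposed as an output and fed into the inventory instead of copied by hand. A sketch, where the output name is my own choice:

# outputs.tf
output "home_lab_public_ip" {
  value = aws_instance.home-lab.public_ip
}

Then something like terraform output -raw home_lab_public_ip >> ansible/inventory/hosts would keep the inventory in sync without any manual copying.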
I then created provision.yml in the playbooks folder to install the basic dependencies that I wanted on the instance.
provision.yml
- hosts: "*"
become: true
tasks:
- name: Upgrade all packages
ansible.builtin.yum:
name: '*'
update_cache: yes
state: latest
- name: install Amazon utils
ansible.builtin.yum:
name: yum-utils
state: latest
update_cache: yes
- name: install Amazon extras
ansible.builtin.yum:
name: amazon-linux-extras
state: latest
update_cache: yes
- name: install pip2 for built in ansbile
ansible.builtin.yum:
name: python2-pip
state: latest
update_cache: yes
- name: Install Docker
ansible.builtin.yum:
name: docker
state: latest
update_cache: true
- name: Start Docker
ansible.builtin.service:
name: "docker"
enabled: true
state: started
- name: Install docker-compose
ansible.builtin.pip:
name: docker-compose
- name: Install git
ansible.builtin.yum:
name: git
estate: latest
update_cache: true
Next I created compose.yml in the same folder to copy my docker-compose.yml and Kali folder onto the instance and run docker-compose to build my Docker environment.
compose.yml
- hosts: "*"
  gather_facts: no
  tasks:
    - name: Copy docker-compose.yml
      ansible.builtin.copy:
        src: home-lab/docker-compose.yml
        dest: /home/ec2-user/docker-compose.yml
      become: yes
      become_user: ec2-user

    # Copying a directory with ansible.builtin.copy is recursive by default
    - name: Copy Kali folder
      ansible.builtin.copy:
        src: home-lab/Kali
        dest: /home/ec2-user/Kali
      become: yes
      become_user: ec2-user

    - name: Run docker-compose up
      ansible.builtin.command:
        cmd: docker-compose up -d
        chdir: /home/ec2-user/
      become: yes
      become_user: ec2-user
Now that the Ansible files have been created, the last thing to do is run the playbooks. Make sure you are in your ansible directory and run ansible-playbook -i inventory/hosts playbooks/provision.yml, then ansible-playbook -i inventory/hosts playbooks/compose.yml. You could probably combine these files into one, but at the time of writing it made more sense for me to split them up for testing purposes.
Now you have a functional pentesting homelab hosted in AWS.
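To verify everything came up, SSH into the instance with the key pair from earlier and check on the containers; <ec2-public-ip> is a placeholder for your instance's public IP:

ssh -i ~/.ssh/home-lab ec2-user@<ec2-public-ip>
docker ps                         # should list the kali and metasploitable2 containers (prefix with sudo if needed)
docker exec -it kali /bin/bash    # drop into Kali and start testing against 10.1.1.5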
When you are done with your lab, be sure to tear it down with the terraform destroy --auto-approve command to avoid any unnecessary AWS charges.
Conclusion
The goal of this project was to learn more about cloud tools. I most likely did not do any of this in the best or most correct way, but I got it to work and learned a lot throughout. I learned more about Terraform, Docker, and Ansible than I ever would have by just reading documentation, and it gave me a much better perspective on how my company implements these tools and how I can address security concerns with much more confidence. I have used this lab quite a few times since to add more containers to the instance and test against them. I do think a homelab with VMs is more robust, but this was a great way to gain experience and apply it to a cyber-related project.