From 464514ed7f6db7accc4c3fcf8b02285d081392f6 Mon Sep 17 00:00:00 2001 From: devloop Date: Sat, 2 Mar 2024 23:28:59 +0100 Subject: [PATCH] Add some solutions for SadServers Medium challenges --- ...on-du-challenge-Cape-Town-de-SadServers.md | 172 +++++++++++++ ...ution-du-challenge-Lisbon-de-SadServers.md | 196 ++++++++++++++ ...on-du-challenge-Manhattan-de-SadServers.md | 243 ++++++++++++++++++ ...on-du-challenge-Melbourne-de-SadServers.md | 221 ++++++++++++++++ ...ution-du-challenge-Oaxaca-de-SadServers.md | 77 ++++++ ...lution-du-challenge-Salta-de-SadServers.md | 176 +++++++++++++ ...lution-du-challenge-Tokyo-de-SadServers.md | 94 +++++++ ...lution-des-scenarios-Easy-de-SadServers.md | 2 +- 8 files changed, 1180 insertions(+), 1 deletion(-) create mode 100644 _posts/2024-02-03-Solution-du-challenge-Cape-Town-de-SadServers.md create mode 100644 _posts/2024-02-03-Solution-du-challenge-Lisbon-de-SadServers.md create mode 100644 _posts/2024-02-03-Solution-du-challenge-Manhattan-de-SadServers.md create mode 100644 _posts/2024-02-03-Solution-du-challenge-Melbourne-de-SadServers.md create mode 100644 _posts/2024-02-03-Solution-du-challenge-Oaxaca-de-SadServers.md create mode 100644 _posts/2024-02-03-Solution-du-challenge-Salta-de-SadServers.md create mode 100644 _posts/2024-02-03-Solution-du-challenge-Tokyo-de-SadServers.md diff --git a/_posts/2024-02-03-Solution-du-challenge-Cape-Town-de-SadServers.md b/_posts/2024-02-03-Solution-du-challenge-Cape-Town-de-SadServers.md new file mode 100644 index 0000000..a92f8a6 --- /dev/null +++ b/_posts/2024-02-03-Solution-du-challenge-Cape-Town-de-SadServers.md @@ -0,0 +1,172 @@ +--- +title: "Solution du challenge Cape Town de SadServers.com" +tags: [CTF,AdminSys,SadServers +--- + +**Scenario:** "Cape Town": Borked Nginx + +**Level:** Medium + +**Type:** Fix + +**Tags:** [nginx](https://sadservers.com/tag/nginx) [realistic-interviews](https://sadservers.com/tag/realistic-interviews) + +**Description:** There's an Nginx web server installed and managed by systemd. Running `curl -I 127.0.0.1:80` returns `curl: (7) Failed to connect to localhost port 80: Connection refused` , fix it so when you curl you get the default Nginx page. + +**Test:** `curl -Is 127.0.0.1:80|head -1` returns `HTTP/1.1 200 OK` + +**Time to Solve:** 15 minutes. + +Déjà la base : est-ce que le port est en écoute (et donc le service lancé) ? + +```console +admin@i-06cb61539b1effc2b:/$ ss -lntp +State Recv-Q Send-Q Local Address:Port Peer Address:Port Process +LISTEN 0 128 0.0.0.0:22 0.0.0.0:* +LISTEN 0 4096 *:6767 *:* users:(("sadagent",pid=567,fd=7)) +LISTEN 0 4096 *:8080 *:* users:(("gotty",pid=566,fd=6)) +LISTEN 0 128 [::]:22 [::]:* +``` + +Non, alors voyons son état systemctl : + +```console +admin@i-06cb61539b1effc2b:/$ systemctl status nginx +● nginx.service - The NGINX HTTP and reverse proxy server + Loaded: loaded (/etc/systemd/system/nginx.service; enabled; vendor preset: enabled) + Active: failed (Result: exit-code) since Sat 2024-03-02 09:52:55 UTC; 1min 16s ago + Process: 574 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=1/FAILURE) + CPU: 29ms + +Mar 02 09:52:55 i-06cb61539b1effc2b systemd[1]: Starting The NGINX HTTP and reverse proxy server... 
+Mar 02 09:52:55 i-06cb61539b1effc2b nginx[574]: nginx: [emerg] unexpected ";" in /etc/nginx/sites-enabled/default:1 +Mar 02 09:52:55 i-06cb61539b1effc2b nginx[574]: nginx: configuration file /etc/nginx/nginx.conf test failed +Mar 02 09:52:55 i-06cb61539b1effc2b systemd[1]: nginx.service: Control process exited, code=exited, status=1/FAILURE +Mar 02 09:52:55 i-06cb61539b1effc2b systemd[1]: nginx.service: Failed with result 'exit-code'. +Mar 02 09:52:55 i-06cb61539b1effc2b systemd[1]: Failed to start The NGINX HTTP and reverse proxy server. +``` + +Erreur à la première ligne du fichier de configuration, voyons ça : + +```console +admin@i-06cb61539b1effc2b:/etc/nginx/sites-enabled$ head default +; +## +# You should look at the following URL's in order to grasp a solid understanding +# of Nginx configuration files in order to fully unleash the power of Nginx. +# https://www.nginx.com/resources/wiki/start/ +# https://www.nginx.com/resources/wiki/start/topics/tutorials/config_pitfalls/ +# https://wiki.debian.org/Nginx/DirectoryStructure +# +# In most cases, administrators will remove this file from sites-enabled/ and +# leave it as reference inside of sites-available where it will continue to be +``` + +La correction est vite faite, mais sans trop de surprises, on tombe sur un problème plus gros : + +```console +admin@i-06cb61539b1effc2b:/etc/nginx/sites-enabled$ sudo vi default +admin@i-06cb61539b1effc2b:/etc/nginx/sites-enabled$ sudo systemctl start nginx +admin@i-06cb61539b1effc2b:/etc/nginx/sites-enabled$ systemctl status nginx +● nginx.service - The NGINX HTTP and reverse proxy server + Loaded: loaded (/etc/systemd/system/nginx.service; enabled; vendor preset: enabled) + Active: active (running) since Sat 2024-03-02 09:56:14 UTC; 7s ago + Process: 862 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=0/SUCCESS) + Process: 863 ExecStart=/usr/sbin/nginx (code=exited, status=0/SUCCESS) + Main PID: 864 (nginx) + Tasks: 2 (limit: 524) + Memory: 2.4M + CPU: 31ms + CGroup: /system.slice/nginx.service + ├─864 nginx: master process /usr/sbin/nginx + └─865 nginx: worker process + +Mar 02 09:56:14 i-06cb61539b1effc2b systemd[1]: Starting The NGINX HTTP and reverse proxy server... +Mar 02 09:56:14 i-06cb61539b1effc2b nginx[862]: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok +Mar 02 09:56:14 i-06cb61539b1effc2b nginx[862]: nginx: configuration file /etc/nginx/nginx.conf test is successful +Mar 02 09:56:14 i-06cb61539b1effc2b systemd[1]: Started The NGINX HTTP and reverse proxy server. +admin@i-06cb61539b1effc2b:/etc/nginx/sites-enabled$ curl -Is 127.0.0.1:80|head -1 +HTTP/1.1 500 Internal Server Error +``` + +On va devoir regarder dans les logs du serveur. 
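+
+Avant d'ouvrir `/var/log/nginx/error.log`, on peut déjà jeter un œil au journal systemd du service, par exemple :
+
+```bash
+journalctl -u nginx -n 50 --no-pager   # dernières entrées du service, si journald les conserve
+```
+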
+ +```console +admin@i-06cb61539b1effc2b:/etc/nginx/sites-enabled$ tail /var/log/nginx/error.log +2022/09/11 16:39:11 [emerg] 5875#5875: unexpected ";" in /etc/nginx/sites-enabled/default:1 +2022/09/11 16:54:26 [emerg] 5931#5931: unexpected ";" in /etc/nginx/sites-enabled/default:1 +2022/09/11 16:55:00 [emerg] 5961#5961: unexpected ";" in /etc/nginx/sites-enabled/default:1 +2022/09/11 17:02:07 [emerg] 6066#6066: unexpected ";" in /etc/nginx/sites-enabled/default:1 +2022/09/11 17:07:03 [emerg] 6146#6146: unexpected ";" in /etc/nginx/sites-enabled/default:1 +2024/03/02 09:52:55 [emerg] 574#574: unexpected ";" in /etc/nginx/sites-enabled/default:1 +2024/03/02 09:56:14 [alert] 864#864: socketpair() failed while spawning "worker process" (24: Too many open files) +2024/03/02 09:56:14 [emerg] 865#865: eventfd() failed (24: Too many open files) +2024/03/02 09:56:14 [alert] 865#865: socketpair() failed (24: Too many open files) +2024/03/02 09:56:40 [crit] 865#865: *1 open() "/var/www/html/index.nginx-debian.html" failed (24: Too many open files), client: 127.0.0.1, server: _, request: "HEAD / HTTP/1.1", host: "127.0.0.1" +``` + +Le serveur ne parvient pas à ouvrir le fichier d'index, car trop de fichiers sont ouverts sur le système. + +Ma première réaction a été de voir du côté de `ulimit` : + +```console +admin@i-05a9742de4b12737a:/$ ulimit -a +real-time non-blocking time (microseconds, -R) unlimited +core file size (blocks, -c) 0 +data seg size (kbytes, -d) unlimited +scheduling priority (-e) 0 +file size (blocks, -f) unlimited +pending signals (-i) 1748 +max locked memory (kbytes, -l) 64 +max memory size (kbytes, -m) unlimited +open files (-n) 1024 +pipe size (512 bytes, -p) 8 +POSIX message queues (bytes, -q) 819200 +real-time priority (-r) 0 +stack size (kbytes, -s) 8192 +cpu time (seconds, -t) unlimited +max user processes (-u) 1748 +virtual memory (kbytes, -v) unlimited +file locks (-x) unlimited +``` + +Tout semble standard ici... + +Si la limite n'est pas globale au système, elle concerne alors probablement que le processus. Encore un hack de chez systemd ? + +```console +root@i-05a9742de4b12737a:/etc/security# cat /etc/systemd/system/nginx.service +[Unit] +Description=The NGINX HTTP and reverse proxy server +After=syslog.target network-online.target remote-fs.target nss-lookup.target +Wants=network-online.target + +[Service] +Type=forking +PIDFile=/run/nginx.pid +ExecStartPre=/usr/sbin/nginx -t +ExecStart=/usr/sbin/nginx +ExecReload=/usr/sbin/nginx -s reload +ExecStop=/bin/kill -s QUIT $MAINPID +PrivateTmp=true +LimitNOFILE=10 + +[Install] +WantedBy=multi-user.target +``` + +Bingo ! La directive `LimitNOFILE` limite le nombre de fichiers ouverts par le service. + +Il suffit de retirer l'option et de redémarrer le service : + +```console +root@i-05a9742de4b12737a:/etc/security# vi /etc/systemd/system/nginx.service +root@i-05a9742de4b12737a:/etc/security# systemctl restart nginx +Warning: The unit file, source configuration file or drop-ins of nginx.service changed on disk. Run 'systemctl daemon-reload' to reload units. 
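+# le unit vient d'être modifié : tant que « systemctl daemon-reload » n'est pas passé,
+# un simple restart repart sur l'ancienne définition (d'où le 500 encore renvoyé ci-dessous)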
+root@i-05a9742de4b12737a:/etc/security# systemctl daemon-reload +root@i-05a9742de4b12737a:/etc/security# curl -Is 127.0.0.1:80|head -1 +HTTP/1.1 500 Internal Server Error +root@i-05a9742de4b12737a:/etc/security# systemctl restart nginx +root@i-05a9742de4b12737a:/etc/security# curl -Is 127.0.0.1:80|head -1 +HTTP/1.1 200 OK +``` diff --git a/_posts/2024-02-03-Solution-du-challenge-Lisbon-de-SadServers.md b/_posts/2024-02-03-Solution-du-challenge-Lisbon-de-SadServers.md new file mode 100644 index 0000000..894e07b --- /dev/null +++ b/_posts/2024-02-03-Solution-du-challenge-Lisbon-de-SadServers.md @@ -0,0 +1,196 @@ +--- +title: "Solution du challenge Lisbon de SadServers.com" +tags: [CTF,AdminSys,SadServers +--- + +**Scenario:** "Lisbon": etcd SSL cert troubles + +**Level:** Medium + +**Type:** Fix + +**Tags:** [etcd](https://sadservers.com/tag/etcd) [ssl](https://sadservers.com/tag/ssl) [realistic-interviews](https://sadservers.com/tag/realistic-interviews) + +**Description:** There's an *etcd* server running on https://localhost:2379 , get the value for the key "foo", ie `etcdctl get foo` or `curl https://localhost:2379/v2/keys/foo` + +**Test:** etcdctl get foo returns bar. + +**Time to Solve:** 20 minutes. + +Ce scénario a été un vrai casse-tête. J'en ai résolu une partie, mais j'ai dû jeter l'éponge à la fin et consulter la solution. + +Voyons ce qu'il se passe avec la commande : + +```console +admin@i-08f438fe20e16868c:/$ etcdctl get foo +Error: client: etcd cluster is unavailable or misconfigured; error #0: x509: certificate has expired or is not yet valid: current time 2025-03-02T17:18:38Z is after 2023-01-30T00:02:48Z + +error #0: x509: certificate has expired or is not yet valid: current time 2025-03-02T17:18:38Z is after 2023-01-30T00:02:48Z +``` + +Le certificat a expiré. En plus l'erreur nous indique que nous sommes en 2025 au lieu de 2024. 
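+
+Pour confirmer ce décalage d'horloge, quelques commandes de base suffisent (`timedatectl` n'est pas forcément présent selon l'image) :
+
+```bash
+date -u        # heure système actuelle, en UTC
+timedatectl    # état de la synchronisation NTP, si systemd-timesyncd est installé
+```
+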
+ +On peut aussi constater le problème de certificat avec `openssl` : + +```console +admin@i-08f438fe20e16868c:/etc/default$ openssl s_client -connect 127.0.0.1:2379 +CONNECTED(00000003) +Can't use SSL_get_servername +depth=0 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = localhost +verify error:num=10:certificate has expired +notAfter=Jan 30 00:02:48 2023 GMT +verify return:1 +depth=0 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = localhost +notAfter=Jan 30 00:02:48 2023 GMT +verify return:1 +--- +Certificate chain + 0 s:C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = localhost + i:C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = localhost +--- +Server certificate +-----BEGIN CERTIFICATE----- +MIIDkzCCAnugAwIBAgIUH9str4OD0GJuoYSEBWSMjLvDZyIwDQYJKoZIhvcNAQEL +BQAwWTELMAkGA1UEBhMCQVUxEzARBgNVBAgMClNvbWUtU3RhdGUxITAfBgNVBAoM +GEludGVybmV0IFdpZGdpdHMgUHR5IEx0ZDESMBAGA1UEAwwJbG9jYWxob3N0MB4X +DTIyMTIzMTAwMDI0OFoXDTIzMDEzMDAwMDI0OFowWTELMAkGA1UEBhMCQVUxEzAR +BgNVBAgMClNvbWUtU3RhdGUxITAfBgNVBAoMGEludGVybmV0IFdpZGdpdHMgUHR5 +IEx0ZDESMBAGA1UEAwwJbG9jYWxob3N0MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A +MIIBCgKCAQEA4Q6WAutMU7NwZNVedwkkFu2vElGXt4UNhraRauCtD9XzP7RSm8UG +IXg5ddqFhOmi06LtSybimbA9K9763y5T5ncpluuYBN+Z9h8t83ZRV+QYW3gO5YRD +WfZjIRBhHXW4cfHOu2oOJd0rD95V87p1u1zxuqbDjh+4vWvgzzyuCRqlWyuKPmGk +XbmM4+qxlq62VhukhL1q46DKmSBE9zL1Oe23bermvp8XSPdfaWgNx4ChitJddvV4 +eXOQw6VmA9Lf/WibMbYaubwsjhx+y2du20GcDqG8wk0IO2SyLgZrLsV/JiGqBnT2 +49u33gDW+CP/2YUlPCURAkxt4sftu4sKeQIDAQABo1MwUTAdBgNVHQ4EFgQUiXpO +MNVRg1O+yM+Gvvw2TjN/zX0wHwYDVR0jBBgwFoAUiXpOMNVRg1O+yM+Gvvw2TjN/ +zX0wDwYDVR0TAQH/BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEAJlr9IQXzWfZo +PVwYu1hZTWkCR3UUKy9mvy3t5JX+2evQXZOhDfycq6CNkxfg6EXEjhqPrmoSosMU +Z9miIvbQMyWn4o6ORQpE3wacJri6GhLBjpyNfoMQivJMhJ0BXUrGyvZPD+wQF2Jc +Iwhj45Xtn+wluh9AmqCGy6S/Zf1QNjdnpbFImwzviuY/lqHhnSsIPQTaX5wYxrER +UbryzP5/4HF9kHEQJXxeHS3/URIp8otpviq3H7UODHeIviZgLdBtJGlkBXmub30p +7xgnGw/YHOlJcxUes0u8kbiTUQvFPj2OS0oYpV/txHvdiC9lqmfxcE28smTdMoV9 +1F4bJMqqSQ== +-----END CERTIFICATE----- +subject=C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = localhost + +issuer=C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = localhost + +--- +No client certificate CA names sent +Peer signing digest: SHA256 +Peer signature type: RSA-PSS +Server Temp Key: X25519, 253 bits +--- +SSL handshake has read 1475 bytes and written 363 bytes +Verification error: certificate has expired +--- snip --- +``` + +Le programme `etcd` est lancé avec des options permettant d'utiliser un certificat SSL : + +```console +admin@i-08f438fe20e16868c:/etc/default$ ps aux | grep etcd +etcd 578 0.8 5.2 11729040 24668 ? 
Ssl 17:17 0:04 /usr/bin/etcd --cert-file /etc/ssl/certs/localhost.crt --key-file /etc/ssl/certs/localhost.key --advertise-client-urls=https://localhost:2379 --listen-client-urls=https://localhost:2379 +``` + +Tout est lancé depuis systemd : + +```console +root@i-08f438fe20e16868c:/etc/default# cat ../systemd/system/etcd2.service +[Unit] +Description=etcd - highly-available key value store +Documentation=https://etcd.io/docs +Documentation=man:etcd +After=network.target +Wants=network-online.target + +[Service] +Environment=DAEMON_ARGS= +Environment=ETCD_NAME=%H +Environment=ETCD_DATA_DIR=/var/lib/etcd/default +EnvironmentFile=-/etc/default/%p +Type=notify +User=etcd +PermissionsStartOnly=true +#ExecStart=/bin/sh -c "GOMAXPROCS=$(nproc) /usr/bin/etcd $DAEMON_ARGS" +ExecStart=/usr/bin/etcd $DAEMON_ARGS \ + --cert-file /etc/ssl/certs/localhost.crt \ + --key-file /etc/ssl/certs/localhost.key \ + --advertise-client-urls=https://localhost:2379 \ + --listen-client-urls=https://localhost:2379 +Restart=on-abnormal +#RestartSec=10s +LimitNOFILE=65536 + +[Install] +WantedBy=multi-user.target +Alias=etcd2.service +``` + +J'ai d'abord tenté de supprimer toutes les options données à etcd afin qu'il écoute en _clair_. + +Malgré cela le client continuait de tomber sur un certificat... étrange. + +J'ai donc plutôt choisi de mettre le certificat auto-signé à jour. + +Déjà, il fallait corriger la date du système qui avance d'une année : + +```bash +sudo date -s "last year" +``` + +Puis régénérer un certificat : + +```console +admin@i-0b35d84e135f65ac8:/$ sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/certs/localhost.key -out /etc/ssl/certs/localhost.crt +Generating a RSA private key +............+++++ +....+++++ +writing new private key to '/etc/ssl/certs/localhost.key' +----- +You are about to be asked to enter information that will be incorporated +into your certificate request. +What you are about to enter is what is called a Distinguished Name or a DN. +There are quite a few fields but you can leave some blank +For some fields there will be a default value, +If you enter '.', the field will be left blank. +----- +Country Name (2 letter code) [AU]: +State or Province Name (full name) [Some-State]: +Locality Name (eg, city) []: +Organization Name (eg, company) [Internet Widgits Pty Ltd]: +Organizational Unit Name (eg, section) []: +Common Name (e.g. server FQDN or YOUR name) []:localhost +Email Address []: +admin@i-0b35d84e135f65ac8:/$ sudo systemctl restart etcd2 +admin@i-0b35d84e135f65ac8:/$ etcdctl get foo +Error: client: etcd cluster is unavailable or misconfigured; error #0: x509: certificate has expired or is not yet valid: current time 2024-03-02T17:53:17Z is after 2023-01-30T00:02:48Z + +error #0: x509: certificate has expired or is not yet valid: current time 2024-03-02T17:53:17Z is after 2023-01-30T00:02:48Z +``` + +C'est la grosse claque : le serveur continue d'utiliser l'ancien certificat alors qu'on a écrasé les fichiers... WTF ! 
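+
+Un bon réflexe à ce stade aurait été de comparer l'empreinte du certificat réellement présenté sur le port avec celle du fichier fraîchement généré : si elles diffèrent, c'est qu'un autre processus répond à notre place. Par exemple :
+
+```bash
+# empreinte du certificat servi sur le port 2379
+openssl s_client -connect 127.0.0.1:2379 </dev/null 2>/dev/null | openssl x509 -noout -fingerprint -sha256
+
+# empreinte du certificat présent sur le disque
+openssl x509 -in /etc/ssl/certs/localhost.crt -noout -fingerprint -sha256
+```
+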
+ +La blague qui était en place sur ce challenge, c'est qu'une règle iptables redirige le port de `etcd` vers un Nginx qui utilise le vieux certificat : + +```console +admin@i-02f3860338fe0d7a3:/$ sudo iptables -t nat -L OUTPUT --line-numbers +Chain OUTPUT (policy ACCEPT) +num target prot opt source destination +1 REDIRECT tcp -- anywhere anywhere tcp dpt:2379 redir ports 443 +2 DOCKER all -- anywhere !ip-127-0-0-0.us-east-2.compute.internal/8 ADDRTYPE match dst-type LOCAL +``` + +Une fois la règle retirée : + +```bash +sudo iptables -t nat -D OUTPUT 1 +``` + +On pouvait finalement accéder aux données. + +```console +admin@i-02f3860338fe0d7a3:/$ sudo systemctl restart etcd2 +admin@i-02f3860338fe0d7a3:/$ etcdctl get foo +bar +``` diff --git a/_posts/2024-02-03-Solution-du-challenge-Manhattan-de-SadServers.md b/_posts/2024-02-03-Solution-du-challenge-Manhattan-de-SadServers.md new file mode 100644 index 0000000..36a3da3 --- /dev/null +++ b/_posts/2024-02-03-Solution-du-challenge-Manhattan-de-SadServers.md @@ -0,0 +1,243 @@ +--- +title: "Solution du challenge Manhattan de SadServers.com" +tags: [CTF,AdminSys,SadServers +--- + +**Scenario:** "Manhattan": can't write data into database. + +**Level:** Medium + +**Type:** Fix + +**Tags:** [disk volumes](https://sadservers.com/tag/disk%20volumes) [postgres](https://sadservers.com/tag/postgres) [realistic-interviews](https://sadservers.com/tag/realistic-interviews) + +**Description:** Your objective is to be able to insert a row in an existing Postgres database. The issue is not specific to Postgres and you don't need to know details about it (although it may help). + +Helpful Postgres information: it's a service that listens to a port (:5432) and writes to disk in a data directory, the location of which is defined in the *data_directory* parameter of the configuration file `/etc/postgresql/14/main/postgresql.conf`. In our case Postgres is managed by *systemd* as a unit with name *postgresql*. + +**Test:** (from default admin user) `sudo -u postgres psql -c "insert into persons(name) values ('jane smith');" -d dt ` + +Should return: `INSERT 0 1` + +**Time to Solve:** 20 minutes. + +On entre ici dans les scénarios de niveau intermédiaire proposés sur [sadservers.com](https://sadservers.com/). + +On a une commande qui doit fonctionner pour résoudre le challenge, on va donc la lancer pour voir pourquoi ça ne passe pas. + +```console +root@i-0b8ce19730a071b11:/# sudo -u postgres psql -c "insert into persons(name) values ('jane smith');" -d dt +psql: error: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: No such file or directory + Is the server running locally and accepting connections on that socket? +``` + +Le client ne parvient pas à se connecter au socket serveur qui est de type Unix. + +Je regarde le fichier de configuration de Postgresql. Ce dernier est plein de lignes commentées, mais avec l'aide de `grep` je peux faire le tri. 
+ +```console +root@i-0b8ce19730a071b11:/# cat /etc/postgresql/14/main/postgresql.conf | grep -v "^\s*#" | grep -v "^$" +data_directory = '/opt/pgdata/main' # use data in another directory +hba_file = '/etc/postgresql/14/main/pg_hba.conf' # host-based authentication file +ident_file = '/etc/postgresql/14/main/pg_ident.conf' # ident configuration file +external_pid_file = '/var/run/postgresql/14-main.pid' # write an extra PID file +port = 5432 # (change requires restart) +max_connections = 100 # (change requires restart) +unix_socket_directories = '/var/run/postgresql' # comma-separated list of directories +ssl = on +ssl_cert_file = '/etc/ssl/certs/ssl-cert-snakeoil.pem' +ssl_key_file = '/etc/ssl/private/ssl-cert-snakeoil.key' +shared_buffers = 128MB # min 128kB +dynamic_shared_memory_type = posix # the default is the first option +max_wal_size = 1GB +min_wal_size = 80MB +log_line_prefix = '%m [%p] %q%u@%d ' # special values: +log_timezone = 'Etc/UTC' +cluster_name = '14/main' # added to process titles if nonempty +stats_temp_directory = '/var/run/postgresql/14-main.pg_stat_tmp' +datestyle = 'iso, mdy' +timezone = 'Etc/UTC' +lc_messages = 'C.UTF-8' # locale for system error message +lc_monetary = 'C.UTF-8' # locale for monetary formatting +lc_numeric = 'C.UTF-8' # locale for number formatting +lc_time = 'C.UTF-8' # locale for time formatting +default_text_search_config = 'pg_catalog.english' +include_dir = 'conf.d' # include files ending in '.conf' from +``` + +On voit un numéro de port et une mention des sockets Unix. Ça semble raccord avec le nom de fichier auquel tente d'accéder le client. + +A tout hasard on peut vérifier les ports TCP : + +```console +root@i-0b8ce19730a071b11:/# ss -lntp +State Recv-Q Send-Q Local Address:Port Peer Address:Port +LISTEN 0 128 0.0.0.0:22 0.0.0.0:* users:(("sshd",pid=616,fd=3)) +LISTEN 0 128 *:6767 *:* users:(("sadagent",pid=590,fd=7)) +LISTEN 0 128 *:8080 *:* users:(("gotty",pid=582,fd=6)) +LISTEN 0 128 [::]:22 [::]:* users:(("sshd",pid=616,fd=4)) +``` + +Rien du tout ici. Et comme on s'y attend, rien de plus pour le socket Unix : + +```console +root@i-0b8ce19730a071b11:/# ls /var/run/postgresql/ +14-main.pg_stat_tmp +``` + +En fin de compte, est-ce que PostgreSQL tourne ? + +```console +root@i-0b8ce19730a071b11:/# ps aux | grep -i postgr +root 890 0.0 0.1 4964 820 pts/0 S+ 09:09 0:00 grep -i postgr +``` + +Non. Voyons voir la liste des unités systemd avec la commande `systemctl list-units` : + +``` + postgresql.service loaded active exited PostgreSQL RDBMS +● postgresql@14-main.service loaded failed failed PostgreSQL Cluster 14-main +``` + +Il y a deux services dont l'un qui est en échec. On va se renseigner sur le status de chacun. + +```console +root@i-0b8ce19730a071b11:/# systemctl status postgresql.service +● postgresql.service - PostgreSQL RDBMS + Loaded: loaded (/lib/systemd/system/postgresql.service; enabled; vendor preset: enabled) + Active: active (exited) since Sat 2024-03-02 09:02:30 UTC; 9min ago + Process: 670 ExecStart=/bin/true (code=exited, status=0/SUCCESS) + Main PID: 670 (code=exited, status=0/SUCCESS) + +Mar 02 09:02:30 i-0b8ce19730a071b11 systemd[1]: Starting PostgreSQL RDBMS... +Mar 02 09:02:30 i-0b8ce19730a071b11 systemd[1]: Started PostgreSQL RDBMS. 
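+# note : ce service n'est qu'une coquille (ExecStart=/bin/true) ; le vrai serveur
+# est géré par l'unité postgresql@14-main examinée juste après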
+``` + +On voit sur le second service que la création d'un fichier échoue en raison d'un disque plein : + +```console +root@i-0b8ce19730a071b11:/# systemctl status postgresql@14-main.service +● postgresql@14-main.service - PostgreSQL Cluster 14-main + Loaded: loaded (/lib/systemd/system/postgresql@.service; enabled-runtime; vendor preset: enabled) + Active: failed (Result: protocol) since Sat 2024-03-02 09:12:49 UTC; 1min 45s ago + Process: 901 ExecStart=/usr/bin/pg_ctlcluster --skip-systemctl-redirect 14-main start (code=exited, status=1/FAILURE) + +Mar 02 09:12:49 i-0b8ce19730a071b11 systemd[1]: Starting PostgreSQL Cluster 14-main... +Mar 02 09:12:49 i-0b8ce19730a071b11 postgresql@14-main[901]: Error: /usr/lib/postgresql/14/bin/pg_ctl /usr/lib/postgresql/14/bin/pg_ctl start -D /opt/pgdata/main -l /var/log/postgresql/postgresql-14-main.log -s +Mar 02 09:12:49 i-0b8ce19730a071b11 postgresql@14-main[901]: 2024-03-02 09:12:49.122 UTC [906] FATAL: could not create lock file "postmaster.pid": No space left on device +Mar 02 09:12:49 i-0b8ce19730a071b11 postgresql@14-main[901]: pg_ctl: could not start server +Mar 02 09:12:49 i-0b8ce19730a071b11 postgresql@14-main[901]: Examine the log output. +Mar 02 09:12:49 i-0b8ce19730a071b11 systemd[1]: postgresql@14-main.service: Can't open PID file /run/postgresql/14-main.pid (yet?) after start: No such file or directory +Mar 02 09:12:49 i-0b8ce19730a071b11 systemd[1]: postgresql@14-main.service: Failed with result 'protocol'. +Mar 02 09:12:49 i-0b8ce19730a071b11 systemd[1]: Failed to start PostgreSQL Cluster 14-main. +``` + +J'ai pu récupérer la commande exacte que le service tente de lancer : + +```bash +/usr/lib/postgresql/14/bin/pg_ctl start -D /opt/pgdata/main \ + -l /var/log/postgresql/postgresql-14-main.log -s -o \ + -c 'config_file="/etc/postgresql/14/main/postgresql.conf"' +``` + +Voici l'aide pour l'explication des options : + +```console +postgres@i-0b8ce19730a071b11:/$ /usr/lib/postgresql/14/bin/pg_ctl --help +pg_ctl is a utility to initialize, start, stop, or control a PostgreSQL server. + +Usage: + pg_ctl init[db] [-D DATADIR] [-s] [-o OPTIONS] + pg_ctl start [-D DATADIR] [-l FILENAME] [-W] [-t SECS] [-s] + [-o OPTIONS] [-p PATH] [-c] + pg_ctl stop [-D DATADIR] [-m SHUTDOWN-MODE] [-W] [-t SECS] [-s] + pg_ctl restart [-D DATADIR] [-m SHUTDOWN-MODE] [-W] [-t SECS] [-s] + [-o OPTIONS] [-c] + pg_ctl reload [-D DATADIR] [-s] + pg_ctl status [-D DATADIR] + pg_ctl promote [-D DATADIR] [-W] [-t SECS] [-s] + pg_ctl logrotate [-D DATADIR] [-s] + pg_ctl kill SIGNALNAME PID + +Common options: + -D, --pgdata=DATADIR location of the database storage area + -s, --silent only print errors, no informational messages + -t, --timeout=SECS seconds to wait when using -w option + -V, --version output version information, then exit + -w, --wait wait until operation completes (default) + -W, --no-wait do not wait until operation completes + -?, --help show this help, then exit +If the -D option is omitted, the environment variable PGDATA is used. 
+ +Options for start or restart: + -c, --core-files allow postgres to produce core files + -l, --log=FILENAME write (or append) server log to FILENAME + -o, --options=OPTIONS command line options to pass to postgres + (PostgreSQL server executable) or initdb + -p PATH-TO-POSTGRES normally not necessary + +Options for stop or restart: + -m, --mode=MODE MODE can be "smart", "fast", or "immediate" + +Shutdown modes are: + smart quit after all clients have disconnected + fast quit directly, with proper shutdown (default) + immediate quit without complete shutdown; will lead to recovery on restart + +Allowed signal names for kill: + ABRT HUP INT KILL QUIT TERM USR1 USR2 + +Report bugs to . +PostgreSQL home page: +``` + +Lançons la commande directement pour voir si on reproduit puis jetons un œil aux disques : + +```console +postgres@i-0ae5a99cc5d8a1de9:/$ /usr/lib/postgresql/14/bin/pg_ctl start -D /opt/pgdata/main -l /var/log/postgresql/postgresql-14-main.log -s -o '-c config_file="/etc/postgresql/14/main/postgresql.conf"' +pg_ctl: could not start server +Examine the log output. +postgres@i-0ae5a99cc5d8a1de9:/$ tail /var/log/postgresql/postgresql-14-main.log +2024-03-02 09:34:19.689 UTC [904] FATAL: could not create lock file "postmaster.pid": No space left on device +postgres@i-0ae5a99cc5d8a1de9:/$ df -h +Filesystem Size Used Avail Use% Mounted on +udev 224M 0 224M 0% /dev +tmpfs 47M 1.5M 46M 4% /run +/dev/nvme1n1p1 7.7G 1.2G 6.1G 17% / +tmpfs 233M 0 233M 0% /dev/shm +tmpfs 5.0M 0 5.0M 0% /run/lock +tmpfs 233M 0 233M 0% /sys/fs/cgroup +/dev/nvme1n1p15 124M 278K 124M 1% /boot/efi +/dev/nvme0n1 8.0G 8.0G 28K 100% /opt/pgdata +tmpfs 47M 0 47M 0% /run/user/108 +``` + +Où se trouve normalement ce fichier `postmaster.pid` ? D'après [cette documentation](https://docs.postgresql.fr/current/server-start.html) : + +> Tant que le serveur est lancé, son pid est stocké dans le fichier `postmaster.pid` du répertoire de données. C'est utilisé pour empêcher plusieurs instances du serveur d'être exécutées dans le même répertoire de données et peut aussi être utilisé pour arrêter le processus le serveur. + +Par conséquent, ça correspond bien au dossier `/opt/pgdata` qui est plein. + +```console +postgres@i-0ae5a99cc5d8a1de9:/$ cd /opt/pgdata +postgres@i-0ae5a99cc5d8a1de9:/opt/pgdata$ du -h --max-depth 1 +50M ./main +8.0G . +postgres@i-0ae5a99cc5d8a1de9:/opt/pgdata$ ls -alh +total 8.0G +drwxr-xr-x 3 postgres postgres 82 May 21 2022 . +drwxr-xr-x 3 root root 4.0K May 21 2022 .. +-rw-r--r-- 1 root root 69 May 21 2022 deleteme +-rw-r--r-- 1 root root 7.0G May 21 2022 file1.bk +-rw-r--r-- 1 root root 923M May 21 2022 file2.bk +-rw-r--r-- 1 root root 488K May 21 2022 file3.bk +drwx------ 19 postgres postgres 4.0K May 21 2022 main +``` + +Il y a trois fichiers `.bk` qui prennent de la place. Il suffit de les supprimer. 
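+
+Par exemple (depuis le compte `admin`, qui dispose de sudo) :
+
+```bash
+sudo rm /opt/pgdata/*.bk    # libère l'essentiel des 8 Go du volume
+df -h /opt/pgdata           # vérifier qu'il reste de l'espace disponible
+```
+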
Après ça le service fonctionne normalement : + +```console +root@i-0ae5a99cc5d8a1de9:/# systemctl restart postgresql.service +root@i-0ae5a99cc5d8a1de9:/# sudo -u postgres psql -c "insert into persons(name) values ('jane smith');" -d dt +INSERT 0 1 +``` diff --git a/_posts/2024-02-03-Solution-du-challenge-Melbourne-de-SadServers.md b/_posts/2024-02-03-Solution-du-challenge-Melbourne-de-SadServers.md new file mode 100644 index 0000000..5560ab3 --- /dev/null +++ b/_posts/2024-02-03-Solution-du-challenge-Melbourne-de-SadServers.md @@ -0,0 +1,221 @@ +--- +title: "Solution du challenge Melbourne de SadServers.com" +tags: [CTF,AdminSys,SadServers +--- + +**Scenario:** "Melbourne": WSGI with Gunicorn + +**Level:** Medium + +**Type:** Fix + +**Tags:** [gunicorn](https://sadservers.com/tag/gunicorn) [nginx](https://sadservers.com/tag/nginx) [realistic-interviews](https://sadservers.com/tag/realistic-interviews) + +**Description:** There is a Python [WSGI](https://en.wikipedia.org/wiki/Web_Server_Gateway_Interface) web application file at `/home/admin/wsgi.py` , the purpose of which is to serve the string "Hello, world!". This file is served by a [Gunicorn](https://docs.gunicorn.org/en/stable/) server which is fronted by an nginx server (both servers managed by systemd). So the flow of an HTTP request is: Web Client (curl) -> Nginx -> Gunicorn -> wsgi.py . The objective is to be able to curl the localhost (on default port :80) and get back "Hello, world!", using the current setup. + +**Test:** `curl -s http://localhost` returns Hello, world! (serving the wsgi.py file via Gunicorn and Nginx) + +**Time to Solve:** 20 minutes. + +Voyons voir pourquoi ce serveur web ne fonctionne pas :) + +```console +admin@i-0488d0c89dff19acb:/$ curl -v http://localhost +* Trying 127.0.0.1:80... +* connect to 127.0.0.1 port 80 failed: Connection refused +* Failed to connect to localhost port 80: Connection refused +* Closing connection 0 +curl: (7) Failed to connect to localhost port 80: Connection refused +admin@i-0488d0c89dff19acb:/$ ps aux | grep -i "nginx|gunicorn" +admin 864 0.0 0.1 5276 704 pts/0 S<+ 15:50 0:00 grep -i nginx|gunicorn +``` + +Pour le moment, rien n'est lancé ! + +Le fichier `wsgi.py` est le suivant : + +```python +def application(environ, start_response): + start_response('200 OK', [('Content-Type', 'text/html'), ('Content-Length', '0'), ]) + return [b'Hello, world!'] +``` + +La directive `proxy_pass` définie dans `/etc/nginx/sites-enabled/default` m'a semblé étrange avec ses deux schemes mais elle est finalement légitime. 
+ +```nginx +server { + listen 80; + + location / { + include proxy_params; + proxy_pass http://unix:/run/gunicorn.socket; + } +} +``` + +J'ai trouvé une documentation qui correspond parfaitement à notre situation : + +[Deploying Gunicorn — Gunicorn 21.2.0 documentation](https://docs.gunicorn.org/en/stable/deploy.html#systemd) + +Le fichier de configuration `proxy_params` n'apporte rien de bien intéressant : + +```nginx +proxy_set_header Host $http_host; +proxy_set_header X-Real-IP $remote_addr; +proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; +proxy_set_header X-Forwarded-Proto $scheme; +``` + +Allons voir du côté de systemd avec le fichier `/etc/systemd/system/gunicorn.service` : + +```systemd +[Unit] +Description=gunicorn daemon +Requires=gunicorn.socket +After=network.target + +[Service] +User=admin +Group=admin +WorkingDirectory=/home/admin +ExecStart=/usr/local/bin/gunicorn \ + --bind unix:/run/gunicorn.sock \ + wsgi +Restart=on-failure + +[Install] +WantedBy=multi-user.target +``` + +Essayons de lancer la commande manuellement : + +```console +admin@i-0488d0c89dff19acb:~$ /usr/local/bin/gunicorn --bind unix:/run/gunicorn.sock wsgi +[2024-03-02 15:59:08 +0000] [963] [INFO] Starting gunicorn 20.1.0 +[2024-03-02 15:59:08 +0000] [963] [ERROR] Retrying in 1 second. +[2024-03-02 15:59:09 +0000] [963] [ERROR] Retrying in 1 second. +[2024-03-02 15:59:10 +0000] [963] [ERROR] Retrying in 1 second. +[2024-03-02 15:59:11 +0000] [963] [ERROR] Retrying in 1 second. +[2024-03-02 15:59:12 +0000] [963] [ERROR] Retrying in 1 second. +[2024-03-02 15:59:13 +0000] [963] [ERROR] Can't connect to /run/gunicorn.sock +``` + +L'entrée systemd repose sur une autre unité qui est `/etc/systemd/system/gunicorn.socket` : + +```systemd +[Unit] +Description=gunicorn socket + +[Socket] +ListenStream=/run/gunicorn.sock +# SocketUser=nginx + +[Install] +WantedBy=sockets.target +``` + +Comme `SocketUser` est commenté, systemctl va créer `/run/gunicorn.sock` avec le compte root : + +```console +admin@i-0488d0c89dff19acb:~$ sudo systemctl enable --now gunicorn.socket +admin@i-0488d0c89dff19acb:~$ ls -al /run/gunicorn.sock +srw-rw-rw- 1 root root 0 Mar 2 15:48 /run/gunicorn.sock +admin@i-0488d0c89dff19acb:~$ /usr/local/bin/gunicorn --bind unix:/run/gunicorn.sock wsgi +[2024-03-02 16:01:23 +0000] [992] [INFO] Starting gunicorn 20.1.0 +[2024-03-02 16:01:23 +0000] [992] [ERROR] Retrying in 1 second. +[2024-03-02 16:01:24 +0000] [992] [ERROR] Retrying in 1 second. +[2024-03-02 16:01:25 +0000] [992] [ERROR] Retrying in 1 second. +[2024-03-02 16:01:26 +0000] [992] [ERROR] Retrying in 1 second. +[2024-03-02 16:01:27 +0000] [992] [ERROR] Retrying in 1 second. +[2024-03-02 16:01:28 +0000] [992] [ERROR] Can't connect to /run/gunicorn.sock +``` + +Pour obtenir plus d'information, on relance la commande avec `--log-level DEBUG` : + +``` +[2024-03-02 16:23:51 +0000] [870] [DEBUG] connection to /run/gunicorn.sock failed: [Errno 13] Permission denied: '/run/gunicorn.sock' +[2024-03-02 16:23:51 +0000] [870] [ERROR] Retrying in 1 second. +[2024-03-02 16:23:52 +0000] [870] [ERROR] Can't connect to /run/gunicorn.sock +``` + +Je peux éditer `/etc/systemd/system/gunicorn.socket` pour mettre l'utilisateur `admin` comme `SocketUser` : + +```console +admin@i-097995a6b9078f235:/$ sudo systemctl disable --now gunicorn.socket +Removed /etc/systemd/system/sockets.target.wants/gunicorn.socket. 
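+# édition du unit : on décommente SocketUser et on y met « admin » (l'utilisateur du service gunicorn)
+# pour que le socket soit créé avec le bon propriétaire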
+admin@i-097995a6b9078f235:/$ vi /etc/systemd/system/gunicorn.socket +admin@i-097995a6b9078f235:/$ sudo vi /etc/systemd/system/gunicorn.socket +admin@i-097995a6b9078f235:/$ sudo systemctl enable --now gunicorn.socket +Created symlink /etc/systemd/system/sockets.target.wants/gunicorn.socket → /etc/systemd/system/gunicorn.socket. +admin@i-097995a6b9078f235:/$ ls /run/gunicorn.sock -al +srw-rw-rw- 1 admin admin 0 Mar 2 16:28 /run/gunicorn.sock +``` + +Ça marche en théorie ! Il faut aussi que le dossier `/run` soit disponible en écriture pour d'autres utilisateurs que `root` : + +```console +admin@i-097995a6b9078f235:/$ sudo chmod o+w run/ +admin@i-097995a6b9078f235:/$ /usr/local/bin/gunicorn --log-level DEBUG --bind unix:/run/gunicorn.sock wsgi +[2024-03-02 16:32:35 +0000] [10936] [DEBUG] Current configuration: + config: ./gunicorn.conf.py + wsgi_app: None + bind: ['unix:/run/gunicorn.sock'] +--- snip --- +[2024-03-02 16:36:55 +0000] [10958] [INFO] Starting gunicorn 20.1.0 +[2024-03-02 16:36:55 +0000] [10958] [DEBUG] Arbiter booted +[2024-03-02 16:36:55 +0000] [10958] [INFO] Listening at: unix:/run/gunicorn.sock (10958) +[2024-03-02 16:36:55 +0000] [10958] [INFO] Using worker: sync +[2024-03-02 16:36:55 +0000] [10959] [INFO] Booting worker with pid: 10959 +[2024-03-02 16:36:55 +0000] [10958] [DEBUG] 1 workers +``` + +Je peux maintenant lancer Nginx : + +```console +admin@i-097995a6b9078f235:~$ sudo systemctl start nginx +admin@i-097995a6b9078f235:~$ curl -s http://localhost + +502 Bad Gateway + +
+<center><h1>502 Bad Gateway</h1></center>
+<hr><center>nginx/1.18.0</center>
+</body>
+</html>
+ + +``` + +Si j'avais fait plus attention j'aurais vu dans sa configuration qu'il utilise un nom de socket un peu différent : + +```console +admin@i-097995a6b9078f235:~$ sudo tail /var/log/nginx/error.log +2024/03/02 16:38:27 [crit] 10976#10976: *1 connect() to unix:/run/gunicorn.socket failed (2: No such file or directory) while connecting to upstream, client: 127.0.0.1, server: , request: "GET / HTTP/1.1", upstream: "http://unix:/run/gunicorn.socket:/", host: "localhost" +``` + +Je remplace `gunicorn.socket` par `gunicorn.sock` dans la configuration du Nginx et je relance : + +```console +admin@i-00529e3f60fcd6fa9:/$ curl -D- http://127.0.0.1/ +HTTP/1.1 200 OK +Server: nginx/1.18.0 +Date: Sat, 02 Mar 2024 16:49:11 GMT +Content-Type: text/html +Content-Length: 0 +Connection: keep-alive +``` + +Pas de contenu. C'est parce que le script `wsgi.py` met `Content-Length` à 0. + +Une fois l'entête supprimé ça fonctionne : + +```console +admin@i-00529e3f60fcd6fa9:/$ sudo systemctl restart gunicorn.service +admin@i-00529e3f60fcd6fa9:/$ sudo systemctl restart nginx.service +admin@i-00529e3f60fcd6fa9:/$ curl -D- http://127.0.0.1/ +HTTP/1.1 200 OK +Server: nginx/1.18.0 +Date: Sat, 02 Mar 2024 16:51:34 GMT +Content-Type: text/html +Transfer-Encoding: chunked +Connection: keep-alive + +Hello, world! +``` diff --git a/_posts/2024-02-03-Solution-du-challenge-Oaxaca-de-SadServers.md b/_posts/2024-02-03-Solution-du-challenge-Oaxaca-de-SadServers.md new file mode 100644 index 0000000..9f6da16 --- /dev/null +++ b/_posts/2024-02-03-Solution-du-challenge-Oaxaca-de-SadServers.md @@ -0,0 +1,77 @@ +--- +title: "Solution du challenge Oaxaca de SadServers.com" +tags: [CTF,AdminSys,SadServers +--- + +**Scenario:** "Oaxaca": Close an Open File + +**Level:** Medium + +**Type:** Fix + +**Tags:** [bash](https://sadservers.com/tag/bash) [unusual-tricky](https://sadservers.com/tag/unusual-tricky) + +**Description:** The file `/home/admin/somefile` is open for writing by some process. Close this file without killing the process. + +**Test:** `lsof /home/admin/somefile` returns nothing. + +**Time to Solve:** 15 minutes. + +Voyons le processus qui utilise le fichier : + +```console +admin@i-05fd8a4af5206358f:/$ sudo lsof /home/admin/somefile +COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME +bash 794 admin 77w REG 259,1 0 272875 /home/admin/somefile +sudo 806 root 77w REG 259,1 0 272875 /home/admin/somefile +admin@i-05fd8a4af5206358f:/$ ps aux | grep "screen|tmux" +admin 811 0.0 0.1 5208 652 pts/0 S<+ 10:53 0:00 grep screen|tmux +``` + +Il s'agit de bash et je ne vois aucune session `screen` ni `tmux` présente... + +Je fouille dans le dossier de l'administrateur : + +```console +admin@i-05fd8a4af5206358f:/$ cd home/admin/ +admin@i-05fd8a4af5206358f:~$ ls -a +. .. .bash_history .bash_logout .bashrc .local .profile .selected_editor .ssh agent openfile.sh somefile +admin@i-05fd8a4af5206358f:~$ cat openfile.sh +#!/bin/bash +exec 66> /home/admin/somefile +``` + +La commande `exec` de bash a été utilisée pour ouvrir un descripteur de fichier. + +On peut normalement le fermer de cette façon : + +```console +admin@i-05fd8a4af5206358f:~$ exec 66>&- +admin@i-05fd8a4af5206358f:~$ sudo lsof /home/admin/somefile +COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME +bash 794 admin 77w REG 259,1 0 272875 /home/admin/somefile +sudo 833 root 77w REG 259,1 0 272875 /home/admin/somefile +``` + +Hmmm, ça n'a pas fonctionné. Pas le bon descripteur ? 
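+
+En relisant la sortie de `lsof`, la colonne FD donnait en fait déjà la réponse : `77w`, soit le descripteur 77 ouvert en écriture par notre shell (PID 794). On peut le confirmer en ciblant ce processus, par exemple :
+
+```bash
+sudo lsof -a -p 794 /home/admin/somefile   # -a combine les filtres (ET logique)
+```
+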
+ +Voyons voir les descripteurs du PID courant (bash) : + +```console +admin@i-05fd8a4af5206358f:~$ echo $$ +794 +admin@i-05fd8a4af5206358f:~$ ls /proc/794/fd -al +total 0 +dr-x------ 2 admin admin 0 Mar 2 10:52 . +dr-xr-xr-x 9 admin admin 0 Mar 2 10:52 .. +lrwx------ 1 admin admin 64 Mar 2 10:52 0 -> /dev/pts/0 +lrwx------ 1 admin admin 64 Mar 2 10:52 1 -> /dev/pts/0 +lrwx------ 1 admin admin 64 Mar 2 10:52 2 -> /dev/pts/0 +lrwx------ 1 admin admin 64 Mar 2 10:52 255 -> /dev/pts/0 +l-wx------ 1 admin admin 64 Mar 2 10:52 77 -> /home/admin/somefile +admin@i-05fd8a4af5206358f:~$ exec 77>&- +admin@i-05fd8a4af5206358f:~$ sudo lsof /home/admin/somefile +admin@i-05fd8a4af5206358f:~$ +``` + +Ici, il s'agissait du descripteur 77. diff --git a/_posts/2024-02-03-Solution-du-challenge-Salta-de-SadServers.md b/_posts/2024-02-03-Solution-du-challenge-Salta-de-SadServers.md new file mode 100644 index 0000000..358676c --- /dev/null +++ b/_posts/2024-02-03-Solution-du-challenge-Salta-de-SadServers.md @@ -0,0 +1,176 @@ +--- +title: "Solution du challenge Salta de SadServers.com" +tags: [CTF,AdminSys,SadServers +--- + +**Scenario:** "Salta": Docker container won't start. + +**Level:** Medium + +**Type:** Fix + +**Tags:** [docker](https://sadservers.com/tag/docker) [realistic-interviews](https://sadservers.com/tag/realistic-interviews) + +**Description:** There's a "dockerized" Node.js web application in the `/home/admin/app` directory. Create a Docker container so you get a web app on port *:8888* and can *curl* to it. For the solution to be valid, there should be only one running Docker container. + +**Test:** `curl localhost:8888` returns `Hello World!` from a running container. + +**Time to Solve:** 15 minutes. + +C'est parti pour du Docker ! + +```console +admin@i-0d07b2079856860d3:/$ cd home/admin/app/ +admin@i-0d07b2079856860d3:~/app$ ls -a +. .. .dockerignore Dockerfile package-lock.json package.json server.js +admin@i-0d07b2079856860d3:~/app$ cat .dockerignore +node_modules +npm-debug.log + +admin@i-0d07b2079856860d3:~/app$ cat Dockerfile +# documentation https://nodejs.org/en/docs/guides/nodejs-docker-webapp/ + +# most recent node (security patches) and alpine (minimal, adds to security, possible libc issues) +FROM node:15.7-alpine + +# Create app directory & copy app files +WORKDIR /usr/src/app + +# we copy first package.json only, so we take advantage of cached Docker layers +COPY ./package*.json ./ + +# RUN npm ci --only=production +RUN npm install + +# Copy app source +COPY ./* ./ + +# port used by this app +EXPOSE 8880 + +# command to run +CMD [ "node", "serve.js" ] +``` + +À première vua tout semble bon. 
Voyons voir s'il y a encore un conteneur présent : + +```console +admin@i-0d07b2079856860d3:~/app$ sudo docker images -a +REPOSITORY TAG IMAGE ID CREATED SIZE + a6ee5c4d5a96 17 months ago 124MB + 0b18357df7c9 17 months ago 124MB +app latest 1d782b86d6f2 17 months ago 124MB + 5cad5aa08c7a 17 months ago 124MB + acfb467c80ba 17 months ago 110MB + 463b1571f18e 17 months ago 110MB +node 15.7-alpine 706d12284dd5 3 years ago 110MB +admin@i-0d07b2079856860d3:~/app$ sudo docker ps -a +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +124a4fb17a1c app "docker-entrypoint.s…" 17 months ago Exited (1) 17 months ago elated_taussig +admin@i-0d07b2079856860d3:~/app$ sudo docker logs 124a4fb17a1c +node:internal/modules/cjs/loader:928 + throw err; + ^ + +Error: Cannot find module '/usr/src/app/serve.js' + at Function.Module._resolveFilename (node:internal/modules/cjs/loader:925:15) + at Function.Module._load (node:internal/modules/cjs/loader:769:27) + at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:76:12) + at node:internal/main/run_main_module:17:47 { + code: 'MODULE_NOT_FOUND', + requireStack: [] +``` + +Là, il fallait être attentif au détail : Node tente d'exécuter `serve.js` alors que le fichier s'appelle `server.js` (une lettre en plus). + +On corrige le Dockerfile, on recrée l'image et on relance : + +```console +root@i-0817bf54eaf7b54d1:/home/admin/app# docker build -t app . +Sending build context to Docker daemon 101.9kB +Step 1/7 : FROM node:15.7-alpine + ---> 706d12284dd5 +Step 2/7 : WORKDIR /usr/src/app + ---> Using cache + ---> 463b1571f18e +Step 3/7 : COPY ./package*.json ./ + ---> Using cache + ---> acfb467c80ba +Step 4/7 : RUN npm install + ---> Using cache + ---> 5cad5aa08c7a +Step 5/7 : COPY ./* ./ + ---> 853722d2e8fc +Step 6/7 : EXPOSE 8880 + ---> Running in ef29a0a042f6 +Removing intermediate container ef29a0a042f6 + ---> 509af769a64d +Step 7/7 : CMD [ "node", "server.js" ] + ---> Running in 6c71c9461ed1 +Removing intermediate container 6c71c9461ed1 + ---> f171ac81321b +Successfully built f171ac81321b +Successfully tagged app:latest +root@i-0817bf54eaf7b54d1:/home/admin/app# docker run -d app +be3744800d849d25d5a801c5a32f7a271d99b9b7bf02cb04620972e3bca10939 +root@i-0817bf54eaf7b54d1:/home/admin/app# curl localhost:8888 +these are not the droids you're looking for +``` + +Cette fois, on n'obtient pas le contenu espéré... + +Le script est pourtant tout simple : + +```js +var express = require('express'), + app = express(), + port = process.env.PORT || 8888, + bodyParser = require('body-parser'); + +app.use(bodyParser.urlencoded({ extended: true })); +app.use(bodyParser.json()); + +app.get('/', function (req, res) { + res.send('Hello World!') + }) + +app.use(function(req, res) { + res.status(404).send({url: req.originalUrl + ' not found'}) + }); + +app.listen(port); + +console.log('Server Started on: ' + port); +``` + +Ah, mais oui ! J'ai oublié de rendre le port du conteneur accessible. 
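+
+On peut d'ailleurs le vérifier : aucun port n'étant publié vers l'hôte, `docker port` ne renvoie rien pour ce conteneur.
+
+```bash
+docker port be3744800d84    # sortie vide : aucun port publié
+```
+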
Actuellement le port est utilisé par un autre service : + +```console +root@i-0817bf54eaf7b54d1:/home/admin/app# ss -lntp +State Recv-Q Send-Q Local Address:Port Peer Address:Port Process +LISTEN 0 128 0.0.0.0:22 0.0.0.0:* users:(("sshd",pid=588,fd=3)) +LISTEN 0 511 0.0.0.0:8888 0.0.0.0:* users:(("nginx",pid=608,fd=6),("nginx",pid=607,fd=6),("nginx",pid=606,fd=6)) +LISTEN 0 4096 *:6767 *:* users:(("sadagent",pid=561,fd=7)) +LISTEN 0 4096 *:8080 *:* users:(("gotty",pid=560,fd=6)) +LISTEN 0 128 [::]:22 [::]:* users:(("sshd",pid=588,fd=4)) +LISTEN 0 511 [::]:8888 [::]:* users:(("nginx",pid=608,fd=7),("nginx",pid=607,fd=7),("nginx",pid=606,fd=7)) +``` + +Actuellement, on tape sur Nginx : + +```console +root@i-0817bf54eaf7b54d1:/home/admin/app# cat /var/www/html/index.nginx-debian.html +these are not the droids you're looking for +``` + +Du coup, je stoppe Nginx et relance Docker comme il faut : + +```console +root@i-0817bf54eaf7b54d1:/home/admin/app# systemctl stop nginx +root@i-0817bf54eaf7b54d1:/home/admin/app# docker stop be3744800d84 +be3744800d84 +root@i-0817bf54eaf7b54d1:/home/admin/app# docker run -p 8888:8888 -d app +77a84a67da3fd733c38b59c5e1ea366cbf6a1b190cceaca50050ee767438b6af +root@i-0817bf54eaf7b54d1:/home/admin/app# curl localhost:8888 +Hello World! +``` diff --git a/_posts/2024-02-03-Solution-du-challenge-Tokyo-de-SadServers.md b/_posts/2024-02-03-Solution-du-challenge-Tokyo-de-SadServers.md new file mode 100644 index 0000000..9e77f3c --- /dev/null +++ b/_posts/2024-02-03-Solution-du-challenge-Tokyo-de-SadServers.md @@ -0,0 +1,94 @@ +--- +title: "Solution du challenge Tokyo de SadServers.com" +tags: [CTF,AdminSys,SadServers] +--- + +**Scenario:** "Tokyo": can't serve web file + +**Level:** Medium + +**Type:** Fix + +**Tags:** [apache](https://sadservers.com/tag/apache) [realistic-interviews](https://sadservers.com/tag/realistic-interviews) + +**Description:** There's a web server serving a file `/var/www/html/index.html` with content "hello sadserver" but when we try to check it locally with an HTTP client like `curl 127.0.0.1:80`, nothing is returned. This scenario is not about the particular web server configuration and you only need to have general knowledge about how web servers work. + +**Test:** `curl 127.0.0.1:80` should return: `hello sadserver` + +**Time to Solve:** 15 minutes. + +Voyons ce qu'il se passe avec ce serveur web... + +Un `curl` montre que le serveur ne répond pas (timeout). Deux possibilités, soit le serveur fait tourner un script qui prend une éternité, soit le pare-feu bloque notre requête. + +```console +root@ip-172-31-21-14:/# curl -v 127.0.0.1:80 +* Trying 127.0.0.1:80... +^C +root@ip-172-31-21-14:/# iptables -L +Chain INPUT (policy ACCEPT) +target prot opt source destination +DROP tcp -- anywhere anywhere tcp dpt:http + +Chain FORWARD (policy ACCEPT) +target prot opt source destination + +Chain OUTPUT (policy ACCEPT) +target prot opt source destination +``` + +C'était donc la seconde hypothèse. On va supprimer les règles du pare-feu et retenter : + +```console +root@ip-172-31-21-14:/# iptables -F +root@ip-172-31-21-14:/# curl -v 127.0.0.1:80 +* Trying 127.0.0.1:80... +* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0) +> GET / HTTP/1.1 +> Host: 127.0.0.1 +> User-Agent: curl/7.81.0 +> Accept: */* +> +* Mark bundle as not supporting multiuse +< HTTP/1.1 403 Forbidden +< Date: Sat, 02 Mar 2024 09:43:44 GMT +< Server: Apache/2.4.52 (Ubuntu) +< Content-Length: 274 +< Content-Type: text/html; charset=iso-8859-1 +< + + +403 Forbidden + +
+<h1>Forbidden</h1>
+<p>You don't have permission to access this resource.</p>
+<hr>
+<address>Apache/2.4.52 (Ubuntu) Server at 127.0.0.1 Port 80</address>
+</body></html>
+ +* Connection #0 to host 127.0.0.1 left intact +``` + +Cette fois, j'ai un problème d'accès. On doit avoir des pistes dans `/etc/apache2/apache2.conf`. + +```apacheconf + + Options FollowSymLinks + AllowOverride None + Require all denied + +``` + +Ce `denied` ne me dit rien de bon, je le change par `granted`. + +Je vois aussi que le fichier d'index est lisible uniquement par root, on va corriger ça : + +```console +root@ip-172-31-21-14:/etc/apache2# ls -al /var/www/html/ +total 12 +drwxr-xr-x 2 root root 4096 Aug 1 2022 . +drwxr-xr-x 3 root root 4096 Aug 1 2022 .. +-rw------- 1 root root 16 Aug 1 2022 index.html +root@ip-172-31-21-14:/etc/apache2# chmod o+r /var/www/html/index.html +root@ip-172-31-21-14:/etc/apache2# curl 127.0.0.1:80 +hello sadserver +``` diff --git a/_posts/2024-03-01-Solution-des-scenarios-Easy-de-SadServers.md b/_posts/2024-03-01-Solution-des-scenarios-Easy-de-SadServers.md index 462e7fe..4585a6a 100644 --- a/_posts/2024-03-01-Solution-des-scenarios-Easy-de-SadServers.md +++ b/_posts/2024-03-01-Solution-des-scenarios-Easy-de-SadServers.md @@ -1,6 +1,6 @@ --- title: "Solution des scénarios Easy de SadServers.com" -tags: [CTF,AdminSys] +tags: [CTF,AdminSys,SadServers] --- [SadServers](https://sadservers.com/) change des CTF de hacking classique : ici, vous avez à disposition un serveur cassé (avec une fonction qui ne marche pas comme il faut) et votre mission est de le réparer.