[API_PARSER][CROWDSTRIKE] Fix offset in case of >=100 logs gathering #498

Merged (11 commits, Feb 26, 2025)
1 change: 1 addition & 0 deletions CHANGELOG
@@ -9,6 +9,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Fixed
- [API_PARSER] [NETSKOPE] Correctly update the last_collected_timestamp, even when no logs are received
- [API_PARSER] [CYBEREASON] Avoid None evaluation of log objects in format_log for malops
- [API_PARSER] [CROWDSTRIKE] Fix infinite loop when more than 100 logs (the default limit) are gathered in one request


## [2.22.0] - 2025-02-25
15 changes: 7 additions & 8 deletions vulture_os/toolkit/api_parser/crowdstrike/crowdstrike.py
@@ -143,10 +143,9 @@ def __execute_query(self, method, url, query, timeout=10):
break # no error we break from the loop

if response.status_code not in [200, 201]:
logger.error(
f"[{__parser__}][__execute_query]: Error at Crowdstrike API Call URL: {url} Code: {response.status_code} Content: {response.content}", extra={'frontend': str(self.frontend)}
)
return {}
msg = f"[{__parser__}][__execute_query]: Error at Crowdstrike API Call URL: {url} Code: {response.status_code} Content: {response.content}"
logger.error(msg, extra={'frontend': str(self.frontend)})
raise Exception(msg)
return response.json()

def unionDict(self, dictBase, dictToAdd):
@@ -171,9 +170,9 @@ def execute_query(self, method, url, query={}, timeout=10):
# we retrieved enough data
if(customLimit > 0 and customLimit <= len(jsonResp['resources'])):
break
query['offset'] = int(jsonResp['meta']['pagination']['offset'])
jsonAdditionalResp = self.__execute_query(
method, url, query, timeout=timeout)
query['offset'] = len(jsonResp['resources'])
jsonAdditionalResp = self.__execute_query(method, url, query, timeout=timeout)
self.update_lock()
jsonResp = self.unionDict(jsonResp, jsonAdditionalResp)
#jsonResp += [jsonAdditionalResp]
return jsonResp
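The hunk above is the core of the fix: the old code set the next `offset` from the server-returned pagination metadata, which could hand back the same page indefinitely once more than 100 resources matched; the new code offsets by the number of resources already gathered. A minimal, self-contained sketch of that pagination pattern (the `fetch_page` stub and its fake data are illustrative, not the actual CrowdStrike API):

```python
# Sketch of offset pagination as fixed in this PR: the next offset is the
# count of resources already gathered, not the offset echoed by the server.

def fetch_page(offset, limit=100, total=250):
    # Stand-in for __execute_query: returns a slice of `total` fake log IDs
    # plus pagination metadata, mimicking the API response shape.
    resources = list(range(offset, min(offset + limit, total)))
    return {"resources": resources,
            "meta": {"pagination": {"total": total, "offset": offset}}}

def gather_all(limit=100, total=250):
    resp = fetch_page(0, limit, total)
    gathered = list(resp["resources"])
    while len(gathered) < resp["meta"]["pagination"]["total"]:
        # Fixed behaviour: advance by the number of logs already collected.
        page = fetch_page(len(gathered), limit, total)
        if not page["resources"]:
            break  # safety net: stop if the API stops returning data
        gathered.extend(page["resources"])
    return gathered
```

With 250 total items and a page size of 100, `gather_all` makes three requests (offsets 0, 100, 200) and terminates, whereas reusing the server's echoed offset of 0 would re-fetch the first page forever.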
@@ -294,7 +293,7 @@ def test(self):
try:
logger.debug(f"[{__parser__}][test]:Running tests...", extra={'frontend': str(self.frontend)})

query_time = (timezone.now() - timedelta(days=3)).strftime("%Y-%m-%dT%H:%M:%SZ")
query_time = (timezone.now() - timedelta(days=1)).strftime("%Y-%m-%dT%H:%M:%SZ")
to = timezone.now().strftime("%Y-%m-%dT%H:%M:%SZ")

logs = self.get_logs("details", query_time, to)
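The first hunk in `crowdstrike.py` also changes error handling: on a non-2xx status the query helper now raises instead of returning `{}`, so callers can no longer silently iterate over an empty result. A hedged sketch of that pattern (`FakeResponse` and `execute_query` here are stand-ins, not the parser's real classes):

```python
# Sketch of the error-handling change: raise on a non-2xx response rather
# than returning an empty dict that callers might silently iterate over.

class FakeResponse:
    def __init__(self, status_code, content=b""):
        self.status_code = status_code
        self.content = content

def execute_query(response):
    if response.status_code not in [200, 201]:
        msg = f"Error at API call, code: {response.status_code}, content: {response.content}"
        # Previously: return {}  -> the failure was invisible to callers.
        raise Exception(msg)
    return {"resources": []}
```

A caller that previously looped over an empty dict now gets an immediate, logged exception, which is what makes the failure visible in the collector's status.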