[Reporting] Stabilize CSV export tests #112204
Conversation
-const fromTime = 'Apr 27, 2019 @ 23:56:51.374';
-const toTime = 'Aug 23, 2019 @ 16:18:51.821';
+const fromTime = 'Jun 20, 2019 @ 00:00:00.000';
+const toTime = 'Jun 25, 2019 @ 00:00:00.000';
The earlier time range created a search that matched 4675 hits; the new one matches 720.
I was only able to reliably reproduce the buggy behaviour with larger CSV exports. I am concerned that if we reduce the number of hits from the search we might hide issues like this in future.
I understand this runs against the changes here that introduced a full snapshot check, but perhaps we could do both tests? One checking a full snapshot (high resolution) and one checking a large export count (with lower resolution snapshot, first 10 and last 10 lines as we had it before).
Let me know what you think!
describe('Generation from Job Params', () => {
All of this removed code used the deprecated export type, and was moved to the new test file: x-pack/test/reporting_api_integration/reporting_and_security/generate_csv_discover_deprecated.ts
@@ -165,23 +164,21 @@ export default function ({ getService }: FtrProviderContext) {
  describe('Discover: Generate CSV report', () => {
    it('does not allow user that does not have the role-based privilege', async () => {
      const res = await reportingAPI.generateCsv(
Decided to move the `username` and `password` parameters to the end of the arguments. This will be useful for more `csv_searchsource` testing in later PRs.
Pinging @elastic/kibana-app-services (Team:AppServices)
Pinging @elastic/kibana-reporting-services (Team:Reporting Services)
@@ -345,6 +345,15 @@ export class CsvGenerator {
        break;
      }

      // TODO check for shard failures, log them and add a warning if found
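As a hedged sketch of what that TODO could look like (the `ShardStats` shape and the function name here are illustrative, not the actual Kibana implementation), a check against the `_shards` section of a search response might be:

```typescript
// Illustrative only: this interface mirrors the `_shards` section
// of an Elasticsearch search response body.
interface ShardStats {
  total: number;
  successful: number;
  failed: number;
}

// Returns a warning string when some shards failed, else null.
// A non-zero `failed` count means some shards contributed no hits,
// so the exported CSV may be missing rows.
function getShardFailureWarning(shards: ShardStats): string | null {
  if (shards.failed > 0) {
    return `${shards.failed} of ${shards.total} shards failed; the CSV export may be incomplete`;
  }
  return null;
}

console.log(getShardFailureWarning({ total: 5, successful: 3, failed: 2 }));
// → "2 of 5 shards failed; the CSV export may be incomplete"
console.log(getShardFailureWarning({ total: 5, successful: 5, failed: 0 }));
// → null
```

Surfacing the warning in the report output (rather than only logging it) would let users know their export is partial instead of silently missing rows.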
👍🏻
This is looking great @tsullivan ! I left a few comments that I'd like to get your thoughts on before merging.
      });

      it('generates a report from a new search with data: discover:searchFieldsFromSource', async () => {
        await setFieldsFromSource(true);
        await PageObjects.discover.clickNewSearchButton();
-       await PageObjects.reporting.setTimepickerInDataRange();
+       await PageObjects.reporting.setTimepickerInEcommerceDataRange();
Just after this line it might be useful to add an assertion against the number of hits returned by the search:
expect(await PageObjects.discover.getHitCount()).to.equal('4,675');
That way, if this fails in future, we will know that at least the Discover search is probably working as expected.
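One detail worth noting about the suggested assertion: Discover renders the hit count with a thousands separator, so the expected value is the string `'4,675'` rather than the number 4675. A minimal sketch of that formatting (the helper name is illustrative; Discover does its own rendering internally):

```typescript
// Sketch: Discover-style hit count rendering with a thousands separator,
// which is why getHitCount() comparisons use strings like '4,675'.
function formatHitCount(count: number): string {
  return count.toLocaleString('en-US');
}

console.log(formatHitCount(4675)); // → "4,675"
console.log(formatHitCount(720));  // → "720"
```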
As much as I don't like to use Reporting tests for finding issues with other domains or services, I agree with you since it seems there aren't any other features (or tests) that use _scroll the way that Reporting uses it.
@elasticmachine merge upstream
Thanks for addressing my feedback @tsullivan !
> As much as I don't like to use Reporting tests for finding issues with other domains or services, I agree with you since it seems there aren't any other features (or tests) that use _scroll the way that Reporting uses it.

That concern makes sense to me. Perhaps this is an improvement we should follow up on with the ES side (i.e., adding tests there for larger scrolls). We could open an issue about it. What do you think?
💚 Build Succeeded
* [Reporting] Stabilize CSV export tests
* add debugging logging for results metadata
* restore accidentally deleted tests
* restore "large export" test
* remove redundant availability test
* do not filter and re-save
* fix getHitCount
* fix large export test
* skip large export test :(

Co-authored-by: Kibana Machine <[email protected]>
I'm not sure we need an issue to follow up on things right now, since it looks like the teams are actively working on resolving the issue. Here is another issue where the work is happening: https://github.com/elastic/machine-learning-qa/issues/1125
* [Reporting] Stabilize CSV export tests (#112204)
  * [Reporting] Stabilize CSV export tests
  * add debugging logging for results metadata
  * restore accidentally deleted tests
  * restore "large export" test
  * remove redundant availability test
  * do not filter and re-save
  * fix getHitCount
  * fix large export test
  * skip large export test :(

  Co-authored-by: Kibana Machine <[email protected]>
* update snapshots for discover tests
* update test snapshots

Co-authored-by: Kibana Machine <[email protected]>
We have added a test that verifies scroll and search_after requests against a large index.
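For context on the difference being exercised there: `_scroll` holds a server-side cursor open between pages, while `search_after` is stateless, with each page asking for documents after the previous page's last sort key. A minimal in-memory sketch of the `search_after` pattern (this is not the real Elasticsearch client API, just the pagination idea):

```typescript
// Sketch of search_after-style pagination over a sorted dataset. Each page
// is a stateless "give me docs after this sort key" query, which is why it
// scales better than a server-held scroll cursor for large exports.
type Doc = { id: number };

function searchAfterPage(docs: Doc[], afterId: number | null, size: number): Doc[] {
  const sorted = [...docs].sort((a, b) => a.id - b.id);
  const start = afterId === null ? 0 : sorted.findIndex((d) => d.id > afterId);
  if (start === -1) return [];
  return sorted.slice(start, start + size);
}

// Page through six docs two at a time.
const data: Doc[] = [5, 3, 1, 6, 2, 4].map((id) => ({ id }));
let after: number | null = null;
const pages: number[][] = [];
for (;;) {
  const page = searchAfterPage(data, after, 2);
  if (page.length === 0) break;
  pages.push(page.map((d) => d.id));
  after = page[page.length - 1].id;
}
console.log(pages); // pages is [[1, 2], [3, 4], [5, 6]]
```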
A CSV Export issue was confirmed by changing the test server's reporting settings to their defaults. Under the default settings (no artificially low `maxSizeBytes` setting), most CSV Export tests exported 4675 documents. That amount of data triggered some weak performance in Elasticsearch for the `_scroll` API, and the Reporting tests were failing essentially due to shard failures.

This PR stabilizes the CSV tests. It does so by adjusting the tests to use a smaller date range and return a smaller number of total records. This goal is from here.