Large threaded, Kubernetes scrape = Target page, context or browser has been closed #2563
JoshuaPerk started this conversation in General
Replies: 1 comment, 1 reply
-
It's hard to help with such a production issue without any reproduction. What I would try is using different Playwright versions, as the …
-
Which package is this bug report for? If unsure which one to select, leave blank
@crawlee/playwright (PlaywrightCrawler)
Issue description
I'm: …
Each thread: …
The Problem
This works flawlessly for about 60 minutes; after that, I get plagued with `Target page, context or browser has been closed`. It first presents itself at around the one-hour mark and then increases in frequency until I'm getting more failed records than successful ones (at which point I kill or restart the cluster).
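To make that decay visible, something like the following could count these specific failures (a rough sketch; the counter and the error-message match are illustrative, not my actual code):

```ts
import { PlaywrightCrawler } from 'crawlee';

// Hypothetical counter for triage; not part of the production setup.
let closedBrowserErrors = 0;

const crawler = new PlaywrightCrawler({
    async requestHandler({ request, log }) {
        log.info(`Scraping ${request.url}`);
        // ... per-page scraping logic ...
    },
    // Runs once a request has exhausted all of its retries.
    failedRequestHandler: async ({ request, log }, error) => {
        if (error.message.includes('Target page, context or browser has been closed')) {
            closedBrowserErrors += 1;
            log.error(`Closed-browser failures so far: ${closedBrowserErrors} (${request.url})`);
        }
    },
});
```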
What I've tried:
- `browserPoolOptions` like `retireBrowserAfterPageCount: 100` and `closeInactiveBrowserAfterSecs: 200`
- `await crawler.teardown();` in hopes that this would clear any sort of cache/memory that could be stacking up (see the sketch after this list)

I suspect there's a leak or a store not being cleared out, since it happens gradually?
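For reference, this is roughly how those options fit together (a sketch with a placeholder handler and start URL, not my production setup):

```ts
import { PlaywrightCrawler } from 'crawlee';

const crawler = new PlaywrightCrawler({
    browserPoolOptions: {
        // Retire each browser after it has served 100 pages...
        retireBrowserAfterPageCount: 100,
        // ...and close browsers that sit idle for 200 seconds.
        closeInactiveBrowserAfterSecs: 200,
    },
    async requestHandler({ request, log }) {
        log.info(`Scraping ${request.url}`);
        // ... per-page scraping logic ...
    },
});

await crawler.run(['https://example.com']); // placeholder start URL

// Tear down between batches, hoping to release any accumulated state.
await crawler.teardown();
```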
Code sample
No response
Package version
"crawlee": "^3.9.2",
Node.js version
v20.14.0
Operating system
linux/amd64
Apify platform
I have tested this on the `next` release
No response
Other context
No response