Add try/except to catch when vm is off #135
Conversation
Commits (all by dependabot[bot]):
- Bump [aiohttp](https://github.com/aio-libs/aiohttp) from 3.8.5 to 3.9.0 ([release notes](https://github.com/aio-libs/aiohttp/releases), [changelog](https://github.com/aio-libs/aiohttp/blob/master/CHANGES.rst))
- Bump [urllib3](https://github.com/urllib3/urllib3) from 1.26.16 to 1.26.18 ([release notes](https://github.com/urllib3/urllib3/releases), [changelog](https://github.com/urllib3/urllib3/blob/main/CHANGES.rst))
- Bump [pyarrow](https://github.com/apache/arrow) from 13.0.0 to 14.0.1
- Bump [scipy](https://github.com/scipy/scipy) from 1.9.3 to 1.10.0 ([release notes](https://github.com/scipy/scipy/releases))
- Bump [langchain](https://github.com/langchain-ai/langchain) from 0.0.327 to 0.0.329 ([release notes](https://github.com/langchain-ai/langchain/releases))
reginald/slack_bot/bot.py (outdated)

```python
if model_response.status_code != 200:
    raise ValueError("Unable to get response.")
```
Do we still need this bit?

I guess there are two scenarios here: 1) the VM is off (which gives a connection timeout error), and 2) the VM is on but for some reason the model API isn't working (what error is returned here?).

Either way, maybe this would be good to become a logged error too, something like:

```python
if model_response.status_code != 200:
    err = f"Unable to get response. Returned response code: {model_response.status_code}"
    logging.error(err)
    raise ValueError(err)
```

Also (I know I was the one who put `ValueError`, but) maybe it should be `model_response.raise_for_status()` instead, as I think that just raises the appropriate error directly.
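A minimal sketch of the suggestion above, assuming `model_response` is a `requests.Response` (the helper name is hypothetical, not the bot's actual code):

```python
import logging

import requests


def check_model_response(model_response: requests.Response) -> None:
    # Log the failure before raising so it ends up in the bot's logs.
    if model_response.status_code != 200:
        err = f"Unable to get response. Returned response code: {model_response.status_code}"
        logging.error(err)
        # raise_for_status() raises requests.HTTPError for 4xx/5xx
        # responses, so callers can catch a single exception type.
        model_response.raise_for_status()
```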
Good point! I think we do actually need something to raise an error so that we can catch it and log it. I think we could just replace all of this snippet with `model_response.raise_for_status()`?
So if there is an error, we jump to the `except Exception as err:` block, log it, and create an appropriate response.
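A sketch of that flow (the URL, payload shape, and reply messages here are illustrative assumptions, not the bot's actual code):

```python
import logging

import requests


def ask_model(message: str, url: str = "http://localhost:8000/query") -> str:
    try:
        model_response = requests.post(url, json={"message": message}, timeout=60)
        # Raises requests.HTTPError for any 4xx/5xx status.
        model_response.raise_for_status()
        return model_response.json()["response"]
    except requests.exceptions.ConnectionError as err:
        # VM is off: the connection fails before any HTTP status exists.
        logging.error(f"Unable to connect to model API: {err}")
        return "Sorry, the model is currently offline."
    except Exception as err:
        # VM is on but the call failed (bad status, timeout, bad JSON, ...).
        logging.error(f"Model API error: {err}")
        return "Sorry, something went wrong when calling the model."
```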
Nice, I just had a go and it seems to work.

The only thing is that the out-of-office response is kind of slow. I wonder if we could have a ping-type check happen before we call the actual model, to get a quicker response. I tried adding a timeout, but because the model can be quite slow to reply, it gave me a timeout response even though the model was working.
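One way to sketch that ping idea (the health-check URL and timeout value are assumptions): probe the VM with a short timeout before sending the real, potentially slow, query:

```python
import requests


def vm_is_reachable(url: str, timeout: float = 2.0) -> bool:
    # A liveness probe only needs the VM to accept the connection, so a
    # short timeout is safe even when the model itself replies slowly.
    try:
        requests.get(url, timeout=timeout)
        return True
    except (requests.exceptions.ConnectionError, requests.exceptions.Timeout):
        return False
```

If the probe fails, the bot could send the out-of-office reply immediately instead of waiting for the full request to time out.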
Working to finish #127 by adding a try/except to catch timeout errors when the VM for the LLM is off.