I have the dynolog daemon running in container A. A Python process is running in container B. Is there a way for `dyno gputrace` from container A to collect a profile for the Python process running in container B? If it's possible, can you please share an example?
By using a hostPath volume, you can mount the dynolog socket directory from Container A into Container B (where the Python process runs), and then inject the necessary environment variables to complete the setup:
```yaml
env:
  - name: KINETO_USE_DAEMON
    value: "1"
  - name: KINETO_IPC_SOCKET_DIR
    value: /run/dyno
volumes:
  - hostPath:
      path: /run/dyno
      type: DirectoryOrCreate
    name: dyno-socket-dir
```
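For context, here is a minimal, untested sketch of how those pieces might fit into a full Pod spec for Container B. The pod name, container name, image, and command are placeholders, and it assumes the dynolog daemon in Container A exposes its IPC socket under `/run/dyno` on the node:

```yaml
# Hypothetical Pod spec for the profiled Python workload (Container B).
# Assumes the dynolog daemon (Container A) already creates its IPC socket
# under /run/dyno on the same node.
apiVersion: v1
kind: Pod
metadata:
  name: pytorch-worker                     # placeholder name
spec:
  containers:
    - name: trainer                        # placeholder container name
      image: my-registry/pytorch-app:latest   # placeholder image
      command: ["python", "train.py"]      # placeholder workload
      env:
        - name: KINETO_USE_DAEMON
          value: "1"
        - name: KINETO_IPC_SOCKET_DIR
          value: /run/dyno
      volumeMounts:
        - name: dyno-socket-dir
          mountPath: /run/dyno             # make the shared socket visible inside Container B
  volumes:
    - name: dyno-socket-dir
      hostPath:
        path: /run/dyno
        type: DirectoryOrCreate
```

Once the socket directory is shared this way, the trace should be triggerable from Container A with the usual client command, e.g. something like `dyno gputrace --log-file /tmp/trace.json` (check `dyno gputrace --help` for the exact options in your version).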
Hi guys, this issue is still open. Just wanted to share an alternative: ship the dynolog binary inside the application container itself, conceptually something like `dynolog ... & ./myapp`, which ensures both processes share the same namespaces. To trigger a trace, you would then point the dyno client at that container's IP address.
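To make that concrete, here is a rough sketch of the same-container setup. The script paths, trace file, and IP address are placeholders, and the exact dyno CLI flags may differ across versions, so treat this as an illustration rather than a verified recipe:

```bash
# Entrypoint of the application container: run dynolog next to the workload
# so both processes share the same PID/network namespaces.
export KINETO_USE_DAEMON=1          # tell Kineto to register with the daemon
dynolog --enable_ipc_monitor &      # start the daemon with IPC monitoring enabled
python train.py                     # placeholder for ./myapp
```

The trace is then requested from wherever you run the dyno client, pointing it at the container's IP address (placeholder below); if I remember correctly the client exposes a `--hostname` option, but verify with `dyno --help`:

```bash
dyno --hostname 10.0.0.12 gputrace --log-file /tmp/trace.json
```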
Sharing sockets also works :) Please let us know if there are any other concerns.