Mirror of https://github.com/XuehaiPan/nvitop.git, synced 2026-05-15 14:15:55 -06:00
[GH-ISSUE #103] [Question] Memory bandwidth utilization of GPUs? #62
Originally created by @walkieq on GitHub (Oct 24, 2023).
Original GitHub issue: https://github.com/XuehaiPan/nvitop/issues/103
Originally assigned to: @XuehaiPan on GitHub.
Questions
Is there a way to measure the runtime memory bandwidth utilization of GPUs?
Or is it possible to estimate/calculate the memory bandwidth utilization using the numbers reported in nvitop?
@XuehaiPan commented on GitHub (Oct 24, 2023):
@walkieq The memory bandwidth utilization rate can be retrieved by:
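The code block itself was not preserved in this mirror. A minimal sketch of the nvitop `Device` API the reply points at (wrapped so it degrades gracefully on machines without an NVIDIA GPU or driver; the helper name is ours, not the original's):

```python
def memory_utilization(index=0):
    """Hedged sketch: NVML memory utilization (%) for GPU `index` via
    nvitop's Device API, or None when no NVIDIA GPU/driver is available."""
    try:
        from nvitop import Device  # pip install nvitop
        return Device(index).memory_utilization()  # percent, 0-100
    except Exception:
        return None

print(memory_utilization())
```

Note that this is NVML's time-based utilization counter, not a fraction of peak memory bandwidth.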
Also, you can get the memory throughput via:
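Again the original snippet was lost; presumably it used nvitop's PCIe throughput accessors, along these lines (guarded for GPU-less machines):

```python
def pcie_throughputs(index=0):
    """Hedged sketch: PCIe TX/RX throughput in KiB/s via nvitop's Device
    API, or None when no NVIDIA GPU/driver is available. These measure
    traffic over the PCIe link, not on-device memory bandwidth."""
    try:
        from nvitop import Device  # pip install nvitop
        device = Device(index)
        return device.pcie_tx_throughput(), device.pcie_rx_throughput()
    except Exception:
        return None

print(pcie_throughputs())
```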
@walkieq commented on GitHub (Oct 25, 2023):
Thank you so much!
I have tested the API but I got some interesting results.
I am running the following:
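(The exact snippet and its output were not kept by this mirror; a polling loop along these lines, using nvitop's documented method names, would reproduce the comparison:)

```python
import time

def sample_metrics(index=0, interval=1.0, samples=3):
    """Hedged sketch: poll memory utilization (%) alongside PCIe TX/RX
    throughput (KiB/s). Returns a list of (mem_util, tx, rx) tuples, or
    [] when no NVIDIA GPU/driver is present."""
    try:
        from nvitop import Device  # pip install nvitop
        device = Device(index)
    except Exception:
        return []
    readings = []
    for _ in range(samples):
        readings.append((device.memory_utilization(),
                         device.pcie_tx_throughput(),
                         device.pcie_rx_throughput()))
        time.sleep(interval)
    return readings

print(sample_metrics())
```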
and the results are:
It looks like device.memory_utilization() reports something different from device.pcie_throughput(). Which one should I refer to?
@XuehaiPan commented on GitHub (Oct 25, 2023):
Device.memory_utilization refers to NVML's memory utilization rate: the percent of time over the past sample period during which global (device) memory was being read or written.
Device.pcie_throughput refers to the data throughput over the PCIe bus, sampled by the driver over a short interval. It measures traffic across the PCIe link, not the on-device memory bandwidth.
You can see the detailed definition via:
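The detailed definitions live in the methods' docstrings, which mirror the NVML documentation; presumably the lost snippet was something like:

```python
def print_definitions():
    """Print the docstrings of the two Device methods, if nvitop is
    installed; return whether the import succeeded."""
    try:
        from nvitop import Device  # pip install nvitop
    except ImportError:
        return False
    help(Device.memory_utilization)
    help(Device.pcie_throughput)
    return True

print_definitions()
```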
Both the memory utilization rate and the PCIe throughput have their own meaning. You can also calculate the PCIe bandwidth utilization rate by:
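The calculation itself was not preserved here. The usual arithmetic divides the measured throughput by the link's theoretical peak; the per-lane figures below are standard PCIe numbers after line-encoding overhead, and the helper names are hypothetical, not nvitop API:

```python
# Approximate usable bandwidth per PCIe lane, in bytes/s, after encoding:
# Gen3: 8 GT/s * 128/130 / 8 bits ~ 0.985 GB/s per lane; each gen doubles it.
PER_LANE_BYTES_PER_S = {3: 0.985e9, 4: 1.969e9, 5: 3.938e9}

def pcie_bandwidth_utilization(throughput_kib_per_s, gen=4, lanes=16):
    """Fraction of the PCIe link's theoretical bandwidth in use.
    `throughput_kib_per_s` is the unit nvitop's throughput methods report."""
    peak_bytes_per_s = PER_LANE_BYTES_PER_S[gen] * lanes
    return throughput_kib_per_s * 1024 / peak_bytes_per_s

# e.g. 10 GiB/s of traffic over a Gen4 x16 link is roughly a third of peak:
ratio = pcie_bandwidth_utilization(10 * 1024 * 1024, gen=4, lanes=16)
print(ratio)
```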
@XuehaiPan commented on GitHub (Nov 29, 2023):
Closing due to inactivity. Please feel free to ask for a reopening if you have more questions.