mirror of
https://github.com/XuehaiPan/nvitop.git
synced 2026-05-15 14:15:55 -06:00
[GH-ISSUE #35] [Enhancement] Monitoring % of tensor cores #23
Originally created by @johnnynunez on GitHub (Sep 4, 2022).
Original GitHub issue: https://github.com/XuehaiPan/nvitop/issues/35
NVIDIA told me to use the NVIDIA profilers (Nsight Compute or nvprof) to monitor the tensor cores. But could you add a way for this great tool to show whether my RTX 3090 is really using its tensor cores?
https://developer.nvidia.com/blog/using-nsight-compute-nvprof-mixed-precision-deep-learning-models/
@XuehaiPan commented on GitHub (Sep 4, 2022):
Hi @johnnynunez,
nvitop is built on top of the NVIDIA Management Library (NVML), which is usable immediately after the NVIDIA driver is installed. The only NVML APIs for retrieving GPU utilization rates are:

Per device:
- nvmlDeviceGetUtilizationRates (%GPU + %MEM bandwidth)
- nvmlDeviceGetEncoderUtilization (%ENC)
- nvmlDeviceGetDecoderUtilization (%DEC)

Per process:
- nvmlDeviceGetProcessUtilization (%SM (GPU) + %MEM bandwidth + %ENC + %DEC)

nvitop does provide per-process GPU utilization in the %SM column. I found this blog post, "The GA100 streaming multiprocessor (SM)", which says:

The SM unit consists of multiple tensor cores.

Does this resolve your request?

NVML can only retrieve the total SM (streaming multiprocessor) usage, not fine-grained details for the tensor cores. If you want to profile your program, I think using nvprof is the best practice, as NVIDIA documents.

@XuehaiPan commented on GitHub (Sep 18, 2022):
Closing due to inactivity. Please feel free to ask for a reopening.
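As a side note, the per-device NVML utilization calls discussed above can be queried directly through the nvidia-ml-py (pynvml) bindings. A minimal sketch, assuming an NVIDIA driver is present and `nvidia-ml-py` is installed; on a machine without a GPU it degrades to an empty result:

```python
# Minimal sketch: query per-device NVML utilization rates via nvidia-ml-py.
try:
    import pynvml
except ImportError:
    pynvml = None  # nvidia-ml-py not installed


def read_device_utilization(index: int = 0) -> dict:
    """Return per-device NVML utilization percentages, or {} if
    NVML/pynvml is unavailable (no driver, no GPU, library missing)."""
    if pynvml is None:
        return {}
    try:
        pynvml.nvmlInit()
    except pynvml.NVMLError:
        return {}
    try:
        handle = pynvml.nvmlDeviceGetHandleByIndex(index)
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)       # %GPU + %MEM bandwidth
        enc, _ = pynvml.nvmlDeviceGetEncoderUtilization(handle)   # %ENC (+ sampling period)
        dec, _ = pynvml.nvmlDeviceGetDecoderUtilization(handle)   # %DEC (+ sampling period)
        return {"gpu": util.gpu, "mem_bandwidth": util.memory,
                "enc": enc, "dec": dec}
    finally:
        pynvml.nvmlShutdown()


print(read_device_utilization())  # {} on machines without an NVIDIA GPU
```

Note that none of these calls break SM usage down further, which is why a tensor-core percentage cannot come from NVML alone.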
@johnnynunez commented on GitHub (Mar 22, 2023):
Hi @XuehaiPan, PyTorch has the capability to watch the tensor core percentage. Is it possible to use that here?

@XuehaiPan commented on GitHub (Mar 23, 2023):
Hi @johnnynunez, the PyTorch Kineto library calculates the tensor core ratio from the kernel times.

6e81ce05c4/tb_plugin/torch_tb_profiler/profiler/tensor_cores_parser.py (L16-L45)

That requires users to explicitly modify their code to register event callbacks. I don't think there is anything we can do in nvitop, which is a monitoring tool rather than a profiler. A profiler needs in-process injection into the user program; nvitop is based on the NVML library and runs in a separate process.

nvidia-ml-py version #124