[GH-ISSUE #35] [Enhancement] Monitoring % of tensor cores #23

Closed
opened 2026-05-05 03:22:20 -06:00 by gitea-mirror · 4 comments

Originally created by @johnnynunez on GitHub (Sep 4, 2022).
Original GitHub issue: https://github.com/XuehaiPan/nvitop/issues/35

NVIDIA told me to use the NVIDIA profiler (Nsight Compute) or `nvprof` to monitor the tensor cores. But could you add to this great tool a way to know whether my RTX 3090 is really using the tensor cores?

https://developer.nvidia.com/blog/using-nsight-compute-nvprof-mixed-precision-deep-learning-models/


@XuehaiPan commented on GitHub (Sep 4, 2022):

Hi @johnnynunez, `nvitop` is built on top of the NVIDIA Management Library (NVML), which is instantly usable after installing the NVIDIA driver. The only NVML APIs for GPU utilization rates are:

Per device:

- [`nvmlDeviceGetUtilizationRates`](https://docs.nvidia.com/deploy/nvml-api/group__nvmlDeviceQueries.html#group__nvmlDeviceQueries_1g540824faa6cef45500e0d1dc2f50b321) (%GPU + %MEM bandwidth)
- [`nvmlDeviceGetEncoderUtilization`](https://docs.nvidia.com/deploy/nvml-api/group__nvmlDeviceQueries.html#group__nvmlDeviceQueries_1ga5c77a2154a20d4e660221d8592d21fb) (%ENC)
- [`nvmlDeviceGetDecoderUtilization`](https://docs.nvidia.com/deploy/nvml-api/group__nvmlDeviceQueries.html#group__nvmlDeviceQueries_1g0e3420045bc9d04dc37690f4701ced8a) (%DEC)

Per process:

- [`nvmlDeviceGetProcessUtilization`](https://docs.nvidia.com/deploy/nvml-api/group__nvmlGridQueries.html#group__nvmlGridQueries_1gb0ea5236f5e69e63bf53684a11c233bd) (%SM (GPU) + %MEM bandwidth + %ENC + %DEC)

`nvitop` does provide per-process GPU utilization in the `%SM` column. I found this blog post:

- [NVIDIA Ampere Architecture In-Depth](https://developer.nvidia.com/blog/nvidia-ampere-architecture-in-depth)
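As a concrete illustration of the per-device API above, here is a minimal sketch of querying NVML from Python. It assumes the `nvidia-ml-py` package (import name `pynvml`); on a machine without the bindings or an NVIDIA driver it returns `None` instead of raising. Note that the reported number is whole-SM utilization, not per-tensor-core detail.

```python
def device_utilization(index=0):
    """Return {'gpu': %, 'memory': %} for one device, or None if NVML is unavailable."""
    try:
        import pynvml  # provided by the nvidia-ml-py package
    except ImportError:
        return None  # NVML bindings not installed
    try:
        pynvml.nvmlInit()
        handle = pynvml.nvmlDeviceGetHandleByIndex(index)
        rates = pynvml.nvmlDeviceGetUtilizationRates(handle)
        # .gpu is the percent of time at least one kernel was running on the
        # device (SM-level, aggregated); .memory is memory-bandwidth activity.
        return {"gpu": rates.gpu, "memory": rates.memory}
    except pynvml.NVMLError:
        return None  # no driver or no device present
    finally:
        try:
            pynvml.nvmlShutdown()
        except Exception:
            pass

if __name__ == "__main__":
    print(device_utilization())
```

This is the same counter family `nvitop` renders; nothing in NVML breaks the number down to tensor-core granularity.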

[Image: The GA100 streaming multiprocessor (SM).]

which said:

> ## A100 GPU streaming multiprocessor
>
> The new streaming multiprocessor (SM) in the NVIDIA Ampere architecture-based A100 Tensor Core GPU significantly increases performance, builds upon features introduced in both the Volta and Turing SM architectures, and adds many new capabilities.

The SM unit consists of multiple tensor cores. Does this resolve your request?

NVML can only retrieve total SM (streaming multiprocessor) usage, not fine-grained details for the tensor cores. If you want to profile your program, I think using `nvprof` is the best practice, as NVIDIA documents.


@XuehaiPan commented on GitHub (Sep 18, 2022):

Closing due to inactivity. Please feel free to ask for a reopening.


@johnnynunez commented on GitHub (Mar 22, 2023):

Hi @XuehaiPan, PyTorch has the capability to watch the tensor core percentage. Is it possible to use it here?

![image](https://user-images.githubusercontent.com/22727137/227028287-f1240090-fd35-47c6-baa3-d551f45bc7f4.png)


@XuehaiPan commented on GitHub (Mar 23, 2023):

Hi @johnnynunez, the [PyTorch Kineto](https://github.com/pytorch/kineto) library calculates the tensor core ratio from the kernel times:

https://github.com/pytorch/kineto/blob/6e81ce05c4d9898194fc5432624242cb47a77050/tb_plugin/torch_tb_profiler/profiler/tensor_cores_parser.py#L16-L45

That requires users to explicitly modify their code to register event callbacks:

```python
with torch.profiler.profile(
    activities=[
        torch.profiler.ProfilerActivity.CPU,
        torch.profiler.ProfilerActivity.CUDA,
    ]
) as p:
    code_to_profile()
```
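To make the "ratio from kernel times" idea concrete, here is a minimal, self-contained sketch of the approach Kineto's `tensor_cores_parser` takes: classify each profiled kernel by its name and report the time-weighted share attributed to tensor-core kernels. The substrings and kernel names below are made up for illustration; the real parser matches against Kineto's own allowlist of kernel-name patterns.

```python
# Hypothetical name fragments that mark a kernel as tensor-core-eligible;
# the real allowlist lives in Kineto's tensor_cores_parser.py.
TC_NAME_HINTS = ("884", "1688", "hmma", "imma")

def tensor_core_ratio(kernels):
    """kernels: iterable of (kernel_name, duration_us) pairs.

    Returns the fraction of total kernel time spent in kernels whose
    names suggest tensor-core usage (0.0 if there are no kernels).
    """
    total = tc = 0.0
    for name, duration in kernels:
        total += duration
        if any(hint in name.lower() for hint in TC_NAME_HINTS):
            tc += duration
    return tc / total if total else 0.0

# Example with invented kernel records:
kernels = [
    ("volta_fp16_s884gemm_fp16_128x128_ldg8_f2f_nn", 700.0),  # tensor-core GEMM
    ("elementwise_kernel", 300.0),                            # plain CUDA kernel
]
print(tensor_core_ratio(kernels))  # → 0.7
```

The key point: the ratio is derived from in-process trace data (kernel names and durations), which is exactly what an out-of-process NVML-based monitor never sees.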

I don't think there is anything we can do in `nvitop`, which is a monitoring tool rather than a profiler. A profiler needs in-process injection into the user program; `nvitop` is based on the NVML library and runs in a separate process.
