I'm on Ubuntu 22.04 trying to run a PyTorch script and get

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 86.00 MiB (GPU 0; 3.81 GiB total capacity; 3.18 GiB already allocated; 80.81 MiB free; 3.18 GiB reserved in total by PyTorch)

prime-select query yields the following:

on-demand

lspci | egrep 'VGA|3D' yields the following:

00:02.0 VGA compatible controller: Intel Corporation Alder Lake-P Integrated Graphics Controller (rev 0c)
01:00.0 3D controller: NVIDIA Corporation GA107M [GeForce RTX 3050 Ti Mobile] (rev a1)

nvidia-smi yields the following:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.85.05    Driver Version: 525.85.05    CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:01:00.0 Off |                  N/A |
| N/A   37C    P0    N/A /  35W |      7MiB /  4096MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1464      G   /usr/lib/xorg/Xorg                  4MiB |
|    0   N/A  N/A      1743      G   ...ome-remote-desktop-daemon        1MiB |
+-----------------------------------------------------------------------------+

In my Ubuntu About settings, "Graphics" is shown as:

NVIDIA Corporation GA107M [GeForce RTX 3050 Ti Mobile] / Mesa Intel® Graphics (ADL GT2)

How can I stop my GPU from being used by Xorg and by gnome-remote-desktop-daemon? I only want to use the GPU with PyTorch. What confuses me is that torch.cuda.memory_allocated and torch.cuda.memory_reserved both report 0.0 GB, even though PyTorch recognizes the NVIDIA GPU.
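For reference, here is roughly the check I'm running (a minimal sketch; the device index 0 is an assumption, and I understand these counters only track PyTorch's own allocator, not other processes like Xorg):

```python
import torch

if torch.cuda.is_available():
    gb = 1024 ** 3
    # memory_allocated/memory_reserved report PyTorch's caching-allocator
    # usage only, so they read 0.0 until tensors are moved onto the GPU.
    print(f"device:    {torch.cuda.get_device_name(0)}")
    print(f"allocated: {torch.cuda.memory_allocated(0) / gb:.1f} GB")
    print(f"reserved:  {torch.cuda.memory_reserved(0) / gb:.1f} GB")
else:
    print("CUDA not available")
```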

If anyone needs more info, please let me know (:

Thanks!
