st179968 | In theory, yes. As long as you get cards with higher bandwidth than your Ethernet setup, it should result in an improvement. But since NCCL is built to use GPUDirect, I’m not sure it will work with NCCL out of the box. If it doesn’t, you could try experimenting with IPoIB and fall back to NCCL’s TCP transport.
Good luck! |
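For reference, a small sketch of the environment overrides for that fallback. The variable names are real NCCL settings, but the interface name ib0 is an assumption for an IPoIB setup; they must be set before init_process_group is called:
import os

os.environ['NCCL_IB_DISABLE'] = '1'       # keep NCCL off the native InfiniBand verbs transport
os.environ['NCCL_SOCKET_IFNAME'] = 'ib0'  # route NCCL's socket (TCP) transport over the IPoIB interface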
st179969 | Hello.
I’m trying to use the "all_reduce_multigpu" function with shared memory (a list created by the multiprocessing module). However, it does not work with the shared list object (it only works with a normal list).
I want to use the shared list to get return values from the child processes. Is there any alternative way?
Thank you in advance |
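In case it helps, a minimal sketch of one common alternative for returning values from child processes, using a torch.multiprocessing queue instead of a managed list (the worker body here is just a placeholder):
import torch
import torch.multiprocessing as mp

def worker(rank, result_queue):
    # ... set up the process group and run all_reduce_multigpu here ...
    result = torch.tensor([float(rank)])
    result_queue.put((rank, result))  # hand the return value back to the parent

if __name__ == '__main__':
    ctx = mp.get_context('spawn')
    queue = ctx.Queue()
    procs = [ctx.Process(target=worker, args=(r, queue)) for r in range(2)]
    for p in procs:
        p.start()
    results = dict(queue.get() for _ in procs)  # collect one result per worker
    for p in procs:
        p.join()
    print(results)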
st179970 | My PyTorch distributed code (similar to the ImageNet example, but without multiprocessing) was working on distributed nodes using the GLOO backend (Python 3.7, PyTorch 1.0.1). There were some system updates, and now all of a sudden I only have intermittent success with the code, with init_process_group often throwing various errors. I can usually get it to work on 2 nodes (4 GPUs per node), but when I try more than 2 compute nodes, an error is almost always thrown in the init_process_group call (see below for some of the errors).
I ran with python -X faulthandler, and for the segmentation faults it points to ProcessGroupGloo in distributed_c10d.py (line 360).
I’m a bit at a loss as to how to debug this and what to check. I have reinstalled the PyTorch packages after the system updates and also tried going back to Python 3.6, but no luck. I haven’t tried compiling from source yet. I started looking at the GLOO repo and saw there are some tests; I’m not sure whether they would help pinpoint the cause.
Error #1:
srun: error: tiger-i21g6: task 0: Segmentation fault
Error #2:
Traceback (most recent call last):
File "disruptcnn/main.py", line 595, in <module>
main()
File "disruptcnn/main.py", line 161, in main
main_worker(args.gpu, ngpus_per_node, args)
File "disruptcnn/main.py", line 177, in main_worker
world_size=args.world_size, rank=args.rank)
File "~/.conda/envs/python3a/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 360, in init_process_group
timeout=timeout)
RuntimeError: read: Bad address
terminate called after throwing an instance of 'std::system_error'
what(): read: Bad address
Error #3:
*** Error in `~/.conda/envs/python36/bin/python': free(): invalid next size (fast): 0x000055b8f75ea9b0 *** |
st179971 | Hi!
Can you share how you’re calling init_process_group, which initialization method you’re using, etc? If you get the first error on one machine and the second error on another, it is possible that the second error is caused by the first process crashing.
I’m asking because the read: Bad address error makes me think there is something going on with the TCP store. If you’re using the TCP initialization method, and try to use an IP that is no longer valid, for example, this is the type of error that could happen. |
st179972 | I’m using the file method, on a parallel file system:
jobid = os.environ['SLURM_JOB_ID']
world_size = int(os.environ['SLURM_NTASKS'])
rank = int(os.environ['SLURM_PROCID'])
dist.init_process_group(backend='gloo', init_method='file:///scratch/gpfs/me/main_'+jobid+'.txt',
world_size=world_size, rank=rank)
The errors don’t occur one on each machine; rather, if I run this several times, one of those errors will be thrown (but not both at the same time). |
st179973 | When you run this multiple times, do those runs use the same SLURM_JOB_ID? PyTorch makes an attempt to remove the file that is used for initialization at exit, but if any of the processes crashes, it may stick around and cause problems. You can "fix" this by force removing the file before starting a new run. |
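A sketch of that cleanup (the path matches the init_method in the earlier snippet; jobid and rank are read from the SLURM environment as above):
import os

jobid = os.environ['SLURM_JOB_ID']
rank = int(os.environ['SLURM_PROCID'])
init_file = '/scratch/gpfs/me/main_' + jobid + '.txt'  # same path as the init_method above
if rank == 0 and os.path.exists(init_file):
    os.remove(init_file)  # remove a leftover file from a crashed run before calling init_process_group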
st179974 | No, the SLURM system gives you a unique SLURM_JOB_ID for each run that you do (which is why I’m using it, to ensure the file is unique for each run). |
st179975 | I noticed the note on fcntl: is there some test I should run on the parallel file system to ensure there are no issues with correct locking? I think GPFS should be fine, but perhaps there’s some edge case, and a newer driver or something caused things to mess up.
I was able to try out NCCL, and this appears to be working for the > 2 node runs, so it’s not as urgent, but I’d still be interested in figuring this out |
st179976 | churchillmic:
No, the SLURM system gives you a unique SLURM_JOB_ID for each run that you do (which is why I’m using it, to ensure the file is unique for each run).
Thanks, that rules out clobbering the same file from multiple runs.
churchillmic:
I noticed the note on fcntl, is there some test I should run on the parallel file system to ensure there are no issues with correct locking?
There is. We have a test for the file store that’s built by default if you compile from source and will be located at build/bin/FileStoreTest. This test automatically creates some files in TMPDIR, which you can override yourself to force it to use the GPFS path. This doesn’t fully simulate the scenario you have with multiple machines, but at least hammers the file system with multiple processes from the same machine. It could uncover something, so it’s definitely worth a try.
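For example, a small wrapper for running that test with TMPDIR pointed at the GPFS mount (a sketch: the binary path is from the source build mentioned above, and the GPFS path here is just an example):
import os
import subprocess

env = dict(os.environ, TMPDIR='/scratch/gpfs/me/filestore_test')  # force the test's files onto GPFS
subprocess.run(['./build/bin/FileStoreTest'], env=env, check=True)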
churchillmic:
I was able to try out NCCL, and this appears to be working for the > 2 nodes runs, so its not as urgent, but I’d still be interested in figuring this out
The use of this store when using the NCCL backend is very light. Only a single process writes to the file and all others read from it. When using the Gloo backend, everybody both writes to and reads from the file, causing a lot more contention. |
st179977 | Hi there,
I’m using GPFS filesystem for file init, and I also got the read(): bad address problem. When I change the file location to local /tmp, it’s fine.
FYI: the mandatory file lock of Gluster and some known issues https://docs.gluster.org/en/v3/Administrator%20Guide/Mandatory%20Locks/ |
st179978 | I’m trying to make data parallelism compatible with model parallelism, but I encounter RuntimeError: all tensors must be on devices[0] during this process. Below is a simplified example of my code (my torch version is 1.0.1.post2):
import torch
import torch.nn as nn
import torch.nn.functional as F
class MyModel(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(784, 512)
self.fc2 = nn.Linear(512, 10)
def forward(self, x):
first_device = x.device
x = self.fc1(x.to(self.fc1.weight.device))
x = F.relu(x)
x = self.fc2(x.to(self.fc2.weight.device))
x = F.softmax(x).to(first_device)
return x
def model_parallel(self, start):
self.fc1.cuda(start)
self.fc2.cuda(start + 1)
def run(rank, device_id, world_size):
torch.distributed.init_process_group(
backend='nccl',
init_method='tcp://localhost:10000',
world_size=world_size,
rank=rank,
)
model = MyModel()
model.model_parallel(device_id)
model = nn.parallel.DistributedDataParallel(
module=model,
device_ids=list(range(device_id, device_id + world_size)),
output_device=device_id,
broadcast_buffers=False,
)
model(torch.randn(1, 784).cuda(device_id))
if __name__ == "__main__":
mp = torch.multiprocessing.get_context('spawn')
world_size = 2
model_size = 2
procs = []
for i in range(world_size):
rank = i
device_id = i * model_size
procs.append(mp.Process(target=run, args=(rank, device_id, world_size, ), daemon=True))
procs[i].start()
for p in procs:
p.join()
The full traceback is:
Process SpawnProcess-1:
Traceback (most recent call last):
File "/home/user/lib/python3.6/multiprocessing/process.py", line 249, in _bootstrap
self.run()
File "/home/user/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/home/user/nmt-research/example.py", line 34, in run
broadcast_buffers=False,
File "/home/user/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 217, in __init__
self._ddp_init_helper()
File "/home/user/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 232, in _ddp_init_helper
self._module_copies = replicate(self.module, self.device_ids, detach=True)
File "/home/user/lib/python3.6/site-packages/torch/nn/parallel/replicate.py", line 13, in replicate
param_copies = Broadcast.apply(devices, *params)
File "/home/user/lib/python3.6/site-packages/torch/nn/parallel/_functions.py", line 21, in forward
outputs = comm.broadcast_coalesced(inputs, ctx.target_gpus)
File "/home/user/lib/python3.6/site-packages/torch/cuda/comm.py", line 40, in broadcast_coalesced
return torch._C._broadcast_coalesced(tensors, devices, buffer_size)
RuntimeError: all tensors must be on devices[0]
Process SpawnProcess-2:
Traceback (most recent call last):
File "/home/user/lib/python3.6/multiprocessing/process.py", line 249, in _bootstrap
self.run()
File "/home/user/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/home/user/nmt-research/example.py", line 34, in run
broadcast_buffers=False,
File "/home/user/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 217, in __init__
self._ddp_init_helper()
File "/home/user/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 232, in _ddp_init_helper
self._module_copies = replicate(self.module, self.device_ids, detach=True)
File "/home/user/lib/python3.6/site-packages/torch/nn/parallel/replicate.py", line 13, in replicate
param_copies = Broadcast.apply(devices, *params)
File "/home/user/lib/python3.6/site-packages/torch/nn/parallel/_functions.py", line 21, in forward
outputs = comm.broadcast_coalesced(inputs, ctx.target_gpus)
File "/home/user/lib/python3.6/site-packages/torch/cuda/comm.py", line 40, in broadcast_coalesced
return torch._C._broadcast_coalesced(tensors, devices, buffer_size)
RuntimeError: all tensors must be on devices[0]
I want to know how to perform data parallelism together with model parallelism correctly. Thanks in advance! |
st179979 | There are assumptions in torch.nn.parallel.DistributedDataParallel today that unfortunately prevent you from doing this. We’re working on some changes to DDP to make this possible. Stay tuned.
cc @mrshenli |
st179980 | This should be possible after #19271. Here is a tutorial. @lyy1994 could you please help verify whether it works for you? |
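For reference, a minimal sketch of that pattern (not the linked tutorial itself): when the module already spans multiple GPUs, DistributedDataParallel is constructed without device_ids or output_device. The device assignment below (two GPUs per rank) is an assumption:
import torch
import torch.distributed as dist
import torch.nn as nn

class TwoDeviceModel(nn.Module):
    def __init__(self, dev0, dev1):
        super().__init__()
        self.dev0, self.dev1 = dev0, dev1
        self.fc1 = nn.Linear(784, 512).to(dev0)
        self.fc2 = nn.Linear(512, 10).to(dev1)

    def forward(self, x):
        x = torch.relu(self.fc1(x.to(self.dev0)))
        return self.fc2(x.to(self.dev1))

def run(rank, world_size):
    dist.init_process_group('nccl', init_method='tcp://localhost:10000',
                            world_size=world_size, rank=rank)
    # each rank owns two GPUs: rank 0 -> cuda:0/cuda:1, rank 1 -> cuda:2/cuda:3
    model = TwoDeviceModel('cuda:' + str(2 * rank), 'cuda:' + str(2 * rank + 1))
    ddp_model = nn.parallel.DistributedDataParallel(model)  # note: no device_ids for a multi-device module
    out = ddp_model(torch.randn(20, 784))
    out.sum().backward()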
st179981 | Hello, is there any table where I can check which GPUs are compatible with PyTorch?
For example, GeForce GTX/RTX models? |
st179982 | I think PyTorch can be used with every GPU that works with CUDA; you can check the website for reference. |
st179983 | Yes, but I’m talking about a distributed neural network. I tried to run it on the Tesla series with < 3.x compute capability and it doesn’t work. So I want to make sure that any GPU with > 3.x will work. |
st179984 | PyTorch works with compute capability 3.5 and higher. This is the Tesla Kepler series (K20, K40, K80). |
st179985 | Yes, they do. You can check this website for the compute capability of your GPU; it will work if your GPU has a compute capability of 3.5 or above. |
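As a quick local check, a small sketch (assumes a recent PyTorch build; the 3.5 threshold is the one quoted above):
import torch

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    status = 'supported' if (major, minor) >= (3, 5) else 'too old (needs >= 3.5)'
    print('GPU 0 compute capability: %d.%d -> %s' % (major, minor, status))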
st179986 | I’d like to report a weird behaviour in 1.0 that I could only resolve by going back to 0.4. I am training a network on quite a big data set (35 GB) and use 4 GPUs by applying the command
torch.nn.DataParallel(model).cuda()
Further, I am using a big batch size (>1000), which makes the command
torch.multiprocessing.set_sharing_strategy('file_system')
necessary. I have num_workers=16 in the dataloader.
Now the trouble begins: every epoch my /dev/shm grows by ca. 3 GB. At some point it is full and my process crashes. I tried 1.0.0 and 1.0.1, but both showed this behaviour. PyTorch 0.4 does not have this problem; /dev/shm never goes above 1 GB.
Is this a bug? |
st179987 | Looking at the available sharing strategies, the file_system one is clearly prone to leaks. If the data loader ends up allocating new shared tensors for every epoch, this would explain your leak. Did you try using the file_descriptor sharing strategy? |
st179988 | file_descriptor is the default setting. I tried it, of course, but could not use it for other reasons. |
st179989 | If you can (have sudo privilege), increase the file descriptor limit of your system and use the file_descriptor sharing strategy. |
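A sketch of that combination in one process (raising the hard limit itself may still require ulimit or system configuration; this only lifts the soft limit up to the existing hard cap):
import resource
import torch.multiprocessing

# Raise the per-process soft open-file limit up to the current hard limit.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))

# Then keep (or set explicitly) the file_descriptor sharing strategy.
torch.multiprocessing.set_sharing_strategy('file_descriptor')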
st179990 | Hi Masters,
I am trying the following code on 2 nodes with different numbers of CPU/GPU devices, running one parameter server (ps) process and a different number of worker processes on each node (e.g. global_ranks: [[0(ps),2(worker),3(worker)],[1(ps),4(worker)]]).
For CUDA initialization reasons, I turned on mp.set_start_method('spawn', force=True) on the slave node, which leads to the following crash (not just a warning):
/home/simon/anaconda3/lib/python3.6/multiprocessing/semaphore_tracker.py:146: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown
len(cache))
Could somebody help? Thanks in advance.
Results:
$Slave Node
<Process(Process-1, started)> is started...
<Process(Process-2, started)> is started...
0 test_run_worker() on global_rank: 4 ,global_c: 1 ,counter_list: [tensor([1.]), tensor([0.]), tensor([1.])]
0 test_run_ps() on global_rank: 1 , global_c= 0 ,counter_list: [tensor([1.]), tensor([0.]), tensor([1.])]
1 test_run_worker() on global_rank: 4 ,global_c: 4 ,counter_list: [tensor([2.]), tensor([1.]), tensor([2.])]
2 test_run_worker() on global_rank: 4 ,global_c: 6 ,counter_list: [tensor([2.]), tensor([1.]), tensor([3.])]
1 test_run_ps() on global_rank: 1 , global_c= 3 ,counter_list: [tensor([2.]), tensor([1.]), tensor([2.])]
3 test_run_worker() on global_rank: 4 ,global_c: 9 ,counter_list: [tensor([3.]), tensor([2.]), tensor([4.])]
4 test_run_worker() on global_rank: 4 ,global_c: 10 ,counter_list: [tensor([3.]), tensor([2.]), tensor([5.])]
2 test_run_ps() on global_rank: 1 , global_c= 6 ,counter_list: [tensor([3.]), tensor([2.]), tensor([3.])]
3 test_run_ps() on global_rank: 1 , global_c= 9 ,counter_list: [tensor([4.]), tensor([3.]), tensor([4.])]
4 test_run_ps() on global_rank: 1 , global_c= 12 ,counter_list: [tensor([5.]), tensor([4.]), tensor([5.])]
/home/simon/anaconda3/lib/python3.6/multiprocessing/semaphore_tracker.py:146: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown
len(cache))
/home/simon/anaconda3/lib/python3.6/multiprocessing/semaphore_tracker.py:146: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown
len(cache))
Some code:
#Master Node:
#GLOO_SOCKET_IFNAME=enp0s31f6 python -m torch.distributed.launch torch_dist_test2.py --local_rank=0
#Slave Node:
#GLOO_SOCKET_IFNAME=enp7s0 python -m torch.distributed.launch torch_dist_test2.py --local_rank=1
import argparse
import time
import torch
import torch.distributed as dist
import torch.multiprocessing as mp  # assumed; the original imports (and the SharedComponents class) are not shown in the post
def init_dist_multi_process(world_size, global_rank, backend='gloo'):#'nccl'
dist.init_process_group(backend=backend,
init_method='tcp://192.168.1.12:23457',
world_size=world_size,
rank=global_rank)
def dist_broadcast(src, dic=None, tensor=None, async_op=True, p_device=torch.device('cpu')):
if not dic == None:
for key, value in dic.items():
dist.broadcast(tensor=torch.Tensor(value).to(p_device), src=src, async_op=async_op)
else:
dist.broadcast(tensor=tensor.to(p_device), src=src, async_op=async_op)
def test_run_ps(shared_coms):
init_dist_multi_process(world_size=shared_coms.world_size, global_rank=shared_coms.server_rank, backend='gloo')
counter_list = [torch.Tensor([0]) for _ in shared_coms.global_worker_rank_flatten_list]
for _ in range(5):
time.sleep(0.5)
for r, gr in enumerate(shared_coms.global_worker_rank_flatten_list):##
dist_broadcast(src=gr,tensor=counter_list[r])
global_c = sum([int(x) for x in counter_list])
print(_,'test_run_ps() on global_rank:',shared_coms.server_rank,', global_c=',global_c,',counter_list:',counter_list)
print('test_run_ps() time up')
time.sleep(5)
def test_run_worker(shared_coms, device_r):
init_dist_multi_process(world_size=shared_coms.world_size, global_rank=shared_coms.worker_rank_list[device_r], backend='gloo')
c = 0
counter_list = [torch.Tensor([0]) for _ in shared_coms.global_worker_rank_flatten_list]
for _ in range(5):
time.sleep(0.25*(1+device_r))
c+=1
i=0
for r, gr in enumerate(shared_coms.global_worker_rank_flatten_list):##
if gr == shared_coms.global_worker_rank_list[shared_coms.server_rank][device_r]:
counter_list[r] = torch.Tensor([c])
dist_broadcast(src=gr,tensor=counter_list[r])
global_c = sum([int(x) for x in counter_list])
print(_,'test_run_worker() on global_rank:',shared_coms.worker_rank_list[device_r],',global_c:',global_c,',counter_list:',counter_list)
print('test_run_worker() time up')
time.sleep(5)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=0)
parser.add_argument("--global_rank_list", type=list, default=[[0,2,3],[1,4]])
parser.add_argument("--n_worker", type=int, default=16)
args = parser.parse_args()
server_rank = args.local_rank
global_rank_list = args.global_rank_list
world_size = sum([1 for y in global_rank_list for x in y])
n_proc = len(global_rank_list[server_rank])-1
global_n_proc = world_size - len(global_rank_list)
n_server = len(global_rank_list)
worker_rank_list = global_rank_list[server_rank][1:]
global_worker_rank_list = [global_rank_list[x][1:] for x in range(len(global_rank_list))]
global_worker_rank_flatten_list = []
for x in global_worker_rank_list: global_worker_rank_flatten_list+=x
n_workers_per_slave = args.n_worker
game = 'BreakoutNoFrameskip-v4'
process_list = []
shared_coms = SharedComponents(game, server_rank, global_rank_list, p_device=torch.device('cuda'))
mp.set_start_method('spawn', force=True)
p = mp.Process(target=test_run_ps, args=(shared_coms, ))#, args=(None))
process_list.append(p)
for device_r in range(n_proc):##
p = mp.Process(target=test_run_worker, args=(shared_coms, device_r))
process_list.append(p)
for p in process_list:
p.start()
print(p,' is started...')
for p in process_list:
p.join() |
st179991 | Hi! Hard to tell where this is going wrong. The warning from multiprocessing happens in the parent process (I think) and doesn’t pinpoint where the crash is happening. It looks like neither process gets to log the time up message, so do they even break out of their loops? |
st179992 | Hi Pieter,
You are right; they do break out of their loops, or maybe these are only warning messages after the crashes.
Any way to debug? Thank you in advance.
I added logger = multiprocessing.log_to_stderr() and logger.setLevel(multiprocessing.SUBDEBUG) to the demo, and get the following info:
[INFO/Process-2] process shutting down
[DEBUG/Process-2] running all "atexit" finalizers with priority >= 0
[DEBUG/Process-2] running the remaining "atexit" finalizers
[INFO/Process-2] process exiting with exitcode 0
/home/simon/anaconda3/lib/python3.6/multiprocessing/semaphore_tracker.py:146: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown
len(cache))
[INFO/Process-1] process shutting down
[DEBUG/Process-1] running all "atexit" finalizers with priority >= 0
[DEBUG/Process-1] running the remaining "atexit" finalizers
[INFO/Process-1] process exiting with exitcode 0
/home/simon/anaconda3/lib/python3.6/multiprocessing/semaphore_tracker.py:146: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown
len(cache))
[INFO/MainProcess] process shutting down
[DEBUG/MainProcess] running all "atexit" finalizers with priority >= 0
[DEBUG/MainProcess] running the remaining "atexit" finalizers
[Level 5/MainProcess] calling <Finalize object, dead>
[Level 5/MainProcess] finalizer calling <function rmtree at 0x7fab0b4bfea0> with args ['/tmp/pymp-hfadelk_'] and kwargs {} |
st179993 | Hard to say. I did a quick search and came up with https://docs.python.org/3/library/multiprocessing.html#multiprocessing-programming, which might be useful here. It’s a warning that comes from deep in the guts of multiprocessing, so I’d start there. |
st179994 | When using torch.nn.parallel.DistributedDataParallel to parallelize the network across multiple GPUs, does nn.BatchNorm become synchronized among the GPUs?
I suppose it does, because there is a broadcast_buffers flag in DistributedDataParallel that defaults to True.
Does anyone have any thoughts or confirmation on this? |
st179995 | The buffers in batch norm are synchronized between processes if broadcast_buffers=True, yes. This means that all processes get a copy of the buffers from the process with rank 0. If you want a synchronized batch norm, check out nn.SyncBatchNorm. |
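For example, a minimal sketch (assumes torch >= 1.1, where convert_sync_batchnorm is available, and that the process group has already been initialized in each process; the model below is just a placeholder):
import torch
import torch.nn as nn

def build_ddp_model(local_rank):
    model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16), nn.ReLU()).cuda(local_rank)
    # Swap every BatchNorm layer for SyncBatchNorm so statistics are reduced across processes.
    model = nn.SyncBatchNorm.convert_sync_batchnorm(model)
    return nn.parallel.DistributedDataParallel(model, device_ids=[local_rank])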
st179996 | Hi,
I am testing p2p communication of torch.distributed.
I have 2 nodes, with the gloo backend.
When I isend/irecv multiple tensors with different tags, it doesn’t show the expected result.
Could somebody help me with async p2p?
Node0:
import time
import torch
import torch.distributed as dist
if __name__ == "__main__":
rank = 0
dist.init_process_group(backend="gloo",
init_method='tcp://192.168.1.12:23457',
world_size=2,
rank=rank)
grads={'T0_grad':torch.zeros(2,2),'T1_grad':torch.zeros(2,2),'T2_grad':torch.zeros(2,2),'T3_grad':torch.zeros(2,2),'T4_grad':torch.zeros(2,2)}
if rank ==1:
tmp_tensor = torch.ones(2,2)
req = dist.isend(tmp_tensor,dst=0,tag=0)
print('rank',rank,' dist.isend(tmp_tensor):\n',tmp_tensor)
tmp_tensor2 = torch.ones(2,2)*4
req = dist.isend(tmp_tensor2,dst=0,tag=1)
print('rank',rank,' dist.isend(tmp_tensor2):\n',tmp_tensor2)
time.sleep(6)
elif rank==0:
time.sleep(1)
i = 3
req = dist.irecv(grads['T'+str(i)+'_grad'],src=1,tag=0)
print('rank',rank,' dist.irecv(grads[T'+str(i)+'_grad]):\n',grads['T'+str(i)+'_grad'])
i = 4
req = dist.irecv(grads['T'+str(i)+'_grad'],src=1,tag=1)
print('rank',rank,' dist.irecv(grads[T'+str(i)+'_grad]):\n',grads['T'+str(i)+'_grad'])
Node1:
import torch
import torch.distributed as dist
if __name__ == "__main__":
rank = 1
#... All else equal...
Result:
rank 1 dist.isend(tmp_tensor):
tensor([[1., 1.],
[1., 1.]])
rank 1 dist.isend(tmp_tensor2):
tensor([[4., 4.],
[4., 4.]])
rank 0 dist.irecv(grads[T3_grad]):
tensor([[0., 0.],
[0., 0.]])
rank 0 dist.irecv(grads[T4_grad]):
tensor([[0., 0.],
[0., 0.]]) |
st179997 | You’re kicking off the send/recv operations, but don’t synchronize on completion. You’ll have to add calls to req.wait() to ensure the send/recv operations have actually completed. If you’re looking for synchronous send/recv instead, replace calls to isend/irecv by calls to send/recv (without the i). |
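For instance, a small sketch of the rank-0 side from the post above with the missing synchronization added (tensor names, source, and tags follow that code; the process group is assumed to be initialized already):
import torch
import torch.distributed as dist

def recv_and_wait(grads):
    reqs = [dist.irecv(grads['T3_grad'], src=1, tag=0),
            dist.irecv(grads['T4_grad'], src=1, tag=1)]
    for req in reqs:
        req.wait()  # block until each receive has actually completed
    return grads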
st179998 | I train my model across several machines. I have two machines which have GPUs and InfiniBand cards. The network is 1 Gbit; the InfiniBand is 2x40 Gbit. When I remove the cards and start training, everything works, though slower than on one machine. When I run with the InfiniBand setup, the system just hangs: there is 100% GPU utilisation, wattage is at half of maximum, and there is very little network activity.
Do you have any hints on how to proceed with finding out what’s wrong with the training? |
st179999 | It might be a BIOS issue. I had a similar issue with a 4x V100 GPU machine that ran too slowly; it turned out to be a BIOS setup problem. There is a BIOS setting which needs to be set to performance mode.
See the thread "V100 is too slow for training".