RuntimeError: grad can be implicitly created only for scalar outputs #58

Open
Description

@Shn9909

I encountered this strange error; the output is below, thank you.
Before this, it showed an error saying tensors cannot be on the CPU and GPU at the same time. After I added .cuda() to the loss, it started showing this error instead.

Traceback (most recent call last):
  File "D:/xiangmu/ENAS-pytorch-master/main.py", line 56, in <module>
    main(args)
  File "D:/xiangmu/ENAS-pytorch-master/main.py", line 35, in main
    trnr.train()
  File "D:\xiangmu\ENAS-pytorch-master\trainer.py", line 223, in train
    self.train_shared(dag=dag)
  File "D:\xiangmu\ENAS-pytorch-master\trainer.py", line 317, in train_shared
    loss.backward()
  File "C:\Users\sunhaonan\.conda\envs\enas\lib\site-packages\torch\_tensor.py", line 307, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "C:\Users\sunhaonan\.conda\envs\enas\lib\site-packages\torch\autograd\__init__.py", line 150, in backward
    grad_tensors_ = _make_grads(tensors, grad_tensors)
  File "C:\Users\sunhaonan\.conda\envs\enas\lib\site-packages\torch\autograd\__init__.py", line 51, in _make_grads
    raise RuntimeError("grad can be implicitly created only for scalar outputs")
RuntimeError: grad can be implicitly created only for scalar outputs
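This error means `loss` is not a scalar (0-dim tensor) when `loss.backward()` is called: `backward()` can only create the initial gradient implicitly for a single number. Calling `.cuda()` on the loss does not change its shape, so it cannot fix this. A minimal sketch of the problem and the usual fix, assuming the loss in `train_shared` is a per-element tensor (the `torch.nn.Linear` model here is a stand-in, not the ENAS code):

```python
import torch

model = torch.nn.Linear(4, 3)
x = torch.randn(2, 4)

# Non-scalar "loss" of shape (2, 3) reproduces the error.
loss = model(x).pow(2)
try:
    loss.backward()
except RuntimeError as e:
    print(e)  # grad can be implicitly created only for scalar outputs

# Usual fix: reduce the loss to a scalar before calling backward().
model.zero_grad()
loss = model(x).pow(2).mean()  # .sum() also works; loss is now 0-dim
loss.backward()
```

Alternatively, if a per-element backward is intended, you can pass an explicit gradient tensor of the same shape, e.g. `loss.backward(torch.ones_like(loss))`. In this ENAS case, checking `loss.shape` at trainer.py line 317 and adding a `.mean()`/`.sum()` reduction is the more likely fix.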
