Converting F.relu() to nn.ReLU() in PyTorch
I have been using PyTorch extensively in some of my recent projects, and one of the things that confused me was how to implement a hidden layer of Rectified Linear Units (ReLU) using the nn.ReLU() syntax. I was already using the functional F.relu() syntax and wanted to move toward a more object-oriented (OOP) approach. The following is a straightforward example of how to convert a model that uses F.relu() into one that uses nn.ReLU().
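Here is a minimal sketch of the two styles side by side. The layer sizes (784 → 128 → 10) and class names are hypothetical, chosen only for illustration; the key difference is that the functional version calls F.relu() inside forward(), while the module version registers an nn.ReLU() instance in __init__() and calls it like any other layer:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Functional style: the activation is a stateless call inside forward().
class NetFunctional(nn.Module):
    def __init__(self, in_features=784, hidden=128, out_features=10):
        super().__init__()
        self.fc1 = nn.Linear(in_features, hidden)
        self.fc2 = nn.Linear(hidden, out_features)

    def forward(self, x):
        x = F.relu(self.fc1(x))  # F.relu is a plain function, no module state
        return self.fc2(x)

# Module (OOP) style: the activation is registered as a layer in __init__().
class NetModule(nn.Module):
    def __init__(self, in_features=784, hidden=128, out_features=10):
        super().__init__()
        self.fc1 = nn.Linear(in_features, hidden)
        self.relu = nn.ReLU()  # nn.ReLU is a Module that applies F.relu
        self.fc2 = nn.Linear(hidden, out_features)

    def forward(self, x):
        x = self.relu(self.fc1(x))
        return self.fc2(x)

# Both versions compute the same function for the same weights and input.
x = torch.randn(4, 784)
print(NetFunctional()(x).shape)  # torch.Size([4, 10])
print(NetModule()(x).shape)      # torch.Size([4, 10])
```

Since ReLU has no learnable parameters, the two versions behave identically; the nn.ReLU() form simply makes the activation show up as a named submodule, which can be convenient when printing the model or composing layers with nn.Sequential.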