I'm writing a simple network for a simple quantum physics task. I know it has nothing to do with this course, but I was learning PyTorch here and I just don't know where else to ask.
I have a parameter a that I want in my net, but it doesn't update during the training process.
I tried initializing it with nn.Parameter(), nn.parameter.Parameter(), and self.register_parameter(), but none of those helped.
The training process looks somewhat like this:
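Since the NN definition isn't included in the post, here is a hedged sketch of how a learnable scalar a is usually registered; the architecture and the role of a below are assumptions, not the poster's actual module:

```python
import torch
from torch import nn

# Hypothetical module -- the real NN isn't shown in the post.
# A tensor wrapped in nn.Parameter and assigned as an attribute is
# registered automatically, so psi.parameters() includes it and the
# optimizer will update it *if* gradients actually reach it.
class NN(nn.Module):
    def __init__(self):
        super().__init__()
        self.a = nn.Parameter(torch.tensor(1.0))  # learnable scalar
        self.net = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))

    def forward(self, r):
        # 'a' must take part in the forward computation, otherwise it
        # can never receive a gradient during loss.backward()
        return r * torch.exp(-self.a * r) + self.net(r)

psi = NN()
print('a' in dict(psi.named_parameters()))  # True -> registered correctly
```

Registration alone is not enough: the parameter only updates if the loss is connected to it through the autograd graph.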
import torch
from torch import nn
from torch import tensor
import matplotlib.pyplot as plt
import scipy
from scipy.integrate import trapezoid, simpson

C = 1
h = 0.1
R = 40
r = torch.arange(h, R + h, h, dtype=torch.float).unsqueeze(dim=1)

def V(r=r, C=C):
    return C / r

def H(r, l, der, y):
    return -1 * der + l * (l + 1) / r**2 * y - V(r) * y

def Rayleigh(r, l, der, preds):
    r = r.squeeze()
    der = der.squeeze()
    preds = preds.squeeze()
    H_psi = H(r, l, der, preds)
    numerator = simpson(y=(preds * H_psi).detach().numpy(), x=r.detach().numpy())
    denominator = simpson(y=(preds * preds).detach().numpy(), x=r.detach().numpy())
    return tensor(numerator / denominator)

psi = NN()
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(params=psi.parameters(), lr=0.01)
epochs = 3000

for epoch in range(epochs):
    psi.train()
    r.requires_grad = True
    preds = psi(r)
    der_1 = torch.autograd.grad(outputs=preds.squeeze(), inputs=r,
                                grad_outputs=torch.ones_like(preds.squeeze()),
                                create_graph=True)[0]
    der_2 = torch.autograd.grad(outputs=der_1, inputs=r,
                                grad_outputs=torch.ones_like(der_1),
                                create_graph=True)[0]
    r.requires_grad = False
    loss_1 = loss_fn(Rayleigh(r, 0, der_2, preds), tensor([0.]))
    loss_2 = loss_fn(psi(tensor([0.])), tensor([0.]))
    loss = loss_1 + loss_2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if epoch % 1000 == 0:
        print(f'Epoch: {epoch}, Loss: {loss.item():.4f}, loss_1: {loss_1.item():.4f}, loss_2: {loss_2.item():.4f}')
Printing psi.a before and after training gives me the same result, which means psi.a doesn't update (but it should, as it's very important in terms of the physics).
Would highly appreciate any help.