
The Poisson Equation on a Rectangular Domain: Simulating the Finite Difference Method


The finite difference method is the core

Suppose the domain has already been partitioned into a grid:

$x_{i} = i\,\delta x,\quad i = 0,\ldots,M$

$y_{j} = j\,\delta y,\quad j = 0,\ldots,N$
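As a small illustration of this setup, here is a minimal sketch of building the grid nodes (the sizes M, N and the unit square are placeholder choices, and torch.meshgrid with indexing='ij' assumes a reasonably recent PyTorch version):

import torch

# Illustrative grid on [0,1] x [0,1]; M and N are placeholder sizes, not taken from the article
M, N = 40, 30
dx, dy = 1.0 / M, 1.0 / N

x = torch.arange(M + 1) * dx                  # x_i = i*dx, i = 0,...,M
y = torch.arange(N + 1) * dy                  # y_j = j*dy, j = 0,...,N
X, Y = torch.meshgrid(x, y, indexing='ij')    # node coordinates, each of shape (M+1, N+1)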

Consider the finite difference discretization of $-\Delta u = f$ at an interior node:

$$-\left(\frac{u_{i+1,j} - 2u_{i,j} + u_{i-1,j}}{\delta x^{2}} + \frac{u_{i,j+1} - 2u_{i,j} + u_{i,j-1}}{\delta y^{2}}\right) = f_{i,j}$$

Multiplying both sides by $\delta x\,\delta y$ gives

$$-\frac{\delta y}{\delta x}\left(u_{i+1,j} + u_{i-1,j}\right) + 2\left(\frac{\delta y}{\delta x} + \frac{\delta x}{\delta y}\right)u_{i,j} - \frac{\delta x}{\delta y}\left(u_{i,j+1} + u_{i,j-1}\right) = \delta x\,\delta y\, f_{i,j}$$
This process can be understood as follows: define the $3\times 3$ convolution kernel (rows correspond to the $x$ index $i$, columns to the $y$ index $j$)

$$\mathrm{ker} = \begin{pmatrix} 0 & -\dfrac{\delta y}{\delta x} & 0 \\[4pt] -\dfrac{\delta x}{\delta y} & 2\left(\dfrac{\delta y}{\delta x} + \dfrac{\delta x}{\delta y}\right) & -\dfrac{\delta x}{\delta y} \\[4pt] 0 & -\dfrac{\delta y}{\delta x} & 0 \end{pmatrix}$$
Arrange the grid values over the rectangle into an $(M+1)\times(N+1)$ matrix $U$, whose $(i,j)$ entry $u_{i,j}$ approximates $u(x_i, y_j)$. The scheme at all interior nodes can then be written as the (unpadded) discrete convolution

$$U * \mathrm{ker} = \delta x\,\delta y\, F,$$

where $F$ is the $(M-1)\times(N-1)$ matrix of values $f_{i,j}$ at the interior nodes; look up the rules of discrete convolution if they are unfamiliar.
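To check that this convolution form reproduces the difference scheme, here is a minimal sketch (the grid sizes and the test function u(x, y) = x² + y², for which f = −Δu = −4, are assumptions made only for this check):

import torch
import torch.nn.functional as F

M, N = 8, 6
dx, dy = 1.0 / M, 1.0 / N
x = torch.arange(M + 1, dtype=torch.float64) * dx
y = torch.arange(N + 1, dtype=torch.float64) * dy
U = (x[:, None]**2 + y[None, :]**2).view(1, 1, M + 1, N + 1)   # grid values of u(x,y) = x^2 + y^2

r = dy / dx                                    # same ratio as in the scheme above
ker = torch.zeros(1, 1, 3, 3, dtype=torch.float64)
ker[0, 0, 0, 1] = ker[0, 0, 2, 1] = -r         # couples u_{i-1,j} and u_{i+1,j}
ker[0, 0, 1, 0] = ker[0, 0, 1, 2] = -1.0 / r   # couples u_{i,j-1} and u_{i,j+1}
ker[0, 0, 1, 1] = 2.0 * (r + 1.0 / r)          # coefficient of u_{i,j}

lhs = F.conv2d(U, ker)                         # shape (1, 1, M-1, N-1): interior nodes only
rhs = torch.full_like(lhs, -4.0 * dx * dy)     # dx*dy*f with f = -Laplacian(u) = -4
print(torch.allclose(lhs, rhs))                # True: the five-point scheme is exact for quadratics

Since this kernel is symmetric, it does not matter that F.conv2d actually computes a cross-correlation rather than a true convolution.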
Simulation with a neural network
As before, we take the output of a neural network as the approximate solution. The difference is that this time we do not use the Ritz or Galerkin method to lower the order of the derivatives; instead, the second-order partial derivatives are approximated directly by the finite difference formulas above.

The loss function is

$$\mathrm{loss} = \mathrm{MSE}\left(U * \mathrm{ker} - \delta x\,\delta y\, F\right),$$

where $U$ now holds the network's predicted values at the grid nodes.
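In the code below the Dirichlet boundary condition is built into the prediction itself through the ansatz u = netg + L·netf, where netg is trained to match the boundary data and L is a length factor that vanishes on the boundary of the rectangle, so netf only has to fit the interior equation. A minimal sketch of that construction (the two small networks and the factor x(1−x)y(1−y) on the unit square are illustrative stand-ins; the LEN class in the code uses a different factor controlled by an exponent mu):

import torch
import torch.nn as nn

def length_factor(X):
    # illustrative length factor on [0,1]^2: zero on the boundary, positive inside
    return (X[:, 0] * (1 - X[:, 0]) * X[:, 1] * (1 - X[:, 1])).view(-1, 1)

netg = nn.Sequential(nn.Linear(2, 10), nn.Tanh(), nn.Linear(10, 1))   # fits the boundary data
netf = nn.Sequential(nn.Linear(2, 10), nn.Tanh(), nn.Linear(10, 1))   # fits the interior residual

def u_pred(X):
    # on the boundary length_factor(X) = 0, so u_pred(X) = netg(X) there by construction
    return netg(X) + length_factor(X) * netf(X)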
The full code follows.

import numpy as np
import matplotlib.pyplot as plt
import torch
import time
import torch.nn as nn
import torch.nn.functional as F

def UU(X, order, prob):
    # Exact solution u (and its partial derivatives of the given order) for three test problems
    if prob == 1:
        temp = 10*(X[:,0]+X[:,1])**2 + (X[:,0]-X[:,1])**2 + 0.5
        if order[0]==0 and order[1]==0:
            return torch.log(temp)
        if order[0]==1 and order[1]==0:
            return temp**(-1) * (20*(X[:,0]+X[:,1]) + 2*(X[:,0]-X[:,1]))
        if order[0]==0 and order[1]==1:
            return temp**(-1) * (20*(X[:,0]+X[:,1]) - 2*(X[:,0]-X[:,1]))
        if order[0]==2 and order[1]==0:
            return - temp**(-2) * (20*(X[:,0]+X[:,1])+2*(X[:,0]-X[:,1])) ** 2 \
                   + temp**(-1) * (22)
        if order[0]==1 and order[1]==1:
            return - temp**(-2) * (20*(X[:,0]+X[:,1])+2*(X[:,0]-X[:,1])) \
                   * (20*(X[:,0]+X[:,1])-2*(X[:,0]-X[:,1])) \
                   + temp**(-1) * (18)
        if order[0]==0 and order[1]==2:
            return - temp**(-2) * (20*(X[:,0]+X[:,1])-2*(X[:,0]-X[:,1])) ** 2 \
                   + temp**(-1) * (22)
    if prob == 2:
        temp1 = X[:,0]*X[:,0] - X[:,1]*X[:,1]
        temp2 = X[:,0]*X[:,0] + X[:,1]*X[:,1] + 0.1
        if order[0]==0 and order[1]==0:
            return temp1 * temp2**(-1)
        if order[0]==1 and order[1]==0:
            return (2*X[:,0]) * temp2**(-1) + \
                   temp1 * (-1)*temp2**(-2) * (2*X[:,0])
        if order[0]==0 and order[1]==1:
            return (-2*X[:,1]) * temp2**(-1) + \
                   temp1 * (-1)*temp2**(-2) * (2*X[:,1])
        if order[0]==2 and order[1]==0:
            return (2) * temp2**(-1) + \
                   2 * (2*X[:,0]) * (-1)*temp2**(-2) * (2*X[:,0]) + \
                   temp1 * (2)*temp2**(-3) * (2*X[:,0])**2 + \
                   temp1 * (-1)*temp2**(-2) * (2)
        if order[0]==1 and order[1]==1:
            return (2*X[:,0]) * (-1)*temp2**(-2) * (2*X[:,1]) + \
                   (-2*X[:,1]) * (-1)*temp2**(-2) * (2*X[:,0]) + \
                   temp1 * (2)*temp2**(-3) * (2*X[:,0]) * (2*X[:,1])
        if order[0]==0 and order[1]==2:
            return (-2) * temp2**(-1) + \
                   2 * (-2*X[:,1]) * (-1)*temp2**(-2) * (2*X[:,1]) + \
                   temp1 * (2)*temp2**(-3) * (2*X[:,1])**2 + \
                   temp1 * (-1)*temp2**(-2) * (2)
    if prob == 3:
        temp = torch.exp(-4*X[:,1]*X[:,1])
        if order[0]==0 and order[1]==0:
            ind = (X[:,0]<=0).float()
            return ind * ((X[:,0]+1)**4-1) * temp + \
                   (1-ind) * (-(-X[:,0]+1)**4+1) * temp
        if order[0]==1 and order[1]==0:
            ind = (X[:,0]<=0).float()
            return ind * (4*(X[:,0]+1)**3) * temp + \
                   (1-ind) * (4*(-X[:,0]+1)**3) * temp
        if order[0]==0 and order[1]==1:
            ind = (X[:,0]<=0).float()
            return ind * ((X[:,0]+1)**4-1) * (temp*(-8*X[:,1])) + \
                   (1-ind) * (-(-X[:,0]+1)**4+1) * (temp*(-8*X[:,1]))
        if order[0]==2 and order[1]==0:
            ind = (X[:,0]<=0).float()
            return ind * (12*(X[:,0]+1)**2) * temp + \
                   (1-ind) * (-12*(-X[:,0]+1)**2) * temp
        if order[0]==1 and order[1]==1:
            ind = (X[:,0]<=0).float()
            return ind * (4*(X[:,0]+1)**3) * (temp*(-8*X[:,1])) + \
                   (1-ind) * (4*(-X[:,0]+1)**3) * (temp*(-8*X[:,1]))
        if order[0]==0 and order[1]==2:
            ind = (X[:,0]<=0).float()
            return ind * ((X[:,0]+1)**4-1) * (temp*(64*X[:,1]*X[:,1]-8)) + \
                   (1-ind) * (-(-X[:,0]+1)**4+1) * (temp*(64*X[:,1]*X[:,1]-8))

def FF(X, prob):
    # Right-hand side f = -Laplacian(u) of the exact solution
    return -UU(X,[2,0],prob) - UU(X,[0,2],prob)

Next comes the collection of sample points. Note that the right-hand side has to be arranged in advance into a tensor of shape (1, 1, M-1, N-1), using the same i-outer/j-inner ordering as the point list; a short check of this ordering is sketched below, followed by the sampling code.
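Here is a minimal check of that ordering assumption (M, N and the fake values are placeholders):

import torch

M, N = 5, 4
# fake interior values laid out by the same double loop: i outer, j inner
vals = torch.tensor([[10.0 * i + j for j in range(1, N)] for i in range(1, M)])
flat = vals.reshape(-1)                    # flat vector in loop order, length (M-1)*(N-1)
grid = flat.view(1, 1, M - 1, N - 1)       # the layout expected by F.conv2d
print(torch.equal(grid[0, 0], vals))       # True: entry (i-1, j-1) corresponds to node (x_i, y_j)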

class INSET():  # interior grid, right-hand side, and difference kernel
    def __init__(self, bound, nx, prob):
        self.dim = 2
        #self.area = (bound[0,1] - bound[0,0])*(bound[1,1] - bound[1,0])
        self.hx = [(bound[0,1] - bound[0,0])/nx[0], (bound[1,1] - bound[1,0])/nx[1]]
        self.nx = nx
        self.bound = bound
        self.prob = prob
        self.size = (self.nx[0] + 1)*(self.nx[1] + 1)
        self.X = torch.zeros(self.size, 2)
        m = 0
        for i in range(self.nx[0] + 1):
            for j in range(self.nx[1] + 1):
                self.X[m,0] = self.bound[0,0] + i*self.hx[0]
                self.X[m,1] = self.bound[1,0] + j*self.hx[1]
                m = m + 1
        #plt.scatter(self.X[:,0],self.X[:,1])
        self.u_acc = UU(self.X,[0,0],prob).reshape(self.size,1)
        # collect the interior nodes
        self.IS = (self.nx[0] - 1)*(self.nx[1] - 1)
        self.IX = torch.zeros(self.IS, self.dim)
        m = 0
        for i in range(1, self.nx[0]):
            for j in range(1, self.nx[1]):
                self.IX[m,0] = self.bound[0,0] + i*self.hx[0]
                self.IX[m,1] = self.bound[1,0] + j*self.hx[1]
                m = m + 1
        # right-hand side dx*dy*f at the interior nodes, arranged as (1, 1, M-1, N-1)
        self.right = FF(self.IX,self.prob).view(1,1,self.nx[0] - 1,self.nx[1] - 1)*self.hx[0]*self.hx[1]
        # define the convolution kernel
        self.r = self.hx[1]/self.hx[0]
        self.fi = torch.zeros(1,1,3,3)
        self.fi[0,0,0,1] = - self.r
        self.fi[0,0,1,0], self.fi[0,0,1,1], self.fi[0,0,1,2] = - 1/self.r, 2*(self.r + 1/self.r), - 1/self.r
        self.fi[0,0,2,1] = - self.r

class BDSET():  # boundary points and their values
    def __init__(self, bound, nx, prob):
        self.dim = 2
        #self.area = (bound[0,1] - bound[0,0])*(bound[1,1] - bound[1,0])
        self.hx = [(bound[0,1] - bound[0,0])/nx[0], (bound[1,1] - bound[1,0])/nx[1]]
        self.size = 2*(nx[0] + nx[1])
        self.X = torch.zeros(self.size, self.dim)  # store the boundary points
        m = 0
        for i in range(nx[0]):
            self.X[m,0] = bound[0,0] + (i + 0.5)*self.hx[0]
            self.X[m,1] = bound[1,0]
            m = m + 1
        for j in range(nx[1]):
            self.X[m,0] = bound[0,1]
            self.X[m,1] = bound[1,0] + (j + 0.5)*self.hx[1]
            m = m + 1
        for i in range(nx[0]):
            self.X[m,0] = bound[0,0] + (i + 0.5)*self.hx[0]
            self.X[m,1] = bound[1,1]
            m = m + 1
        for j in range(nx[1]):
            self.X[m,0] = bound[0,0]
            self.X[m,1] = bound[1,0] + (j + 0.5)*self.hx[1]
            m = m + 1
        #plt.scatter(self.X[:,0],self.X[:,1])
        self.u_acc = UU(self.X,[0,0],prob).view(-1,1)  # exact solution at the boundary points

class TESET():  # test grid
    def __init__(self, bound, nx, prob):
        self.bound = bound
        self.nx = nx
        self.hx = [(self.bound[0,1]-self.bound[0,0])/self.nx[0], (self.bound[1,1]-self.bound[1,0])/self.nx[1]]
        self.prob = prob
        self.size = (self.nx[0] + 1)*(self.nx[1] + 1)
        self.X = torch.zeros(self.size, 2)
        m = 0
        for i in range(self.nx[0] + 1):
            for j in range(self.nx[1] + 1):
                self.X[m,0] = self.bound[0,0] + i*self.hx[0]
                self.X[m,1] = self.bound[1,0] + j*self.hx[1]
                m = m + 1
        #plt.scatter(self.X[:,0],self.X[:,1])
        self.u_acc = UU(self.X,[0,0],prob).reshape(self.size,1)

class LEN():  # length factor: vanishes on the boundary of the rectangle
    def __init__(self, bound, mu):
        self.mu = mu
        self.bound = bound
    def forward(self, X):
        L = 1.0
        for i in range(2):
            L = L*(1 - (1 - (X[:,i] - self.bound[i,0]))**self.mu)
            L = L*(1 - (1 - (self.bound[i,1] - X[:,i]))**self.mu)
        return L.view(-1,1)
#print(lenth.forward(inset.X))

Pay attention to how the loss function is defined here: in Lossf the predicted values inset.G + inset.L * netf(inset.X) on the full grid are reshaped to (1, 1, M+1, N+1), convolved with the kernel inset.fi (stride 1, no padding), and the resulting (1, 1, M-1, N-1) tensor is compared with the precomputed right-hand side inset.right by mean squared error.

np.random.seed(1234)
torch.manual_seed(1234)

class NETG(nn.Module):  # u = netf*lengthfactor + netg; this is netg, which fits the boundary data
    def __init__(self):
        super(NETG,self).__init__()
        self.fc1 = torch.nn.Linear(2,10)
        self.fc2 = torch.nn.Linear(10,10)
        self.fc3 = torch.nn.Linear(10,1)
    def forward(self,x):
        out = torch.sin(self.fc1(x)) + x@torch.eye(x.size(-1),10)
        out = torch.sin(self.fc2(out)) + out@torch.eye(out.size(-1),10)
        # x.size(-1) is the last dimension, i.e. 2 for the [m,2] sample tensors of BDSET, TESET, and INSET
        return self.fc3(out)

class SIN(nn.Module):  # sin(x)**e activation used inside the Res blocks of netf
    def __init__(self,order):
        super(SIN,self).__init__()
        self.e = order
    def forward(self,x):
        return torch.sin(x)**self.e

class Res(nn.Module):
    def __init__(self,input_size,output_size):
        super(Res,self).__init__()
        self.model = nn.Sequential(
            nn.Linear(input_size,output_size),
            SIN(1),
            nn.Linear(output_size,output_size),
            SIN(1)
        )
        self.input = input_size
        self.output = output_size
    def forward(self,x):
        out = self.model(x) + x@torch.eye(x.size(-1),self.output)  # emulate a residual connection
        return out

class NETF(nn.Module):  # u = netg + lengthfactor*netf; this is netf, which approximates the interior values
    def __init__(self):
        super(NETF,self).__init__()
        self.model = nn.Sequential(
            Res(2,10),
            Res(10,10)
        )
        self.fc = torch.nn.Linear(10,1)
    def forward(self,x):
        out = self.model(x)
        return self.fc(out)

def pred(netg,netf,lenth,X):
    return netg.forward(X) + netf.forward(X)*lenth.forward(X)

def error(u_pred, u_acc):
    return (((u_pred-u_acc)**2).sum() / (u_acc**2).sum()) ** (0.5)
#------------------------
def Lossg(netg,bdset):  # fit the Dirichlet boundary data: a plain boundary loss
    ub = netg.forward(bdset.X)
    return ((ub - bdset.u_acc)**2).mean()

def Traing(netg, bdset, optimg, epochg):
    print('train neural network g')
    lossg = Lossg(netg,bdset)
    lossbest = lossg
    print('epoch:%d,lossg:%.3e'%(0,lossg.item()))
    torch.save(netg.state_dict(),'best_netg.pkl')
    cycle = 100
    for i in range(epochg):
        st = time.time()
        for j in range(cycle):
            optimg.zero_grad()
            lossg = Lossg(netg,bdset)
            lossg.backward()
            optimg.step()
        if lossg < lossbest:
            lossbest = lossg
            torch.save(netg.state_dict(),'best_netg.pkl')
        ela = time.time() - st
        print('epoch:%d,lossg:%.3e,time:%.2f'%((i + 1)*cycle,lossg.item(),ela))
#----------------------
def Lossf(netf,inset):
    insetF = netf.forward(inset.X)
    # inset.G is netg evaluated on inset.X, precomputed before training to speed up the iterations
    u_in = inset.G + inset.L * insetF
    u = u_in.view(1,1,inset.nx[0] + 1,inset.nx[1] + 1)
    ux = F.conv2d(u,inset.fi,stride = [1,1])
    return F.mse_loss(ux,inset.right)

def Trainf(netf, inset, optimf, epochf):
    print('train neural network f')
    ERROR,BUZHOU = [],[]
    lossf = Lossf(netf,inset)
    lossoptimal = lossf
    trainerror = error(pred(netg,netf,lenth,inset.X),inset.u_acc)
    print('epoch: %d, loss: %.3e, trainerror: %.3e'
          %(0, lossf.item(), trainerror.item()))
    torch.save(netf.state_dict(),'best_netf.pkl')
    cycle = 100
    for i in range(epochf):
        st = time.time()
        for j in range(cycle):
            optimf.zero_grad()
            lossf = Lossf(netf,inset)
            lossf.backward()
            optimf.step()
        if lossf < lossoptimal:
            lossoptimal = lossf
            torch.save(netf.state_dict(),'best_netf.pkl')
        ela = time.time() - st
        trainerror = error(pred(netg,netf,lenth,inset.X),inset.u_acc)
        ERROR.append(trainerror)
        BUZHOU.append((i + 1)*cycle)
        print('epoch:%d,lossf:%.3e,train error:%.3e,time:%.2f'
              %((i + 1)*cycle,lossf.item(),trainerror,ela))
    return ERROR,BUZHOU

def Train(netg, netf, lenth, inset, bdset, optimg, optimf, epochg, epochf):
    # Train neural network g
    Traing(netg, bdset, optimg, epochg)
    netg.load_state_dict(torch.load('best_netg.pkl'))
    # Precompute the length factor and the netg values on the interior grid (detached, reused during training)
    inset.L = lenth.forward(inset.X)
    inset.L = inset.L.data
    inset.G = netg.forward(inset.X)
    inset.G = inset.G.data
    # Train neural network f
    ERROR,BUZHOU = Trainf(netf, inset, optimf, epochf)
    return ERROR,BUZHOU
bound = torch.tensor([[-1,1],[-1,1]]).float()
nx = [40,30]
nx_te = [60,40]
prob = 1
mu = 3
lr = 1e-2
inset = INSET(bound,nx,prob)
bdset = BDSET(bound,nx,prob)
teset = TESET(bound,nx_te,prob)
lenth = LEN(bound,mu)
netg = NETG()
netf = NETF()
optimg = torch.optim.Adam(netg.parameters(), lr=lr)
optimf = torch.optim.Adam(netf.parameters(), lr=lr)
epochg = 6
epochf = 10
start_time = time.time()
ERROR,BUZHOU = Train(netg, netf, lenth, inset, bdset, optimg, optimf, epochg, epochf)
#print(ERROR,BUZHOU)
elapsed = time.time() - start_time
print('Train time: %.2f' %(elapsed))

netg.load_state_dict(torch.load('best_netg.pkl'))
netf.load_state_dict(torch.load('best_netf.pkl'))
te_U = pred(netg, netf, lenth, teset.X)
testerror = error(te_U, teset.u_acc)
print('testerror = %.3e\n' %(testerror.item()))
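To visualize the trained solution on the test grid, a short plotting sketch can be appended to the script above (it reuses te_U, nx_te, and prob from that script; the colormap and layout are arbitrary choices):

import matplotlib.pyplot as plt

# reshape the test predictions back onto the (nx_te[0]+1) x (nx_te[1]+1) grid (i outer, j inner)
U_grid = te_U.detach().view(nx_te[0] + 1, nx_te[1] + 1)
plt.imshow(U_grid.T, origin='lower', extent=[-1, 1, -1, 1], cmap='viridis')
plt.colorbar(label='predicted u')
plt.title('Predicted solution, prob = %d' % prob)
plt.show()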

Original article: https://blog.csdn.net/forrestguang/article/details/109052122
