
Applications: Neural Networks for Predictive Modeling

Date: 2023-09-13 10:46:54


Contents

1. Grain-Boundary Prediction for High-Entropy Alloys
   1. Importing the Data
   2. Building the Model
   3. Sequential and Functional-API Models
   4. Visualization
   5. Training Curves
2. Predicting Adsorption Energy from the DOS
   1. Importing the Data
   2. Model Preparation
   3. Building the Model
   4. Visualizing the Results

1. Grain-Boundary Prediction for High-Entropy Alloys

1. Importing the Data

1. Reading the data

```python
import pandas as pd

# header=1: read from the second row on; iloc[] keeps only the useful rows/columns
df = pd.read_csv('HEA.csv', header=1).iloc[:-8, 6:18]
# rename the column labels
df.columns = ['T','Co','Ni','Cr','Fe','Mn','Co ad','Ni ad','Cr ad','Fe ad','Mn ad','Dis']
print(df)
```

2. Preprocessing

```python
x = df.iloc[:, :-6].values   # extract the feature matrix
y = df['Cr ad'].values
y = y.astype('float64')      # convert strings to floats
```

  Standardize the data using the mean and standard deviation:

$$x \longrightarrow \frac{x-\mu}{\sigma}$$

```python
from sklearn.preprocessing import StandardScaler

ss_x = StandardScaler()        # build the scaler
x_s = ss_x.fit_transform(x)    # standardize the features
ss_y = StandardScaler()
# reshape the 1-D target to a column, scale it, then flatten back to 1-D
y_s = ss_y.fit_transform(y.reshape(-1, 1)).reshape(-1)
print(x_s, y_s)
```

3. Splitting the data

```python
from sklearn.model_selection import train_test_split

# split into training (0.70), test (0.15) and validation (0.15) sets
x_train, x_test, y_train, y_test = train_test_split(x_s, y_s, test_size=0.15)
x_train, x_validation, y_train, y_validation = train_test_split(x_train, y_train, test_size=0.1765)
print(x_train.shape, x_validation.shape, x_test.shape)
```
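The second split uses `test_size=0.1765` because it operates on the 85% of the data left after the first split. A quick arithmetic check (plain Python, independent of the code above) confirms this recovers the intended 70/15/15 proportions:

```python
total = 1.0
test = total * 0.15              # 15% held out first
remaining = total - test         # 85% left for train + validation
validation = remaining * 0.1765  # second split takes 17.65% of the remainder
train = remaining - validation

# validation ends up at ~15% of the full data, train at ~70%
print(round(validation, 4), round(train, 4))
```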

2. Building the Model

1. Keras model-building approaches

2. Comparing the three approaches

  The three approaches (Sequential, functional API, and model subclassing) differ slightly in their low-level design, but the differences are usually negligible.

3. Sequential and Functional-API Models

1. Building a Sequential model

```python
from tensorflow import keras

model_seq = keras.Sequential([                   # build the network
    keras.Input(shape=(6,)),
    keras.layers.Dense(20, activation='relu'),   # ReLU activation
    keras.layers.Dense(1)
])
print(model_seq.summary())
```

2. Building a functional-API model

```python
layer_input = keras.Input(shape=(6,))                                  # input layer
layer_hidden = keras.layers.Dense(20, activation='relu')(layer_input)  # attached to the input layer
layer_output = keras.layers.Dense(1)(layer_hidden)                     # attached to the hidden layer
model = keras.Model(inputs=layer_input, outputs=layer_output)          # build the functional model
print(model.summary())
```

3. Compiling and training

```python
adam = keras.optimizers.Adam(learning_rate=0.001)   # Adam optimizer, learning rate 0.001
model.compile(loss='mse', optimizer=adam, metrics=['mse'])
history = keras.callbacks.History()                 # History callback
model.fit(x_train, y_train,
          validation_data=(x_validation, y_validation),  # validation set supplied explicitly
          epochs=500, callbacks=[history], verbose=0)    # 500 epochs, silent
```

4. Evaluating the model

```python
from sklearn.metrics import r2_score
import numpy as np

def model_predict(model, x_data, y_data, ss):
    """Predict with the model, then invert the standardization."""
    result = model.predict(x_data)
    y_pred = ss.inverse_transform(result).reshape(-1)
    y_real = ss.inverse_transform(y_data.reshape(-1, 1)).reshape(-1)
    return y_real, y_pred

y_real_train, y_pred_train = model_predict(model, x_train, y_train, ss_y)
y_real_validation, y_pred_validation = model_predict(model, x_validation, y_validation, ss_y)
y_real_test, y_pred_test = model_predict(model, x_test, y_test, ss_y)

def compute_mae_mse_rmse(real, pred):
    """Compute MAE, MSE, RMSE and R^2."""
    error = [real[j] - pred[j] for j in range(len(real))]  # residuals
    error_squared = [val * val for val in error]           # squared errors
    error_abs = [abs(val) for val in error]                # absolute errors
    mae = sum(error_abs) / len(error_abs)
    mse = sum(error_squared) / len(error_squared)
    rmse = np.sqrt(mse)        # RMSE is the square root of the MSE
    r2 = r2_score(real, pred)  # the closer R^2 is to 1, the better the model
    return mae, mse, rmse, r2

error_val_train = compute_mae_mse_rmse(y_real_train * 100, y_pred_train * 100)
error_val_validation = compute_mae_mse_rmse(y_real_validation * 100, y_pred_validation * 100)
error_val_test = compute_mae_mse_rmse(y_real_test * 100, y_pred_test * 100)
```

  Training results:

```python
print('train mae: %.3f, mse: %.3f, rmse: %.3f, r2: %.4f' % error_val_train)
print('validation mae: %.3f, mse: %.3f, rmse: %.3f, r2: %.4f' % error_val_validation)
print('test mae: %.3f, mse: %.3f, rmse: %.3f, r2: %.4f' % error_val_test)
```
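The hand-rolled loop above can be cross-checked against scikit-learn's built-in metrics (a sketch with made-up numbers; `mean_absolute_error` and `mean_squared_error` live in `sklearn.metrics` alongside `r2_score`):

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

real = np.array([1.0, 2.0, 3.0, 4.0])
pred = np.array([1.1, 1.9, 3.2, 3.8])

mae = mean_absolute_error(real, pred)   # mean of |real - pred|
mse = mean_squared_error(real, pred)    # mean of (real - pred)^2
rmse = np.sqrt(mse)                     # square root of the MSE
r2 = r2_score(real, pred)               # 1 - SS_res / SS_tot
print(mae, mse, rmse, r2)
```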

4. Visualization

1. Parity scatter plot

```python
import matplotlib.pyplot as plt

plt.figure(figsize=(6, 6))   # canvas size for the parity plot
plt.scatter(y_real_train * 100, y_pred_train * 100, s=100,
            color='goldenrod', label='Training Set')
plt.scatter(y_real_validation * 100, y_pred_validation * 100, s=100,
            color='lightblue', label='Validation Set')
plt.scatter(y_real_test * 100, y_pred_test * 100, s=100,
            color='purple', label='Testing Set')
plt.plot([0, 65], [0, 65], color='grey', linestyle='--', linewidth=3)  # diagonal reference line
plt.legend()
plt.xlim(0, 65)
plt.ylim(0, 65)
plt.xlabel('MC/MD')
plt.ylabel('ANN')
plt.show()
```

2. Predicted phase diagram

```python
x_pred = []                              # build the array of points to predict
for T in np.arange(1000, 1310, 10):      # temperature T
    for i in np.arange(0.0, 0.4, 0.02):  # Mn concentration
        j = 0.4 - i                      # Cr concentration
        xx = [T, 0.2, 0.2, j, 0.2, i]    # one sample: [T, Co, Ni, Cr, Fe, Mn]
        x_pred.append(xx)
x_pred = np.array(x_pred)
x_pred_nor = ss_x.transform(x_pred)      # standardize with the fitted scaler
y_pred_nor = model.predict(x_pred_nor)   # model prediction
y_pred = ss_y.inverse_transform(y_pred_nor).reshape(-1)  # invert the standardization

# draw the phase diagram
x0, y0 = np.meshgrid(np.arange(0.0, 0.4, 0.02), np.arange(1000, 1310, 10))  # coordinate grid
plt.figure(figsize=(6, 5))
levels = np.linspace(-10, 50, 101)  # -10..50 is the color range; 101 sets its granularity
plt.contourf(x0, y0, y_pred.reshape(x0.shape) * 100, cmap='bwr', levels=levels)  # filled contours
plt.xticks(np.arange(0.05, 0.36, 0.1))
plt.yticks(np.arange(1000, 1301, 100))
plt.xlim(0.05, 0.35)
plt.ylim(1000, 1300)
plt.xlabel('X Mn')
plt.ylabel('T (K)')
cb = plt.colorbar()                    # keep a handle on the color bar
cb.set_ticks([0, 10, 20, 30, 40, 50])  # color-bar tick marks
plt.show()
```

5. Training Curves

```python
# plot the training curves
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='validation')
plt.legend()
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.show()
```

1. Underfitting

  The model cannot reach a sufficiently low error even on the training set.

2. Overfitting

  The validation loss decreases to a point and then begins to rise again.

3. A good fit

  The model performs well on both the training and validation sets.

  The training and validation losses decrease together and level off near the same value.

2. Predicting Adsorption Energy from the DOS

1. Importing the Data

1. Serialization and deserialization

```python
# binary serialization
import pickle

list_1 = ['Fe', 'Co', 'Ni']
with open('list_1.pkl', 'wb') as f:
    pickle.dump(list_1, f)
```

```python
# deserialization
with open('list_1.pkl', 'rb') as f:
    list_2 = pickle.load(f)
print(list_2)
```

2. Loading the data

```python
import pickle

with open('OH_data', mode='rb') as f:  # deserialize the data
    surface_dos = pickle.load(f)       # densities of states (DOS)
    targets = pickle.load(f)           # adsorption energies
```
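The two consecutive `pickle.load` calls work because a single pickle file can hold several objects written back to back; each call consumes the next one in order. A minimal, self-contained sketch (the file path and data here are made up, not the real `OH_data`):

```python
import os
import pickle
import tempfile

path = os.path.join(tempfile.gettempdir(), 'demo_stream.pkl')

# write two objects back to back into one file
with open(path, 'wb') as f:
    pickle.dump([1.0, 2.0, 3.0], f)  # first object (stand-in for the DOS)
    pickle.dump(-0.5, f)             # second object (stand-in for the targets)

# read them back in the same order
with open(path, 'rb') as f:
    dos = pickle.load(f)
    energy = pickle.load(f)

print(dos, energy)
os.remove(path)
```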

3. Inspecting a sample

```python
import matplotlib.pyplot as plt

for i in range(1, 10):  # plot a few sample DOS channels
    plt.plot(surface_dos[0, :-500, 0], surface_dos[0, :-500, i])
plt.show()
```

4. Preprocessing

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

surface_dos = surface_dos[:, :2000, 1:28]     # crop to the useful window
surface_dos[:, 1800:2000, 0:27] = 0           # zero out the tail
surface_dos = surface_dos.astype(np.float32)  # convert the dtype

ss_x = StandardScaler()  # build the scaler
# flatten to 2-D, standardize, then restore the original shape
x_s = ss_x.fit_transform(surface_dos.reshape(-1, surface_dos.shape[2])).reshape(surface_dos.shape)
y = targets
x_train, x_test, y_train, y_test = train_test_split(x_s, y, test_size=0.2)  # 80/20 split
```

Standardization, StandardScaler():

$$x_{s}=\frac{x-\mu}{\sigma}$$

Normalization, MinMaxScaler():

$$x_{n}=\frac{x-x_{\min}}{x_{\max}-x_{\min}}$$
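Both transforms are one-liners in numpy (a sketch with made-up data; per feature, this matches what the two scikit-learn scalers compute):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

# standardization: zero mean, unit (population) standard deviation
x_std = (x - x.mean()) / x.std()

# min-max normalization: rescale into [0, 1]
x_norm = (x - x.min()) / (x.max() - x.min())

print(x_std.mean(), x_std.std())   # ~0 and 1
print(x_norm.min(), x_norm.max())  # 0.0 and 1.0
```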

2. Model Preparation

1. Pooling and convolution

Pooling (average pooling)

AveragePooling1D(pool_size,strides,padding)

pool_size: size of the pooling window

strides: step size; sets the ratio of input length to output length

padding: controls the output size, 'valid' or 'same'

$$\text{valid}:\ \left\lceil\frac{\mathrm{shape}(input)-\mathrm{shape}(pool)+1}{strides}\right\rceil$$

$$\text{same}:\ \left\lceil\frac{\mathrm{shape}(input)}{strides}\right\rceil$$
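These output lengths can be checked with a few lines of plain Python (a sketch using `math.ceil` for the rounding; e.g. the featurizer below pools a length-2000 input with `pool_size=4, strides=4, padding='same'` down to 500):

```python
import math

def pooled_length(n, pool_size, strides, padding):
    """Output length of a 1-D pooling layer for the two padding modes."""
    if padding == 'same':
        return math.ceil(n / strides)
    elif padding == 'valid':
        return math.ceil((n - pool_size + 1) / strides)
    raise ValueError(padding)

print(pooled_length(2000, 4, 4, 'same'))    # 500
print(pooled_length(2000, 4, 4, 'valid'))   # 500 as well: ceil(1997 / 4)
```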

  Pooling layers have no parameters: they shrink the data while preserving its features, which reduces the computational cost and helps prevent overfitting.

Convolution

Conv1D(filters,kernel_size,activation,padding,strides)

filters: number of convolution kernels

kernel_size: length of the convolution window

activation: activation function

padding: controls the output size; 'valid' = no padding, 'same' = zero-padding

strides: convolution step size

    The convolution kernel

    Kernels are initialized with the Glorot normal initializer, i.e. a Gaussian initialization:

$$N\left(0,\sqrt{\frac{2}{n_{in}+n_{out}}}\right)\qquad \mu=0,\quad \sigma=\sqrt{\frac{2}{n_{in}+n_{out}}}$$

    After the kernel is applied, the activation function adjusts the result.
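As a quick worked example of the initializer's spread (the fan sizes here are hypothetical):

```python
import math

def glorot_normal_std(n_in, n_out):
    """Standard deviation of the Glorot (Xavier) normal initializer."""
    return math.sqrt(2.0 / (n_in + n_out))

# 50 inputs and 50 outputs -> sigma = sqrt(2/100) ~= 0.1414
print(glorot_normal_std(50, 50))
```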

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, BatchNormalization, Dropout, Dense
from tensorflow.keras.layers import AveragePooling1D, Concatenate, Conv1D, Flatten

channels = 9  # number of channels

def dos_featurizer(channels):
    """Featurize the DOS to reduce the computational cost."""
    input_dos = Input(shape=(2000, channels))  # input layer
    # average pooling at three window sizes; 'same' padding pads automatically
    x1 = AveragePooling1D(pool_size=4, strides=4, padding='same')(input_dos)
    x2 = AveragePooling1D(pool_size=25, strides=4, padding='same')(input_dos)
    x3 = AveragePooling1D(pool_size=200, strides=4, padding='same')(input_dos)
    x = Concatenate(axis=-1)([x1, x2, x3])  # axis=-1: concatenate along the last axis
    x = Conv1D(50, 20, activation='relu', padding='same', strides=2)(x)  # 50 kernels, window 20, ReLU
    x = BatchNormalization()(x)             # batch normalization
    x = Conv1D(75, 3, activation='relu', padding='same', strides=2)(x)
    x = AveragePooling1D(pool_size=3, strides=2, padding='same')(x)
    x = Conv1D(100, 3, activation='relu', padding='same', strides=2)(x)
    x = AveragePooling1D(pool_size=3, strides=2, padding='same')(x)
    x = Conv1D(125, 3, activation='relu', padding='same', strides=2)(x)
    x = AveragePooling1D(pool_size=3, strides=2, padding='same')(x)
    x = Conv1D(150, 3, activation='relu', padding='same', strides=1)(x)
    shared_model = Model(input_dos, x)      # build the shared model
    return shared_model
```

2. The i.i.d. assumption (independent and identically distributed, IID)

  The assumption behind supervised training: the training data and the test data are drawn from the same distribution.

  It is what guarantees that a model fitted on the training data will perform well on the test data.

3. Shift

  Dataset shift: the distributions of the training data and the data to be predicted differ.

  Covariate shift: a special case of dataset shift in which the training data and the data to be predicted share the same conditional distribution but differ in their marginal distributions.

4. Internal Covariate Shift (ICS)

  The phenomenon in deep networks where changes in the parameters during training shift the distribution of each internal node's inputs.

  As a result of ICS, a layer's input and output keep the same conditional distribution while their marginal distributions differ.

5. Whitening

  A family of data-standardization methods; the common variants are PCA whitening (zero mean, unit variance) and ZCA whitening (zero mean, equal variances).

  Drawbacks: it is computationally expensive, every layer would need it, and it changes each layer's distribution.

6. Batch Normalization

  For all $x$ in a mini-batch $B=\{x_{1},\dots,x_{m}\}$:

  Mini-batch mean:

$$\mu_{B}\gets\frac{1}{m}\sum_{i=1}^{m}x_{i}$$

  Mini-batch variance:

$$\sigma_{B}^{2}\gets\frac{1}{m}\sum_{i=1}^{m}\left(x_{i}-\mu_{B}\right)^{2}$$

  Normalization:

$$\hat{x}_{i}\gets\frac{x_{i}-\mu_{B}}{\sqrt{\sigma_{B}^{2}+\varepsilon}}$$

  Scale and shift:

$$y_{i}\gets\gamma\hat{x}_{i}+\beta\equiv \mathrm{BN}_{\gamma,\beta}\left(x_{i}\right)$$

  Scale the network input $X$ by a factor $\alpha$ to get $\alpha X$.

  Before scaling: input $X$, mean $\mu$, variance $\sigma^{2}$.

  After scaling: input $\alpha X$, mean $\alpha\mu$, variance $\alpha^{2}\sigma^{2}$.

$$\mathrm{BN}\left(\alpha X\right)=\gamma\frac{\alpha X-\alpha\mu}{\sqrt{\alpha^{2}\sigma^{2}}}+\beta=\gamma\frac{X-\mu}{\sqrt{\sigma^{2}}}+\beta=\mathrm{BN}\left(X\right)$$

  Scaling the input therefore has no effect on the output of a BN layer.
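This scale invariance is easy to verify numerically. A numpy sketch of the batch-norm equations above (the values of gamma, beta and the data are made up):

```python
import numpy as np

def batch_norm(x, gamma=1.5, beta=0.3, eps=1e-8):
    """Batch normalization of a 1-D mini-batch: normalize, then scale and shift."""
    mu = x.mean()           # mini-batch mean
    var = x.var()           # mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.array([0.5, 1.0, 2.0, 4.0])
alpha = 10.0

# BN(alpha * x) equals BN(x): the scale factor cancels out
print(np.allclose(batch_norm(x), batch_norm(alpha * x)))
```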

3. Building the Model

1. Model construction

```python
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

def create_model(shared_conv, channels):
    """Build the full model around the shared convolutional featurizer."""
    input1 = Input(shape=(2000, channels))  # input layers
    input2 = Input(shape=(2000, channels))
    input3 = Input(shape=(2000, channels))
    conv1 = shared_conv(input1)             # the three inputs share one featurizer
    conv2 = shared_conv(input2)
    conv3 = shared_conv(input3)
    convmerge = Concatenate(axis=-1)([conv1, conv2, conv3])  # concatenate along the last axis
    convmerge = Flatten()(convmerge)        # flatten to a 1-D vector
    convmerge = Dropout(0.2)(convmerge)     # dropout: drop 20% of units to prevent overfitting
    convmerge = Dense(200, activation='linear')(convmerge)  # hidden layer, 200 units
    convmerge = Dense(1000, activation='relu')(convmerge)   # hidden layer, 1000 units
    convmerge = Dense(1000, activation='relu')(convmerge)   # hidden layer, 1000 units
    out = Dense(1)(convmerge)               # output layer
    model = Model(inputs=[input1, input2, input3], outputs=out)
    return model

share_conv = dos_featurizer(channels)
model = create_model(share_conv, channels)
print(model.summary())
model.compile(loss='logcosh', optimizer=Adam(0.001),  # log-cosh loss, learning rate 0.001
              metrics=['mean_absolute_error'])
```

2. Training

```python
model.fit(
    [x_train[:, :, 0:9], x_train[:, :, 9:18], x_train[:, :, 18:27]],  # three 9-channel inputs
    y_train,
    batch_size=128, epochs=60, verbose=0,  # 60 epochs, batches of 128, silent
    validation_data=([x_test[:, :, 0:9], x_test[:, :, 9:18], x_test[:, :, 18:27]], y_test)
)
```

4. Visualizing the Results

```python
# visualize the results
train_out = model.predict([x_train[:, :, 0:9], x_train[:, :, 9:18], x_train[:, :, 18:27]])
test_out = model.predict([x_test[:, :, 0:9], x_test[:, :, 9:18], x_test[:, :, 18:27]])
# column vectors to flat arrays
train_out, test_out = train_out.reshape(len(train_out)), test_out.reshape(len(test_out))

plt.figure(figsize=(6, 6))
plt.plot([-3, 3], [-3, 3])                             # diagonal reference line
plt.scatter(y_train, train_out, s=10, c='b')           # training set
plt.scatter(y_test, test_out, s=10, c='r', alpha=0.5)  # test set
plt.xlabel('DFT (eV)')
plt.ylabel('ML Predicted (eV)')
plt.axis([-3, 3, -3, 3])
plt.show()
```
