
A hidden pitfall when using custom loss functions in TensorFlow 2

Published: 2022-02-21 12:30 | Source: 源碼中國

A core principle of Keras is the progressive disclosure of complexity: you can take finer control over the details of an operation while keeping a comparable level of high-level convenience. When we want to customize the training algorithm used by fit, we can override the model's train_step method and then call fit as usual to train the model.

We illustrate with the example from the official TensorFlow 2 documentation:

import numpy as np
import tensorflow as tf
from tensorflow import keras

# Fix the global random seed so the runs below are reproducible
tf.random.set_seed(100)

x = np.random.random((1000, 32))
y = np.random.random((1000, 1))

class CustomModel(keras.Model):
    def train_step(self, data):
        # Unpack the data. Its structure depends on your model and
        # on what you pass to `fit()`.
        x, y = data
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)  # Forward pass
            # Compute the loss value
            # (the loss function is configured in `compile()`)
            loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses)
        # Compute gradients
        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)
        # Update weights
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))
        # Update metrics (includes the metric that tracks the loss)
        self.compiled_metrics.update_state(y, y_pred)
        # Return a dict mapping metric names to current value
        return {m.name: m.result() for m in self.metrics}

# Construct and compile an instance of CustomModel
inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1)(inputs)
model = CustomModel(inputs, outputs)
model.compile(optimizer="adam", loss=tf.losses.MSE, metrics=["mae"])
# Just use `fit` as usual
model.fit(x, y, epochs=1, shuffle=False)

32/32 [==============================] - 0s 1ms/step - loss: 0.2783 - mae: 0.4257

<tensorflow.python.keras.callbacks.History at 0x7ff7edf6dfd0>
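
For reference, this train_step override is exactly equivalent to what the default fit() loop does. Here is a minimal comparison sketch (my addition, not from the original article; it assumes the same data x, y and the same seed), which should report the same loss and mae as the run above:

# Hedged comparison sketch: a plain keras.Model without any train_step
# override, compiled the same way. Under the same seed and data, its
# default training loop should match the custom one above.
tf.random.set_seed(100)
plain_inputs = keras.Input(shape=(32,))
plain_outputs = keras.layers.Dense(1)(plain_inputs)
plain_model = keras.Model(plain_inputs, plain_outputs)
plain_model.compile(optimizer="adam", loss=tf.losses.MSE, metrics=["mae"])
plain_model.fit(x, y, epochs=1, shuffle=False)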

The loss used above is a loss function implemented by the TensorFlow library itself. If we instead write our own loss function and pass it to model.compile, will it work the way we expect?

Surprisingly, the answer is no. There is no error message either; the computed loss simply fails to match our expectations.

def custom_mse(y_true, y_pred):
    return tf.reduce_mean((y_true - y_pred)**2, axis=-1)

a_true = tf.constant([1., 1.5, 1.2])
a_pred = tf.constant([1., 2., 1.5])
custom_mse(a_true, a_pred)
<tf.Tensor: shape=(), dtype=float32, numpy=0.11333332>
tf.losses.MSE(a_true, a_pred)
<tf.Tensor: shape=(), dtype=float32, numpy=0.11333332>
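
One detail worth spelling out (my addition, not in the original): like the built-in Keras losses, custom_mse reduces only over the last axis, so on a 2-D (batch, features) input it returns one loss value per sample rather than a single scalar:

# Hedged illustration: on a batched input, custom_mse yields a per-sample
# loss vector of shape (batch,), matching the Keras loss convention.
b_true = tf.constant([[1., 1.5, 1.2], [0., 1., 2.]])
b_pred = tf.constant([[1., 2., 1.5], [0., 0., 0.]])
print(custom_mse(b_true, b_pred))  # tf.Tensor of shape (2,)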

以上結(jié)果證實(shí)了我們自定義loss的正確性,下面我們直接將自定義的loss置入compile中的loss參數(shù)中,看看會(huì)發(fā)生什么。

my_model = CustomModel(inputs, outputs)
my_model.compile(optimizer="adam", loss=custom_mse, metrics=["mae"])
my_model.fit(x, y, epochs=1, shuffle=False)
32/32 [==============================] - 0s 820us/step - loss: 0.1628 - mae: 0.3257
<tensorflow.python.keras.callbacks.History at 0x7ff7edeb7810>

We can see that the loss here (0.1628) is clearly different from the result with the standard tf.losses.MSE (0.2783). Passing our custom loss directly into model.compile this way simply does not do what we intended.

So what is the correct way to use a custom loss? Here it is.

loss_tracker = keras.metrics.Mean(name="loss")
mae_metric = keras.metrics.MeanAbsoluteError(name="mae")

# Fix the global random seed so this run is reproducible
tf.random.set_seed(100)

class MyCustomModel(keras.Model):
    def train_step(self, data):
        # Unpack the data. Its structure depends on your model and
        # on what you pass to `fit()`.
        x, y = data
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)  # Forward pass
            # Compute the loss value manually with our own function
            # (no loss is configured in `compile()` this time)
            loss = custom_mse(y, y_pred)
            # Optionally include regularization losses here, e.g.:
            # loss += tf.add_n(self.losses)  # only if self.losses is non-empty
        # Compute gradients
        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)
        # Update weights
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))

        # Compute our own metrics
        loss_tracker.update_state(loss)
        mae_metric.update_state(y, y_pred)
        return {"loss": loss_tracker.result(), "mae": mae_metric.result()}

    @property
    def metrics(self):
        # We list our `Metric` objects here so that `reset_states()` can be
        # called automatically at the start of each epoch
        # or at the start of `evaluate()`.
        # If you don't implement this property, you have to call
        # `reset_states()` yourself at the time of your choosing.
        return [loss_tracker, mae_metric]

# Construct and compile an instance of MyCustomModel
inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1)(inputs)
my_model_beta = MyCustomModel(inputs, outputs)
my_model_beta.compile(optimizer="adam")
# Just use `fit` as usual
my_model_beta.fit(x, y, epochs=1, shuffle=False)

32/32 [==============================] - 0s 960us/step - loss: 0.2783 - mae: 0.4257
<tensorflow.python.keras.callbacks.History at 0x7ff7eda3d810>
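
A related note (my addition, not from the original article): with only train_step overridden, fit() reports the custom loss but evaluate() does not. A minimal sketch of the companion fix, under the same assumptions as the code above, is to override test_step in the same style:

# Hedged sketch: an illustrative subclass so that `evaluate()` also
# computes the custom loss; the class name is hypothetical.
class MyCustomModelWithEval(MyCustomModel):
    def test_step(self, data):
        x, y = data
        y_pred = self(x, training=False)  # Inference-mode forward pass
        loss_tracker.update_state(custom_mse(y, y_pred))
        mae_metric.update_state(y, y_pred)
        return {"loss": loss_tracker.result(), "mae": mae_metric.result()}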

Finally, by skipping the loss argument in compile() and doing all the loss computation manually inside train_step, we obtain output identical to the earlier run with the default tf.losses.MSE. This is exactly the result we wanted.

總結(jié)一下,當(dāng)我們?cè)谀P椭邢胗米远x的損失函數(shù),不能直接傳入fit函數(shù),而是需要在train_step中手動(dòng)傳入,完成計(jì)算過程。

That concludes this article on the hidden pitfall of using custom loss functions in TensorFlow 2.
