
Basic NLP Operations for Python Machine Learning: Using Seq2seq

Published: 2021-12-24 00:51 | Source: 腳本之家

Overview

Starting today we begin a journey into natural language processing (NLP). NLP lets machines process, understand, and use human language, building a bridge between machine language and human language.

Seq2seq

Seq2seq consists of two RNNs: an Encoder and a Decoder. The Encoder encodes a variable-length input sequence into an encoder state, and the Decoder then generates a variable-length output sequence from that state.
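To make the encode-then-decode idea concrete, here is a toy sketch in plain NumPy. It uses random, untrained weights and a simple tanh recurrence rather than a real GRU/LSTM, so it is purely illustrative and is not the TensorFlow model shown later: the encoder compresses a variable-length list of token ids into one fixed-size state vector, and the decoder unrolls from that state to emit an output sequence.

import numpy as np

np.random.seed(0)
vocab_size, hidden = 10, 8
E = np.random.randn(vocab_size, hidden) * 0.1      # toy embedding table
W_enc = np.random.randn(hidden, hidden) * 0.1      # encoder recurrence weights
W_dec = np.random.randn(hidden, hidden) * 0.1      # decoder recurrence weights
W_out = np.random.randn(hidden, vocab_size) * 0.1  # decoder output projection

def encode(token_ids):
    """Run a simple RNN over the input and return the final (fixed-size) state."""
    state = np.zeros(hidden)
    for t in token_ids:
        state = np.tanh(E[t] + W_enc @ state)
    return state

def decode(state, max_len=5, go_id=1):
    """Unroll a simple RNN from the encoder state, feeding back each prediction."""
    token, outputs = go_id, []
    for _ in range(max_len):
        state = np.tanh(E[token] + W_dec @ state)
        token = int(np.argmax(state @ W_out))  # greedy choice of the next token
        outputs.append(token)
    return outputs

print(decode(encode([3, 5, 2])))  # variable-length input -> variable-length output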

Typical application areas of Seq2seq:

  • Machine translation: the classic Encoder-Decoder application
  • Text summarization: the input is a text sequence and the output is a summary of that sequence
  • Reading comprehension: the article and the question are encoded separately, then decoded to produce the answer
  • Speech recognition: the input is a sequence of speech signals and the output is a sequence of words

優(yōu)點(diǎn):

  • Very flexible: it places no restriction on which neural networks the Encoder and Decoder use, nor on the form of the inputs and outputs
  • End-to-end: semantic understanding and language generation are learned together rather than handled as separate stages

Disadvantages:

  • Information loss: no matter what the input is, the Encoder produces a vector of fixed dimension, and every output word is generated from that same semantic vector, which is too simplistic

The Attention Model

Attention is a mechanism for improving the performance of RNN Encoder-Decoder models, widely used in machine translation, speech recognition, image captioning, and other fields. The attention mechanism in deep learning is essentially similar to human selective visual attention: its core goal is to pick out, from a large amount of information, the pieces that matter most for the current task.

Attention is essentially a content-based addressing mechanism: from a set of states in the network, it selects the states most similar to a given state and uses them for subsequent information extraction.

First, weights are computed from the Encoder and Decoder features; the Encoder features are then combined as a weighted sum and fed into the Decoder. The effect is to present the Encoder's features to the Decoder in a more useful form. (Not every part of the context influences the generation of the next state; Attention selects the appropriate context and uses it to generate the next state.)
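The sketch below shows this core computation in plain NumPy, using simple dot-product scoring (an illustrative assumption, since the paragraph above does not fix a particular scoring function): score each encoder output against the current decoder state, normalize the scores with a softmax into weights, and take the weighted sum of the encoder outputs as the context vector.

import numpy as np

def attention_context(decoder_state, encoder_outputs):
    """decoder_state: (hidden,), encoder_outputs: (src_len, hidden)."""
    scores = encoder_outputs @ decoder_state      # (src_len,) dot-product scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                      # softmax over source positions
    context = weights @ encoder_outputs           # (hidden,) weighted sum
    return context, weights

enc = np.random.randn(6, 8)   # 6 source positions, hidden size 8
dec = np.random.randn(8)      # current decoder state
context, attn = attention_context(dec, enc)
print(attn.round(3), context.shape)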

Seq2seq Model Implementation

# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#  http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Sequence-to-sequence model with an attention mechanism."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import random
import numpy as np
from six.moves import xrange  # pylint: disable=redefined-builtin
import tensorflow as tf
import data_utils
setattr(tf.contrib.rnn.GRUCell, '__deepcopy__', lambda self, _: self)
setattr(tf.contrib.rnn.BasicLSTMCell, '__deepcopy__', lambda self, _: self)
setattr(tf.contrib.rnn.MultiRNNCell, '__deepcopy__', lambda self, _: self)
class Seq2SeqModel(object):
  """Sequence-to-sequence model with attention and for multiple buckets.
  This class implements a multi-layer recurrent neural network as encoder,
  and an attention-based decoder. This is the same as the model described in
  this paper: http://arxiv.org/abs/1412.7449 - please look there for details,
  or into the seq2seq library for complete model implementation.
  This class also allows to use GRU cells in addition to LSTM cells, and
  sampled softmax to handle large output vocabulary size. A single-layer
  version of this model, but with bi-directional encoder, was presented in
 http://arxiv.org/abs/1409.0473
  and sampled softmax is described in Section 3 of the following paper.
 http://arxiv.org/abs/1412.2007
  """
  def __init__(self,
               source_vocab_size,
               target_vocab_size,
               buckets,
               size,
               num_layers,
               max_gradient_norm,
               batch_size,
               learning_rate,
               learning_rate_decay_factor,
               use_lstm=False,
               num_samples=512,
               forward_only=False,
               dtype=tf.float32):
 """Create the model.
 Args:
source_vocab_size: size of the source vocabulary.
target_vocab_size: size of the target vocabulary.
buckets: a list of pairs (I, O), where I specifies maximum input length
  that will be processed in that bucket, and O specifies maximum output
  length. Training instances that have inputs longer than I or outputs
  longer than O will be pushed to the next bucket and padded accordingly.
  We assume that the list is sorted, e.g., [(2, 4), (8, 16)].
size: number of units in each layer of the model.
num_layers: number of layers in the model.
max_gradient_norm: gradients will be clipped to maximally this norm.
batch_size: the size of the batches used during training;
  the model construction is independent of batch_size, so it can be
  changed after initialization if this is convenient, e.g., for decoding.
learning_rate: learning rate to start with.
learning_rate_decay_factor: decay learning rate by this much when needed.
use_lstm: if true, we use LSTM cells instead of GRU cells.
num_samples: number of samples for sampled softmax.#??
forward_only: if set, we do not construct the backward pass in the model.
dtype: the data type to use to store internal variables.
 """
 self.source_vocab_size = source_vocab_size
 self.target_vocab_size = target_vocab_size
 self.buckets = buckets
 self.batch_size = batch_size
 self.learning_rate = tf.Variable(
  float(learning_rate), trainable=False, dtype=dtype)
 self.learning_rate_decay_op = self.learning_rate.assign(
  self.learning_rate * learning_rate_decay_factor)
 self.global_step = tf.Variable(0, trainable=False)
    # If we use sampled softmax, we need an output projection.
    output_projection = None
    softmax_loss_function = None
    # Sampled softmax only makes sense if we sample less than vocabulary size.
    if num_samples > 0 and num_samples < self.target_vocab_size:
      w_t = tf.get_variable("proj_w", [self.target_vocab_size, size], dtype=dtype)
      w = tf.transpose(w_t)
      b = tf.get_variable("proj_b", [self.target_vocab_size], dtype=dtype)
      output_projection = (w, b)

      def sampled_loss(labels, logits):
        labels = tf.reshape(labels, [-1, 1])
        # We need to compute the sampled_softmax_loss using 32bit floats to
        # avoid numerical instabilities.
        local_w_t = tf.cast(w_t, tf.float32)
        local_b = tf.cast(b, tf.float32)
        local_inputs = tf.cast(logits, tf.float32)
        return tf.cast(
            tf.nn.sampled_softmax_loss(
                weights=local_w_t,
                biases=local_b,
                labels=labels,
                inputs=local_inputs,
                num_sampled=num_samples,
                num_classes=self.target_vocab_size),
            dtype)

      softmax_loss_function = sampled_loss
    # Create the internal multi-layer cell for our RNN.
    def single_cell():
      return tf.contrib.rnn.GRUCell(size)
    if use_lstm:
      def single_cell():
        return tf.contrib.rnn.BasicLSTMCell(size)
    cell = single_cell()
    if num_layers > 1:
      cell = tf.contrib.rnn.MultiRNNCell([single_cell() for _ in range(num_layers)])
    # The seq2seq function: we use embedding for the input and attention.
    def seq2seq_f(encoder_inputs, decoder_inputs, do_decode):
      return tf.contrib.legacy_seq2seq.embedding_attention_seq2seq(
          encoder_inputs,
          decoder_inputs,
          cell,
          num_encoder_symbols=source_vocab_size,
          num_decoder_symbols=target_vocab_size,
          embedding_size=size,
          output_projection=output_projection,
          feed_previous=do_decode,
          dtype=dtype)
    # Feeds for inputs. Note that all buckets share the same set of parameters.
    self.encoder_inputs = []
    self.decoder_inputs = []
    self.target_weights = []
    for i in xrange(buckets[-1][0]):  # Last bucket is the biggest one.
      self.encoder_inputs.append(tf.placeholder(tf.int32, shape=[None],
                                                name="encoder{0}".format(i)))
    for i in xrange(buckets[-1][1] + 1):  # One extra slot because the "GO" symbol is prepended.
      self.decoder_inputs.append(tf.placeholder(tf.int32, shape=[None],
                                                name="decoder{0}".format(i)))
      self.target_weights.append(tf.placeholder(dtype, shape=[None],
                                                name="weight{0}".format(i)))
    # Our targets are decoder inputs shifted by one; decoder_inputs[0] (the GO
    # symbol) is never used as a target.
    targets = [self.decoder_inputs[i + 1]
               for i in xrange(len(self.decoder_inputs) - 1)]
    # Training outputs and losses. In the lambdas below, x stands for
    # encoder_inputs and y for decoder_inputs.
    if forward_only:
      self.outputs, self.losses = tf.contrib.legacy_seq2seq.model_with_buckets(
          self.encoder_inputs, self.decoder_inputs, targets,
          self.target_weights, buckets, lambda x, y: seq2seq_f(x, y, True),
          softmax_loss_function=softmax_loss_function)
      # If we use output projection, we need to project outputs for decoding.
      if output_projection is not None:
        for b in xrange(len(buckets)):
          self.outputs[b] = [
              tf.matmul(output, output_projection[0]) + output_projection[1]
              for output in self.outputs[b]
          ]
    else:
      self.outputs, self.losses = tf.contrib.legacy_seq2seq.model_with_buckets(
          self.encoder_inputs, self.decoder_inputs, targets,
          self.target_weights, buckets,
          lambda x, y: seq2seq_f(x, y, False),
          softmax_loss_function=softmax_loss_function)

    # Gradients and SGD update operation for training the model.
    params = tf.trainable_variables()
    if not forward_only:
      self.gradient_norms = []
      self.updates = []
      opt = tf.train.GradientDescentOptimizer(self.learning_rate)
      for b in xrange(len(buckets)):
        gradients = tf.gradients(self.losses[b], params)
        clipped_gradients, norm = tf.clip_by_global_norm(gradients,
                                                         max_gradient_norm)
        self.gradient_norms.append(norm)
        self.updates.append(opt.apply_gradients(
            zip(clipped_gradients, params), global_step=self.global_step))

    self.saver = tf.train.Saver(tf.global_variables())
  def step(self, session, encoder_inputs, decoder_inputs, target_weights,
           bucket_id, forward_only):
    """Run a step of the model feeding the given inputs.

    Args:
      session: tensorflow session to use.
      encoder_inputs: list of numpy int vectors to feed as encoder inputs.
      decoder_inputs: list of numpy int vectors to feed as decoder inputs.
      target_weights: list of numpy float vectors to feed as target weights.
      bucket_id: which bucket of the model to use.
      forward_only: whether to do the backward step or only forward.

    Returns:
      A triple consisting of gradient norm (or None if we did not do backward),
      average perplexity, and the outputs.

    Raises:
      ValueError: if length of encoder_inputs, decoder_inputs, or
        target_weights disagrees with bucket size for the specified bucket_id.
    """
    # Check if the sizes match.
    encoder_size, decoder_size = self.buckets[bucket_id]
    # encoder_inputs has shape (encoder_size, batch_size).
    if len(encoder_inputs) != encoder_size:
      raise ValueError("Encoder length must be equal to the one in bucket,"
                       " %d != %d." % (len(encoder_inputs), encoder_size))
    if len(decoder_inputs) != decoder_size:
      raise ValueError("Decoder length must be equal to the one in bucket,"
                       " %d != %d." % (len(decoder_inputs), decoder_size))
    if len(target_weights) != decoder_size:
      raise ValueError("Weights length must be equal to the one in bucket,"
                       " %d != %d." % (len(target_weights), decoder_size))
    # Input feed: encoder inputs, decoder inputs, target_weights, as provided.
    input_feed = {}
    for k in xrange(encoder_size):
      input_feed[self.encoder_inputs[k].name] = encoder_inputs[k]
    for k in xrange(decoder_size):
      input_feed[self.decoder_inputs[k].name] = decoder_inputs[k]
      input_feed[self.target_weights[k].name] = target_weights[k]
    # Since our targets are decoder inputs shifted by one, we need one more.
    last_target = self.decoder_inputs[decoder_size].name
    input_feed[last_target] = np.zeros([self.batch_size], dtype=np.int32)

    # Output feed: depends on whether we do a backward step or not.
    if not forward_only:
      output_feed = [self.updates[bucket_id],  # Update Op that does SGD.
                     self.gradient_norms[bucket_id],  # Gradient norm.
                     self.losses[bucket_id]]  # Loss for this batch.
    else:
      output_feed = [self.losses[bucket_id]]  # Loss for this batch.
      for l in xrange(decoder_size):  # Output logits.
        output_feed.append(self.outputs[bucket_id][l])

    outputs = session.run(output_feed, input_feed)
    if not forward_only:
      return outputs[1], outputs[2], None  # Gradient norm, loss, no outputs.
    else:
      return None, outputs[0], outputs[1:]  # No gradient norm, loss, outputs.
  
  # Given a bucket_id, build batch_encoder_inputs and batch_decoder_inputs.
  # Both are transposed from shape (batch_size, sequence_length) to
  # (sequence_length, batch_size), which is the layout that step() expects.
  def get_batch(self, data, bucket_id):
    """Get a random batch of data from the specified bucket, prepare for step.

    To feed data in step(..) it must be a list of batch-major vectors, while
    data here contains single length-major cases. So the main logic of this
    function is to re-index data cases to be in the proper format for feeding.

    Args:
      data: a tuple of size len(self.buckets) in which each element contains
        lists of pairs of input and output data that we use to create a batch.
      bucket_id: integer, which bucket to get the batch for.

    Returns:
      The triple (encoder_inputs, decoder_inputs, target_weights) for
      the constructed batch that has the proper format to call step(...) later.
    """
    encoder_size, decoder_size = self.buckets[bucket_id]
    encoder_inputs, decoder_inputs = [], []

    # Get a random batch of encoder and decoder inputs from data,
    # pad them if needed, reverse encoder inputs and add GO to decoder.
    for _ in xrange(self.batch_size):
      encoder_input, decoder_input = random.choice(data[bucket_id])
      # Encoder inputs are padded and then reversed.
      encoder_pad = [data_utils.PAD_ID] * (encoder_size - len(encoder_input))
      encoder_inputs.append(list(reversed(encoder_input + encoder_pad)))
      # Decoder inputs get an extra "GO" symbol, and are padded then.
      decoder_pad_size = decoder_size - len(decoder_input) - 1
      decoder_inputs.append([data_utils.GO_ID] + decoder_input +
                            [data_utils.PAD_ID] * decoder_pad_size)

    # Now we create batch-major vectors from the data selected above.
    batch_encoder_inputs, batch_decoder_inputs, batch_weights = [], [], []
    # Batch encoder inputs are just re-indexed encoder_inputs:
    # encoder_inputs has shape (batch_size, encoder_size), while
    # batch_encoder_inputs has shape (encoder_size, batch_size).
    for length_idx in xrange(encoder_size):
      batch_encoder_inputs.append(
          np.array([encoder_inputs[batch_idx][length_idx]
                    for batch_idx in xrange(self.batch_size)], dtype=np.int32))
    # Batch decoder inputs are re-indexed decoder_inputs, we create weights.
    for length_idx in xrange(decoder_size):
      batch_decoder_inputs.append(
          np.array([decoder_inputs[batch_idx][length_idx]
                    for batch_idx in xrange(self.batch_size)], dtype=np.int32))
      # Create target_weights to be 0 for targets that are padding.
      batch_weight = np.ones(self.batch_size, dtype=np.float32)
      for batch_idx in xrange(self.batch_size):
        # We set weight to 0 if the corresponding target is a PAD symbol.
        # The corresponding target is decoder_input shifted by 1 forward.
        if length_idx < decoder_size - 1:
          target = decoder_inputs[batch_idx][length_idx + 1]
        # The last decoder position and PAD targets are excluded from the loss.
        if length_idx == decoder_size - 1 or target == data_utils.PAD_ID:
          batch_weight[batch_idx] = 0.0
      batch_weights.append(batch_weight)  # shape (decoder_size, batch_size)
    return batch_encoder_inputs, batch_decoder_inputs, batch_weights
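For reference, here is a hedged sketch of how this class is typically driven. It assumes TensorFlow 1.x with tf.contrib available and a data_utils module defining PAD_ID and GO_ID, as in the original TensorFlow tutorial; the toy data, vocabulary sizes, and hyperparameters below are made up purely for illustration.

# Hypothetical toy usage; requires TensorFlow 1.x and the tutorial's data_utils.
buckets = [(5, 10)]
# data[bucket_id] is a list of (source_ids, target_ids) pairs.
data = [[([4, 7, 2], [5, 8, 3, 6]), ([9, 3], [7, 2])]]

with tf.Session() as sess:
  model = Seq2SeqModel(source_vocab_size=40, target_vocab_size=40,
                       buckets=buckets, size=64, num_layers=2,
                       max_gradient_norm=5.0, batch_size=2,
                       learning_rate=0.5, learning_rate_decay_factor=0.99)
  sess.run(tf.global_variables_initializer())
  enc, dec, weights = model.get_batch(data, bucket_id=0)
  # One training step; returns the gradient norm and the loss for this batch.
  norm, loss, _ = model.step(sess, enc, dec, weights,
                             bucket_id=0, forward_only=False)
  print("grad norm %.3f, loss %.3f" % (norm, loss))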

That concludes this article on the basics of Seq2seq for NLP in Python machine learning. For more on Seq2seq in Python, please search this site's earlier articles or keep browsing the related articles, and we hope you will continue to support this site!

