scripts/nn/layers/low_rank_affine.dml
#-------------------------------------------------------------
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.
#
#-------------------------------------------------------------

/*
 * Low-rank Affine (fully-connected) layer.
 *
 * This layer factors the weight matrix of an affine layer into the
 * product of two low-rank matrices, W = U %*% V.  It has three
 * advantages over the affine layer:
 * 1. It has significantly lower memory requirements than the affine
 *    layer, making it well suited to memory-constrained devices such
 *    as GPUs.
 * 2. It acts as an implicit regularizer by reducing the number of
 *    parameters in the network.
 * 3. It can exploit sparsity-aware fused operators.
 */
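
/*
 * To see the memory savings, note that a standard affine layer stores
 * a (D, M) weight matrix with D*M entries, while this layer stores
 * U of shape (D, R) and V of shape (R, M), i.e. R*(D+M) entries.
 * As an illustration (numbers chosen arbitrarily): with D = 1024,
 * M = 1024, and R = 16, the affine layer needs 1,048,576 weights,
 * whereas the low-rank layer needs 16*(1024+1024) = 32,768, a 32x
 * reduction.
 */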

forward = function(matrix[double] X, matrix[double] U, matrix[double] V, matrix[double] b)
    return (matrix[double] out) {
  /*
   * Computes the forward pass for a low-rank affine (fully-connected) layer
   * with M neurons.  The input data has N examples, each with D
   * features.
   *
   * Inputs:
   *  - X: Inputs, of shape (N, D).
   *  - U: LHS factor matrix for weights, of shape (D, R).
   *  - V: RHS factor matrix for weights, of shape (R, M).
   *  - b: Biases, of shape (1, M).
   *
   * Outputs:
   *  - out: Outputs, of shape (N, M).
   */
  # %*% is left-associative, so X %*% U is evaluated first, producing
  # an (N, R) intermediate instead of the full (D, M) product U %*% V.
  out = X %*% U %*% V + b
}
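
/*
 * Example usage (a minimal sketch; the `source` path follows the nn
 * library convention, and `X`, `D`, `M`, `R` are assumed to be defined
 * by the caller):
 *
 *   source("nn/layers/low_rank_affine.dml") as lra
 *   [U, V, b] = lra::init(D, M, R)
 *   out = lra::forward(X, U, V, b)  # shape (N, M)
 */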

backward = function(matrix[double] dout, matrix[double] X,
                    matrix[double] U, matrix[double] V, matrix[double] b)
    return (matrix[double] dX, matrix[double] dU, matrix[double] dV, matrix[double] db) {
  /*
   * Computes the backward pass for a low-rank fully-connected (affine) layer
   * with M neurons.
   *
   * Inputs:
   *  - dout: Gradient wrt `out` from upstream, of shape (N, M).
   *  - X: Inputs, of shape (N, D).
   *  - U: LHS factor matrix for weights, of shape (D, R).
   *  - V: RHS factor matrix for weights, of shape (R, M).
   *  - b: Biases, of shape (1, M).
   *
   * Outputs:
   *  - dX: Gradient wrt `X`, of shape (N, D).
   *  - dU: Gradient wrt `U`, of shape (D, R).
   *  - dV: Gradient wrt `V`, of shape (R, M).
   *  - db: Gradient wrt `b`, of shape (1, M).
   */
  # out = X %*% (U %*% V) + b, so dX = dout %*% t(U %*% V)
  #     = dout %*% t(V) %*% t(U).
  dX = dout %*% t(V) %*% t(U)

  # If out = Z %*% L, then dL = t(Z) %*% dout.
  # Substituting Z = X %*% U and L = V, we get:
  dV = t(U) %*% t(X) %*% dout

  # With W = U %*% V, dW = t(X) %*% dout, and dU = dW %*% t(V):
  dU = t(X) %*% dout %*% t(V)

  # The bias is broadcast across rows, so its gradient sums over examples.
  db = colSums(dout)
}
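
/*
 * Example gradient step (a minimal sketch, reusing the `lra` alias from
 * the sketch above; `lr` is an assumed learning rate and `dout` the
 * upstream gradient):
 *
 *   [dX, dU, dV, db] = lra::backward(dout, X, U, V, b)
 *   U = U - lr * dU  # vanilla SGD updates
 *   V = V - lr * dV
 *   b = b - lr * db
 */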

init = function(int D, int M, int R)
    return (matrix[double] U, matrix[double] V, matrix[double] b) {
  /*
   * Initialize the parameters of this layer.
   *
   * Note: This is just a convenience function, and parameters
   * may be initialized manually if needed.
   *
   * We use the heuristic by He et al., which limits the magnification
   * of inputs/gradients during forward/backward passes by scaling
   * unit-Gaussian weights by a factor of sqrt(2/n), under the
   * assumption of relu neurons.
   *  - http://arxiv.org/abs/1502.01852
   *
   * Inputs:
   *  - D: Dimensionality of the input features (number of features).
   *  - M: Number of neurons in this layer.
   *  - R: Rank of the factor matrices U and V, chosen such that
   *       R << min(D, M).
   *
   * Outputs:
   *  - U: LHS factor matrix for weights, of shape (D, R).
   *  - V: RHS factor matrix for weights, of shape (R, M).
   *  - b: Biases, of shape (1, M).
   */
  U = rand(rows=D, cols=R, pdf="normal") * sqrt(2.0/D)
  V = rand(rows=R, cols=M, pdf="normal") * sqrt(2.0/R)
  b = matrix(0, rows=1, cols=M)
}
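
/*
 * Sanity check (a sketch, reusing the `lra` alias from above): the
 * layer is equivalent to an affine layer whose weight matrix is the
 * explicit product U %*% V, which can be verified numerically:
 *
 *   [U, V, b] = lra::init(D, M, R)
 *   W = U %*% V                          # explicit (D, M) weight matrix
 *   out_full = X %*% W + b
 *   out_lr = lra::forward(X, U, V, b)
 *   print(max(abs(out_full - out_lr)))   # ~0, up to floating-point error
 */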




