LinearLayer

LinearLayer[n]

represents a trainable, fully connected net layer that computes w.x+b with output vector of size n.

LinearLayer[{n1,n2,…}]

represents a layer that outputs an array of dimensions n1×n2×….

LinearLayer[]

leaves the dimensions of the output array to be inferred from context.

LinearLayer[n,opts]

includes options for initial weights and other parameters.

Details and Options

  • The following optional parameters can be included:
  • "Biases"Automaticinitial vector of biases (b in w.x+b)
    "Weights"Automaticinitial matrix of weights (w in w.x+b)
    LearningRateMultipliers Automaticlearning rate multiplier(s) to apply to weights and/or biases
  • When weights and biases are not explicitly specified or are given as Automatic, they are added automatically when NetInitialize or NetTrain is used.
  • The setting "Biases"->None specifies that no biases should be used.
  • If weights and biases have been added, LinearLayer[…][input] explicitly computes the output from applying the layer.
  • LinearLayer[…][{input1,input2,…}] explicitly computes outputs for each of the inputi.
  • When given a NumericArray as input, the output will be a NumericArray.
  • NetExtract can be used to extract weights and biases from a LinearLayer object.
  • LinearLayer is typically used inside NetChain, NetGraph, etc.
  • LinearLayer exposes the following ports for use in NetGraph etc.:
  • "Input"an array
    "Output"an array of size n1×n2×
  • LinearLayer[{}] specifies that the LinearLayer should produce a single real number.
  • LinearLayer[n,"Input"->m] is the most common usage of LinearLayer and represents a LinearLayer that takes a vector of length m and produces a vector of length n.
  • When it cannot be inferred from previous layers in a larger net, the option "Input"->shape can be used to fix the input of LinearLayer. Possible forms for shape include:
  • "Real"           a single real number
    m                a vector of length m
    {m1,m2,…}        an array of dimensions m1×m2×…
    NetEncoder[…]    an encoder
  • Options[LinearLayer] gives the list of default options to construct the layer. Options[LinearLayer[…]] gives the list of default options to evaluate the layer on some data.
  • Information[LinearLayer[…]] gives a report about the layer.
  • Information[LinearLayer[…],prop] gives the value of the property prop of LinearLayer[…]. Possible properties are the same as for NetGraph.
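As a brief sketch of the evaluation behaviors above (the sizes and values are arbitrary):

    layer = NetInitialize[LinearLayer[2, "Input" -> 3]];
    layer[{{1., 2., 3.}, {4., 5., 6.}}]           (* a list of inputs gives a list of outputs *)
    layer[NumericArray[{1., 2., 3.}, "Real32"]]   (* NumericArray input gives NumericArray output *)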

Examples


Basic Examples  (2)

Create a LinearLayer whose output is a length-5 vector:
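For example:

    LinearLayer[5]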

Create a randomly initialized LinearLayer:
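For instance, giving an illustrative input size so that the layer can be initialized:

    linear = NetInitialize[LinearLayer[2, "Input" -> 3]]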

Apply the layer to an input vector to produce an output vector:
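Continuing with the layer defined above (the input values are arbitrary):

    linear[{1., 2., 3.}]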

Scope  (11)

Arguments  (2)

Create a LinearLayer that produces a 3×2 matrix:
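For example:

    LinearLayer[{3, 2}]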

Specify an arbitrary depth output array:
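For instance, a rank-3 output (the dimensions are arbitrary):

    LinearLayer[{3, 2, 4}]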

An empty list corresponds to a scalar output:
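For example:

    LinearLayer[{}]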

Ports  (6)

Specify that the "Input" port of the layer takes a vector of length 3:

Specify the "Input" and "Output" ports explicitly:

Define a layer that takes and returns real numbers:
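A sketch using the scalar-output form LinearLayer[{}] together with the "Real" input shape:

    linear = NetInitialize[LinearLayer[{}, "Input" -> "Real"]]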

Apply the initialized layer to an input:
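Continuing with the scalar layer defined above:

    linear[3.5]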

A LinearLayer with fully specified "Input" and "Output" ports can be initialized:
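For instance (the sizes are illustrative):

    net = NetInitialize[LinearLayer["Input" -> 2, "Output" -> 3]]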

Apply the initialized layer to an input:
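Continuing with the layer defined above:

    net[{1., 2.}]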

Define a NetEncoder that takes a class and produces its one-hot encoding vector:
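A sketch with a hypothetical three-element class list:

    enc = NetEncoder[{"Class", {"a", "b", "c"}, "UnitVector"}]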

Attach the encoder to the layer "Input" port:
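Continuing with the encoder defined above (the output size of 2 is arbitrary):

    net = NetInitialize[LinearLayer[2, "Input" -> enc]]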

Apply the layer to a member of the class:
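For example:

    net["a"]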

Define a NetEncoder that takes an image and produces a 28×28 matrix:
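One possible encoder specification; note that depending on the color space, the "Image" encoder's output may include a leading channel dimension (e.g. 1×28×28 for grayscale), which LinearLayer also accepts:

    enc = NetEncoder[{"Image", {28, 28}, "ColorSpace" -> "Grayscale"}]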

Attach the encoder to the layer "Input" port:
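Continuing with the image encoder defined above (the output size of 2 is arbitrary):

    net = NetInitialize[LinearLayer[2, "Input" -> enc]]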

Apply the layer to an image:
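For instance, on a randomly generated 28×28 grayscale image:

    net[RandomImage[1, {28, 28}]]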

Parameters  (3)

"Biases"  (1)

Define and initialize a LinearLayer without biases:
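For example (the sizes are illustrative):

    linear = NetInitialize[LinearLayer[3, "Biases" -> None, "Input" -> 2]]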

This is equivalent to Dot:
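Continuing with the layer above; the layer output and the plain matrix product agree up to numerical precision (the layer may compute in single precision):

    in = {1., 2.};
    w = Normal[NetExtract[linear, "Weights"]];
    {linear[in], w . in}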

"Weights"  (2)

Specify a weight matrix, and use no biases:
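For instance, with an explicit 2×3 weight matrix:

    linear = LinearLayer[2, "Weights" -> {{1, 2, 3}, {4, 5, 6}}, "Biases" -> None, "Input" -> 3]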

Apply the layer to an input:
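Continuing with the layer defined above:

    linear[{1, 0, -1}]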

Extract the weight:
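For example:

    NetExtract[linear, "Weights"]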

Use a specific weight matrix and bias vector:
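For instance, with a 2×2 weight matrix and a length-2 bias vector:

    linear = LinearLayer["Weights" -> {{1, 2}, {3, 4}}, "Biases" -> {-1, 1}]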

This layer is fully specified and does not need to be initialized:
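Continuing with the layer defined above, it can be applied directly:

    linear[{1., 1.}]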

Options  (2)

LearningRateMultipliers  (2)

Create a LinearLayer with frozen weights and biases:
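A sketch that freezes the whole layer by giving LearningRateMultipliers->None (the sizes are illustrative):

    frozen = LinearLayer[2, LearningRateMultipliers -> None, "Input" -> 2]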

Train a net with this layer inside:
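A minimal training sketch using synthetic data and a small NetChain wrapper (the data and training settings are arbitrary):

    net = NetInitialize[NetChain[{frozen, LinearLayer[1]}]];
    data = Table[RandomReal[1, 2] -> {RandomReal[]}, {256}];
    trained = NetTrain[net, data, MaxTrainingRounds -> 5];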

The weights and biases of the layer are unchanged after training:
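Continuing from the training sketch above, compare the frozen layer's arrays before and after training:

    Normal[NetExtract[net, {1, "Weights"}]] == Normal[NetExtract[trained, {1, "Weights"}]]
    Normal[NetExtract[net, {1, "Biases"}]] == Normal[NetExtract[trained, {1, "Biases"}]]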

Create a LinearLayer with frozen weights, but free biases:
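A sketch using a per-array specification, giving the "Weights" array a multiplier of 0 (one possible form of the specification):

    partial = LinearLayer[2, LearningRateMultipliers -> {"Weights" -> 0}, "Input" -> 2]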

Train a net with this layer inside:
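A training sketch analogous to the one above, reusing the same synthetic data:

    net2 = NetInitialize[NetChain[{partial, LinearLayer[1]}]];
    trained2 = NetTrain[net2, data, MaxTrainingRounds -> 5];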

The weights are unchanged, but the biases changed during training:
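Continuing from the sketch above:

    Normal[NetExtract[net2, {1, "Weights"}]] == Normal[NetExtract[trained2, {1, "Weights"}]]  (* frozen weights *)
    Normal[NetExtract[net2, {1, "Biases"}]] == Normal[NetExtract[trained2, {1, "Biases"}]]    (* biases were updated *)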

Applications  (1)

Create a two-layer perceptron by stacking linear layers in a NetChain:
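A sketch of such a perceptron for 28×28 grayscale digit images (the hidden size, encoder and decoder are illustrative choices):

    net = NetChain[
      {LinearLayer[100], Ramp, LinearLayer[10], SoftmaxLayer[]},
      "Input" -> NetEncoder[{"Image", {28, 28}, "ColorSpace" -> "Grayscale"}],
      "Output" -> NetDecoder[{"Class", Range[0, 9]}]]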

Train the perceptron on the "MNIST" dataset of handwritten digits:
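A training sketch, assuming the "MNIST" item in the Wolfram Data Repository provides Image->label rules via a "TrainingData" element (the training settings are arbitrary):

    trainingData = ResourceData["MNIST", "TrainingData"];
    trained = NetTrain[net, trainingData, MaxTrainingRounds -> 2]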

Classify unseen digits:
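Continuing from the trained net above, assuming a corresponding "TestData" element:

    testData = ResourceData["MNIST", "TestData"];
    trained[Keys[RandomSample[testData, 5]]]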

Properties & Relations  (2)

LinearLayer[n] can be specified as simply n in a NetChain:
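For example, the integer 3 below stands for LinearLayer[3] (the other sizes are illustrative):

    NetChain[{3, Ramp, 1}, "Input" -> 2]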

For inputs and outputs that are vectors, LinearLayer computes output=w.x+b, where w is the weight matrix and b is the bias vector.

Evaluate a LinearLayer on data:
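For instance (the sizes and values are illustrative):

    linear = NetInitialize[LinearLayer[2, "Input" -> 3]];
    out = linear[{1., 2., 3.}]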

Manually compute the same result:
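Continuing with the layer above, extract w and b and form w.x+b by hand:

    w = Normal[NetExtract[linear, "Weights"]];
    b = Normal[NetExtract[linear, "Biases"]];
    w . {1., 2., 3.} + b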

Possible Issues  (3)

LinearLayer cannot be initialized until all its input and output dimensions are known:
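For example, attempting to initialize a layer whose input size is unspecified typically fails with a message:

    NetInitialize[LinearLayer[3]]   (* the input size is unknown, so initialization cannot proceed *)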

LinearLayer cannot accept symbolic inputs:
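For example (the sizes are illustrative; the symbolic input is rejected with a message):

    linear = NetInitialize[LinearLayer[2, "Input" -> 2]];
    linear[{u, v}]   (* u and v are undefined symbols *)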

A LinearLayer with output size n and input size m has a weight matrix of dimensions n×m:
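For instance, a layer with output size 3 and input size 4 has a 3×4 weight matrix:

    Dimensions[Normal[NetExtract[NetInitialize[LinearLayer[3, "Input" -> 4]], "Weights"]]]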

If n and m are too large, there might not be enough system or GPU memory to initialize or train a net containing such a LinearLayer:
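A sketch of the problem; the sizes below would require roughly 40 GB for the weight matrix alone, so this is expected to fail on most machines:

    NetInitialize[LinearLayer[100000, "Input" -> 100000]]   (* weight matrix would have 10^10 entries *)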
