Updated nnlib2Rcpp package

For anyone interested, the ‘nnlib2Rcpp’ package has been updated with several added features. Among other changes, the latest version (v.0.2.1) of the nnlib2Rcpp package allows users to define the behavior of custom Neural Network (NN) components using only R code. This includes custom layers and sets of connections.

Package nnlib2Rcpp is based on the nnlib2 C++ NN library. It interfaces compiled C++ NN components with R. New types of NN components can be created using the provided C++ classes and templates. However, as of v.0.2.1, user-defined NN components can also be created without any need for C++. Of course, NN components defined using nnlib2 C++ classes and templates (as described in an older post), or components already included in the package can still be used. All such components can be added to neural networks defined in R via the nnlib2Rcpp package’s “NN” module and cooperate with each other.

Defining custom NN component behavior in R does have a cost in terms of runtime performance and, to a certain degree, defeats the purpose of the provided C++ classes. It may, however, be useful in several cases.

The goal of the simple example listed below is to implement, using only R, a NN with functionality similar to that described in the aforementioned post, which required some steps to be done in C++. In the example, the component functions (for a connection set and an output layer) required for a simplified perceptron-like NN are defined, and the NN is set up. This is essentially a single-layer perceptron, as the first (“generic”) layer just accepts the data and transfers it to the connections without performing any computations.
library(nnlib2Rcpp)

# Function for connections, when recalling/mapping:

CSmap <- function(WEIGHTS, SOURCE_OUTPUT,...)
	SOURCE_OUTPUT %*% WEIGHTS

# Function for connections, when encoding data:

learning_rate <- 0.3

CSenc <- function( WEIGHTS, SOURCE_OUTPUT,
				   DESTINATION_MISC, DESTINATION_OUTPUT, ...)
{
  # desired output should have been placed in misc registers:
  a <- learning_rate *
          (DESTINATION_MISC - DESTINATION_OUTPUT)
  # compute connection weight adjustments:
  a <- outer( SOURCE_OUTPUT, a , "*" )
  # compute adjusted weights:
  w <- WEIGHTS + a
  # return new (adjusted) weights:
  return(list(WEIGHTS=w))
}

# Function for layer, when recalling/mapping:
# (no encode function is needed for the layer in this example)

LAmap <- function(INPUT_Q,...)
{
	x <- colSums(INPUT_Q)		# input function is summation.
	x <- ifelse(x > 0, 1, 0)	# threshold function is step.
	return(x)
}
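
As a quick sanity check (with hypothetical input values, not part of the original example), LAmap sums each node's incoming values and passes the sums through the step function:

# hypothetical INPUT_Q: 4 queued incoming values (rows)
# for each of 3 nodes (columns); expected result is 0 1 0:
LAmap(matrix(c(-1, 0.5, 0.2, 0.1,
                2,  -3,   1,   1,
               -1,  -1,  -1,  -1), nrow = 4))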

# prepare some data based on iris data set:

data_in <- as.matrix(iris[1:4])
iris_cases <- nrow(data_in)

# make a "one-hot" encoding matrix for iris species
desired_data_out <- matrix(data=0, nrow=iris_cases, ncol=3)
desired_data_out[cbind(1:iris_cases,unclass(iris[,5]))]=1

# create the NN and define its components:
# (first generic layer simply accepts input and transfers it to the connections)

p <- new("NN")
p$add_layer("generic",4)
p$add_connection_set(list(name="R-connections",
                          encode_FUN="CSenc",
                          recall_FUN="CSmap"))
p$add_layer(list(name="R-layer",
                 size=3,
                 encode_FUN="",
                 recall_FUN="LAmap"))
p$create_connections_in_sets(0,0)

# encode data and desired output (for 50 training epochs):

for(i in 1:50)
	for(c in 1:iris_cases)
	{
		p$input_at(1,data_in[c,])
		p$set_misc_values_at(3,desired_data_out[c,])  # put desired output in misc registers
		p$recall_all_fwd()
		p$encode_at(2)
	}

# Recall the data and show NN's output:

for(c in 1:iris_cases)
{
	p$input_at(1,data_in[c,])
	p$recall_all_fwd()
	cat("iris case ",c,", desired = ", desired_data_out[c,],
		" returned = ", p$get_output_from(3),"\n")
}
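
To peek at what was learned, the connection weights can also be retrieved from R (an optional check; get_weights_at is one of the standard “NN” module methods):

# inspect the weights learned by the connection set
# (the component at position 2 in this topology):
p$get_weights_at(2)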
More information can be found in the package’s documentation by typing:
help(NN_R_components)
A complete list of other changes and added features can be found here.

Creating custom neural networks with nnlib2Rcpp

For anyone interested, this is a post about creating arbitrary, new, or custom Neural Networks (NN) using the nnlib2Rcpp R package. The package provides some useful NN models that are ready to use. Furthermore, it can be a versatile basis for experimentation with new or custom NN models, which is what this brief tutorial is about. A warning is necessary at this point:

Warning: the following text contains traces of C++ code. If you are in any way sensitive to consuming C++, please abandon reading immediately.

The NN models in nnlib2Rcpp are created using nnlib2, a collection of C++ classes written for building NN models. A cut-down class diagram of the classes (and class templates) in this collection can be found here. The most important class in the collection is “component” (the base for all components that constitute a NN). Objects of class “component” can be added to a NN “topology” (hosted in objects of class “nn”) and interact with each other. Layers of processing nodes (class “layer”), groups of connections (class “connection_set”), and even entire neural nets (class “nn”) are based on class “component”. When implementing new components, it is also good to remember that:

– Objects of class “layer” contain objects of class “pe” (processing elements [or nodes]).

– Template “Layer” simplifies creation of homogeneous “layer” sub-classes containing a particular “pe” subclass (i.e. type of nodes).

– Objects of class “connection_set” contain objects of class “connection”.

– Template “Connection_Set” simplifies creation of homogeneous “connection_set” sub-classes containing a particular “connection” subclass (i.e. type of connections).

– Customized and modified NN components and sub-components are to be defined based on these classes and templates.

– All aforementioned classes have an “encode” (training) and a “recall” (mapping) method; both are virtual and can be overridden with custom behavior. Calling “nn” “encode” triggers the “encode” function of all the components in its topology which, in turn, triggers “encode” for “pe” objects (processing nodes) in a “layer” or “connection” objects in a “connection_set”. Similarly for “recall”.
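
From the R side (via the “NN” module used later in this post), this cascade is what the module's methods trigger. A minimal sketch, assuming a network object p has already been set up:

p$recall_all(TRUE)  # triggers recall() of every component in the topology, first to last
p$encode_at(2)      # triggers encode() only for the component at position 2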

The NN to create in this example will be based on the Perceptron, the most classic of them all. It is not yet implemented in nnlib2Rcpp, so in this example we will play the role of Prof. Rosenblatt and his team [6] and implement a simple multi-class Perceptron ourselves. Unlike Prof. Rosenblatt, you can find information about it on its Wikipedia page [1] instead of having to invent it. We will implement a simplified (not 100% scientifically sound) variation, with no bias, a fixed learning rate (0.3), and connection weights initialized to 0.

Let’s add it to nnlib2Rcpp.

Step 1: set up the tools needed.

To follow this example, you will need to have Rtools [2] and the Rcpp R package [3] installed, as well as the nnlib2Rcpp package source (version 0.1.4 or above). This can be downloaded from CRAN [4], or the latest version can be found on GitHub [5]. If fetched or downloaded from GitHub, nnlib2Rcpp.Rproj is a project file for building the package in RStudio. I recommend getting the source from GitHub, unpacking it (if needed) in a directory, and then opening the aforementioned nnlib2Rcpp.Rproj in RStudio. You can then test-build the unmodified package; if this succeeds, you can proceed to the next step, adding your own NN components.
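
If you prefer not to use RStudio, the package can also be built and installed with devtools (a minimal sketch, assuming the source was unpacked into a directory named “nnlib2Rcpp”):

# install.packages("devtools")    # if devtools is not already installed
devtools::install("nnlib2Rcpp")   # build and install from the source directory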

Step 2: define the model-specific components and sub-components.

Open the “additional_parts.h” C++ header file found in sub-directory “src” and create the needed classes. Much of the default class behavior is similar to what is required for a Perceptron, so we will focus on what is different and specific to the model. We will need to define (in the “additional_parts.h” file) the following:

(a) a “pe” subclass for the Perceptron processing nodes. All “pe” objects provide three methods, namely “input_function”, “activation_function”, and “threshold_function”; by default, each is applied to the result of the previous one, except for “input_function”, which gathers (by default sums) all incoming values and places the result in the internal register variable “input”. The sequence of executing these methods is expected to place the final result in the “pe” variable “output”. You may choose to modify these methods (or not), depending on what fits your model and implementation approach. You may also choose to modify “pe” behavior in its “encode” and/or “recall” functions, possibly bypassing the aforementioned methods completely. It may help to see the “pe.h” header file (also in directory “src”) for more insight into the base class. In any case, a C++ implementation for Perceptron processing nodes could be:
class perceptron_pe : public pe
{
public:

DATA threshold_function(DATA value)
 {
 if(value>0) return 1;
 return 0;
 }
};
(b) Next you may want to define a class for layers consisting of “perceptron_pe” objects as defined above; this can be done quickly using the template “Layer”:

typedef Layer< perceptron_pe > perceptron_layer;
(c) Moving on to the connections now. Notice that in the Perceptron, connections are the only elements modified (updating their weights) during encoding. Among other functionality, each connection knows its source and destination nodes, maintains and updates its weight, modifies transferred data etc. So a C++ class for such Perceptron connections could be:
class perceptron_connection: public connection
{
public:

// mapping, multiply value coming from source node by weight
// and send it to destination node.
void recall()
 {
 destin_pe().receive_input_value( weight() * source_pe().output );
 }

// training, weights are updated:
void encode()
 {
 weight() = weight() + 0.3 * (destin_pe().input - destin_pe().output) * source_pe().output;
 }
};
For simplicity, the learning rate during training is fixed at 0.3, and the connection assumes that the desired output values will have been placed in the “input” registers of the destination nodes before the weights are updated.
Note: for compatibility with nnlib2Rcpp version 0.1.4 (the current version on CRAN), the example above assumes that the desired values are placed as input to the processing nodes right before the weights are updated (encoding); version 0.1.5 and above provides direct access from R to the “misc” variables that nodes and connections maintain (via the “NN” method “set_misc_values_at”; more on “NN” below). It may have been more elegant to use these “misc” variables, instead of “input”, for holding the desired output in processing nodes.
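
For illustration, with version 0.1.5 or newer, a training loop could place the desired output in the “misc” registers of the destination layer instead (a hypothetical variant; the connection’s encode() above would then need to read the destination node’s “misc” variable rather than “input”):

p$input_at(1, data_in[c,])
p$set_misc_values_at(3, desired_data_out[c,])  # desired output to misc registers
p$recall_all(TRUE)
p$encode_at(2)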

(d) Next, you may want to define a class for groups of such connections, which can be done quickly using the template “Connection_Set”:
typedef Connection_Set< perceptron_connection > perceptron_connection_set;
Step 3: Add the ability to create such components at run-time.

Again in the “additional_parts.h” C++ header file found in directory “src”, add code that creates Perceptron layers and groups of connections when a particular name is used. Locate the “generate_custom_layer” function and add to it the line:
if(name == "perceptron") return new perceptron_layer(name,size);
(you will notice other similar definitions are already there). Finally, locate the “generate_custom_connection_set” function and add to it the line:
if(name == "perceptron") return new perceptron_connection_set(name);
(again, you will notice other similar definition examples are already there).
Note: as of July 15, 2020 all the aforementioned changes to “additional_parts.h” C++ header are already implemented as an example in the nnlib2Rcpp version 0.1.5 github repo [5].

That is it. You can now build the modified library and then return to the R world to use your newly created Perceptron components in a NN. The “NN” R module in nnlib2Rcpp allows you to combine these (and other) components in a network and then use it in R.

It is now time to see if this cut-down, modified Perceptron is any good. In the example below, the iris dataset is used to train it. The example uses the “NN” R module in nnlib2Rcpp to build the network and then trains and tests it. The network topology consists of a generic input layer (component #1) of size 4, i.e. as many nodes as the iris features, a set of connections (component #2) whose weights are initialized to 0 (in create_connections_in_sets below), and a processing layer (component #3) of size 3, i.e. as many nodes as the iris species:

library("nnlib2Rcpp")

# create the NN and define its components
p <- new("NN")
p$add_layer("generic",4)
p$add_connection_set("perceptron")
p$add_layer("perceptron",3)
p$create_connections_in_sets(0,0)

# show the NN topology
p$outline()

# prepare some data based on iris dataset
data_in <- as.matrix(iris[1:4])
iris_cases <- nrow(data_in)
species_id <- unclass(iris[,5])
desired_data_out <- matrix(data=0, nrow=iris_cases, ncol=3)
for(c in 1:iris_cases) desired_data_out[c,species_id[c]]=1

# encode data and desired output (for 30 training epochs)
for(i in 1:30)
  for(c in 1:iris_cases)
  {
    p$input_at(1,data_in[c,])
    p$recall_all(TRUE)
    p$input_at(3,desired_data_out[c,])
    p$encode_at(2)
  }

# show the NN
p$print()

# Recall the data to see what species Perceptron returns:
for(c in 1:iris_cases)
{
  p$input_at(1,data_in[c,])
  p$recall_all(TRUE)
  cat("iris case ",c,", desired = ", desired_data_out[c,],
      " returned = ", p$get_output_from(3),"\n")
}
Checking the output, one sees that our Perceptron variation is not THAT bad. At least it recognizes Iris setosa and virginica quite well. However, classification performance on versicolor cases is rather terrible.

iris case 1 , desired = 1 0 0 returned = 1 0 0
iris case 2 , desired = 1 0 0 returned = 1 0 0
iris case 3 , desired = 1 0 0 returned = 1 0 0
...
iris case 148 , desired = 0 0 1 returned = 0 0 1
iris case 149 , desired = 0 0 1 returned = 0 0 1
iris case 150 , desired = 0 0 1 returned = 0 0 1

Anyway, this example was not about classification success but about creating a new NN type in the nnlib2Rcpp R package. I hope it will be useful to some of you out there.

Links (all accessed July 12, 2020):

[1] Perceptron:
https://en.wikipedia.org/w/index.php?title=Perceptron&oldid=961536136

[2] RTools:
https://cran.r-project.org/bin/windows/Rtools/history.html

[3] Rcpp package:
https://cran.r-project.org/web/packages/Rcpp/index.html

[4] nnlib2Rcpp package on CRAN:
https://cran.r-project.org/web/packages/nnlib2Rcpp/index.html

[5] nnlib2Rcpp package on github:
https://github.com/VNNikolaidis/nnlib2Rcpp

[6] Frank Rosenblatt:
https://en.wikipedia.org/wiki/Frank_Rosenblatt


PS. Time permitting, more components will be added to the collection (and added to nnlib2Rcpp), maybe accompanied by posts similar to this one; these will eventually be available in the package. Any interesting or useful NN component that you would like to contribute is welcomed (credit, of course, will go to you, its creator); if so, please contact me using the comments below. (Another project is to create parallel-processing versions of the components, if anyone wants to help).

Version 0.1.4 of nnlib2Rcpp: a(nother) R package for Neural Networks

For anyone interested, a new version (v.0.1.4) of nnlib2Rcpp is available on GitHub. It can be installed the usual way for packages on GitHub:

library(devtools)
install_github("VNNikolaidis/nnlib2Rcpp")
nnlib2Rcpp is an R package containing a number of Neural Network (NN) implementations. The NNs are implemented in C++ (using the nnlib2 C++ class library) and are interfaced with R via the Rcpp package (which is required). The package currently includes versions of Back-Propagation, Autoencoder, Learning Vector Quantization (unsupervised and supervised) and simple Matrix-Associative-Memory neural networks. Functions and modules for directly using these models from R are provided.
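
Once the package is installed, a quick way to see what it exposes is to list its objects (plain base R):

library(nnlib2Rcpp)
ls("package:nnlib2Rcpp")   # functions and modules provided by the package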

Furthermore, a new “NN” module (NN-class) has been added in version 0.1.4, which allows the creation of custom NNs from predefined components and the manipulation of the network and its components from R. It also provides a fixed procedure for defining new NN component types (layers, nodes, sets of connections etc.) which can then be used in the module (some familiarity with C++ is required).

The “NN” module (aka NN-class) provides methods for handling the NN, such as: add_layer, add_connection_set, create_connections_in_sets, connect_layers_at, fully_connect_layers_at, add_single_connection, input_at, encode_at, encode_all, recall_at, recall_all, get_output_from, get_input_at, get_weights_at, print, outline.

A (rather useless and silly) example of using the “NN” module follows:
# (1.A) create new 'NN' object:
n <- new("NN")

# (1.B) Add topology components:
# 1. add a layer of 4 generic nodes:
n$add_layer("generic",4)
# 2. add (empty) set for connections that pass data unmodified:
n$add_connection_set("pass-through")
# 3. add another layer of 2 generic nodes:
n$add_layer("generic",2)
# 4. add (empty) set for connections that pass data * weight:
n$add_connection_set("wpass-through")
# 5. add a layer of 1 generic node:
n$add_layer("generic",1)
# Create actual connections in sets, w/ random initial weights in [0,1]:
n$create_connections_in_sets(0,1)
# Optionally, show an outline of the topology:
n$outline()

# (1.C) use the network.
# input some data and create output for it:
n$input_at(1,c(10,20,30,40))
n$recall_all(TRUE)
# the final output:
n$get_output_from(5)

# (1.D) optionally, examine the network:
# the input at first layer at position 1:
n$get_input_at(1)
# Data is passed unmodified through connections at position 2,
# and (by default) summed together at each node of layer at 3.
# Final output from layer in position 3:
n$get_output_from(3)
# Data is then passed multiplied by the random weights through
# connections at position 4. The weights of these connections:
n$get_weights_at(4)
# Data is finally summed together at the node of layer at position 5,
# producing the final output, which (again) is:
n$get_output_from(5)
The next example creates a simple MAM using the “NN” (NN-class) module:
# (2.A) Create the NN and its components:

m <- new( "NN" )
m$add_layer( "generic" , 4 )
m$add_layer( "generic" , 3 )
m$fully_connect_layers_at(1, 2, "MAM", 0, 0)

# (2.B) Use it to store iris species:

iris_data    <- as.matrix( scale( iris[1:4] ) )
iris_species <- as.integer( iris$Species )
for(r in 1:nrow( iris_data ) )
{
    x <- iris_data[r,]
    z <- rep( -1, 3 )
    z [ iris_species[r] ] <- 1
    m$input_at( 1, x )
    m$input_at( 3, z )
    m$encode_all( TRUE )
}

# (2.C) Attempt to recall iris species:

recalled_ids <- NULL
for(r in 1:nrow(iris_data))
{
    x <- iris_data[r,]
    m$input_at( 1, x )
    m$recall_all( TRUE )
    z <- m$get_output_from( 3 )
    recalled_ids <- c( recalled_ids, which.max ( z ) )
}

plot(iris_data, pch=recalled_ids)
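
To get a rough idea of how well the stored species are recalled, the recalled and actual species can also be tabulated (an optional check using base R only):

# confusion table and proportion recalled correctly:
table(actual = iris_species, recalled = recalled_ids)
mean(recalled_ids == iris_species)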
Hopefully the collection of predefined components will expand (and any contribution of components is welcomed).

nnlib2Rcpp: a(nother) R package for Neural Networks

For anyone interested, nnlib2Rcpp is an R package containing a number of Neural Network implementations and is available on GitHub. It can be installed as follows (the usual way for packages on GitHub):
library(devtools)
install_github("VNNikolaidis/nnlib2Rcpp")
The NNs are implemented in C++ (using the nnlib2 C++ class library) and are interfaced with R via the Rcpp package (which is also required).

The package currently includes the following NN implementations:

  • A Back-Propagation (BP) multi-layer NN (supervised) for input-output mappings.
  • An Autoencoder NN (unsupervised) for dimensionality reduction (a bit like PCA) or dimensionality expansion.
  • A Learning Vector Quantization NN (LVQ, supervised) for classification.
  • A Self-Organizing Map NN (unsupervised, simplified 1-D variation of SOM) for clustering (a bit like k-means).
  • A simple Matrix-Associative-Memory NN (MAM, supervised) for storing input-output vector pairs.
For more information see the package documentation.
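
For example, from an R session:

help(package = "nnlib2Rcpp")   # opens the package documentation index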