 Building and Running Network Models
Building a Model
A CNS network model is created by writing a MATLAB script that sets up a single MATLAB struct, for which we usually use the variable "m". The model structure identifies the package and describes the number, types, and sizes of layers as well as the connectivity between cells. It must also provide an initial value for all fields that do not have default values.
See the script demopkg_run in the demo package for an example that sets up a simple model structure. Part of that structure is shown here and will be explained in the sections to follow.
m = 
    package: 'demopkg'
     layers: {[1x1 struct]  [1x1 struct]  [1x1 struct]}

m.layers{1} = 
       type: 'input'
         pz: 0
       size: {[1]  [256]  [256]}
    y_start: 0.0020
    y_space: 0.0039
    x_start: 0.0020
    x_space: 0.0039

m.layers{2} = 
       type: 'scale'
         pz: 1
       size: {[1]  [128]  [128]}
    y_start: 0.0039
    y_space: 0.0078
    x_start: 0.0039
    x_space: 0.0078

m.layers{3} = 
       type: 'filter'
         pz: 2
    rfCount: 11
       size: {[4]  [118]  [118]}
    y_start: 0.0430
    y_space: 0.0078
    x_start: 0.0430
    x_space: 0.0078
Choosing a Package
Before anything else, you need to identify which package of cell types this model uses. This is done as follows:
m.package = name;
For example:
m.package = 'demopkg';
Calling Package Methods
The remainder of this section describes how to set up a model structure from scratch; however, the package you have chosen may contain some methods to help you do this.
 The package may have a CNSInit method, which gets called automatically by CNS to fill in the values of some fields for you.
 The package may have some additional methods which you can call.
You should check the package to see if any such methods are available.
Model-Level Information
In addition to the choice of package, you must provide an initial value for all fields having model scope that do not have default values. This is done as follows:
m.field = value;
See "Initial Values of Fields" below for details on setting values for the different classes of fields.
Layer Information
Most of the work in setting up a network model is in defining the various layers of cells that make up the network. As shown above, each layer is defined by a struct in the cell array m.layers. The following sections describe the various elements of a layer definition and how to set them up.
Basic Layer Properties
Property  Usage 
name  Optional. Gives the layer a name. This can be useful when a model has many layers; the cns_layerno function can be used to find the number of a named layer. If given, the name must be unique.
Example: m.layers{1}.name = 'image'; 
type  Required. Tells CNS the cell type of this layer. Must be a non-abstract type defined in the selected package.
Example: m.layers{1}.type = 'input'; 
size  Required. Tells CNS how many cells are in the layer along each dimension. The dimensionality is determined by the cell type.
Example: m.layers{1}.size = {500 100 100};
Note 1: if the cell type maps some dimensions to a common coordinate space, the sizes of those dimensions will probably be determined by that process (which will probably involve the cns_mapdim function).
Note 2: CNS issues performance warnings if your layer size will lead to inefficient processing; see the performance notes for details. To turn off these warnings, you can set: m.quiet = true; 
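Putting the basic properties together, a minimal layer definition might look like the following sketch. The package and cell type come from the demo package above; the layer name and sizes are illustrative choices, not taken from the demo model.

```matlab
% Sketch: a model with one named input layer.
% 'demopkg' and the 'input' type come from the demo package;
% the name 'image' and the sizes are illustrative.
m = struct;
m.package = 'demopkg';

m.layers{1}.name = 'image';       % optional; must be unique if given
m.layers{1}.type = 'input';       % must be a non-abstract type in the package
m.layers{1}.size = {1 256 256};   % one plane of 256 x 256 cells

% cns_layerno could later be used to look up the layer number
% from the name 'image'.
```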
Common Coordinate Mapping
If some of a cell type's dimensions are mapped to a common coordinate space, you need to establish that mapping for each layer you define. This is done by calling the cns_mapdim function for each mapped dimension. This sets:
 The size of that dimension.
 The common coordinate grid along that dimension, encoded by the parameter fields dim_start and dim_space, where dim is the dimension name; the y_start, y_space, x_start, and x_space values above are examples.
cns_mapdim has a number of different options. The following code sets up two layers of the demo model above. Note:
 Layer 2 is given the fixed size of {1 128 128} (the 128 comes from scaling 256 down by a factor of 2). Common grid coordinates are set for dimensions y and x by placing 128 regular grid points to fill the range [0, 1].
 The first dimension of layer 3 is fixed to size 4, but the sizes of the y and x dimensions are derived from the fact that layer 3 is generated by moving an 11x11 filter across layer 2 in steps of 1. The y and x grid coordinates are placed at the center of each valid filter position.
m.layers{2}.size{1} = 1;
m = cns_mapdim(m, 2, 'y', 'scaledpixels', 256, 2);
m = cns_mapdim(m, 2, 'x', 'scaledpixels', 256, 2);

m.layers{3}.size{1} = 4;
m = cns_mapdim(m, 3, 'y', 'int', 2, 11, 1);
m = cns_mapdim(m, 3, 'x', 'int', 2, 11, 1);
Once common coordinates have been set up, there are several useful functions you can call from MATLAB:
 cns_center - find the position of a cell in common coordinates.
 cns_findnearest, cns_findnearest_at - find the nearest n cells to a given cell or position.
 cns_findwithin, cns_findwithin_at - find cells within radius r of a given cell or position.
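The start/space encoding can also be unpacked by hand. Assuming a regular grid in which dim_start is the coordinate of the first cell and dim_space is the spacing between adjacent cells (an assumption, but one consistent with the demo values, where 256 points with spacing 1/256 fill the range [0, 1]), the common coordinate of cell i along a dimension is start + (i-1)*space:

```matlab
% Sketch: recovering common coordinates from y_start / y_space.
% The formula center(i) = start + (i-1)*space is an assumption
% consistent with the demo model's values.
n       = 256;            % layer 1 size along y (from the demo model)
y_space = 1 / n;          % 256 regular grid points filling [0, 1]
y_start = y_space / 2;    % first cell centered half a step in

centers = y_start + (0:n-1) * y_space;

% Sanity checks against the demo values (0.0020 and 0.0039, rounded):
assert(abs(y_start - 0.0020) < 5e-4);
assert(abs(y_space - 0.0039) < 5e-4);
% The grid sits symmetrically within [0, 1]:
assert(abs((1 - centers(end)) - centers(1)) < 1e-12);
```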
Explicit Synapses
Explicit synapses (if your cell type has the synType property) may be enumerated for all the cells in a layer using the following three properties.
Property  Usage 
synapseZs  The layer number of the presynaptic cell for each synapse, for each cell in this layer. This is a numeric array of size [ns n1 n2 ...], where ns is the maximum number of synapses for a cell in the layer and n1, n2, ... are the sizes of the layer along each dimension.
For example, if layer z is a 3D layer of size [8 64 64], with at most 50 synapses per cell, we would have:
size(m.layers{z}.synapseZs) = [50 8 64 64]
The synapseZs values for cell (1, 1, 1) will be in synapseZs(:, 1, 1, 1). If some cells have fewer than ns synapses, the trailing synapseZs values for those cells must be zero. For example, if cell (1, 1, 1) has only 42 synapses, then synapseZs(43:50, 1, 1, 1) must all be 0. If all synapses for this layer originate in the same presynaptic layer, you can set synapseZs to a scalar. 
synapseIs  This holds the linear index of each presynaptic cell within its own layer. For example, if we want to point to presynaptic cell (5, 2, 4) which sits in a layer of size [10 20 30], that cell's linear index within its layer can be computed as:
sub2ind([10 20 30], 5, 2, 4) = 615
(Note: cns_iconv provides the same functionality and is more convenient to use with CNS model structures.)
Like synapseZs above, this is a numeric array of size [ns n1 n2 ...]. Also like synapseZs, if some cells have fewer than ns synapses, the trailing synapseIs values for those cells must be zero. 
synapseTs  If desired, you can attach a positive integer to each synapse, which is a good way to differentiate types of synapses. If present, synapseTs must be the same size as synapseIs and have zeros exactly where synapseIs has zeros. If all synapses have the same type, synapseTs can also be a scalar. 
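As a concrete sketch of the three arrays, suppose every cell in a small 2D layer receives synapses from a few cells of a presynaptic layer. The layer numbers and sizes below are made up for illustration; the zero-padding and scalar shortcuts follow the rules above.

```matlab
% Sketch: explicit synapses for a 2D layer of size [4 4], each cell
% receiving up to ns = 3 synapses from layer 1 (size [10 20 30]).
% All numbers are illustrative.
preZ    = 1;            % presynaptic layer number
preSize = [10 20 30];   % size of the presynaptic layer
ns      = 3;            % max synapses per cell
n1 = 4; n2 = 4;         % this layer's size

% All synapses come from the same layer, so a scalar suffices:
m.layers{2}.synapseZs = preZ;

% Linear indices of presynaptic cells, padded with zeros where a
% cell has fewer than ns synapses:
synapseIs = zeros(ns, n1, n2);
synapseIs(1, :, :) = sub2ind(preSize, 5, 2, 4);   % = 615, as above
synapseIs(2, :, :) = sub2ind(preSize, 1, 1, 1);   % = 1
% synapseIs(3, :, :) stays 0: each cell has only 2 synapses here.
m.layers{2}.synapseIs = synapseIs;

% Optional synapse types: same size, zeros exactly where synapseIs is 0.
synapseTs = zeros(ns, n1, n2);
synapseTs(1:2, :, :) = 1;
m.layers{2}.synapseTs = synapseTs;
```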
Once explicit synapses have been enumerated for all layers, the cns_trace function is a useful tool for tracing connectivity through the network.
Group Membership
It is sometimes convenient to have multiple layers share some of the same data (parameters, feature dictionaries, etc.). One way to do this would be to have the package define these fields at the model level, but that may be too broad. CNS has a "group" option whereby multiple layers of the same type can be declared to be a group. They will then share a single copy of all fields that the package defines with group scope.
For example, when building a network model, you could declare layers 1 and 2 to be part of the same group (group 1) like this:
m.layers{1}.groupNo = 1;
m.layers{2}.groupNo = 1;
CNS would then expect to find the values of any group-scope fields in:
m.groups{1}
If a given layer z is not assigned to any group, CNS will look for any "group" fields in:
m.layers{z}
If groups are used, they must be contiguously numbered. For example, if group 5 exists, then groups 1-4 must also exist.
Groups may have names, e.g.:
m.groups{1}.name = 'c1';
Named groups can be found in a large model using the function cns_groupno.
Fields
You must provide an initial value for all fields having layer (or group) scope that do not have default values.
This is done for layer z as follows:
m.layers{z}.field = value;
Exception: if layer z belongs to group g, fields having group scope are set as follows:
m.groups{g}.field = value;
See "Initial Values of Fields" below for details on setting values for the different classes of fields.
Execution Order
By default, during a single network iteration:
 (1) All layers are computed in parallel. While this is not strictly true, you can pretend it is. The order of processing is both unspecified and irrelevant. A double-buffering technique is used so that all computations performed in iteration t use inputs from iteration t-1.
 (2) Every cell (except those without a kernel) gets its compute kernel called once.
The above defaults make sense for dynamic, time-based simulations such as models that use spiking neurons. But they don't make sense for other kinds of models. For example:
 (1) would be suboptimal for a purely feedforward model, which is most efficiently computed stepwise, from bottom to top.
 (2) doesn't hold when training a convolutional network, where a single network iteration consists of a forward pass, a backward pass, and a weight update.
CNS's solution to this is to assign each layer to one or more step numbers. For example, the following would cause CNS to break a full network iteration into three steps, consisting of layers 1, 2, and 3, respectively. The results of step 1 will be available to the cells being computed in step 2, etc.
m.layers{1}.stepNo = 1;
m.layers{2}.stepNo = 2;
m.layers{3}.stepNo = 3;
More than one layer can be computed in a single step. (Indeed, in the default case, where no stepNos are specified, all layers get assigned to step 1.)
The function cns_setstepnos can automatically assign step numbers for some common cases.
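For a purely feedforward model like the demo, the per-layer assignments above amount to giving each layer its own step, bottom to top. A loop version of that same assignment (a sketch, assuming exactly one layer per step) might look like:

```matlab
% Sketch: one step per layer, bottom to top, for a feedforward model.
% This reproduces the explicit stepNo assignments shown above;
% cns_setstepnos can automate this and other common patterns.
for z = 1 : numel(m.layers)
    m.layers{z}.stepNo = z;
end
```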
A layer can also be computed more than once in a single network iteration. Here, layer z will get computed twice, once in step 3 and once in step 6.
m.layers{z}.stepNo = [3 6];
In networks that are computed in steps, it is often the case that the cells within a single step are independent of each other. They depend on cells in previous steps, but not on each other. If this is true for all steps, then the double-buffering referred to above is unnecessary, and you can get a performance gain by turning it off. This is done with the setting:
m.independent = true;
Initial Values of Fields
You must provide a value (for variables, an initial value) for all fields that do not have default values. This is done by setting a value in the model structure, in one of these places (depending on the scope):
m.field = value;
m.layers{z}.field = value;
m.groups{g}.field = value;
The following table shows where each class/scope of field is initialized. The following symbols describe array sizes:
 n1 = the size of a layer along dimension 1
 n2 = the size of a layer along dimension 2
 ...
 ns = the maximum number of synapses for a cell in a layer
 nv = the number of values in a multivalued field

Field Class           Scope    Initialized in
parameter, pointer    model    m
                      layer    m.layers{z}
                      group    m.groups{g}
N-D array             model    m
                      layer    m.layers{z}
                      group    m.groups{g}
cell field            cell     m.layers{z}
synapse field         synapse  m.layers{z}
A few functions that might be useful here are:
 cns_getconsts - returns the values of compile-time constants defined by the package.
 cns_intmin, cns_intmax - lower and upper bounds for an integer.
 cns_fltmin, cns_fltmax - lower and upper bounds for a single-precision floating point number.
Reviewing the Model Structure
Once you've assembled your model structure, you can try initializing it on the GPU. CNS will tell you if anything is missing, wrongly formatted, etc.
You may be relying on CNS to fill in some default values for you. If you want to check these, you can get a complete model structure with all the defaults filled in by calling cns_getdflts.
You are also free to use the model structure to store additional information that CNS doesn't know about. This is fine, as long as you don't overwrite anything CNS needs. If you've forgotten what's yours and what's CNS's, the function cns_getknownfields will tell you.
Running a Model
Once you have built your model structure, you can initialize it on the GPU, execute it, set inputs and retrieve outputs between iterations/steps, etc. All of this is done using the cns function.