(Strong AI) Technological Singularity / (World Acceleration) 23 at FUTURE
Arguments pushed onto the stack can themselves be calculated as sub-expressions (sub-trees).
In this sense, for the actual function call it is irrelevant whether program variables or temporaries are pushed onto the stack.

  # 1 push B
    # clearing the next cell in the stack [remember that sp is negative]
    # the line below is same as in C syntax: *(++sp)=0;
    dec sp; t1; sp t1; t2; sp t2; t1:0 t2:0
    # same as in C syntax: *sp+=B;
    t3; sp t3; b Z; Z t3:0; Z

  # 2 push A
    # the same with A
    dec sp; t4; sp t4; t5; sp t5; t4:0 t5:0
    t6; sp t6; a Z; Z t6:0; Z

  # 3 push return_address
    dec sp; t7; sp t7; t8; sp t8; t7:0 t8:0
    t9; sp t9; t10 t9:0 goto_address
    . t10: return_address

  # 4 goto f
    goto_address: Z Z f

  # 5 sp -= 3
    return_address: const(-3) sp

The notation const(-3) sp is shorthand for

  unique_name sp
  ...
  unique_name:-3
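
Putting the five steps together, at the C level the sequence implements an ordinary call f(A,B). Below is a runnable sketch: the stack is a plain array here, f is entered by a normal C call, and the values 7 and 9 are arbitrary; in the real Subleq code sp holds a negated address.

  #include <stdio.h>

  int stack[64];
  int *sp = stack;

  int f(void)
  {
      int a = *(sp - 1);            /* A was pushed second (step 2) */
      int b = *(sp - 2);            /* B was pushed first  (step 1) */
      return a - b;
  }

  int main(void)
  {
      *(++sp) = 9;                  /* step 1: push B               */
      *(++sp) = 7;                  /* step 2: push A               */
      *(++sp) = 0;                  /* step 3: push return_address  */
      int r = f();                  /* step 4: goto f               */
      sp -= 3;                      /* step 5: sp -= 3              */
      printf("%d\n", r);            /* prints -2                    */
      return 0;
  }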

The code above handles neither return value nor indirect calls yet.
The return value can be stored in a special variable (register).
If the program uses the return value in a sub-expression, then it must copy the value into a temporary immediately upon return.
Indirect calls can be achieved by dereferencing a temporary holding the address of the function.
This is straightforward, but produces more complex code.

The stack pointer can be modified inside a function when the function requests stack (local) variables.
For accessing local variables, a base pointer bp is usually used.
It is initialised on function entrance; it serves as a base reference for local variables, each of which has an associated offset from the base pointer; and it is used to restore the stack pointer at the end of the function.
Functions can call other functions, which means that each function must save the base pointer upon entry and restore it upon exit.
So the function body has to be wrapped with the following commands:




  1. # push bp
  2. # sp -> bp
  3. # sp -= stack_size
  # ... function body
  5. # bp -> sp
  6. # pop bp
  7. # return

Or, in Subleq code:

  # 1 push bp
  dec sp; ?+11; sp ?+7; ?+6; sp ?+2; 0
  ?+6; sp ?+2; bp 0
  # 2 sp -> bp
  bp; sp bp
  # 3 sp -= stack_size
  stack_size sp

  # ... function body

  # 5 bp -> sp
  sp; bp sp
  # 6 pop bp
  ?+8; sp ?+4; bp; 0 bp; inc sp
  # 7 return
  ?+8; sp ?+4; ?+7; 0 ?+3; Z Z 0

stack_size is a constant, which is calculated for every function during parsing.
It turns out that it is not enough to save bp.
A function call can happen inside an expression.
In such a case all the temporaries of the expression have to be saved, because the called function will be using the same temporary memory cells for its own needs.
For the expression f()+g() the results of the calls may be stored in temporaries t1 and t2.
If function g changed t1, where the result of function f is stored, a problem would appear.

A solution is to make every function push all temporaries it is using onto the stack and to restore them upon exit.
Consider the following function:

  int g()
  {
    return k+1;
  }

It translates into:

  _g:
    # save bp
    dec sp; ?+11; sp ?+7; ?+6; sp ?+2; 0
    ?+6; sp ?+2; bp 0
    bp; sp bp

    # push t1
    dec sp; ?+11; sp ?+7; ?+6; sp ?+2; 0
    ?+6; sp ?+2; t1 0
    # push t2
    dec sp; ?+11; sp ?+7; ?+6; sp ?+2; 0
    ?+6; sp ?+2; t2 0

    # calculate addition
    t1; t2
    _k t1
    dec t1
    t1 t2
    # set the return value [negative]
    ax; t2 ax

    # pop t2
    ?+8; sp ?+4; t2; 0 t2; inc sp
    # pop t1
    ?+8; sp ?+4; t1; 0 t1; inc sp

    # restore bp
    sp; bp sp
    ?+8; sp ?+4; bp; 0 bp; inc sp
    # exit
    ?+8; sp ?+4; ?+7; 0 ?+3; Z Z 0


If somewhere inside the code there are calls to other functions, the temporaries t1 and t2 hold their calculated values because other functions save and restore them when executed.
Since all used temporaries in the function are pushed into the stack, it pays off to reduce the number of used temporaries.
It is possible to do this by releasing a temporary that is no longer needed into a pool of free temporaries.
Later, when a new temporary is requested, the pool is checked first, and a new temporary is allocated only when the pool is empty; a sketch of such a pool follows.
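
A minimal sketch of such a pool in C (hypothetical helper names; temporaries are identified by small integers standing for t1, t2, ...):

  #define MAX_TEMPS 64

  static int pool[MAX_TEMPS];       /* stack of released temporaries */
  static int pool_top  = 0;
  static int next_temp = 1;         /* next never-used temporary     */

  int temp_request(void)
  {
      if( pool_top > 0 )
          return pool[--pool_top];  /* reuse a released temporary    */
      return next_temp++;           /* otherwise allocate a new one  */
  }

  void temp_release(int t)
  {
      if( pool_top < MAX_TEMPS )
          pool[pool_top++] = t;     /* make t available for reuse    */
  }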
The expression

  1+k[1]

compiles into

  t1; t2; _k t1; dec t1; t1 t2
  t3; t4; ?+11; t2 Z; Z ?+4; Z; 0 t3; t3 t4;
  t5; t6; dec t5; t4 t5; t5 t6
  # result in t6

When the pool of temporaries is introduced, the number of temporaries is halved:

  t1; t2; _k t1; dec t1; t1 t2
  t1; t3; ?+11; t2 Z; Z ?+4; Z; 0 t1; t1 t3
  t1; t2; dec t1; t3 t1; t1 t2
  # result in t2

which dramatically reduces the code by removing the corresponding push and pop operations.

4.4 Stack variables

Once bp is placed on the stack and sp is decremented to allocate memory, all local variables become available.
They can be accessed only indirectly because the compiler does not know their addresses.
For example, the function f in

  int f(int x, int y)
  {
    int a, b=3, c[3], d=5;
    ...
  }
  f(7,9);

has 4 local variables with the stack size equal to 6.
When this function is entered the stack has the following values:

... y[9] x[7] [return_address] [saved_bp] a[?] b[3] c0[?] c1[?] c2[?] d[5] ...
                               ^                                      ^
                               (bp)                                   (sp)

The compiler knows about the offset of each variable from bp.


  Variable Offset
  y   -3
  x   -2
  a   1
  b   2
  c   3
  d   6

Hence, in the code any reference to a local variable that is not an array can be replaced with *(bp+offset).
The array c has to be replaced with (bp+offset), because the name of an array is the address of its first element.
The name itself does not refer to a variable, but referencing it with [] does.
In C

  c[i]

is the same as

  *(c+i)

which can be interpreted in our example as

  *((bp+3)+i)
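
In C terms the rewriting can be pictured with a pair of hypothetical macros (a sketch; the generated Subleq code performs these accesses through self-modifying instructions instead):

  int *bp;    /* base pointer, set up in the function prologue */

  #define VAR(ofs)      (*(bp + (ofs)))   /* scalar local or argument     */
  #define VAR_ADDR(ofs) (bp + (ofs))      /* array local: just an address */

  /* With the offsets from the table above:
       x    becomes VAR(-2)
       b    becomes VAR(2)
       c[i] becomes *(VAR_ADDR(3) + i), i.e. *((bp + 3) + i) */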

4.5 Multiplication

The only trivial multiplication in Subleq is multiplication by 2,

  t=a+a: t; a Z; a Z; Z t; Z

To multiply 2 numbers one can use the formula

  A*B = (2A)*(B/2) + A*(B%2)

This is a simple recursive formula, but it requires integer and modular division.
Division can be implemented as the following algorithm.
Given two numbers A and B, B is repeatedly doubled until the next doubling would make it greater than A.
At the same time another variable I, initialised to 1, is doubled alongside B.
When this point is reached, I holds part of the result of the division; the rest is calculated further using A-B and the original B.
This is done recursively, accumulating all the I's.
At the last step when A<B, A is the modulus.
This algorithm can be implemented as a short recursive function in C.
Upon exit, this function returns the integer division as its result and the division modulus in the argument j.

  int divMod(int a, int b, int *j)
  {

    if( a < b ) { *j=a; return 0; }

    int b1=b, i=1, bp, ip;

  next:
    bp = b1; ip = i;
    b1 *= 2; i *= 2;
    if( b1 > a )
      return ip+divMod(a-bp,b,j);
    goto next;
  }
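
Given divMod, the multiplication formula above can be sketched as a short recursive function (an illustration, not the compiler's actual library routine; it assumes a non-negative B):

  int divMod(int a, int b, int *j);     /* the function above */

  /* A*B = (2A)*(B/2) + A*(B%2) */
  int mul(int a, int b)
  {
      int m;                            /* receives B % 2 */
      if( b == 0 ) return 0;
      b = divMod(b, 2, &m);             /* B / 2, remainder in m */
      return mul(a + a, b) + (m ? a : 0);
  }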


This function is not optimal.
A more efficient version can be achieved by replacing the recursion with an outer loop.
Multiplication and integer and modular division, which require quite elaborate calculations, can be implemented as library functions.
That is, each multiplication a*b can be replaced with a call _mul(a,b), and later the compiler may add (if necessary) the implementation of the function.

4.6 Conditional jump

In C, Boolean expressions which evaluate to zero are false and non-zero are true.
In Subleq this leads to longer code when handling Boolean expressions because every Boolean expression evaluates on the basis of equality or non-equality to zero.

A better approach is to treat values less than or equal to zero as false and positive values as true.
Then the if-expression if(expr){<body>} becomes just one instruction:

  Z t next
  <body>
  next: ...

where t is the result of the expression expr.
However, to remain fully compatible with C (for example, if(x+1){...}, an implicit conversion to Boolean), all cases where an integer expression is used as a Boolean have to be detected.
Fortunately there are only a few such cases:

  if(expr)
  while(expr)
  for(...,expr,...)
  ! expr
  expr1 && expr2
  expr1 || expr2

The job can be done inside the parser, so the compiler does not have to distinguish Boolean from integer expressions and can produce much simpler code.

In cases where a Boolean value is used in an expression as an integer, as in:

  passing an argument f(a>0)
  returning from a function return(a>0);
  assignment x=(a>0);
  other arithmetic expression x=1+5*(a>0);

the value must be converted to C style, i.e., a negative result must be zeroed.
This can be done as simply as

  x Z ?+3; x; Z
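
In C terms the snippet is simply the following (assuming Z holds zero on entry; the conditional jump in the first instruction skips the clearing when x is non-negative):

  if( x < 0 ) x = 0;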


Figure 3 Diagram representing conditional jumps for the three cases x > 0, x == 0, and x < 0.




A terse check for a value being less than, equal to, or greater than zero is:

  Z x ?+3; Z Z G; x Z E; Z; L:

where L, E, and G are the addresses to which execution passes when x is less than, equal to, or greater than zero, respectively.
Figure 3 shows the schema of the execution.
Note that x does not change and Z is zero on any exit.
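
In C the three-way dispatch can be sketched as follows (a sketch; L, E, and G stand in for the Subleq target addresses):

  const char *sign3(int x)
  {
      if( x > 0 ) goto G;
      if( x == 0 ) goto E;
      goto L;
  L:  return "less";     /* x < 0  */
  E:  return "equal";    /* x == 0 */
  G:  return "greater";  /* x > 0  */
  }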

5. Results



Figure 4 FPGA board, 28 Subleq processors with allocated 2 Kb per processor


Figure 4 shows our FPGA board powered via USB cable.
Sized about 5 × 7 centimetres, the board implements 28 Subleq processors, each with 2 Kb of allocated memory, running at a clock frequency of 150 MHz.

To test the efficiency of the board we chose two mathematical problems.
The first calculates the size of a function residue of an arithmetic group.
The second calculates modular double factorials.

5.1 Test #1

In the first test we selected a problem of finding the order of the function residue of the following process:

  x_{i+1} = 2 x_i mod M
  y_{i+1} = 2 (x_i + y_i) mod M

where x and y are integers initialised to 1, mod is the modulo operation, and M is some value.
Starting from the point (x0=1, y0=1) the equations generate a sequence of pairs.
We chose this problem because its solution is difficult, with answers often much greater than M (but less than M²).
The number M was selected such that the calculations could be completed in a few minutes.
When this sequence is sufficiently long, a newly generated pair will eventually be the same as a pair generated previously in the sequence.
The task is to find how many steps have to be completed before the first such repetition occurs.
In our test the selected value was M=5039, and the number of iterations was calculated as 12693241.



A C program to solve this problem can be written without the use of multiplication or division:

  int x=1, y=1, m=5039;
  int c=0, ctr=1, t;
  int x0=0, y0=0;

  int printf();
  int main()
  {

    while(1)
    {
      y += x; y += y; x += x;
      while( x>=m ) x-=m;
      while( y>=m ) y-=m;

      if( x==x0 && y==y0 ) break;

      if( ++c==ctr )
      {
        x0=x; y0=y;
        c=0; ctr+=ctr;
      }
    }
    printf("point: %d %d loop: %d of %d\n",x0,y0,c+1,ctr);
  }

This program has been tested in the following cases:

  1. Compiled with our Subleq compiler and run on one of the processors of the FPGA board;
  2. Compiled with our Subleq compiler and emulated on PC#1 (Intel Q9650 at 3 GHz);
  3. Compiled with the Microsoft C/C++ compiler (v16) with full optimisation and run on PC#1;
  4. Same as 2, but run on PC#2 (Pentium 4 at 1.7 GHz);
  5. Same as 3, but run on PC#2.

The table below shows execution time in seconds for each test.

  1 Subleq on 1 processor FPGA 94.0
  2 Subleq on PC#1 46.0
  3 C on PC#1 0.37
  4 Subleq on PC#2 216
  5 C on PC#2 0.54

From these results we conclude that the speed of a single processor on the FPGA is of the same order of magnitude as the speed of an ordinary PC CPU emulating Subleq instructions.
Native code on a PC runs about a hundred times faster.

5.2 Test #2

The second test was the calculation of modular double factorials, namely

  (N!)! mod M = ∏_{n=1}^{N} ∏_{i=1}^{n} i mod M



In this test case we were able to use the full power of our multi-processor Subleq system, because the multiplication in the above equation can be calculated in parallel across all 28 processors.
For N=5029 and M=5039 the result is 95, and these numbers were used in the test.
The number M was the same as in Test #1, and N was selected to give a result (95) in the ASCII printable range.
The calculations were run in the following configurations:

  1. Hand-written Subleq code run on the FPGA board [Appendix 7.3]
  2. Subleq code emulated on PC (same as PC#1 in the first test)
  3. Equivalent C code compiled with the same C compiler and run on PC [Appendix 7.1]
  4. Same C code compiled with Subleq compiler and emulated on PC
  5. Equivalent C code without multiplication operation compiled with C compiler and run on PC [Appendix 7.2]
  6. Same C code as in 5 compiled with the Subleq compiler and emulated on PC

The code we used was not 100% efficient, since the solution to the problem needs ~O(N log N) operations if utilising modular exponentiation, rather than the ~O(N²) presented in the Appendix.
However this is not important when evaluating relative performance.

The results are presented in the table below.
The values are execution time in seconds.

  1 Subleq on FPGA, parallel on 28 processors 62.0
  2 Subleq on PC (emulation) 865
  3 C with multiplication, executable run on PC 0.15
  4 C with multiplication, Subleq emulated on PC 12060
  5 C without multiplication, executable run on PC 7.8
  6 C without multiplication, Subleq emulated on PC 9795

The 28 FPGA processors easily outperform the emulation of the same Subleq code on a PC.
C code without multiplication, compiled into Subleq and emulated, runs faster than C code with multiplication, because the compiler's library multiplication function is not as efficient as the multiplication function written explicitly in this example.

6. Conclusion

Using an inexpensive Cyclone III FPGA we have successfully built an OISC multi-processor device with processors running in parallel.
Each processor has its own memory limited to 2 Kb.
Due to this limitation we were unable to build a multi-processor board with an even simpler individual processor instruction set, such as bit copying [2], because in that case the minimum memory required to run practically useful computational tasks is ~1 Mb per processor.
The limited memory available in our device also did not permit us to run more advanced programs, such as emulators of other processors, or to use more complex computational algorithms, because all the computational code has to fit inside the memory allocated to each processor.




The size of memory available to each processor can be increased by choosing a larger and faster, albeit more expensive, FPGA such as the Stratix V.
A faster processing clock and a larger number of CPUs could then be implemented as well.
The VHDL code of the CPU state machine could also be optimised, improving computational speed.
Given sufficient memory, it would be possible to emulate any other processor architecture, use algorithms written for other CPUs, or run an operating system.
Apart from the memory constraint, another downside of this minimalist approach is reduced speed: our board uses a rather slow CPU clock speed of 150 MHz.
As mentioned above, a more expensive FPGA could run at much faster clock speeds.

On the other hand, the simplicity of our design allows for it to be implemented as a standalone miniature-scale multi-processor computer, thus reducing both physical size and energy consumption.
With proper hardware, it might also be possible to power such devices with low power solar batteries similar to those used in cheap calculators.
Our implementation is scalable: it is easy to increase the number of processors by connecting additional boards without significant load on the host's power supply.
A host PC does not have to be fast to load the code and read back the results.
Since our implementation is FPGA based, it is possible to create other types of runtime re-loadable CPUs, customised for specific tasks by reprogramming FPGA.

In conclusion, we have demonstrated the feasibility of the OISC concept and applied it to building a functional prototype of an OISC multi-processor system.
Our results demonstrate that with proper hardware and software implementation, substantial computational power can be achieved even in a very simple OISC multi-processor design.




7. Appendix

This section presents pieces of code calculating modular double factorial.

7.1 C with multiplication

The following C program calculates modular double factorial using built-in multiplication and division operations.

1   int printf();
2   int main()
3   {
4     int a=5029;
5     int b=1;
6     int m=5039;
7     int x=1;
8     int i,j;
9
10     for( i=a; i>b; i-- )
11     for( j=1; j<=i; j++ )
12     x = (j*x)%m;
13
14     printf("%d",x);
15   }

Lines 10–12 are a double loop multiplying the numbers from b to a modulo m.

7.2 C without multiplication

This C program does the same calculation as the program above in 7.1, but without built-in multiplication and division operations.
Multiplication and division functions are written explicitly.

1   int DivMod(int a, int b, int *m)
2   {
3     int b1, i1, bp, ip;
4     int z = 0;
5
6   start:
7     if( a<b ){ *m=a; return z; }
8

9     b1=b; i1=1;
10
11   next:
12     bp = b1; ip = i1;
13     b1 += b1; i1 += i1;
14
15     if( b1 > a )
16     {
17       a = a-bp;
18       z += ip;
19       goto start;
20     }
21
22     if( b1 < 0 ) return z;
23
24     goto next;
25   }
26
27   int Mult(int a, int b)
28   {
29     int dmm, r=0;
30
31     while(1)
32     {
33       if( !a ) return r;
34       a=DivMod(a,2,&dmm);
35       if( dmm ) r += b;
36       b += b;
37     }
38   }
39
40   int printf();

41
42  int a=5029, b=1, m=5039;
43   int k=0, x=1, t;
44
45   int main()
46   {
47   start: k=a;
48   loop: t=Mult(k,x);
49     DivMod(t,m,&x);
50
51     if( --k ) goto loop;
52     if( --a > b ) goto start;
53
54     printf("%d",x);
55   }

Lines 1–25 implement the division algorithm described in 4.5, optimised by removing the recursive call.
The multiplication (lines 27–38) is a straightforward implementation of the formula shown in 4.5.




C loops are replaced with goto statements to make the process flow similar to the Subleq implementation in subsection 7.3.

7.3 Subleq code

Subleq code calculating modular double factorials has been written manually, because Subleq compiled from C did not fit into the memory.
The code below has 83 instructions, which fits even into 1 Kb with a 32-bit word (83 instructions × 3 words × 4 bytes = 996 bytes).

1   0 0 Start
2
3   . A:5029 B:1 MOD:5039
4   . Z:0 K:0 X:1
5
6   Start:
7   A Z; Z K; Z
8
9   Loop:
10   mu_a; K mu_a
11   mu_b; X mu_b
12
13
14   Mult:
15   mu_r
16
17   mu_begin:
18   t2; mu_a t2 mu_return:N2
19
20   dm_a; mu_a dm_a
21   dm_b; C2 dm_b
22   dm_return; N3L dm_return
23   t2 t2 DivMod
24
25   N3:
26   dm_m t2 ?+3
27
28   mu_b mu_r
29
30   mu_a; dm_z mu_a
31   mu_b Z; Z mu_b; Z Z mu_begin
32

33   . mu_a:0 mu_b:0 mu_r:0
34
35   #Mult
36
37
38   N2:
39   dm_a; mu_r Z; Z dm_a; Z
40   dm_b; MOD dm_b
41
42   dm_return; N1L dm_return
43   Z Z DivMod
44
45   N1:
46   X; dm_m X
47
48   C1 K ?+3
49   Z Z Loop
50
51   C1 A
52   B A END
53   B Z; Z A; Z
54   K K Start
55
56   END:
57   X (-1)
58   Z Z (-1)
59
60   DivMod:
61
62   dm_z
63   dm_m
64

65   dm_start:
66   t1; dm_b t1
67   dm_a t1 ?+6
68   dm_a dm_m; Z Z dm_return:0
69
70   dm_b1; dm_b Z; Z dm_b1; Z
71   dm_i1; C1 dm_i1
72
73  dm_next:
74   dm_bp; dm_b1 dm_bp
75   dm_ip; dm_i1 dm_ip
76
77   dm_b1 Z; Z dm_b1; Z
78   dm_i1 Z; Z dm_i1; Z
79   t1; dm_b1 t1
80   dm_a t1 dm_next
81
82   dm_bp dm_a
83   dm_ip Z; Z dm_z; Z Z dm_start
84
85   . dm_a:0 dm_b:0 dm_z:0
86   . dm_m:0 dm_b1:0 dm_ip:0
87   . dm_i1:0 dm_bp:0 t1:0
88
89   #divMod
90
91   . N1L:-N1 N3L:-N3 t2:0
92   . C1:1 C2:2 0

Lines 3 and 4 define variables similar to how variables are defined in the C example above.
A defines the number whose double factorial is to be calculated; B is the starting number, in our case 1, but in general it can be any number.
When the task is distributed among parallel processors, the range B to A is broken into smaller ranges which are submitted to the processors independently.
Upon completion the results are collected and processed further.
MOD is the modulus of the algorithm.
Z is the Subleq zero register.
K is an intermediate value running from A down to 1.
X is the accumulated result.

Line 7 initialises K.
Lines 10 and 11 prepare formal arguments for the multiplication algorithm written between lines 14 and 35.
The code of the multiplication algorithm is an almost one-to-one equivalent of the function Mult written in the previous subsection.




The only complication is that DivMod is organised here as a function, so its code is reused by the calls at lines 23 and 43.
To make this possible one needs to initialise the formal arguments for the function as well as the return address.
The return address is copied via indirect labels N1L and N3L.

Lines 39 and 40 take the result from multiplication and initialise arguments for division.
Lines 42 and 43 initialise return address and call DivMod.
Line 46 extracts the result into X.

Lines 48 and 49 decrement K and check whether it is less than 1.
If not, the whole iteration is repeated with K smaller by 1.
If K has reached zero, execution proceeds.

Lines 51–54 decrement A and check whether A has reached B.
If yes, we go to the label END.
If not, we go to line 7 and repeat the whole process again, but now with A reduced by 1 and hence K starting from the new value of A.

Line 57, crossed out in the listing, prints the result.
This instruction is handy when emulating Subleq, but it does not exist when calculating on the FPGA board, because the board has no concept of input-output operations.
The next line, 58, is a valid Subleq halt command.

Lines 60–89 are the corresponding Subleq code for the function DivMod presented in C in the subsection above.

Finally, lines 91 and 92 define the return addresses for the calls to DivMod, a temporary t2, and two constants 1 and 2.
The latter is required for the division by 2 in the multiplication formula from 4.5.

References

1. Jones, Douglas W. (June 1988). "The Ultimate RISC". ACM SIGARCH Computer Architecture News (New York: ACM) 16 (3): 48–55.
2. Mazonka, Oleg. "Bit Copying: The Ultimate Computational Simplicity". Complex Systems Journal 2011, Vol 19, N3, pp. 263–285.
3. URLリンク(esolangs.org)
4. URLリンク(esolangs.org)
5. Derivative languages in the references section of URLリンク(esolangs.org)

6. Mavaddat, F.; Parhami, B. (October 1988). "URISC: The Ultimate Reduced Instruction Set Computer". Int'l J. Electrical Engineering Education (Manchester University Press) 25 (4): 327–334.
7. URLリンク(esolangs.org)
8. URLリンク(da.vidr.cc)
9. URLリンク(www.sccs.swarthmore.edu)




10. URLリンク(techtinkering.com)
11. URLリンク(esolangs.org)

Numenta Publishes a New Theory That Could Solve the Mystery of How the Brain Transforms Sensations Into Mental Objects
URLリンク(businesswire.com)
A Theory of How Columns in the Neocortex Enable Learning the Structure of the World
URLリンク(ncbi.nlm.nih.gov)

Therefore, the capacity of the network is limited by the pooling capacity of the output layer. Mathematical analysis suggests that a single cortical column can store hundreds of objects before reaching this limit (see Supplementary Material).

To measure actual network capacity, we trained networks with an increasing number of objects and plotted recognition accuracy.
For a single cortical column, with 4,096 cells in the output layer and 150 mini-columns in the input layer, the recognition accuracy remains perfect up to 400 objects (Figure 5A, blue).
The retrieval accuracy drops when the number of learned objects exceeds the capacity of the network.

Figure 5
Recognition accuracy is plotted as a function of the number of learned objects.
(A) Network capacity relative to number of mini-columns in the input layer.
The number of output cells is kept at 4,096 with 40 cells active at any time.
(B) Network capacity ...

From the mathematical analysis, we expect the capacity of the network to increase as the size of the input and output layers increase. We again tested our analysis through simulations.
With the number of active cells fixed, the capacity increases with the number of mini-columns in the input layer (Figure 5A).
This is because with more cells in the input layer, the sparsity of activation increases, and it is less likely for an output cell to be falsely activated.
The capacity also significantly increases with the number of output cells when the size of the input layer is fixed (Figure 5B).
This is because the number of feedforward connections per output cell decreases when there are more output cells available.
We found that if the size of individual columns is fixed, adding columns can increase capacity (Figure 5C).
This is because the lateral connections in the output layer can help disambiguate inputs once individual cortical columns hit their capacity limit. However, this effect is limited; the incremental benefit of additional columns decreases rapidly.

The above simulations demonstrate that it is possible for a single cortical column to model and recognize several hundred objects.
Capacity is most impacted by the number of cells in the input and output layers.
Increasing the number of columns has a marginal effect on capacity.
The primary benefit of multiple columns is to dramatically reduce the number of sensations needed to recognize objects.
A network with one column is like looking at the world through a straw; it can be done, but slowly and with difficulty.

Noise robustness
We evaluated robustness of a single column network to noise.
After the network learned a set of objects, we added varying amounts of random noise to the sensory and location inputs.
The noise affected the active bits in the input without changing its overall sparsity (see Materials and Methods).
Recognition accuracy after 30 touches is plotted as a function of noise (Figure 6A).
There is no impact on the recognition accuracy up to 20% noise in the sensory input and 40% noise in the location input.
We also found that the convergence speed was impacted by noise in the location input (Figure 6B).
It took more sensations to recognize the object when the location input is noisy.

Figure 6
Robustness of a single column network to noise.
(A) Recognition accuracy is plotted as a function of the amount of noise in the sensory input (blue) and in the location input (yellow).
(B) Recognition accuracy as a function of the number of sensations.

Mapping to biology
Anatomical evidence suggests that the sensorimotor inference model described above exists at least once in each column (layers 4 and 2/3) and perhaps twice (layers 6a and 5).
We adopt commonly used terminology to describe these layers.
This is a convenience as the connectivity and physiology of cell populations is what matters.
Cells we describe as residing in separate layers may actually intermingle in cortical tissue (Guy and Staiger, 2017).

Layers 4 and 2/3
The primary instance of the model involves layers 4 and 2/3, as illustrated in Figure 7A.
The following properties evident in L4 and L2/3 match our model.
L4 cells receive direct thalamic input from sensory “core” regions (e.g., LGN; Douglas and Martin, 2004).
This input onto proximal dendrites exhibits driver properties (Viaene et al., 2011a).
L4 cells do not form long range connections within their layer (Luhmann et al., 1990).
L4 cells project to and activate cells in L2/3 (Lohmann and Rörig, 1994; Feldmeyer et al., 2002; Sarid et al., 2007), and receive feedback from L2/3 (Lefort et al., 2009; Markram et al., 2015).
L2/3 cells project long distances within their layer (Stettler et al., 2002; Hunt et al., 2011) and are also a major output of cortical columns (Douglas and Martin, 2004; Shipp, 2007).
It is known that L2/3 activation follows L4 activation (Constantinople and Bruno, 2013).

Figure 7
Mapping of sensorimotor inference network onto experimentally observed cortical connections.
Arrows represent documented pathways.
(A) First instance of network; L4 is input layer, L2/3 is output layer.
Green arrows are feedforward pathway, from thalamo-cortical ...

The model predicts that a representation of location is input to the basal distal dendrites of the input layer.
A timing requirement of our model is that the location signal is a predictive signal that must precede the arrival of the sensory input.
This is illustrated by the red line in Figure 7A.
About 45% of L4 synapses come from cells in L6a (Binzegger et al., 2004).
The axon terminals were found to show a strong preference for contacting basal dendrites (McGuire et al., 1984) and activation of L6a cells caused weak excitation of L4 cells (Kim et al., 2014).
Therefore, we propose that the location representation needed for the upper model comes from L6a.

Layers 6a and 5
Another potential instance of the model is in layers 6a and 5, as illustrated in Figure 7B.
The following properties evident in L6a and L5 match our model.
L6a cells receive direct thalamic input from sensory “core” regions (e.g., LGN; Thomson, 2010).
This input exhibits driver properties and resembles the thalamocortical projections to L4 (Viaene et al., 2011b).
L6a cells project to and activate cells in L5 (Thomson, 2010).
Recent experimental studies found that the axons of L6 CT neurons densely ramified within layer 5a in both visual and somatosensory cortices of the mouse, and activation of these neurons generated large excitatory postsynaptic potentials (EPSPs) in pyramidal neurons in layer 5a (Kim et al., 2014).
L6a cells receive feedback from L5 (Thomson, 2010).
L5 cells project long distances within their layer (Schnepel et al., 2015) and L5 cells are also a major output of cortical columns (Douglas and Martin, 2004; Guillery and Sherman, 2011; Sherman and Guillery, 2011).
There are three types of pyramidal neurons in L5 (Kim et al., 2015).
Here we are referring to only one of them, the larger neurons with thick apical trunks that send an axon branch to relay cells in the thalamus (Ramaswamy and Markram, 2015).
However, there is also empirical evidence our model does not map cleanly to L6a and L5.
For example, Constantinople and Bruno (2013) have shown a sensory stimulus will often cause L5 cells to fire simultaneously or even slightly before L6 cells, which is inconsistent with the model.
Therefore, whether L6a and L5 can be interpreted as an instance of the model is unclear.

Origin of location signal
The derivation of the location representation in L6a is unknown.
Part of the answer will involve local processing within the lower layers of the column and part will likely involve long range connections between corresponding regions in “what” and “where” pathways (Thomson, 2010).
Parallel “what” and “where” pathways exist in all the major sensory modalities (Ungerleider and Haxby, 1994; Ahveninen et al., 2006).
Evidence suggests that regions in “what” pathways form representations that exhibit increasing invariance to translation, rotation or scale and increasing selectivity to sensory features in object centered coordinates (Rust and DiCarlo, 2010).
This effect can be interpreted as forming allocentric representations.
In contrast, it has been proposed that regions in “where” pathways form representations in egocentric coordinates (Goodale and Milner, 1992).
If an egocentric motor behavior is generated in a “where” region, then a copy of the motor command will need to be sent to the corresponding “what” region where it can be converted to a new predicted allocentric location.
The conversion is dependent on the current position and orientation of the object relative to the body.
It is for this reason we suggest that the origin of the location signal might involve long-range connections between “where” and “what” regions.
In the Discussion section we will describe how the location might be generated.

Physiological evidence
In addition to anatomical support, there are several physiological predictions of the model that are supported by empirical observation.
L4 and L6a cells exhibit “simple” receptive fields (RFs) while L2/3 and L5 cells exhibit “complex” RFs (Hubel and Wiesel, 1962; Gilbert, 1977).
Key properties of complex cells include RFs influenced by a wider area of sensory input and increased temporal stability (Movshon et al., 1978).
L2/3 cells have receptive fields that are twice the size of L4 cells in the primary somatosensory cortex (Chapin, 1986).
A distinct group of cells with large and non-oriented receptive fields were found mostly in layer 5 of the visual cortex (Mangini and Pearlman, 1980; Lemmon and Pearlman, 1981).
These properties are consistent with, and observed in, the output layer of our model.

The model predicts that cells in a mini-column in the input layer (L4 and L6a) will have nearly identical RFs when presented with an input that cannot be predicted as part of a previously learned object.
However, in the context of learned objects, the cells in a mini-column will differentiate.
One key differentiation is that individual cells will respond only in specific contexts.
This differentiation has been observed in multiple modalities (Vinje and Gallant, 2002; Yen et al., 2006; Martin and Schröder, 2013; Gavornik and Bear, 2014).
Our model is also consistent with findings that early sensory areas are biased toward recent perceptual recognition results (St. John-Saaltink et al., 2016).

A particularly relevant version of this phenomenon is “border ownership” (Zhou et al., 2000).
Cells which have similar classic receptive fields when presented with isolated edge-like features, diverge, and fire uniquely when the feature is part of a larger object.
Specifically, the cells fire when the feature is at a particular location on a complex object, a behavior predicted and exhibited by our model.
To explain border ownership, researchers have proposed a layer of cells that perform “grouping” of inputs.
The grouping cells are stable over time (Craft et al., 2007).
The output layer of our model performs this function.
“Border ownership” is a form of complex object modeling.
It has been observed in both primary and secondary sensory regions (Zhou et al., 2000).
We predict that similar properties can be observed in primary and secondary sensory regions for even more complex and three-dimensional objects.

Lee et al. show that enhancement of motor cortex activity facilitates sensory-evoked responses of topographically aligned neurons in primary somatosensory cortex (Lee et al., 2008).
Specifically, they found that S1 corticothalamic neurons in whisker/barrel cortex responded more robustly to whisker deflections when motor cortex activity was focally enhanced.
This supports the model hypothesis that behaviorally-generated location information projects in a column-by-column fashion to primary sensory regions.

Discussion
Relationship with previous models
Due to the development of new experimental techniques, knowledge of the laminar circuitry of the cortex continues to grow (Thomson and Bannister, 2003; Thomson and Lamy, 2007).
It is now possible to reconstruct and simulate the circuitry in an entire cortical column (Markram et al., 2015).
Over the years, numerous efforts have been undertaken to develop models of cortical columns.
Many cortical column models aim to explain neurophysiological properties of the cortex.
For example, based on their studies of the cat visual cortex, Douglas and Martin (1991) provided one of the first canonical microcircuit models of a cortical column.
This model explains intracellular responses to pulsed visual stimulations and has remained highly influential (Douglas and Martin, 2004).
Hill and Tononi (2004) constructed a large-scale model of point neurons that are organized in a repeating columnar structure to explain the difference of brain states during sleep and wakefulness.
Traub et al. (2004) developed a single-column network model based on multi-compartmental biophysical models to explain oscillatory, epileptic, and sleeplike phenomena.
Haeusler and Maass (2007) compared cortical microcircuit models with and without the lamina-specific structure and demonstrated several computational advantages of more realistic cortical microcircuit models.
Reimann et al. (2013) showed that the neocortical local field potentials can be explained by a cortical column model composed of >12,000 reconstructed multi-compartmental neurons.

Although these models provided important insights on the origin of neurophysiological signals, there are relatively few models proposing the functional roles of layers and columns.
Bastos et al. (2012) discussed the correspondence between the micro-circuitry of the cortical column and the connectivity implied by predictive coding.
This study used a coarse microcircuit model based on the work of Douglas and Martin (2004) and lacked recent experimental evidence and detailed connectivity patterns across columns.

Raizada and Grossberg (2003) described the LAMINART model to explain how attention might be implemented in the visual cortex.
This study highlighted the anatomical connections of the L4-L2/3 network and proposed that perceptual grouping relies on long-range lateral connections in L2/3.
This is consistent with our proposal of the stable object representation in L2/3.
A recent theory of optimal context integration proposes that long-range lateral connections are used to optimally integrate information from the surround (Iyer and Mihalas, 2017).
The structure of their model is broadly consistent with the theories presented here, and provides a possible mathematical basis for further analysis.

The benefit of cortical columns
Our research has been guided by Mountcastle's definition of a cortical column (Mountcastle, 1978, 1997), as a structure “formed by many mini-columns bound together by short-range horizontal connections.”
The concept plays an essential role in the theory presented in this paper.
Part of our theory is that each repetitive unit, or “column,” of sensory cortex can learn complete objects by locally integrating sensory and location data over time.
In addition, we have proposed that multiple cortical columns greatly speed up inference and recognition time by integrating information in parallel across dispersed sensory areas.

An open issue is the exact anatomical organization of columns.
We have chosen to describe a model of columns with discrete inter-column boundaries.
This type of well-defined structure is most clear in the rat barrel cortex (Lubke et al., 2000; Bureau et al., 2004; Feldmeyer et al., 2013), but Mountcastle and others have pointed out that although there are occasional discontinuities in physiological and anatomical properties, there is a diverse range of structures and the more general rule is continuity (Mountcastle, 1978; Horton and Adams, 2005; Rockland, 2010).

Mountcastle's concept of a repetitive functional unit, whether continuous or discrete, is useful to understand the principles of cortical function.
Our model assigns a computational benefit to columns, that of integrating discontinuous information in parallel across disparate areas.
This basic capability is independent of any specific type of column (such as, hypercolumns or ocular dominance columns), and independent of discrete or continuous structures.
The key requirement is that each column models a different subset of sensory space and is exposed to different parts of the world as sensors move.


Generating the location signal
A key prediction of our model is the presence of a location signal in each column of a cortical region.
We deduced the need for this signal based on the observation that cortical regions predict new sensory inputs due to movement (Duhamel et al., 1992; Nakamura and Colby, 2002; Li and DiCarlo, 2008).
To predict the next sensory input, a patch of neocortex needs to know where a sensor will be on a sensed object after a movement is completed.
The prediction of location must be done separately for each part of a sensor array.
For example, for the brain to predict what each finger will feel on a given object, it has to predict a separate allocentric location for each finger.
There are dozens of semi-independent areas of sensation on each hand, each of which can sense a different location and feature on an object.
Thus, the allocentric location signals must be computed in a part of the brain where somatic topology is similarly granular.
For touch, this suggests the derivation of allocentric location is occurring in each column throughout primary regions such as, S1 and S2.
The same argument holds for primary visual regions, as each patch of the retina observes different parts of objects.

Although we don't know how the location signal is generated, we can list some theoretically-derived requirements.
A column needs to know its current location on an object, but it also needs to predict what its new location will be after a movement is completed.
To translate an egocentric motor signal into a predicted allocentric location, a column must also know the orientation of the object relative to the body part doing the moving.
This can be expressed in the pseudo-equation [current location + orientation of object + movement ⇒ predicted new location].
This is a complicated task for neurons to perform.
Fortunately, it is highly analogous to what grid cells do.
Grid cells are a proof that neurons can perform these types of transformations, and they suggest specific mechanisms that might be deployed in cortical columns.

1. Grid cells in the entorhinal cortex (Hafting et al., 2005; Moser et al., 2008) encode the location of an animal's body relative to an external environment.
A sensory cortical column needs to encode the location of a part of the animal's body (a sensory patch) relative to an external object.

2. Grid cells use path integration to predict a new location due to movement (Kropff et al., 2015).
A column must also use path integration to predict a new location due to movement.

3. To predict a new location, grid cells combine current location, with movement, with head direction cells (Moser et al., 2014).
Head direction cells represent the “orientation” of the “animal” relative to an external environment.
Columns need a representation of the “orientation” of a “sensory patch” relative to an external object.

4. The representation of space using grid cells is dimensionless.
The dimensionality of the space they represent is defined by the tiling of grid cells, combined with how the tiling maps to behavior.
Similarly, our model uses representations of location that are dimensionless.

These analogs, plus the fact that grid cells are phylogenetically older than the neocortex, lead us to hypothesize that the cellular mechanisms used by grid cells were preserved and replicated in the sub-granular layers of each cortical column.
It is not clear if a column needs neurons that are analogous to place cells (Moser et al., 2015).
Place cells are believed to associate a location (derived from grid cells) with features and events.
They are believed to be important for episodic memory.
Presently, we don't see an analogous requirement in cortical columns.

Today we have no direct empirical evidence to support the hypothesis of grid-cell like functionality in each cortical column.
We have only indirect evidence.
For example, to compute location, cortical columns must receive dynamically updated inputs regarding body pose.
There is now significant evidence that cells in numerous cortical areas, including sensory regions, are modulated by body movement and position.
Primary visual and auditory regions contain neurons that are modulated by eye position (Trotter and Celebrini, 1999; Werner-Reiss et al., 2003) as do areas MT, MST, and V4 (Bremmer, 2000; DeSouza et al., 2002).
Cells in frontal eye fields (FEF) respond to auditory stimuli in an eye-centered frame of reference (Russo and Bruce, 1994).
Posterior parietal cortex (PPC) represents multiple frames of reference including head-centered (Andersen et al., 1993) and body-centered (Duhamel et al., 1992; Brotchie et al., 1995, 2003; Bolognini and Maravita, 2007) representations.
Motor areas also contain a diverse range of reference frames, from representations of external space independent of body pose to representations of specific groups of muscles (Graziano and Gross, 1998; Kakei et al., 2003).
Many of these representations are granular, specific to particular body areas, and multisensory, implying numerous transformations are occurring in parallel (Graziano et al., 1997; Graziano and Gross, 1998; Rizzolatti et al., 2014).
Some models have shown that the above information can be used to perform coordinate transformations (Zipser and Andersen, 1988; Pouget and Snyder, 2000).

Determining how columns derive the allocentric location signal is a current focus of our research.

Role of inhibitory neurons
There are several aspects of our model that require inhibition.
In the input layer, neurons in mini-columns mutually inhibit each other.
Specifically, neurons that are partially depolarized (in the predictive state) generate a first action potential slightly before cells that are not partially depolarized.
Cells that spike first prevent other nearby cells from firing.
This requires a very fast, winner-take-all type of inhibition among nearby cells, and suggests that such fast inhibitory neurons contain stimulus-related information, which is consistent with recent experimental findings (Reyes-Puerta et al., 2015a,b).
Simulations of the timing requirement for this inhibition can be found in Billaudelle and Ahmad (2015).
Activations in the output layer do not require very fast inhibition.
Instead, a broad inhibition within the layer is needed to maintain the sparsity of activation patterns.
Experimental evidence for both fast and broad inhibition has been reported in the literature (Helmstaedter et al., 2009; Meyer et al., 2011).

Our simulations do not model inhibitory neurons as individual cells.
The functions of inhibitory neurons are encoded in the activation rules of the model.
A more detailed mapping to specific inhibitory neuron types is an area for future research.

Hierarchy
The neocortex processes sensory input in a series of hierarchically arranged regions.
As input ascends from region to region, cells respond to larger areas of the sensory array and to more complex features.
A common assumption is that complete objects can only be recognized at a level in the hierarchy where cells respond to input over the entire sensory array.

Our model proposes an alternate view.
All cortical columns, even columns in primary sensory regions, are capable of learning representations of complete objects.
However, our network model is limited by the spatial extent of the horizontal connections in the output layer.
Therefore, hierarchy is still required in many situations.
For example, say we present an image of a printed letter on the retina.
If the letter occupies a small part of the retina, then columns in V1 could recognize the letter.
If, however, the letter is expanded to occupy a large part of the retina, then columns in V1 would no longer be able to recognize the letter, because the features that define the letter are too far apart to be integrated by the horizontal connections in L2/3.
In this case, a converging input onto a higher cortical region would be required to recognize the letter.
Thus, the cortex learns multiple models of objects, both within a region and across hierarchical levels.

What would occur if multiple objects were being sensed at the same time?
In our model, one part of a sensory array could be sensing one object and another part of the sensory array could be sensing a different object.
Difficulty would arise if the sensations from two or more objects were overlaid or interspersed on a region, such as, if your index and ring finger touched one object while your thumb and middle finger touched another object.
In these situations, we suspect the system would settle on one interpretation or the other.


Sensory information is processed in parallel pathways, sometimes referred to as “what” and “where” pathways.
We propose that our object recognition model exists in “what” regions, which are associated with the ability to recognize objects.
How might we interpret “where” pathways in light of our model? First, the anatomy in the two pathways is similar.
This suggests that “what” and “where” regions perform similar operations, but achieve different results by processing different types of data.
For example, our network might learn models of ego-centric space if the location signal represented ego-centric locations.
Second, we suspect that bi-directional connections between what and where regions are required for converting ego-centric motor behaviors into allocentric locations.
We are currently exploring these ideas.

Vision, audition, and beyond
We described our model using somatic sensation.
Does it apply to other sensory modalities? We believe it does.
Consider vision.
Vision and touch are both based on an array of receptors topologically mapped to an array of cortical columns.
The retina is not like a camera.
The blind spot and blood vessels prevent all parts of an object from being sensed simultaneously, and the density of receptors in the retina is not uniform.
Similarly, the skin cannot sense all parts of an object at once, and the distribution of somatic receptors is not uniform.
Our model is indifferent to discontinuities and non-uniformities.
Both the skin and retina move, exposing cortical columns to different parts of sensed objects over time.
The methods for determining the allocentric location signal for touch and vision would differ somewhat.
Somatic sensation has access to richer proprioceptive inputs, whereas vision has access to other clues such as, ocular disparity.
Aside from differences in how allocentric location is determined, our model is indifferent to the underlying sensory modality.
Indeed, columns receiving visual input could be interspersed with columns receiving somatic input, and the long-range intercolumn connections in our model would unite these into a single object representation.

Similar parallels can be made for audition.
Perhaps the more powerful observation is that the anatomy supporting our model exists in most, if not all, cortical regions.
This suggests that no matter what kind of information a region is processing, its feedforward input is interpreted in the context of a location.
This would apply to high-level concepts as well as low-level sensory data.
This hints at why it is easier to memorize a list of items when they are mentally associated with physical locations, and why we often use mental imagery to convey abstract concepts.

Testable predictions
A number of experimentally testable predictions follow from this theory.

1. The theory predicts that sensory regions will contain cells that are stable over movements of a sensor while sensing a familiar object.

2. The set of stable cells will be both sparse and specific to object identity.
The cells that are stable for a given object will in general have very low overlap with those that are stable for a completely different object.

3. Layers 2/3 of cortical columns will be able to independently learn and model complete objects.
We expect that the complexity of the objects a column can model will be related to the extent of long-range lateral connections.

4. Activity within the output layer of each cortical column (layers 2/3) will become sparser as more evidence is accumulated for an object.
Activity in the output layer will be denser for ambiguous objects.
These effects will only be seen when the animal is freely observing familiar objects.

5. These output layers will form stable representations.
In general, their activity will be more stable than layers without long-range connections.

6. Activity within the output layers will converge on a stable representation slower with long-range lateral connections disabled, or with input to adjacent columns disabled.

7. The theory provides an algorithmic explanation for border ownership cells (Zhou et al., 2000).
In general each region will contain cells tuned to the location of features in the object's reference frame.
We expect to see these representations in layer 4.


Summary
Our research has focused on how the brain makes predictions of sensory inputs.
Starting with the premise that all sensory regions make predictions of their constantly changing input, we deduced that each small area in a sensory region must have access to a location signal that represents where on an object the column is sensing.
Building on this idea, we deduced the probable function of several cellular layers and are beginning to understand what cortical columns in their entirety might be doing.
Although there are many things we don't understand, the big picture is increasingly clear.
We believe each cortical column learns a model of “its” world, of what it can sense.
A single column learns the structure of many objects and the behaviors that can be applied to those objects.
Through intra-laminar and long-range cortical-cortical connections, columns that are sensing the same object can resolve ambiguity.

In 1978 Vernon Mountcastle reasoned that since the complex anatomy of cortical columns is similar in all of the neocortex, then all areas of the neocortex must be performing a similar function (Mountcastle, 1978).
His hypothesis remains controversial partly because we haven't been able to identify what functions a cortical column performs, and partly because it has been hard to imagine what single complex function is applicable to all sensory and cognitive processes.

597:yamaguti
18/09/18 22:45:59.69 F/b4+koTS BE:132176096-2BP(3)
The model of a cortical column presented in this paper is described in terms of sensory regions and sensory processing, but the circuitry underlying our model exists in all cortical regions.
Thus, if Mountcastle's conjecture is correct, even high-level cognitive functions, such as mathematics, language, and science, would be implemented in this framework.
It suggests that even abstract knowledge is stored in relation to some form of 'location' and that much of what we consider to be 'thought' is implemented by inference and behavior-generating mechanisms originally evolved to move and infer with fingers and eyes.

Materials and methods
Here we formally describe the activation and learning rules for the HTM sensorimotor inference network.
We use a modified version of the HTM neuron model (Hawkins and Ahmad, 2016) in the network.
There are three basic aspects of the algorithm: initialization, computing cell states, and learning.
These steps are described along with implementation and simulation details.

Notation
Let N_in represent the number of mini-columns in the input layer, M the number of cells per mini-column in the input layer, N_out the number of cells in the output layer, and N_c the number of cortical columns.
The number of cells in the input layer and output layer is M N_in and N_out, respectively, for each cortical column.
Each input cell receives both the sensory input and a contextual input that corresponds to the location signal.
The location signal is an N_ext-dimensional sparse vector L.
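
For concreteness, the notation can be pinned down in code. Below is a minimal Python sketch; the variable names are ours, and the values are the ones given in the Simulation details section below.

  # Dimensions of one cortical column (values from "Simulation details")
  N_IN = 150    # mini-columns in the input layer
  M = 16        # cells per mini-column in the input layer
  N_OUT = 4096  # cells in the output layer
  N_C = 1       # number of cortical columns in the smallest configuration
  N_EXT = 2400  # dimensionality of the sparse location vector L

  N_INPUT_CELLS = M * N_IN  # the input layer holds M * N_in cells per column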

598:yamaguti
18/09/18 22:48:27.72 F/b4+koTS BE:22029833-2BP(3)
Each cell can be in one of three states: active, predictive, or inactive.
We use M × N_in binary matrices A^in and Π^in to denote the activation state and predictive state of input cells, and use the N_out-dimensional binary vector A^out to denote the activation state of the output cells in a cortical column.
The concatenated output of all cortical columns is represented as an N_out N_c-dimensional binary vector.
At any point in time only a small number of cells are active, so these matrices and vectors are generally very sparse.

Each cell maintains a single proximal dendritic segment and a set of basal distal dendritic segments (denoted as basal below).
Proximal segments contain feedforward connections to that cell.
Basal segments represent contextual input.
The contextual input acts as a tiebreaker and biases the cell to win.
The contextual input to a cell in the input layer is a vector representing the external location signal L.
The contextual input to a cell in the output layer comes from other output cells in the same or different cortical columns.

For each dendritic segment, we maintain a set of “potential” synapses between the dendritic segment and other cells that could potentially form a synapse with it (Chklovskii et al., 2004; Hawkins and Ahmad, 2016).
Learning is modeled by the growth of new synapses from this set of potential synapses.
A “permanence” value is assigned to each potential synapse and represents the growth of the synapse.
Potential synapses are represented by permanence values greater than zero.
A permanence value close to zero represents an unconnected synapse that is not fully grown.
A permanence value greater than the connection threshold represents a connected synapse.
Learning occurs by incrementing or decrementing permanence values.

599:yamaguti
18/09/18 22:50:40.71 F/b4+koTS BE:73431465-2BP(3)
We denote the synaptic permanences of the dth dendritic segment of the ith input cell in the jth mini-column as an N_ext × 1 vector D^{d,in}_{ij}.
Similarly, the permanences of the dth dendritic segment of the ith output cell form the N_out N_c × 1 dimensional vector D^{d,out}_i.

Output neurons receive feedforward connections from input neurons within the same cortical column.
We denote these connections with an M × N_in × N_out tensor F, where f_{ijk} represents the permanence of the synapse between the ith input cell in the jth mini-column and the kth output cell.

For D and F, we will use a dot (e.g., Ḋ^{d,in}_{ij}) to denote the binary vector representing the subset of potential synapses on a segment (i.e., permanence value above 0).
We use a tilde (e.g., D̃^{d,in}_{ij}) to denote the binary vector representing the subset of connected synapses (i.e., permanence value above the connection threshold).
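
This bookkeeping is easy to state in code. The following is a minimal Python/NumPy sketch; the concrete connection-threshold value is our assumption, as any fixed value between 0 and 1 plays the same role.

  import numpy as np

  CONNECTION_THRESHOLD = 0.5  # assumed value for the connection threshold

  def potential_mask(perms):
      # The 'dot' binary vector: potential synapses have permanence > 0.
      return (perms > 0).astype(np.int8)

  def connected_mask(perms):
      # The 'tilde' binary vector: connected synapses exceed the threshold.
      return (perms > CONNECTION_THRESHOLD).astype(np.int8)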

Initialization
Each dendritic segment is initialized to contain a random set of potential synapses.
D^{d,in}_{ij} is initialized to contain a random set of potential synapses chosen from the location input.
Segments in D^{d,out}_i are initialized to contain a random set of potential synapses to other output cells.
These can include cells from the same cortical column.
We enforce the constraint that a given segment only contains synapses from a single column.
In all cases the permanence values of potential synapses are chosen randomly: initially some are connected (above threshold) and some are unconnected.
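
A minimal sketch of this initialization in Python/NumPy; the number of potential synapses per segment and the permanence range are illustrative assumptions.

  import numpy as np

  rng = np.random.default_rng(42)

  def init_segment(n_presynaptic, n_potential=32):
      # One dendritic segment: a random subset of presynaptic cells becomes
      # potential synapses with random permanences, so some start connected
      # (above the connection threshold) and some start unconnected.
      perms = np.zeros(n_presynaptic)
      chosen = rng.choice(n_presynaptic, size=n_potential, replace=False)
      perms[chosen] = rng.uniform(0.1, 0.9, size=n_potential)
      return perms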

Computing cell states
A cell in the input layer is predicted if any of its basal distal segments have sufficient activity:
(1)  π^{in}_{ij} = 1 if ∃d : ‖ D̃^{d,in}_{ij} ∘ L ‖ > θ^{in}_b, and 0 otherwise
where θ^{in}_b is the activation threshold of the basal distal dendrite of an input cell.
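
A sketch of Equation (1) in Python/NumPy; the threshold values are those given in the Simulation details section, and the data layout (one permanence vector per basal segment) is our assumption.

  import numpy as np

  CONNECTION_THRESHOLD = 0.5  # assumed permanence threshold for connected synapses
  THETA_B_IN = 6              # basal activation threshold (Simulation details)

  def predictive_state(basal_segments, location):
      # Equation (1): the cell is predicted if any basal segment has more than
      # THETA_B_IN connected synapses onto active bits of the location vector L.
      for perms in basal_segments:
          connected = perms > CONNECTION_THRESHOLD
          if np.sum(connected & (location > 0)) > THETA_B_IN:
              return 1
      return 0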

600:yamaguti
18/09/18 22:52:37.71 F/b4+koTS BE:117490368-2BP(3)
For the input layer, all the cells in a mini-column share the same feedforward receptive fields.
Following (Hawkins and Ahmad, 2016) we assume that an inhibitory process selects a set of s mini-columns that best match the current feedforward input pattern.
We denote this winner set as W^in.
The set of active input layer cells is calculated as follows:
(2)  a^{in}_{ij} = 1 if j ∈ W^{in} and π^{in}_{ij} = 1; 1 if j ∈ W^{in} and Σ_i π^{in}_{ij} = 0; 0 otherwise
The first conditional states that predicted cells in a winning mini-column become active.
If no cell in a mini-column is predicted, all cells in that mini-column become active (second conditional).
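
A sketch of Equation (2) under the same assumptions as the previous snippets:

  import numpy as np

  def input_layer_activation(pi, winners):
      # Equation (2): pi is the M x N_in binary matrix of predictive states,
      # winners the set W^in of mini-columns best matching the feedforward input.
      M, N_in = pi.shape
      a = np.zeros((M, N_in), dtype=np.int8)
      for j in winners:
          if pi[:, j].any():
              a[:, j] = pi[:, j]  # predicted cells become active
          else:
              a[:, j] = 1         # no prediction: the whole mini-column bursts
      return a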

To determine activity in the output layer we calculate the feedforward and lateral input to each cell.
Cells with enough feedforward overlap with the input layer and the most lateral support from the previous time step become active.
The feedforward overlap to the kth output cell is:
(3)  f_k = ‖ F̃_k ∘ A^{in} ‖
The set of output cells with enough feedforward input is computed as:
(4)  W^{ff} = { k : f_k ≥ θ^{out}_p }
where θ^{out}_p is a threshold.
We then select the active cells using the number of active basal segments as a sorting function:
(5)  a^{out}_k = 1 if k ∈ W^{ff} and I[n^b_k ≥ θ^{out}_b] · n^b_k ≥ n^b_s, and 0 otherwise
where n^b_k represents the number of active basal segments in the previous time step, and the sth highest number of active basal segments is denoted n^b_s.
θ^{out}_b is the activation threshold of the basal distal dendrite of an output cell.
I[·] is the indicator function, and s is the minimum desired number of active neurons.
If the number of cells with lateral support is less than s in a cortical column, n^b_s would be zero and all cells with enough feedforward input will become active.
Note that we used a modified version of the original HTM neuron model in the output layer by considering the effect of multiple active basal segments.
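
The sketch below implements one plausible reading of Equations (3)–(5); the parameter values come from the Simulation details section, and the exact handling of ties is our assumption.

  import numpy as np

  THETA_P_OUT = 3   # proximal (feedforward) threshold (Simulation details)
  THETA_B_OUT = 18  # basal activation threshold for output cells
  S = 40            # minimum desired number of active output cells

  def output_layer_activation(F_conn, a_in, n_basal):
      # F_conn: N_out x (M * N_in) binary matrix of connected proximal synapses;
      # a_in: flattened binary input-layer activity; n_basal: active basal
      # segments per output cell at the previous time step.
      overlap = F_conn @ a_in                     # Equation (3)
      w_ff = overlap >= THETA_P_OUT               # Equation (4)
      support = np.where(n_basal >= THETA_B_OUT, n_basal, 0)
      vals = np.sort(support[w_ff])[::-1]
      sth = vals[S - 1] if vals.size >= S else 0  # s-th highest support, else 0
      # Equation (5): if fewer than S cells have lateral support, sth is 0 and
      # every cell with enough feedforward input becomes active.
      return (w_ff & (support >= sth)).astype(np.int8)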

601:yamaguti
18/09/18 22:55:38.21 F/b4+koTS BE:58744883-2BP(3)
Learning in the input layer
In the input layer, basal segments represent predictions.
At any point in time, only segments that match the contextual input are modified.
If a cell was predicted (Equation 1) and becomes active, the corresponding basal segments are selected for learning.
If no cell in an active mini-column was predicted, we select as the winning cell the one whose basal segment best matches the contextual input, with ties broken by the random initial conditions.

For selected segments, we decrease the permanence of inactive synapses by a small value p^- and increase the permanence of active synapses by a larger value p^+:
(6)  ΔD^{d,in}_{ij} = p^+ (Ḋ^{d,in}_{ij} ∘ L) − p^- (Ḋ^{d,in}_{ij} ∘ (1 − L))
where ∘ represents element-wise multiplication.
Incorrect predictions are penalized.
If a basal dendritic segment on a cell becomes active and the cell subsequently does not become active, we slightly decrement the permanences of active synapses on the corresponding segments.
Note that in Equation (6), learning is applied to all potential synapses (denoted by Ḋ).
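
A sketch of the update in Equation (6); the learning-rate values are illustrative assumptions, chosen so that p^+ > p^-.

  import numpy as np

  P_PLUS = 0.10   # increment for active synapses (assumed value)
  P_MINUS = 0.02  # smaller decrement for inactive synapses (assumed value)

  def learn_basal_segment(perms, location):
      # Equation (6): reinforce potential synapses onto active location bits
      # and weaken the rest; only potential synapses (permanence > 0) change.
      potential = (perms > 0).astype(float)
      delta = P_PLUS * potential * location - P_MINUS * potential * (1 - location)
      return np.clip(perms + delta, 0.0, 1.0)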

Learning in the output layer
When learning a new object a sparse set of cells in the output layer is selected to represent the new object.
These cells remain active while the system senses the object at different locations.
Thus, each output cell pools over multiple feature/location representations in the input layer.

For each sensation, proximal synapses are learned by increasing the permanence of active synapses by p^+, and decreasing the permanence of inactive synapses by p^-:
(7)  ΔF_k = p^+ (Ḟ_k ∘ A^{in}) − p^- (Ḟ_k ∘ (1 − A^{in}))
Basal segments of active output cells are learned using a rule similar to Equation (7):
(8)  ΔD^{d,out}_i = p^+ (Ḋ^{d,out}_i ∘ A^{out}) − p^- (Ḋ^{d,out}_i ∘ (1 − A^{out}))
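
A sketch of Equations (7) and (8) under the same assumptions as the Equation (6) snippet:

  import numpy as np

  P_PLUS, P_MINUS = 0.10, 0.02  # assumed learning rates, as in Equation (6)

  def learn_output_proximal(f_perms, a_in):
      # Equation (7): for one active output cell, strengthen potential proximal
      # synapses onto active input cells and weaken those onto inactive ones.
      potential = (f_perms > 0).astype(float)
      delta = P_PLUS * potential * a_in - P_MINUS * potential * (1 - a_in)
      return np.clip(f_perms + delta, 0.0, 1.0)

  # Equation (8) applies the same update to the basal segments of active output
  # cells, with previous output-layer activity in place of a_in.
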
Feedback

602:yamaguti
18/09/18 22:57:08.46 F/b4+koTS BE:39164328-2BP(3)
Feedback from the output layer to the input layer is used as an additional modulatory input to fine tune which cells in a winning mini-column become active.
Cells in the input layer maintain a set of apical segments similar to the set of basal segments.
If a cell has apical support (i.e., an active apical segment), we use a slightly lower value of θ^{in}_b to calculate the predictive state.
In addition if multiple cells in a mini-column are predicted, only cells with feedback become active.
These rules make the set of active cells more precise with respect to the current representation in the output layer.
Apical segments on winning cells in the input layer are learned using exactly the same rules as basal segments.

Simulation details
To generate our convergence and capacity results we generated a large number of objects.
Each object consists of a number of sensory features, with each feature assigned to a corresponding location.
We encode each location as a 2,400-dimensional sparse binary vector with 10 random bits active.
Each sensory feature is similarly encoded by a vector with 10 random bits active.
The length of the sensory feature vector is the same as the number of mini-columns of the input layer, N_in.
The input layer contains 150 mini-columns and 16 cells per mini-column, with 10 mini-columns active at any time.
The activation threshold of the basal distal dendrites of input neurons is 6.
The output layer contains 4,096 cells and the minimum number of active output cells is 40.
The activation threshold is 3 for proximal dendrites and 18 for basal dendrites for output neurons.
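
The encoding can be sketched in a few lines of Python/NumPy; random_sdr is our name for this hypothetical helper.

  import numpy as np

  rng = np.random.default_rng(0)

  def random_sdr(size, n_active=10):
      # A sparse binary vector with n_active randomly chosen bits set.
      v = np.zeros(size, dtype=np.int8)
      v[rng.choice(size, size=n_active, replace=False)] = 1
      return v

  location = random_sdr(2400)  # a location signal (10 of 2,400 bits active)
  feature = random_sdr(150)    # a sensory feature (one bit per mini-column)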

603:yamaguti
18/09/18 23:02:11.17 F/b4+koTS BE:48954645-2BP(3)
During training, the network learns each object in random order.
For each object, the network senses each feature three times.
The activation pattern in the output layer is saved for each object to calculate retrieval accuracy.
During testing, we allow the network to sense each object at K locations.
After each sensation, we classify the activity pattern in the output layer.
We say that an object is correctly classified if, for each cortical column, the overlap between the output layer and the stored representation for the correct object is above a threshold, and the overlaps with the stored representations for all other objects are below that threshold.
We use a threshold of 30.
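
A sketch of this classification rule; the function name and the layout of the stored patterns are ours.

  import numpy as np

  OVERLAP_THRESHOLD = 30

  def classify(output_activity, stored):
      # stored maps object name -> binary output pattern saved during training.
      # Classification succeeds only if exactly one object clears the threshold.
      matches = [name for name, pattern in stored.items()
                 if int(output_activity @ pattern) >= OVERLAP_THRESHOLD]
      return matches[0] if len(matches) == 1 else None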

For the network convergence experiment (Figures 4, 5), each object consists of 10 sensory features chosen from a library of 5 to 30 possible features.
The number of sensations during testing is 20.
For the capacity experiment, each object consists of 10 sensory features chosen from a large library of 5,000 possible features.
The number of sensations during testing is 3.

Finally, we make some simplifying assumptions that greatly speed up simulation time for larger networks.
Instead of explicitly initializing a complete set of synapses across every segment and every cell, we greedily create segments on a random cell and initialize potential synapses on that segment by sampling from currently active cells.
This happens only when there is no match to any existing segment.

For the noise robustness experiment (Figure 6) we added random noise to the sensory input and the location input.
For each input, we randomly flip a fraction of the active input bits to inactive, and flip the corresponding number of inactive input bits to active.
This procedure randomizes inputs while maintaining constant input sparsity.
The noise level denotes the fraction of active input bits that are changed for each input.
We varied the amount of noise between 0 and 0.7.
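
A sketch of this noise procedure (the helper name is ours):

  import numpy as np

  rng = np.random.default_rng(1)

  def add_noise(sdr, noise_level):
      # Flip a fraction noise_level of the active bits off and the same
      # number of inactive bits on, preserving the input's sparsity.
      on = np.flatnonzero(sdr)
      off = np.flatnonzero(sdr == 0)
      n_flip = int(round(noise_level * on.size))
      noisy = sdr.copy()
      noisy[rng.choice(on, size=n_flip, replace=False)] = 0
      noisy[rng.choice(off, size=n_flip, replace=False)] = 1
      return noisy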

604:yamaguti
18/09/18 23:04:10.79 F/b4+koTS BE:97908858-2BP(3)
We constructed an ideal observer model to estimate the theoretical upper limit for model performance (Figure 4C, Supplementary Figure 9).
During learning, the ideal observer model memorizes a list of (feature, location) pairs for each object.
During inference, the ideal observer model stores the sequence of observed (feature, location) pairs and calculates the overlap between all the observed pairs and the memorized list of pairs for each object.
The predicted object is the object that has the most overlap with all the observed sensations.
To compare the ideal observer with a multi-column network with N columns, we provide it with N randomly chosen observations per sensation.
Performance of the ideal observer model represents the best one can do given all the sensations up to the current time.
We also used the same framework to create a model that only uses sensory features, but no location signals (used in Figure 4C).
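
The ideal observer's inference step can be sketched directly, with (feature, location) pairs held in Python sets; the data layout is our assumption.

  def ideal_observer_predict(observed, memorized):
      # observed: set of (feature, location) pairs seen so far;
      # memorized: dict mapping object name -> set of (feature, location) pairs.
      # The prediction is the object whose stored pairs overlap observed most.
      scores = {name: len(observed & pairs) for name, pairs in memorized.items()}
      return max(scores, key=scores.get)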

Author contributions
JH conceived of the overall theory and the detailed mapping to neuroscience, helped design the simulations, and wrote most of the paper.
SA and YC designed and implemented the simulations and created the mathematical formulation of the algorithm.

Conflict of interest statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
JH, SA, and YC were employed by Numenta Inc.
Numenta has some patents relevant to the work.
Numenta has stated that use of its intellectual property, including all the ideas contained in this work, is free for non-commercial research purposes.
In addition Numenta has released all pertinent source code as open source under a GPL V3 license (which includes a patent peace provision).

Acknowledgments

605:yamaguti
18/09/18 23:06:29.77 F/b4+koTS BE:19582324-2BP(3)
We thank the reviewers for their detailed comments, which have helped to improve the paper significantly.
We thank Jeff Gavornik for his thoughtful comments and suggestions.
We also thank Marcus Lewis, Nathanael Romano, and numerous other collaborators at Numenta over the years for many discussions.

Footnotes
Funding.
Numenta is a privately held company.
Its funding sources are independent investors and venture capitalists.

Supplementary material
The Supplementary Material for this article can be found online at: URLリンク(www.frontiersin.org)
Additional data files: 7.0M MP4; 2.8M PDF.
References

606:yamaguti
18/09/20 00:00:59.64 94opR8z06 BE:19581942-2BP(3)
* Ahmad S., Hawkins J. (2016).
How do neurons operate on sparse distributed representations? A mathematical theory of sparsity, neurons and active dendrites.
arXiv:1601.00720 [q-bio.NC].
* Ahveninen J., Jääskeläinen I. P., Raij T., Bonmassar G., Devore S., Hämäläinen M., et al. (2006).
Task-modulated “what” and “where” pathways in human auditory cortex.
Proc. Natl. Acad. Sci. U.S.A. 103, 14608–14613. 10.1073/pnas.0510480103 [PMC free article] [PubMed] [Cross Ref]
* Andersen R. A., Snyder L. H., Li C. S., Stricanne B. (1993).
Coordinate transformations in the representation of spatial information.
Curr. Opin. Neurobiol. 3, 171–176. 10.1016/0959-4388(93)90206-E [PubMed] [Cross Ref]
* Bastos A. M., Usrey W. M., Adams R. A., Mangun G. R., Fries P., Friston K. J. (2012).
Canonical microcircuits for predictive coding.
Neuron 76, 695–711. 10.1016/j.neuron.2012.10.038 [PMC free article] [PubMed] [Cross Ref]
* Billaudelle S., Ahmad S. (2015).
Porting HTM models to the Heidelberg neuromorphic computing platform.
arXiv:1505.02142 [q-bio.NC].
* Binzegger T., Douglas R. J., Martin K. A. C. (2004).
A quantitative map of the circuit of cat primary visual cortex.
J. Neurosci. 24, 8441–8453. 10.1523/JNEUROSCI.1400-04.2004 [PubMed] [Cross Ref]
* Bolognini N., Maravita A. (2007).
Proprioceptive alignment of visual and somatosensory maps in the posterior parietal cortex.
Curr. Biol. 17, 1890–1895. 10.1016/j.cub.2007.09.057 [PubMed] [Cross Ref]
* Bremmer F. (2000).
Eye position effects in macaque area V4.
Neuroreport 11, 1277–1283. 10.1097/00001756-200004270-00027 [PubMed] [Cross Ref]
* Brotchie P. R., Andersen R. A., Snyder L. H., Goodman S. J. (1995).
Head position signals used by parietal neurons to encode locations of visual stimuli.
Nature 375, 232–235. 10.1038/375232a0 [PubMed] [Cross Ref]

607:yamaguti
18/09/20 00:02:06.44 94opR8z06 BE:88118249-2BP(3)
* Brotchie P. R., Lee M. B., Chen D. Y., Lourensz M., Jackson G., Bradley W. G. (2003).
Head position modulates activity in the human parietal eye fields.
Neuroimage 18, 178–184. 10.1006/nimg.2002.1294 [PubMed] [Cross Ref]
* Bureau I., Shepherd G. M. G., Svoboda K. (2004).
Precise development of functional and anatomical columns in the neocortex.
Neuron 42, 789–801. 10.1016/j.neuron.2004.05.002 [PubMed] [Cross Ref]
* Buxhoeveden D. P. (2002).
The minicolumn hypothesis in neuroscience.
Brain 125, 935–951. 10.1093/brain/awf110 [PubMed] [Cross Ref]
* Chapin J. K. (1986).
Laminar differences in sizes, shapes, and response profiles of cutaneous receptive fields in the rat SI cortex.
Exp. Brain Res. 62, 549–559. 10.1007/BF00236033 [PubMed] [Cross Ref]
* Chklovskii D. B., Mel B. W., Svoboda K. (2004).
Cortical rewiring and information storage.
Nature 431, 782–788. 10.1038/nature03012 [PubMed] [Cross Ref]
* Constantinople C. M., Bruno R. M. (2013).
Deep cortical layers are activated directly by thalamus.
Science 340, 1591–1594. 10.1126/science.1236425 [PMC free article] [PubMed] [Cross Ref]
* Craft E., Schutze H., Niebur E., von der Heydt R. (2007).
A neural model of figure-ground organization.
J. Neurophysiol. 97, 4310–4326. 10.1152/jn.00203.2007 [PubMed] [Cross Ref]
* DeSouza J. F., Dukelow S. P., Vilis T. (2002).
Eye position signals modulate early dorsal and ventral visual areas.
Cereb. Cortex 12, 991–997. 10.1093/cercor/12.9.991 [PubMed] [Cross Ref]
* Douglas R. J., Martin K. A. (1991).
A functional microcircuit for cat visual cortex.
J. Physiol. 440, 735–769. 10.1113/jphysiol.1991.sp018733 [PMC free article] [PubMed] [Cross Ref]
* Douglas R. J., Martin K. A. (2004).
Neuronal circuits of the neocortex.
Annu. Rev. Neurosci. 27, 419–451. 10.1146/annurev.neuro.27.070203.144152 [PubMed] [Cross Ref]

608:yamaguti
18/09/20 00:05:57.40 94opR8z06 BE:85669875-2BP(3)
* Duhamel J., Colby C. L., Goldberg M. E. (1992).
The updating of the representation of visual space in parietal cortex by intended eye movements.
Science 255, 90–92. 10.1126/science.1553535 [PubMed] [Cross Ref]
* Feldmeyer D., Brecht M., Helmchen F., Petersen C. C. H., Poulet J. F. A., Staiger J. F., et al. (2013).
Barrel cortex function.
Prog. Neurobiol. 103, 3–27. 10.1016/j.pneurobio.2012.11.002 [PubMed] [Cross Ref]
* Feldmeyer D., Lübke J., Silver R. A., Sakmann B. (2002).
Synaptic connections between layer 4 spiny neurone-layer 2/3 pyramidal cell pairs in juvenile rat barrel cortex: physiology and anatomy of interlaminar signalling within a cortical column.
J. Physiol. 538, 803–822. 10.1113/jphysiol.2001.012959 [PMC free article] [PubMed] [Cross Ref]
* Gavornik J. P., Bear M. F. (2014).
Learned spatiotemporal sequence recognition and prediction in primary visual cortex.
Nat. Neurosci. 17, 732–737. 10.1038/nn.3683 [PMC free article] [PubMed] [Cross Ref]
* Gilbert C. D. (1977).
Laminar differences in receptive field properties of cells in cat primary visual cortex.
J. Physiol. 268, 391–421. 10.1113/jphysiol.1977.sp011863 [PMC free article] [PubMed] [Cross Ref]
* Goodale M. A., Milner A. D. (1992).
Separate visual pathways for perception and action.
Trends Neurosci. 15, 20–25. 10.1016/0166-2236(92)90344-8 [PubMed] [Cross Ref]
* Graziano M. S., Gross C. G. (1998).
Spatial maps for the control of movement.
Curr. Opin. Neurobiol. 8, 195–201. 10.1016/S0959-4388(98)80140-2 [PubMed] [Cross Ref]
* Graziano M. S., Hu X. T., Gross C. G. (1997).
Visuospatial properties of ventral premotor cortex.
J. Neurophysiol. 77, 2268–2292. [PubMed]
* Guillery R. W., Sherman S. M. (2011).
Branched thalamic afferents: what are the messages that they relay to the cortex?
Brain Res. Rev. 66, 205–219. 10.1016/j.brainresrev.2010.08.001 [PMC free article] [PubMed] [Cross Ref]

609:yamaguti
18/09/20 00:07:28.88 94opR8z06 BE:58745546-2BP(3)
* Guy J., Staiger J. F. (2017).
The functioning of a cortex without layers.
Front. Neuroanat. 11:54. 10.3389/fnana.2017.00054 [PMC free article] [PubMed] [Cross Ref]
* Haeusler S., Maass W. (2007).
A statistical analysis of information-processing properties of lamina-specific cortical microcircuit models.
Cereb. Cortex 17, 149–162. 10.1093/cercor/bhj132 [PubMed] [Cross Ref]
* Hafting T., Fyhn M., Molden S., Moser M.-B., Moser E. I. (2005).
Microstructure of a spatial map in the entorhinal cortex.
Nature 436, 801–806. 10.1038/nature03721 [PubMed] [Cross Ref]
* Hawkins J., Ahmad S. (2016).
Why neurons have thousands of synapses, a theory of sequence memory in neocortex.
Front. Neural Circuits 10:23. 10.3389/fncir.2016.00023 [PMC free article] [PubMed] [Cross Ref]
* Helmstaedter M., Sakmann B., Feldmeyer D. (2009).
Neuronal correlates of local, lateral, and translaminar inhibition with reference to cortical columns.
Cereb. Cortex 19, 926–937. 10.1093/cercor/bhn141 [PubMed] [Cross Ref]
* Hill S., Tononi G. (2004).
Modeling sleep and wakefulness in the thalamocortical system.
J. Neurophysiol. 93, 1671–1698. 10.1152/jn.00915.2004 [PubMed] [Cross Ref]
* Horton J. C., Adams D. L. (2005).
The cortical column: a structure without a function.
Philos. Trans. R. Soc. Lond. B Biol. Sci. 360, 837–862. 10.1098/rstb.2005.1623 [PMC free article] [PubMed] [Cross Ref]
* Hubel D., Wiesel T. N. (1962).
Receptive fields, binocular interaction and functional architecture in the cat's visual cortex.
J. Physiol. 160, 106–154. 10.1113/jphysiol.1962.sp006837 [PMC free article] [PubMed] [Cross Ref]
* Hunt J. J., Bosking W. H., Goodhill G. J. (2011).
Statistical structure of lateral connections in the primary visual cortex.
Neural Syst. Circuits 1:3. 10.1186/2042-1001-1-3 [PMC free article] [PubMed] [Cross Ref]

610:yamaguti
18/09/20 00:08:48.18 94opR8z06 BE:39163182-2BP(3)
* Iyer R., Mihalas S. (2017).
Cortical circuits implement optimal context integration.
bioRxiv. 10.1101/158360 [Cross Ref]
* Jones E. G. (2000).
Microcolumns in the cerebral cortex.
Proc. Natl. Acad. Sci. U.S.A. 97, 5019–5021. 10.1073/pnas.97.10.5019 [PMC free article] [PubMed] [Cross Ref]
* Kakei S., Hoffman D. S., Strick P. L. (2003).
Sensorimotor transformations in cortical motor areas.
Neurosci. Res. 46, 1–10. 10.1016/S0168-0102(03)00031-2 [PubMed] [Cross Ref]
* Kim E. J., Juavinett A. L., Kyubwa E. M., Jacobs M. W., Callaway E. M. (2015).
Three types of cortical layer 5 neurons that differ in brain-wide connectivity and function.
Neuron 88, 1253–1267. 10.1016/j.neuron.2015.11.002 [PMC free article] [PubMed] [Cross Ref]
* Kim J., Matney C. J., Blankenship A., Hestrin S., Brown S. P. (2014).
Layer 6 corticothalamic neurons activate a cortical output layer, layer 5a.
J. Neurosci. 34, 9656–9664. 10.1523/JNEUROSCI.1325-14.2014 [PMC free article] [PubMed] [Cross Ref]
* Kropff E., Carmichael J. E., Moser M.-B., Moser E. I. (2015).
Speed cells in the medial entorhinal cortex.
Nature 523, 419–424. 10.1038/nature14622 [PubMed] [Cross Ref]
* LeCun Y., Bengio Y., Hinton G. (2015).
Deep learning.
Nature 521, 436–444. 10.1038/nature14539 [PubMed] [Cross Ref]
* Lee S., Carvell G. E., Simons D. J. (2008).
Motor modulation of afferent somatosensory circuits.
Nat. Neurosci. 11, 1430–1438. 10.1038/nn.2227 [PMC free article] [PubMed] [Cross Ref]
* Lefort S., Tomm C., Floyd Sarria J.-C., Petersen C. C. H. (2009).
The excitatory neuronal network of the C2 barrel column in mouse primary somatosensory cortex.
Neuron 61, 301–316. 10.1016/j.neuron.2008.12.020 [PubMed] [Cross Ref]

611:yamaguti
18/09/20 00:10:34.84 94opR8z06 BE:58744883-2BP(3)
* Lemmon V., Pearlman A. L. (1981).
Does laminar position determine the receptive field properties of cortical neurons? A study of corticotectal cells in area 17 of the normal mouse and the reeler mutant.
J. Neurosci. 1, 83–93. [PubMed]
* Li N., DiCarlo J. J. (2008).
Unsupervised natural experience rapidly alters invariant object representation in visual cortex.
Science 321, 1502–1507. 10.1126/science.1160028 [PMC free article] [PubMed] [Cross Ref]
* Lohmann H., Rörig B. (1994).
Long-range horizontal connections between supragranular pyramidal cells in the extrastriate visual cortex of the rat.
J. Comp. Neurol. 344, 543–558. 10.1002/cne.903440405 [PubMed] [Cross Ref]
* Losonczy A., Makara J. K., Magee J. C. (2008).
Compartmentalized dendritic plasticity and input feature storage in neurons.
Nature 452, 436–441. 10.1038/nature06725 [PubMed] [Cross Ref]
* Lübke J., Egger V., Sakmann B., Feldmeyer D. (2000).
Columnar organization of dendrites and axons of single and synaptically coupled excitatory spiny neurons in layer 4 of the rat barrel cortex.
J. Neurosci. 20, 5300–5311. [PubMed]
* Luhmann H. J., Singer W., Martínez-Millán L. (1990).
Horizontal interactions in cat striate cortex: I. Anatomical substrate and postnatal development.
Eur. J. Neurosci. 2, 344–357. 10.1111/j.1460-9568.1990.tb00426.x [PubMed] [Cross Ref]
* Maass W. (1997).
Networks of spiking neurons: the third generation of neural network models.
Neural Netw. 10, 1659–1671. 10.1016/S0893-6080(97)00011-7 [Cross Ref]
* Mangini N. J., Pearlman A. L. (1980).
Laminar distribution of receptive field properties in the primary visual cortex of the mouse.
J. Comp. Neurol. 193, 203–222. 10.1002/cne.901930114 [PubMed] [Cross Ref]

612:yamaguti
18/09/20 00:12:24.26 94opR8z06 BE:44059829-2BP(3)
* Markov N. T., Ercsey-Ravasz M., Van Essen D. C., Knoblauch K., Toroczkai Z., Kennedy H. (2013).
Cortical high-density counterstream architectures.
Science 342:1238406. 10.1126/science.1238406 [PMC free article] [PubMed] [Cross Ref]
* Markram H., Muller E., Ramaswamy S., Reimann M. W., Abdellah M., Sanchez C. A., et al. (2015).
Reconstruction and simulation of neocortical microcircuitry.
Cell 163, 456–492. 10.1016/j.cell.2015.09.029 [PubMed] [Cross Ref]
* Martin K. A. C., Schröder S. (2013).
Functional heterogeneity in neighboring neurons of cat primary visual cortex in response to both artificial and natural stimuli.
J. Neurosci. 33, 7325–7344. 10.1523/JNEUROSCI.4071-12.2013 [PubMed] [Cross Ref]
* McGuire B. A., Hornung J. P., Gilbert C. D., Wiesel T. N. (1984).
Patterns of synaptic input to layer 4 of cat striate cortex.
J. Neurosci. 4, 3021–3033. [PubMed]
* Meyer H. S., Schwarz D., Wimmer V. C., Schmitt A. C., Kerr J. N. D., Sakmann B., et al. (2011).
Inhibitory interneurons in a cortical column form hot zones of inhibition in layers 2 and 5A.
Proc. Natl. Acad. Sci. U.S.A. 108, 16807–16812. 10.1073/pnas.1113648108 [PMC free article] [PubMed] [Cross Ref]
* Moser E. I., Kropff E., Moser M.-B. (2008).
Place cells, grid cells, and the brain's spatial representation system.
Annu. Rev. Neurosci. 31, 69–89. 10.1146/annurev.neuro.31.061307.090723 [PubMed] [Cross Ref]
* Moser E. I., Roudi Y., Witter M. P., Kentros C., Bonhoeffer T., Moser M.-B. (2014).
Grid cells and cortical representation.
Nat. Rev. Neurosci. 15, 466–481. 10.1038/nrn3766 [PubMed] [Cross Ref]
* Moser M.-B., Rowland D. C., Moser E. I. (2015).
Place cells, grid cells, and memory.
Cold Spring Harb. Perspect. Biol. 7:a021808. 10.1101/cshperspect.a021808 [PMC free article] [PubMed] [Cross Ref]

613:yamaguti
18/09/20 00:15:44.06 94opR8z06 BE:14686823-2BP(3)
* Mountcastle V. (1978).
An organizing principle for cerebral function: the unit module and the distributed system, in The Mindful Brain, eds Edelman G., Mountcastle V.
(Cambridge, MA: MIT Press), 7–50.
* Mountcastle V. B. (1997).
The columnar organization of the neocortex.
Brain 120, 701–722. 10.1093/brain/120.4.701 [PubMed] [Cross Ref]
* Movshon J. A., Thompson I. D., Tolhurst D. J. (1978).
Receptive field organization of complex cells in the cat's striate cortex.
J. Physiol. 283, 79–99. 10.1113/jphysiol.1978.sp012489 [PMC free article] [PubMed] [Cross Ref]
* Nakamura K., Colby C. L. (2002).
Updating of the visual representation in monkey striate and extrastriate cortex during saccades.
Proc. Natl. Acad. Sci. U.S.A. 99, 4026–4031. 10.1073/pnas.052379899 [PMC free article] [PubMed] [Cross Ref]
* Pouget A., Snyder L. H. (2000).
Computational approaches to sensorimotor transformations.
Nat. Neurosci. 3, 1192–1198. 10.1038/81469 [PubMed] [Cross Ref]
* Raizada R. D. S., Grossberg S. (2003).
Towards a theory of the laminar architecture of cerebral cortex: computational clues from the visual system.
Cereb. Cortex 13, 100–113. 10.1093/cercor/13.1.100 [PubMed] [Cross Ref]
* Ramaswamy S., Markram H. (2015).
Anatomy and physiology of the thick-tufted layer 5 pyramidal neuron.
Front. Cell. Neurosci. 9:233. 10.3389/fncel.2015.00233 [PMC free article] [PubMed] [Cross Ref]
* Reimann M. W., Anastassiou C. A., Perin R., Hill S. L., Markram H., Koch C. (2013).
A biophysically detailed model of neocortical local field potentials predicts the critical role of active membrane currents.
Neuron 79, 375–390. 10.1016/j.neuron.2013.05.023 [PMC free article] [PubMed] [Cross Ref]

614:yamaguti
18/09/20 00:16:50.75 94opR8z06 BE:58745546-2BP(3)
* Reyes-Puerta V., Kim S., Sun J.-J., Imbrosci B., Kilb W., Luhmann H. J. (2015a).
High stimulus-related information in barrel cortex inhibitory interneurons.
PLoS Comput. Biol. 11:e1004121. 10.1371/journal.pcbi.1004121 [PMC free article] [PubMed] [Cross Ref]
* Reyes-Puerta V., Sun J.-J., Kim S., Kilb W., Luhmann H. J. (2015b).
Laminar and columnar structure of sensory-evoked multineuronal spike sequences in adult rat barrel cortex in vivo.
Cereb. Cortex 25, 2001–2021. 10.1093/cercor/bhu007 [PubMed] [Cross Ref]
* Rizzolatti G., Cattaneo L., Fabbri-Destro M., Rozzi S. (2014).
Cortical mechanisms underlying the organization of goal-directed actions and mirror neuron-based action understanding.
Physiol. Rev. 94, 655–706. 10.1152/physrev.00009.2013 [PubMed] [Cross Ref]
* Rockland K. S. (2010).
Five points on columns.
Front. Neuroanat. 4:22. 10.3389/fnana.2010.00022 [PMC free article] [PubMed] [Cross Ref]
* Russo G. S., Bruce C. J. (1994).
Frontal eye field activity preceding aurally guided saccades.
J. Neurophysiol. 71, 1250–1253. [PubMed]
* Rust N. C., DiCarlo J. J. (2010).
Selectivity and tolerance (“invariance”) both increase as visual information propagates from cortical area V4 to IT.
J. Neurosci. 30, 12978–12995. 10.1523/JNEUROSCI.0179-10.2010 [PMC free article] [PubMed] [Cross Ref]
* Sarid L., Bruno R., Sakmann B., Segev I., Feldmeyer D. (2007).
Modeling a layer 4-to-layer 2/3 module of a single column in rat neocortex: interweaving in vitro and in vivo experimental observations.
Proc. Natl. Acad. Sci. U.S.A. 104, 16353–16358. 10.1073/pnas.0707853104 [PMC free article] [PubMed] [Cross Ref]
* Schnepel P., Kumar A., Zohar M., Aertsen A., Boucsein C. (2015).
Physiology and impact of horizontal connections in rat neocortex.
Cereb. Cortex 25, 3818–3835. 10.1093/cercor/bhu265 [PubMed] [Cross Ref]

615:yamaguti
18/09/20 00:18:30.52 94opR8z06 BE:36716235-2BP(3)
* Sherman S. M., Guillery R. W. (2011).
Distinct functions for direct and transthalamic corticocortical connections.
J. Neurophysiol. 106, 1068–1077. 10.1152/jn.00429.2011 [PubMed] [Cross Ref]
* Shipp S. (2007).
Structure and function of the cerebral cortex.
Curr. Biol. 17, R443–R449. 10.1016/j.cub.2007.03.044 [PubMed] [Cross Ref]
* Spruston N. (2008).
Pyramidal neurons: dendritic structure and synaptic integration.
Nat. Rev. Neurosci. 9, 206–221. 10.1038/nrn2286 [PubMed] [Cross Ref]
* St. John-Saaltink E., Kok P., Lau H. C., de Lange F. P. (2016).
Serial dependence in perceptual decisions is reflected in activity patterns in primary visual cortex.
J. Neurosci. 36, 6186–6192. 10.1523/JNEUROSCI.4390-15.2016 [PubMed] [Cross Ref]
* Stettler D. D., Das A., Bennett J., Gilbert C. D. (2002).
Lateral connectivity and contextual interactions in macaque primary visual cortex.
Neuron 36, 739–750. 10.1016/S0896-6273(02)01029-2 [PubMed] [Cross Ref]
* Stuart G. J., Häusser M. (2001).
Dendritic coincidence detection of EPSPs and action potentials.
Nat. Neurosci. 4, 63–71. 10.1038/82910 [PubMed] [Cross Ref]
* Thomson A. M. (2010).
Neocortical layer 6, a review.
Front. Neuroanat. 4:13. 10.3389/fnana.2010.00013 [PMC free article] [PubMed] [Cross Ref]
* Thomson A. M., Bannister A. P. (2003).
Interlaminar connections in the neocortex.
Cereb. Cortex 13, 5–14. 10.1093/cercor/13.1.5 [PubMed] [Cross Ref]
* Thomson A. M., Lamy C. (2007).
Functional maps of neocortical local circuitry.
Front. Neurosci. 1, 19–42. 10.3389/neuro.01.1.1.002.2007 [PMC free article] [PubMed] [Cross Ref]
* Traub R. D., Contreras D., Cunningham M. O., Murray H., LeBeau F. E. N., Roopun A., et al. (2004).
Single-column thalamocortical network model exhibiting gamma oscillations, sleep spindles, and epileptogenic bursts.
J. Neurophysiol. 93, 2194–2232. 10.1152/jn.00983.2004 [PubMed] [Cross Ref]

