I am in this online course, an extension of a MOOC, called E-Learning 3.0, hosted by Stephen Downes. Over 10 weeks (12 if you count the warm-up) we look at the technical and social sides of where learning online (edtech?) is going, or at least where it is right now. 

MOOCs have been closely associated with Connected Learning over the last 10 years, especially for Stephen and a group of thinkers “connected” to him. I, for example, have been “connected” since the first MOOC in 2008, and since then in a couple of other online events. Building a personal learning environment (PLE) or something similar means expanding your connections to other resources and people, hence the name “Connected Learning.” But others have taken that idea and refined it so that it can be considered an alternative to Constructivist (think Piaget), Constructionist (think Papert), or Behaviorist (think Skinner) theories of learning.

Connectivism came about as a result of the environment. The web was maturing, and the web is built on nodes with anchors, links, and targets pointing to other nodes. Brain science (OK, neuroscience) was going gangbusters with a new tool called fMRI, discovering all these links between neurons. I was reading Linked by Albert-László Barabási. It was only natural that we try to apply these advances to learning (and by extension to teaching, and finally to education).

Back to the present. In our 3rd week of #el30 we are looking at some highly technical roots of connectionism, mostly mathematical concepts that underlie how tech works, how we work with tech, and how we work with each other. Last week we talked about tree structures, which look like the sentence diagrams we drew as kids, back when Chomsky was applied to everything. They also look like sports league championship brackets.
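If it helps to see one, here is a little sketch of my own (not something from the course) of a sentence diagram as a tree in Python: one root, and every node hanging off exactly one parent.

```python
# A toy sketch (mine, not from the course): a sentence diagram as a tree.
# Each node is a label plus a list of children; there is one root, and every
# node except the root has exactly one parent.

tree = ("S", [
    ("NP", [("Det", ["the"]), ("N", ["dog"])]),
    ("VP", [("V", ["chased"]),
            ("NP", [("Det", ["the"]), ("N", ["cat"])])]),
])

def show(node, depth=0):
    """Print the tree with indentation, one node per line."""
    if isinstance(node, str):          # a leaf: an actual word
        print("  " * depth + node)
    else:
        label, children = node
        print("  " * depth + label)
        for child in children:
            show(child, depth + 1)

show(tree)
```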

But this week we move from trees to graphs. All trees are graphs (a subset), but graphs can be more like networks, with multiple connections in all directions and, crucially, no center. From there the thinking widens to neural networks and machine learning. Note again that these ideas can be applied to networks of machines or of people. It is a way to look at the world, a way to see that the connections are just as important as the nodes of content. I can see how this can even get philosophical.
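Here is the same kind of toy sketch for a graph, again my own illustration with made-up node names: a dictionary of adjacency lists, where any node can link to any other and nothing is the root.

```python
# A toy graph as a dictionary of adjacency lists (node names are invented).
# Unlike the tree above there is no root and no "one parent" rule; links can
# run in any direction, and cycles are allowed.

graph = {
    "me":      ["blog", "course"],
    "Stephen": ["blog", "course", "me"],
    "blog":    ["course", "Stephen"],
    "course":  ["me", "Stephen"],
}

def reachable(graph, start):
    """Collect every node you can get to from `start` by following links."""
    seen, frontier = set(), [start]
    while frontier:
        node = frontier.pop()
        if node not in seen:
            seen.add(node)
            frontier.extend(graph.get(node, []))
    return seen

print(reachable(graph, "blog"))   # all four nodes (set order may vary)
```

Drop the one-parent rule from the tree sketch and this is what you get: the links carry as much meaning as the nodes they connect.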

I don’t understand much of this. When I studied this stuff in the ’90s, in the context of speech recognition, there was the Markov model and not much else. The field has blossomed while I ignored it. My silly prediction that SR would soon be viable was premature by at least a decade. But now we have SR, many of us use it every day, and it is built on these ideas from graph theory. This is my corner of the connected part of this course. You can jump in any time.
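For flavor, here is a toy Markov chain, which is really just a weighted directed graph of words. This is my own over-simplified illustration with an invented word list; real recognizers used hidden Markov models and a great deal more besides.

```python
import random

# A toy Markov chain as a weighted directed graph: each word is a node, and
# each edge carries the probability of moving to the next word.
# (My own over-simplified sketch; not how modern speech recognition works.)

transitions = {
    "connected": {"learning": 0.7, "nodes": 0.3},
    "learning":  {"online": 0.6, "connected": 0.4},
    "online":    {"learning": 0.5, "nodes": 0.5},
    "nodes":     {"connected": 1.0},
}

def walk(start, steps=8):
    """Follow the edges at random, weighted by their probabilities."""
    word, path = start, [start]
    for _ in range(steps):
        nxt = transitions[word]
        word = random.choices(list(nxt), weights=nxt.values())[0]
        path.append(word)
    return " ".join(path)

print(walk("connected"))
```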