Sunday, July 1, 2018

Emergence

The last of the topics I’ve devoted time to these last months is the subject of emergence; sometimes referred to as emergent behavior/phenomena/systems. It’s an area of inquiry that crosses boundaries from philosophy and theology all the way to physics, chemistry, and biology.

This additional area of interest is the natural continuation of my interests in systems of distributed intelligence and cellular automata as computational systems. That is, both of these areas of inquiry take you right to the heart of emergent behaviors. To jump-start my research in this area, I’ve started reading the book “The Re-Emergence of Emergence: The Emergentist Hypothesis from Science to Religion”, edited by Philip Clayton and Paul Davies.

So far this book seems to be a very good introduction to the depth, breadth, and history of the cross-discipline discussion of what constitutes emergence.

To go along with this added dimension to my interests, I’ve now added the keyword emergent-systems underneath the blog header.

Friday, June 29, 2018

Whither goes K-12 STEM education? Is it time to bring back shop classes?

The second of three topics that occupied my thoughts these last few months was the question: where does STEM education stand in K-12?

After spending several days trying to put my thoughts into words, I had to give up. There’s no way to engage in this topic without finding oneself drawn into the politics of K-12 education. And the last thing I want is for politics to show up in my blog. So, I’ve abandoned any attempt at a discussion of the subject.

The only aspect of my thoughts that I think I can express without getting drawn into a political discussion is to note that the robotics/technology side of STEM education would be far better served if it were taught within a traditional shop class format. But the traditional shop class is now inextricably associated with the older practice of tracking, a practice which has become much maligned within today’s educational communities.

So I’ll just put the question out there and move on to other things.

Wednesday, June 27, 2018

Willow Garage’s Legacy: a Blessing or a Decades-Long Detour in the Evolution of Robotics?

One result of my not being able to work on electronics for the last year was that I had a chance to ponder other questions. One of these was: whatever happened to Willow Garage? There was a time, about a decade ago, when it was heralded as one of the great innovators of the robotics community. Then it just sort of disappeared. It had a number of spinoffs, but none of them ever turned into what you might call an above-average commercial success. It seems that the software innovation ROS has been left as Willow Garage’s only remaining legacy to the robotics community.

Despite WG’s universal acclaim, there was always something about that operation that bothered me, though I was never able to quite put my finger on it. So, with time on my hands, I “googled” Willow Garage, looking for any posts or articles critical of it, to see if anyone else might have picked up on what I was sensing.

Nothing!

As critical and contrary-for-the-sake-of-contrariness as some people can be on the Internet, you would’ve thought that there would have been at least a few critical posts or articles to be found.

So what was I seeing in WG that everyone else seems to have missed? I think I finally have an answer that I can articulate. So here goes.

The formation of Willow Garage brought together some of Silicon Valley’s top-tier talent. Not only was WG’s initial formation generously self-funded, but over time it was able to attract even more venture capital to fund its ambitious creative efforts. The enthusiasm that WG brought to the robotics community attracted a cadre of dedicated and very talented engineers and programmers. You might be forgiven if you started to see WG as a sort of modern-day robotics Camelot.

But here’s the nagging question: if this is the level of funding and talent it takes to do robotics, then how will robotics ever be able to leave the engineering lab and move out to the farm field, the construction site, or the logging or mining operation?

For example, whatever commercial value an agricultural robotic field-worker might have to a farmer, it must compete with its $25K-a-year human counterpart. This basic cost of employment puts a severe cost constraint on any robotics system intended for use in the field.

The second and more critical issue is that the people who will be selling, operating, servicing, and maintaining field-deployed robots in the future will, by necessity, be the same people who do those jobs now for farm, construction, logging, and mining equipment. In other words, any robotic system deployed in the field that requires the additional technical support of a team of Stanford University engineering graduate students is a nonstarter.

To put it another way, WG’s approach to robotics completely bypassed the questions of cost, manufacturing, operation, service, and maintenance; all absolutely critical elements for any robotic system to be commercially viable in the field.

What a field-deployed robot needs to be is modular. Its mechanical construction needs to be based on interchangeable subassemblies. And its computational architecture should come in the form of pre-programmed bricks or modules connected together using a single shared serial interface to form a system of distributed intelligence.

This form of construction allows for easy manufacture, easy maintenance, and easy service. Programming is not part of this paradigm: if you want to change some functionality in a robot, you just swap in a different module. The upside of this kind of construction is that the level of service, maintenance, and rebuild competency it requires already exists within the workforce currently employed in farming, construction, logging, and mining.
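To make the idea a little more concrete, here is a rough sketch, purely my own illustration and not any existing product’s protocol, of what a message frame between such pre-programmed modules on a single shared serial bus might look like. The field names and sizes are made up for the example.

```c
/* Illustrative only: one possible frame format for pre-programmed robot
 * modules talking over a single shared serial bus. */
#include <stdint.h>
#include <stddef.h>

typedef struct {
    uint8_t dest_id;    /* which module this frame is addressed to      */
    uint8_t src_id;     /* which module sent it                         */
    uint8_t command;    /* e.g. read a sensor, set an actuator position */
    uint8_t length;     /* number of valid payload bytes                */
    uint8_t payload[8]; /* command-specific data                        */
    uint8_t checksum;   /* simple integrity check                       */
} ModuleFrame;

/* XOR checksum over the frame header and payload. */
static uint8_t frame_checksum(const ModuleFrame *f)
{
    uint8_t sum = f->dest_id ^ f->src_id ^ f->command ^ f->length;
    for (size_t i = 0; i < f->length && i < sizeof f->payload; i++)
        sum ^= f->payload[i];
    return sum;
}
```

The point of a frame like this is that a technician swapping modules never touches code; the bus and the frame format stay fixed, and only the module behind an ID changes.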

This last observation returns us to the question of whether ROS is a useful addition to the robotics community’s programming toolkit. Sadly, to run ROS is to become dependent on a particular type of supporting hardware architecture, a computational architecture which is the complete antithesis of what needs to be implemented before robots will leave the engineering lab and proceed out to the field.

So this is my pondering: will ROS, rather than the boon to industry it was held out to be, turn out in the years ahead to be a decades-long detour in the evolution of field-deployable robotics?

Tuesday, June 26, 2018

Last Year’s Hiatus

A year ago last June, I was diagnosed with dilated cardiomyopathy. In hindsight, I can see that its onset was probably around March of that year. At that time, I was running 30 to 40 miles a week on the trails at a local state park. But I began to notice my usual 6- or 9-mile runs were getting slower and slower. Then my runs got shorter and shorter. By June, I couldn’t climb to the top of the stairs here at the house without getting out of breath.

It’s not certain what caused this condition. My cardiologist’s best guess is a possible viral infection. I seem to have fallen victim to a condition that usually hits younger and healthier people. Whatever it was that damaged my heart muscle, its onset was most likely around February/March of last year. But being in such good shape to start with, it took several more months for my condition to deteriorate to the point that it was no longer ignorable.

On the downside, there’s nothing that can be done to help except putting me on blood thinners and blood pressure medications. On the upside though, as my cardiologist has told me, “…except for a weakened heart muscle, [I’m] as healthy as a horse.” An angiogram showed my heart arteries wide open and clear, and I have no other signs of cardiovascular disease anywhere in my body. I guess a lifetime of endurance-level physical activity had left me, at the age of 65, in exceptionally good health.

So the good news was, I can’t possibly have a heart attack. A defibrillator was implanted last November. With that addition I felt braver and started working out again.

Then, just when I thought I was getting better, I had a TIA (transient ischemic attack), a mini-stroke. I’ve fully recovered, but it left me fatigued again. Hence my further delay getting back to blogging.

I’m one of those people for whom, in order for the brain to work, the body must be physically engaged as well. Some people call this being a kinesthetic learner. When I was a kid, I was just called fidgety.

To make a long story short, finally getting back to physical activity is enabling me to be mentally active again, too.

So it’s time to restart this blog and pick up where I left off last February.

Thursday, February 23, 2017

Langton's Ant, Part 1

The good news is that a full simulation routine, written in C and running under Linux, is working. As a validation test of my coding I successfully ran Langton’s Ant; the results were animated, made into a video, and posted to YouTube.
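For anyone curious what the classic rules boil down to, here is a bare-bones, stand-alone version of Langton’s Ant in C. This is not my actual simulation code (which models an array of processor cells rather than the ant directly); it is just the textbook rules: on a white cell turn right, on a black cell turn left, flip the cell, and step forward. The grid size and step count are arbitrary, and wrapping at the edges is a simplification of the true infinite plane.

```c
#include <stdio.h>

#define W 80
#define H 40
#define STEPS 11000   /* the "highway" shows up after roughly 10,000 steps */

int main(void)
{
    static int grid[H][W];               /* 0 = white, 1 = black             */
    int x = W / 2, y = H / 2, dir = 0;   /* dir: 0=up, 1=right, 2=down, 3=left */
    const int dx[4] = { 0, 1, 0, -1 };
    const int dy[4] = { -1, 0, 1, 0 };

    for (long step = 0; step < STEPS; step++) {
        dir = grid[y][x] ? (dir + 3) % 4   /* black cell: turn left  */
                         : (dir + 1) % 4;  /* white cell: turn right */
        grid[y][x] ^= 1;                   /* flip the cell's colour */
        x = (x + dx[dir] + W) % W;         /* step forward, wrapping */
        y = (y + dy[dir] + H) % H;
    }

    for (int r = 0; r < H; r++) {          /* crude text dump of the grid */
        for (int c = 0; c < W; c++)
            putchar(grid[r][c] ? '#' : '.');
        putchar('\n');
    }
    return 0;
}
```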



The first pass of my simulation routine used Ncurses for text graphics display.

After I had debugged my code using a simple text-graphics display, I redid the code to generate binary output data files. Those files were then converted into bitmap graphics files using a program I wrote in LabVIEW. Lastly, those bitmap files were animated using the software package Frames.
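The binary output stage is nothing fancy; conceptually it is just one byte per cell, with one frame appended per generation. Here is a minimal sketch of that idea in C; the dimensions and layout are illustrative, not my actual file format.

```c
#include <stdio.h>
#include <stdint.h>

#define W 100
#define H 100

/* Append one generation of the grid, one byte per cell, to an open file.
 * Returns 1 on success, 0 if the write came up short. */
static int dump_frame(FILE *fp, const uint8_t grid[H][W])
{
    return fwrite(grid, sizeof(uint8_t), (size_t)W * H, fp) == (size_t)W * H;
}
```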

Frames might not be the best way to generate animations, but it was left over from our family’s home-schooling days when my son was working with stop-motion animation. It’s a frustrating package to use, but I have it and it works… if you’re patient enough. After the animation was done, the actual video was put together using Adobe Premiere Elements, version 9; again, a software package left over from my son’s home-schooling days.

The actual simulation program was running two weeks ago, but it took me a week of messing around to get the animation sequence into a form I was satisfied with. I hadn’t used these animation tools in five years, and it took several days just to get back up to speed with my old software.

The next step for me and this project is to create a Langton’s Ant program that starts out already in its stable “highway” configuration. Since the “highway” is a 104-step repeating pattern, one would expect that there will be countless ways to seed a Langton’s Ant program in this way. The question, though, is whether there is a minimal set. Or is that even a meaningful question? The next step in this direction is to recast Langton’s Ant as a two-dimensional Turing machine and analyze the system from that point of view.

For anyone finding this blog via the YouTube video post, I think I should emphasize again that Langton’s Ant is not my goal. My interests are in the underlying hardware that it will take to create a large asynchronous array of simple processors. I’m just using Langton’s Ant at this point as a simple and accessible test example.

I have recently come across the work of Dave Ackley. I’ve watched all of his posted YouTube videos and have downloaded a couple of his published papers. This paper in particular is what I’m working through right now: “A Movable Architecture for Robust Spatial Computing” by David H. Ackley, Daniel C. Cannon, and Lance R. Williams. If this is any indication, I have a lot of work ahead to catch up to the current state of the art in this field.

One way to characterize this project of mine is to see it as trying to develop the underlying silicon hardware, FPGA or ASIC, that you would need to re-create in hardware what Professor Ackley is trying to do in software. But I can’t speak for him about that. Maybe someday I’ll have a chance to talk to him and find out. 




On the more academic side, I’m steadily working my way through Hamilton’s book. I’m struggling a little more with the video lecture series, mostly because I find it very hard to sit still and watch them on the computer screen. It’s clear my brain has gotten pretty rusty in the 25 years since I left grad school. My goal at this point is, over the next six months, to come up to speed on Turing machines and all the mathematics behind them.

Sunday, January 29, 2017

Array Processor Simulation: Current Progress

The first pass of a simulation routine, written in LabVIEW, is working. I probably spent more time debugging the handshaking routine than anything else, but it’s working now. I can generate a seed code-worm that starts at cell (0, 0) and then travels across to a destination cell.
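For those wondering what I mean by the handshaking routine, here is a rough C sketch of the kind of request/acknowledge exchange involved. It is only an illustration of the concept, not a transcription of my LabVIEW code, and the names are my own; each function call stands in for one polling pass by one of the two cells sharing a mailbox.

```c
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    bool     req;   /* sender has valid data waiting        */
    bool     ack;   /* receiver has latched the data        */
    uint16_t data;  /* the word being passed between cells  */
} Mailbox;

/* Sender side: returns true once the word has been handed over. */
bool cell_send(Mailbox *m, uint16_t word)
{
    if (!m->req && !m->ack) { m->data = word; m->req = true; }
    if (m->req && m->ack)   { m->req = false; return true;   }
    return false;
}

/* Receiver side: returns true when a new word has been latched into *out. */
bool cell_recv(Mailbox *m, uint16_t *out)
{
    if (m->req && !m->ack) { *out = m->data; m->ack = true; return true; }
    if (!m->req && m->ack) { m->ack = false; }
    return false;
}
```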

In this debug stage, I need to display all the registers so I can watch what is happening within each of the individual cells. This verbose display mode means that I can only fit a 4x4 display array on my monitor. But a 4x4 array, at this first-pass stage, was good enough to get things working.

I have an old Windows XP legacy system that I kept available for periodic maintenance work on projects I did for clients years earlier. Now that I’m retired and don’t need it anymore, it’s time to turn it into a Linux system. I’ve ordered a new hard drive and will upgrade this old XP machine to Debian Linux. The hard drive should arrive next week, at which point I’ll start the second phase of the array processor simulation project.

On the cellular automata front, it only took a little bit of research to find out that Stephen Wolfram, of Mathematica fame, has already done a significant amount of work in this area. I checked his book “A New Kind of Science” out from the library and have started reading it.

My intent is to start by duplicating some of the classic cellular automata featured on YouTube, like the Game of Life and Langton’s Ant. The challenge is going to be how to visualize the time evolution of such systems, especially in the case of arrays on the order of 100x100 and bigger. Needing to display the progress of such a large array on a standard monitor means I can’t use much more than a 10x10-pixel area per display cell. Outputting numbers in this case is out of the question. So, what I have to explore next is ways to use colors to encode what each cell is doing.

For the simple examples I will be starting with, just displaying either a light or dark shaded area will be sufficient. But as I progress to more complicated relationships between the cells, I’ll need to try something different. One obvious trick is to use the assembly language “org” instruction to partition the various sections of each cell’s program code into different segments of program memory, and then assign a different color to each of these code segments; that way, encoding the program counter as a color will indicate which subroutine is active at any one time. What other cell variables will be relevant to visualize, and how to visualize them with color encoding, is something I need to work on in the weeks ahead.
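Here is a small sketch of that color-encoding trick: if “org” has placed each subroutine in its own segment of program memory, then the program counter alone tells you which routine a cell is executing, and that maps directly to a display color. The segment boundaries and colors below are invented for the example.

```c
#include <stdint.h>

typedef struct { uint8_t r, g, b; } Rgb;

/* Map a cell's program counter to a display colour, assuming each
 * subroutine was placed in its own program-memory segment via "org". */
Rgb pc_to_color(uint16_t pc)
{
    if (pc < 0x0100) return (Rgb){  64,  64,  64 };  /* idle / wait loop */
    if (pc < 0x0200) return (Rgb){   0, 160,   0 };  /* receive handler  */
    if (pc < 0x0300) return (Rgb){   0,   0, 200 };  /* send handler     */
    return                  (Rgb){ 200,   0,   0 };  /* everything else  */
}
```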

Next is to start making “evolution” videos of these two-dimensional cellular automata and watch to see whether any patterns emerge. And regarding this effort, something worth noting is that when it comes to pattern recognition, the human brain is the best visual processing system we have access to. Even Google, Facebook, Stanford, MIT, and the rest, with their unique access to massive amounts of computational horsepower and advanced techniques like deep learning, are still struggling to visually recognize a cat in a picture; something humans can do quite easily.

I need to remember that my goal is not just to recreate work others have already done, but to explore the possibilities of using code-worms as a way of stabilizing the evolution of cellular automata so that they will stay stationary and not move off screen. If I can get that far, then the next question to explore will be whether these stationary automata can be used as computational engines to process an input stream into an output stream. Specifically, I’m curious whether such an automaton could be made to function as a Szilard engine or a Maxwell’s demon.

On a second note: it has been over 20 years since I left physics and went into engineering. I’m afraid my math skills have gotten quite rusty. So, in an effort to get my skills back up to par, I’ll be working through my old textbook on mathematical logic, “Logic for Mathematicians” by A. G. Hamilton, and starting to work through the YouTube video series on the “Theory of Computation.”

Sunday, January 22, 2017

Array Processor Simulation: First Pass

My first pass at a simulation program is mostly done now. All I need to do next is start running it, debug any problems with the coding, and see what it does. This first “proof-of-principle” simulation program I’ve done in LabVIEW. Yes, I know: LabVIEW? The reasons are that it’s a software tool I have, I’m very accustomed to using it, and it’s a great tool for putting together a Windows program very quickly. This first pass is only intended to get a feel for the direction I’m trying to take this programming.

There are a number of initial choices about the construction of an asynchronous-array-of-simple-processors that I don’t have any good answers for. I’ll just have to try different things out first and see if they work or not. And that sort of effort is best done with the simplest programming environment you have to work with.

One of those choices is the machine instruction set I’ll need to give each of the simple processor cells. There is also the choice of how the I/O handshaking works between each of the neighboring cells.

For my case, I’m opting for a stack-based architecture using a dual-stack arrangement, with the program memory and data memory mapped to the same physical block of RAM. Mapping data space and program space together will allow the program code to modify itself on the fly.
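As a rough sketch of what such a cell might look like once the simulator moves to C, something along these lines; the sizes and field names are placeholders I made up for illustration, not a finalized design.

```c
#include <stdint.h>

#define CELL_RAM_WORDS   256   /* shared program + data memory          */
#define CELL_STACK_DEPTH  16

typedef struct {
    uint16_t ram[CELL_RAM_WORDS];       /* program and data share this RAM,
                                           so code can rewrite itself     */
    uint16_t dstack[CELL_STACK_DEPTH];  /* operand (data) stack            */
    uint16_t rstack[CELL_STACK_DEPTH];  /* return-address stack            */
    uint8_t  dsp, rsp;                  /* stack pointers                  */
    uint16_t pc;                        /* program counter into ram[]      */
} Cell;
```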

If it all works, then the next step is to go to a simulation routine written in C running on a Linux desktop system. At that point, I’ll be able to take some screenshots and make source code available on my webpage.

The one thing that became obvious very quickly, once I started coding, was that whatever machine instruction set I give each simple processor cell, it will have to be both rotationally and translationally symmetric with respect to the right/left and up/down directions. This forces some constraints on the design. One of the interesting implications of this requirement for symmetry is that the individual cells in such a processor array can’t have absolute array-location addresses. And any stable programming that can exist in this array has to be relative, not absolute, in its addressing of neighboring cells. Anyway, fun stuff to think about.
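One way to picture the relative-addressing requirement: a cell’s code never names an (x, y) location, only a port relative to its own current heading, so the same program works no matter how the cell or the whole pattern is oriented in the array. A tiny illustrative sketch (the names are mine, not part of any finished design):

```c
typedef enum { PORT_AHEAD, PORT_RIGHT, PORT_BEHIND, PORT_LEFT } RelPort;
typedef enum { DIR_N, DIR_E, DIR_S, DIR_W } AbsDir;

/* Translate a heading-relative port into the absolute direction the
 * simulator uses to pick which neighbouring cell to talk to. */
AbsDir rel_to_abs(AbsDir heading, RelPort port)
{
    return (AbsDir)((heading + port) % 4);
}
```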

Writing the simulation routine is only half of the effort. The second part is writing the programs that will run in the individual processor cells. But in this regard, most of the work has already been done. I already have a generic assembler, created years ago, that takes a file written in assembly code format along with a list of mnemonic/machine-code associations, and outputs a machine code file that I can import into each processor cell’s program memory.
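To give a flavor of the table-driven idea behind that generic assembler, here is a toy sketch: the mnemonic-to-opcode associations live in a table supplied per target, and the assembler core just looks mnemonics up and emits machine words. The mnemonics and opcode values below are invented for the example, not my actual instruction set.

```c
#include <string.h>
#include <stdint.h>

typedef struct { const char *mnemonic; uint16_t opcode; } OpEntry;

/* Per-target table of mnemonic/machine-code associations (illustrative). */
static const OpEntry optable[] = {
    { "NOP",  0x0000 },
    { "PUSH", 0x1000 },
    { "ADD",  0x2000 },
    { "JMP",  0x3000 },
};

/* Return the machine word for a mnemonic, or -1 if it is unknown. */
int assemble_one(const char *mnemonic)
{
    for (size_t i = 0; i < sizeof optable / sizeof optable[0]; i++)
        if (strcmp(mnemonic, optable[i].mnemonic) == 0)
            return optable[i].opcode;
    return -1;
}
```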

Over the years, I’ve written in Verilog a number of small processor cores that were embedded into the FPGA designs I've worked on. In these design situations, I needed such a generic assembler to create the code files to run in these minimal cores.

Just as an aside, it’s fun to work at the HDL/Verilog level of programming. You can create custom processor cores that have just the machine instructions you want; no more, no less. This really lets you dial in the performance of your FPGA part!

Slightly off-topic now… Why my interest in asynchronous array processors as a likely candidate for a learning system? There have been a number of attempts to create chips that mimic neural networks at the silicon level. But these attempts to mimic the human brain fall short in one important aspect: the nerve cells that form your brain can grow and/or prune the neural connections between them. A neural network built on an IC chip can’t do this. Whatever connections exist between the individual cells are permanent and cannot be modified later via software.

But there is a trick in asynchronous arrays that can get around this hardware-level limitation. While the individual cells within an asynchronous array only connect to their nearest neighbors, there is a concept, what you might call a “code-worm”, whereby a data packet can originate in one cell and travel across the array to a destination cell that is not a nearest neighbor. These code-worms can be spawned or pruned on a real-time basis and can be used to reproduce the functionality of the axons forming the neural connections in the brain.
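To make the code-worm idea a little more concrete, here is a minimal sketch of what such a packet and its hop-by-hop forwarding might look like. All of the names, sizes, and the hop-list scheme are just one possible illustration, not a settled design.

```c
#include <stdint.h>
#include <stdbool.h>

#define MAX_HOPS 16

typedef struct {
    uint8_t  hop_count;       /* hops remaining before delivery          */
    uint8_t  hops[MAX_HOPS];  /* relative directions, one per hop        */
    uint16_t payload;         /* data delivered at the destination cell  */
} Worm;

/* Called by a cell currently holding a worm.  Returns true if the worm
 * should be forwarded out of port *next_port, false if this cell is the
 * destination and should consume w->payload locally. */
bool worm_step(Worm *w, uint8_t *next_port)
{
    if (w->hop_count == 0)
        return false;                            /* deliver here         */
    *next_port = w->hops[0];
    for (uint8_t i = 1; i < w->hop_count; i++)   /* shift remaining hops */
        w->hops[i - 1] = w->hops[i];
    w->hop_count--;
    return true;
}
```

Because the hop list is data, a worm like this can be spawned, rerouted, or pruned at run time, which is the whole point of the analogy to growing and pruning axons.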

So that was my first realization: that you can use an asynchronous array like you would a neural network, with code-worms acting like the axon connections between the individual neurons. Not only could you train such a neural network using the standard methods, but a neural network constructed as an asynchronous array would also be able to change the connections between its cells. It would seem that a neural network built this way would be much closer to how our brains function than the techniques currently used.

So this is the goal of the first proof-of-principle simulation program: to see if I can get this code-worm concept to work.

The idea of using an asynchronous array as an ecosystem for cellular automata came later to me. But it’s also an extremely fascinating one which I’d like to pursue for its own sake.