TBD. Posted as a placeholder for now. Will fill in soon.
Wild Iris Discovery
Agricultural Robotics, Asynchronous Processor Arrays, Cellular Automata, Machine Learning, Artificial Intelligence, Emergent Systems
Thursday, September 8, 2022
RoboDog, First Steps
TBD. Posted as a placeholder for anyone arriving here to read about the RoboDog project.
Thursday, June 18, 2020
Notes in the Margins: “The Value Learning Problem”, Nate Soares, 2016
This post will initiate a different style of posting. The idea is to capture my thoughts as I’m reading through various technical papers; the kind of notes that you would write in the margins of a journal article or paper as you’re reading it. I need to emphasize that these kinds of notes mostly reference an article’s content, but can also take the form of loosely associated thoughts and questions triggered by some point made within the article.
• You will also notice from this post how scattered my thought process can be as I’m reading through technical papers like this. I’ve been accused of having an amazing power of association, given how far afield my mind will wander in the process of trying to understand the core ideas of what an author in a paper is trying to communicate.
This post is on a technical paper referenced by Robert Miles in one of his recent YouTube videos.
• “The Value Learning Problem”, Nate Soares, 2016.
• A lengthy list of specification gaming examples can be found here.
• The “Robert Miles” YouTube channel devoted to AI questions.
My typical reading style, when it comes to a journal paper like this, is to first skim over it multiple times until I start to get a feel for the flow of the ideas the author(s) are attempting to communicate. Then I start reading through it in detail, until about the 20th or 30th time through, at which point I usually start to feel like I have some understanding of what the paper is trying to say. But in the case of this article, each new time I read through it, I became more confused. After the fourth time through, I found myself thinking, “What a word salad!”
• The first thing I noticed as I was reading this paper was that all of the examples given are hypothetical. But every AI system will have to be instantiated in some physical entity, at which point it is no longer a hypothetical system, and the physical limits built into it will preclude the very hypothetical situations that the paper relies upon to make its points.
Another way of saying this is that the hypothetical examples presume that AI systems have some level of omnipotence; that if the AI system proceeds to game its specifications, then there is no speed bump to stop it. But in practice, the laws of physics will put hard boundaries on what an AI system is capable of doing, regardless of the nonintuitive solutions that might be possible given an unconstrained hypothetical situation to start with.
• A common thread in many of the AI dilemmas is the assumption that the AI system under consideration controls its own reward function. But this case occurs in human societies as well, taking the form of a despot, monarch, or tyrant. In fact, many government bureaucracies and regulatory agencies follow this pattern of being able to politically control their own reward functions. This phenomenon might also be related to psychological personality disorders like narcissism, sociopathy, and psychopathy. In the real world, though, members of society depend on each other to be part of their feedback loop. More generally, many of the hypothetical dilemmas proposed in this paper would go away if the AI entity itself did not have any control over its reward feedback loop; a toy sketch of the distinction follows.
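This sketch is entirely my own construction, not from the paper: it contrasts an agent whose reward is computed outside of itself with one permitted to write to its own reward register (the "wireheading" failure mode the dilemmas presuppose).

```python
# Toy sketch (hypothetical names and policy): reward kept external vs.
# a reward register the agent could overwrite itself.

class Agent:
    def __init__(self):
        self.total_reward = 0.0

    def act(self) -> str:
        return "work"  # placeholder policy

def external_reward(action: str) -> float:
    # The reward signal lives in the environment, outside the agent's reach.
    return 1.0 if action == "work" else 0.0

agent = Agent()
for _ in range(10):
    agent.total_reward += external_reward(agent.act())

# The failure mode assumed by the paper's dilemmas is that the agent could
# instead execute something like:
#     agent.total_reward = float("inf")
# Closing that write path -- keeping the reward computation external --
# dissolves many of the hypothetical dilemmas discussed above.
```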
• One objection to this point about an AI having to have a physical instantiation is the possibility that an AI program can live on the Internet as an independent entity; not tied to any specific piece of computational hardware, but rather spreading itself like a virus across some distributed system of processors. This is an interesting possibility to consider, and a trope that appears in many science-fiction films and novels. The best exploration of this possibility that I’ve found is in the anime series “Ghost in the Shell”, where it takes the form of what it means to be a stand-alone complex. At first pass, it appears that an AI program spreading itself across a distributed system of processors faces certain practical problems; problems that make it difficult, if not impossible, for such a situation to actually occur in practice.
• I noticed that the hypothetical examples often begin with a machine-level misunderstanding of a problem stated in natural language. But what if it were possible to talk to an AI using a formal verbal language rather than a natural one; for example, a programming language like Forth? If humans were constrained, when programming AI systems, to use a formal language rather than a natural language, would this preclude some of the hypothetical problems discussed in this paper? A toy sketch of the idea appears below.
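To make the idea concrete, here is a minimal sketch; the vocabulary, the Robot class, and its methods are all hypothetical, purely for illustration. It shows a closed, Forth-style postfix command language in which every instruction either parses exactly against a fixed vocabulary or is rejected outright, leaving nothing for the machine to misinterpret.

```python
# Minimal sketch of a closed formal command language (Forth-like postfix).
# An unknown word raises an error rather than being "interpreted".

class Robot:
    def move(self, mm): print(f"moving {mm} mm")
    def turn(self, deg): print(f"turning {deg} degrees")
    def weld(self, mm): print(f"welding a {mm} mm seam")

VOCABULARY = {
    "FORWARD": lambda robot, n: robot.move(n),
    "TURN":    lambda robot, n: robot.turn(n),
    "WELD":    lambda robot, n: robot.weld(n),
}

def interpret(program: str, robot: Robot):
    stack = []
    for tok in program.split():
        if tok in VOCABULARY:
            VOCABULARY[tok](robot, stack.pop())  # consume one numeric argument
        else:
            stack.append(int(tok))  # any non-numeric unknown word fails hard

interpret("100 FORWARD 90 TURN 250 WELD", Robot())
```

The design point is that the failure mode is a hard error, not a guess; ambiguity never reaches the machine.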
• It comes to mind that many/most/all(?) of the hypothetical examples given would have been much better approached using expert systems rather than AI. What’s the point of trying to use AI to solve a problem, when an expert system would have been a much better approach to the task at hand?
• Question: Whatever happened to the concept of expert systems to begin with? It used to be a term in common usage, but as I think about it, I don’t recall seeing it used in publications anymore.
Imagine the task of designing a welding robot to work in a shipyard. Why spend time designing an AI system to teach itself to generate quality welds, when one can simply go and talk to actual experienced welders? Often, it seems that AI is just a lazy programmer’s way of avoiding having to work with experienced skilled labor to develop a proper expert system for the task at hand. After all, if a robot can teach itself to master a specific task, then that avoids all of the work necessary to interface, both personally and technically, with the people who already know how to do that task.
• Why ask an AI system to figure out what it has to do based on some kind of machine learning process, when it would have been easier to just program its basic tasks, by hand, to start with? Again, I question the usefulness of such a publication when the author did not first consider the option of tackling these problems using an expert systems approach. It strikes me that a paper like this would be much more useful if it chose as its hypothetical examples problems that cannot be solved better using expert systems.
• The usefulness of an AI system over an expert system is that the AI should theoretically be able to teach itself to do something for which no human expertise is available. But this raises another question: what kind of hypothetical examples should the author of this paper have used?
• Then there is the fundamental moral question: why are we, in the first place, expecting AI to make decisions for us that carry with them a moral component? Shouldn’t we as humans be reserving such decisions for ourselves? Is the desire to make an AI system capable of driving a car, for example, at its core a desire to help your fellow man commute more safely and efficiently? Or is it, rather, a path of moral cowardice to offload the responsibility of being a safe and capable driver to some third-party entity?
• Every stable system requires negative feedback loops; and societies are no exception to this fact. Part of a society’s dynamic that enables these required negative feedback loops arises when the individuals forming the society hold each other accountable/responsible for the outcomes of the decisions they make. The more we offload our responsibility for making morally correct decisions to some nonhuman AI, the less we as individuals will need to interact with each other; a sure recipe for the collapse of a society.
• Another way to say this is that the author’s hypothetical examples envision AI systems that do not have to pay some kind of a “personal” price for any bad decisions they make. An AI system may decide that the best way to bring peace on earth is to kill all humans. But after doing that, how would the AI system maintain its physical self? It couldn’t, and it would die. Again, the ability to make moral decisions is a property of living systems only.
• Consider the observation that true AGI is a property of a living system. Any hypothetical example of true AGI has to include the constraints of self-organization, self-preservation and self-reproduction. And if these three constraints are included in all of the author’s hypothetical examples, would they still hold up as useful thought experiments?
• I wrote in a past blog post about the difference between a CNC system and a robot. My observation then applies in many ways to the distinction between expert systems and AI today. The general pattern seems to be that AI refers to speculative possibilities, whereas expert systems encompass doable projects. That is, once something in AI becomes doable in a computational, algorithmic, practical manner, it ceases to be AI and becomes lumped in with expert systems.
• Another random thought was that many of these AI dilemmas are not actual problems to be found in practice, but rather take the form of archetypal stories; that is, parables.
• The rambling point here is that within the current state-of-the-art, machine learning is considered part of AI. But I’m beginning to form the opinion that machine learning should more properly be considered another aspect of expert systems.
Saturday, May 30, 2020
The T2-Tile Project, Artificial Life Form Simulations, An Interesting Proposal
This post is in reference to a comment thread on the gitter.im/t2tile/hardware webpage.
This post will also be a work in progress. What I’m proposing within this post is a difficult assertion to make, and more importantly, to make clear. I expect as I get feedback, I will continue to edit this post until it seems that I’m getting my point across reliably.
It seemed that the forum’s discussion contained a thread of frustration with the lack of progress that A-Life simulations seem to be making. Simulations, as run, don’t seem to express the kind of spontaneous jumps to higher levels of complexity that one would have hoped to see. After decades of research in this area, the inability of such simulations to mimic what we see in Mother Nature has drained away academic and research interest in further pursuit of these subjects. This is probably also the reason that there are so few people pursuing research in ISAACs, since they would have been the natural platforms to enable such simulations to run in real time and on real hardware.
There is a sense that there is a puzzle piece missing. No one knows what that missing piece is so they wait for someone else to produce the answer. What I want to suggest is that people in the field already know what that missing piece is, and it’s called intelligent choice.
Unfortunately, the topic of intelligent choice has been twisted into an intelligence => design => designer argument which has been glommed onto by various religious groups who use this line of reasoning to justify their concepts of what God might be. This has turned the topic of intelligent choice into an academic, no-go, toxic wasteland. Anyone attempting to follow this line of thinking into their A-life research risks sinking into the tar pit of intelligent design => designer arguments; which would be a career-ending move.
But suppose there were a way to introduce intelligent choice into A-life simulations that would not only keep one from getting caught up in the intelligent design => designer tar pit, but also, at the same time, strike a fatal blow to those very arguments?
/******************/
It took about 2 billion years before the first eukaryotic cells appeared; cells capable of forming multi-cellular lifeforms. Another billion years (roughly speaking) passed before the resultant composite lifeforms became rigid enough in their structure to leave a fossil record. Considering how long it took Mother Nature to make these simple evolutionary steps, it should not be surprising if our own numerical models take “forever” as well.
But I think the problem goes deeper than this. If you look at the world of bacteria today, you will see the same forwards and backwards evolutionary progressions that you see in A-Life simulations. The process of evolution towards ever more complex lifeforms seems to correlate with the appearance of the first creatures that can move in a self-directed manner; to swim, rather than just float around with the sea currents like a jellyfish.
The better response to the lament about the progress of A-Life simulations would have been to note that, since simulations are not reproducing what we see in the natural world around us, Mother Nature must be doing something that is not being captured in simulation. A suggestion for what that might be, as noted above, is the appearance of the ability to choose. Something a higher lifeform can do that an agent in a simulation can’t is to say to itself, “Screw this. I’m sick and tired of this game. I’m going to leave and go somewhere else.” Note that this choice can only be made by agents that have the ability/option to get up and move away.
This is where A-Life simulations will always stall out. They take place within a bounded arena, with no frontiers to go to and with agents having no ability to step outside the simulation. There is no reward within the simulation for an agent to make the kind of fundamental evolutionary step these simulations are trying to recreate in the first place.
Randomizing an agent’s behavior within a simulation only leads to the phenomenon of “drift to the mean.” Whatever spawns an agent’s evolutionary steps toward higher complexity can’t, therefore, be based on random choice alone. The attribute which needs to be added to A-Life simulations to finally allow for that spontaneous evolutionary jump to increasingly complex ordering is the option of intelligent choice.
Returning for a moment to contrast my approach to ISAAC design with that of Dave Ackley: the Atoms in my case carry their own programming with them, while the Atoms in the T2-Tile Project rely on an external set of pre-programmed routines, shared in common by all of the other Atoms, and indexed by a lookup table that in turn points to a common program memory.
There is no way within the T2-Tile Project approach for a single Atom to spontaneously reprogram itself and go off in a new direction. The reason I’m taking this particular approach to ISAAC design is that having each Atom contain all of its own programming, independent of all other Atoms, allows for individual mutations to occur. Subsequently, those mutations can be shared with other Atoms by a process akin to sexual reproduction. A sketch of the contrast follows.
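Here is a minimal sketch of that contrast; it is my own simplification in Python, and neither the actual T2-Tile code nor my own hardware works literally this way.

```python
# Sketch: a shared-table Atom can only index pre-written behaviors, while a
# self-programmed Atom carries mutable "rule bytes" of its own.

import random

SHARED_BEHAVIORS = [          # T2-Tile style: common program memory,
    lambda state: state + 1,  # fixed at build time and shared by all Atoms
    lambda state: state - 1,
]

class TableAtom:
    def __init__(self, index):
        self.index = index            # can only point into the shared table
    def step(self, state):
        return SHARED_BEHAVIORS[self.index](state)

class SelfProgrammedAtom:
    def __init__(self, program):
        self.program = list(program)  # the Atom's own code travels with it
    def step(self, state):
        for op in self.program:
            state += op
        return state
    def mutate(self):
        # An individual mutation -- impossible for a TableAtom:
        self.program[random.randrange(len(self.program))] += random.choice([-1, 1])
    def recombine(self, other):
        # Sharing mutations, akin to sexual reproduction:
        cut = len(self.program) // 2
        return SelfProgrammedAtom(self.program[:cut] + other.program[cut:])

parent_a = SelfProgrammedAtom([+1, +1])
parent_b = SelfProgrammedAtom([-1, +2])
child = parent_a.recombine(parent_b)  # carries a mix of both parents' programs
```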
/******************/
Now back to the topic at hand. The hint for me as to how to proceed with the quest of introducing intelligent choice into A-Life simulations began with this simple paper: “What does Maxwell’s demon want from life? When information becomes functional and physical,” by J. H. van Hateren.
Within the discipline of physics, Maxwell’s demon is the archetypal lifeform; that is, an intelligent free-willed agent that can not only observe the world around it, but also interact with it in a way that allows it to extract energy from its environment; energy which can then be used to do the useful work of sustaining that intelligent agent’s existence.
But can a Maxwell’s demon actually exist? The existence of such an agent, at first pass, seems to violate the second law of thermodynamics. There have been various attempts through history to explain the paradox of Maxwell’s demon. Current thinking is that Landauer’s erasure principle finally offers a reasonable explanation. But as the paper above argues, Landauer’s principle is still not sufficient to resolve the Maxwell’s demon paradox.
It has become my opinion, after several decades of pondering this physics problem, that the source of the paradox for Maxwell’s demon begins with the fact that there is no derivation or explanation within the laws of physics to allow for the demon’s existence. And for this reason, the demon never gets folded into the physics of the experiment’s description to start with. And since it never shows up in the experiment’s construction, its existence always remains an outside element without resolution.
There is a way out of this paradox, and this is the humble proposal I want to put up for consideration.
The essential nature of Maxwell’s demon is one of intelligence and free will, but there is no place in physics for the concept of free will. No one can prove that free will is a property of intelligent life forms, nor can anyone prove that such is not the case. Faced with such a situation in mathematics, if a statement can neither be proven true nor false, then one is free to take it or reject it as an axiom and develop one’s mathematics from there.
The assertion I would like to make is that free will, as a property of intelligent agents, should simply be taken as an axiom within the laws of physics. Then we shall see what theoretically arises out of such an assertion.
Dear readers, please note, I have no intention or desire to become an apostle or apologist for some new way of thinking. All I want to do is propose a new idea as a subject of exploration.
So how does one embed the concept of free will into the laws of physics? The first step is to strip the term free will of all its historical, philosophical and theological baggage and see what’s left at its core. That is, what needs to be added to the laws of physics that will allow for something like free will to arise from, and be logically compatible with, the already existing known laws of physics? Here is my humble proposal.
/******************/
What one needs to do is introduce a fifth law of thermodynamics, one which would state that something like “choice” exists, and which could then play the role of an anti-entropic force. This proposed fifth law would seem to be sufficient to allow for the breaking of strict determinism within the laws of physics. How this would work is the discussion that follows.
Start by taking a page from the field of mathematics: constructibility is to mathematics what determinism is to physics. What broke the constraint of constructibility in mathematics was the introduction of the Axiom of Choice. With the Axiom of Choice, you can now prove the existence of sets which are not constructible in any manner. Something that a mathematician will take for granted, but which will seem foreign to a physicist, is the fact that the Axiom of Choice can’t tell you “how”; it only gives you “permission.”
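For reference, the usual statement of the Axiom of Choice; note that it asserts the existence of a choice function without providing any construction of it:

```latex
% Axiom of Choice: every indexed family of nonempty sets admits a
% choice function f, though the axiom gives no way to construct f.
\forall\,\{S_i\}_{i\in I}\ \bigl(\forall i\in I:\ S_i\neq\emptyset\bigr)
\;\Longrightarrow\;
\exists\, f : I \to \textstyle\bigcup_{i\in I} S_i
\ \text{ such that }\ \forall i\in I:\ f(i)\in S_i .
```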
It has been shown mathematically that the Axiom of Choice produces no contradictions with any of the preceding rules of set theory and mathematical logic (assuming those rules are themselves consistent). It can be taken as true, after which one gets one logically consistent mathematics; or it can be rejected, which then generates an alternate, but still logically consistent, mathematics.
In some ways, the Axiom of Choice is a lot like Euclid’s Parallel Postulate, which can be taken as either true or false with no contradiction to any of the preceding geometric postulates. How you take this Parallel Postulate then generates either Euclidean or non-Euclidean geometries.
In a similar way of thinking, in order for intelligent choice to exist, determinism has to be broken within the laws of physics. So, what could be the equivalent in physics to the Axiom of Choice in mathematics?
What first needs to happen within the discipline of physics is we need some kind of rule that will allow us to replace the constraint “derivable-from” with the less restrictive constraint “compatible-with.” In some sense, this is what Stephen Wolfram has suggested in his book “A New Kind of Science.”
It would seem at first pass that the introduction of intelligent choice into the laws of thermodynamics would produce an immediate contradiction to the second law. But it turns out that such would not be the case. The second law is an outside observer’s black-box view of a thermodynamic system. It can only make global statements about a thermodynamic system. It makes no specific statements about what can, or cannot, go on internally within such a system.
By contrast, this proposed new fifth law makes only local statements about what can happen within a thermodynamic system. As long as its application does not change the outside view of the system, the second law is not violated. A useful analogy might be Heisenberg’s uncertainty principle, which allows for an apparent violation of energy conservation, but only locally in space-time.
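For reference, the standard energy-time uncertainty relation underlying this analogy, which bounds how long an energy fluctuation of a given size may persist:

```latex
% Energy-time uncertainty: an energy fluctuation \Delta E may persist
% only for a duration \Delta t bounded (up to factors of order one) by
\Delta E \,\Delta t \;\gtrsim\; \frac{\hbar}{2}
```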
If one assumes such a fifth law of thermodynamics, then one can simply introduce an intelligent choice function into an A-Life simulation, without needing to justify its presence on any physical grounds, and do so without fear of the specter of Maxwell’s demon showing up and calling your results into question. Remember: like the Axiom of Choice, this fifth law doesn’t tell you how to create an intelligent agent within your simulation; it only gives you permission to do so.
Again, what this fifth law effectively does in practice is free one from having first to demonstrate strict derivability from the existing laws of physics before introducing a particular intelligent choice function into an A-Life simulation. All one needs to demonstrate is that the resultant outcomes are consistent with the laws of physics. Or, to say it another way, this fifth law breaks the equivalence between “not-provable-from” and “in-contradiction-to” when discussing topics in physics in general.
/******************/
One could waste a lifetime debating the philosophical merits of such a proposal, so I won’t. But just say, for the sake of discussion, that this proposal is taken as a given. What happens, then, if this extra faculty of choice, along with some kind of frontier region outside the simulation’s boundaries that an agent can remove itself to, is folded into an A-Life simulation’s programming? Would the simulation finally start to reflect the behavior of the real evolutionary systems you’re hoping to find?
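To close, here is a toy sketch of what such a simulation loop might look like. All names and dynamics are mine and purely illustrative; the choose() hook stands in for whatever intelligent choice function the fifth law permits one to install by fiat.

```python
# Toy A-Life loop: a bounded arena plus the two additions proposed above --
# an opaque choice hook granted by fiat (permission, not mechanism), and a
# frontier region an agent may remove itself to.

import random

class Agent:
    def __init__(self):
        self.energy = 10.0
        self.in_frontier = False

    def choose(self, options):
        # The "fifth law" hook: installed without derivation from the
        # arena's local physics. random.choice is only a stand-in here;
        # the whole point is that any non-random policy may be plugged in.
        return random.choice(options)

def step(agent, arena_payoff):
    if agent.choose(["stay", "leave"]) == "leave":
        agent.in_frontier = True      # "Screw this" -- exits the bounded game
    else:
        agent.energy += arena_payoff  # keeps playing the arena's game

agents = [Agent() for _ in range(100)]
for tick in range(1000):
    for a in agents:
        if not a.in_frontier:
            step(a, arena_payoff=random.choice([-1.0, 1.0]))

print(sum(a.in_frontier for a in agents), "agents left for the frontier")
```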
Wednesday, April 1, 2020
John Deere Tractors and the Right-to-Repair
First, anyone within the robotics community unfamiliar with this controversy should, before reading further, enter this section’s title into their favorite search engine and start reading through the information found there. While at this point in time the Right-to-Repair controversy mainly affects farmers who own John Deere equipment, it is only going to become more acute as robots leave the engineering lab and factory floor and move out into the field.
As a note, while my blog writing focuses on agricultural robotics, and I will be using the term farmer exclusively, the issues discussed here will be common to any/all field-deployed robots, whether that field of operation be farming, logging, mining and/or construction.
Sophisticated electronic systems are now ubiquitous in the cabs of modern farm equipment. That by itself is not the problem. The problem is that equipment manufacturers like John Deere have increasingly integrated the mechanical functioning of their machines with the internal control electronics. Now even a minor mechanical repair requires a factory technician to come out and reset the on-board computer system. For the farmer, this turns a two-hour, $50 repair job into one dragging out a day or two and costing an extra $500 to $1,000. You can appreciate why farmers are upset about this situation. And it gets worse: the penalty for not calling in the factory technician to properly reset the electronics is that the farm equipment won’t run. Effectively, John Deere is holding the farmer’s tractor hostage.
What we are seeing played out with the Right-to-Repair controversy is a clash of two incompatible economic models, along with the clash of two different design philosophies. Specifically:
• Every engineer involved with robotics whom I’ve encountered and interacted with sees a robot as primarily a computational system with mechanical subsystems tacked on at the periphery. But for the farmer, a robot is a mechanical system that has an embedded computer system for its operation.
• Unfortunately, in the high-tech world, engineers see a robot as a design challenge. The more complex the solution, the more job satisfaction your typical engineer will get. For the farmer, by contrast, simplicity and reliability in construction, operation and maintenance are of paramount importance.
• For equipment manufacturers, economically speaking, it is to their advantage if they can turn an equipment purchase into an ongoing income stream. This is usually accomplished by some form of service contract servitude.
• For the farmer, the economic situation is the exact opposite. The farmer needs to be as free as possible from any corporate constraints so that they can make proper use of their equipment; use that depends unforgivingly on the ever-changing and unpredictable day-to-day conditions that Mother Nature and the economy throw at them. A farmer who loses control over their own equipment has effectively lost control over their own farming operation. Less reliance on the original equipment manufacturer and a greater ability to rely on their own resources is the economic model the farmer wants and needs.
/******************/
Farming is an endeavor in which, in exchange for enduring uncertain weather and market conditions, the returns are nothing but headaches, along with very low, and sometimes negative, profit margins. The only way for a farmer to function in any kind of economically sustainable fashion is to maintain very tight control over expenses and operations. An engineer in Silicon Valley might not think about it this way, but for the farmer, this Right-to-Repair issue becomes yet another uncontrolled factor, like the weather, that can make or break them economically.
I find myself almost taking it personally sometimes when I encounter the casualness with which many within the engineering community approach the problem of not only Right-to-Repair but, most importantly, Ability-to-Repair; an ability that for a farmer can be the difference between keeping the family farming operation running and having to sell out to one of the bigger corporate farms.
As the robotics industry functions now, what is best economically for the engineering designers and manufacturers creating and producing the next generation of field-deployed robots has become incompatible with the economic realities of what farmers, loggers, miners and construction-site foremen need.
There is no solution to this conflict that will make everyone happy. So, each member of the community of engineers devoted to robot design will have to individually make their decision as to which side of this conflict they want to stand on. Will they put their creative effort into making a product that works for the farmer? Or will they devote their creative energies to product design that enables a high-tech corporate domination of the field-deployed robot market?
/******************/
A classic historical analogy was the original IBM PC against Apple and others in the marketplace. While its competitors maintained closed and proprietary designs, the IBM PC’s architecture was open; as such, it provided a universal platform that third-party developers could build their own applications upon. And so, it became the go-to platform for anyone wanting to use a PC as an intelligent controller for whatever their product idea might be. As consumer demand went up, competition in the PC market kicked into gear. Because the IBM PC’s architecture was open, it was easy for third parties to copy it. As the market became big enough, manufacturing went offshore; prices dropped, ultimately forcing IBM out of the PC marketplace.
In the end, what was best for the consumer turned out to be a losing proposition for IBM. And although IBM ultimately lost in the marketplace, the open architecture it had introduced is the reason home and desktop PCs are so ubiquitous today.
This historical outcome will repeat for field-deployed robots, provided that some manufacturer makes the bold creative step of giving the world a non-proprietary, open and modular architecture that is simple in construction, reliable in use, and easily manufactured, serviced and repaired. Most importantly, an open architecture would allow third-party, aftermarket additions, modifications and enhancements to be created, thus becoming a channel for the creative efforts of a much larger field of entrepreneurs. Whichever equipment manufacturer becomes the first to do this might ultimately lose in the marketplace, but their creative efforts will survive into posterity.
/******************/
(*) As a hardware designer, it’s impossible for me to upgrade a part value or an IC specification remotely over the Internet. So as a hardware designer, when my creative work is done, it’s done. The only way to change or upgrade my work is via a product-wide recall costing my corporate employer a major financial hit. But software lends itself to remote upgrades. And because it’s easy to do, the temptation to do so becomes overwhelming. Therefore, software development often devolves from a creative effort into merely a rent-seeking endeavor, turning product purchasers into ongoing income streams via the offer of future software upgrades with the concomitant mechanism of service-contract servitude.
One of the occupational fallouts that will come with a modular form of construction will be that software developers will now be in the same boat that hardware developers currently are; that is, they will have to get it right the first time since, once their coding leaves the factory, they can’t access it again to fix any of the mistakes they made. If the reader senses a bit of Schadenfreude in this attitude, well, they would be correct.
As a note, while my blog writing focuses on agricultural robotics, and I will be using the term farmer exclusively, the issues discussed here will be common to any/all field-deployed robots, whether that field of operation be farming, logging, mining and/or construction.
Sophisticated electronic systems are now ubiquitous in the cabs of modern farm equipment. That by itself is not the problem. The problem is that equipment manufacturers like John Deere have increasingly integrated the mechanical functioning of their machines with the internal control electronics. Now even a minor mechanical repair requires a factory technician to come out and reset the on-board computer system. For the farmer this turns a two hour and $50 repair job into one dragging out a day or two and costing another $500 to $1000 extra. You can appreciate why the farmers are upset about this situation. And it gets worse. The penalty for not calling in the factory technician to properly reset electronics is that the farm equipment won’t run; effectively John Deere is holding a farmer’s tractor hostage.
What we are seeing played out with the Right-to-Repair controversy is a clash of two incompatible economic models, along with the clash of two different design philosophies. Specifically:
• Every single engineer I’ve encountered and interacted with, who was also involved with robotics, sees a robot as primarily a computational system with mechanical subsystems tacked on to the periphery. But for the farmer, a robot is a mechanical system that has an embedded computer system for its operation.
• Unfortunately, in the high-tech world, engineers see a robot as a design challenge. The more complex the solutions, the more job satisfaction your typical engineer will get. While for the farmer, simplicity and reliability in construction, operation and maintenance is what is of paramount importance.
• For equipment manufacturers, economically speaking, it is to their advantage if they can turn an equipment purchase into an ongoing income stream. This is usually accomplished by some form of service contract servitude.
• For the farmer, the economic situation is the exact contradiction of this. The farmer needs to be as free as possible from any corporate constraints so that they can make proper use of their equipment; use that depends unforgivingly on the ever-changing and unpredictable day-to-day conditions that mother nature and the economy throw at them. For the farmer, with the loss of control over their own equipment, they effectively have lost control over their own farming operation. Less reliance on the original equipment manufacturer and the greater ability to rely on their own resources is the economic model the farmer wants and needs.
/******************/
Farming is an endeavor which, in exchange for uncertain weather and market conditions, the offerings in return are nothing but headaches, along with very low, to sometimes losing, profit margins. The only way for a farmer to function in any kind of economically sustainable fashion is to maintain very tight control over expenses and operations. An engineer in Silicon Valley might not think about it this way, but for the farmer, this Right-to-Repair issue becomes yet another uncontrolled factor, like the weather, that can make or break them economically.
I find myself almost taking it personal sometimes when I encounter the casualness which many within the engineering community approach this problem of not only Right-to-Repair, but most importantly Ability-to-Repair; an ability that for a farmer can be the difference between keeping the family farming operation running and having to sell out to one of the bigger corporate farms.
The way the robotics industry functions now, what is best economically for the engineering designers and manufacturers creating and producing the next generation of field-deployed robots, has become incompatible with the economic realities of what farmers, loggers, miners and construction site foreman need.
There is no solution to this conflict that will make everyone happy. So, each member of the community of engineers devoted to robot design will have to individually make their decision as to which side of this conflict they want to stand on. Will they put their creative effort into making a product that works for the farmer? Or will they devote their creative energies to product design that enables a high-tech corporate domination of the field-deployed robot market?
/******************/
A classic historical analogy was the original IBM PC against Apple and others in the marketplace. While its competitors maintained closed and proprietary designs, the IBM PC’s architecture was open; as such, it provided a universal platform that third-party developers could build their own applications upon. And so, it became the go-to platform for anyone wanting to use a PC as an intelligent controller for whatever their product idea might be. As consumer demand went up, competition in the PC market kicked into gear. Because the IBM PC’s architecture was open, it was easy for third parties to copy it. As the market became big enough, manufacturing went offshore; prices dropped, ultimately forcing IBM out of the PC marketplace.
In the end, what was best for the consumer turned out to be a losing proposition for IBM. And although IBM ultimately lost in the marketplace, the open architecture it had introduced is the reason home and desktop PCs are so ubiquitous today.
This historical outcome will repeat for field-deployed robots, provided that some manufacturer makes the bold creative step of giving the world a non-proprietary, open, and modular architecture that is simple in construction, reliable in use, and easily manufactured, serviced, and repaired. But most importantly, an open architecture that would allow third-party, aftermarket additions, modifications, and enhancements to be created; thus becoming a channel for the creative efforts of a much larger field of entrepreneurs. Whichever equipment manufacturer becomes the first to do this might ultimately lose in the marketplace, but their creative efforts will survive into posterity.
/******************/
(*) As a hardware designer, it’s impossible for me to upgrade a part value or an IC specification remotely over the Internet. When my creative work is done, it’s done. The only way to change or upgrade my work is via a product-wide recall costing my corporate employer a major financial hit. But software lends itself to remote upgrades. And because it’s easy to do, the temptation to do so becomes overwhelming. Software development therefore often devolves from a creative effort into a rent-seeking endeavor, turning product purchasers into ongoing income streams via the offer of future software upgrades, with the accompanying mechanism of service contract servitude.
One of the occupational fallouts that will come with a modular form of construction is that software developers will find themselves in the same boat hardware developers are in now; that is, they will have to get it right the first time, since once their code leaves the factory, they can’t access it again to fix any mistakes they made. If the reader senses a bit of Schadenfreude in this attitude, well, they would be correct.
Widjets is Running Again, So What’s Next?
For the last three months, life’s complications and health concerns have kept me from getting any writing done. My hope was to post at least once a week. I’ll try to do better going forward.
But despite not getting any writing done, progress on the hardware side has gone forward satisfactorily. All of my old Widjets hardware has been moved from boxes on the shelf and is up and running. I’ve installed new versions of LabVIEW and my Verilog EDA tools.
I’ve produced two new boards for the project: a 4-port hub and an 8-port hub as expansions for my WSB serial bus. I was also able to rework an old stepper motor drive card using the new programming format, so I now have a collection of dual H-bridge driver cards.
The original control board hardware I’m working with dates back to 2009 and used Lattice-Semi XP FPGA parts. These are now obsolete and no longer obtainable, so I also updated the control box design targeting a current FPGA part, generated new revised schematics, and even completed the artwork for a new PCB.
Everything was ready to start building upon, but what’s the next direction to go?
Ultimately the purpose of building any further hardware was for it to act as a showcase for what I’ve come to call the Widjets-concept or the wordier Widjets-design-paradigm; that is, a computational architecture based on a system of distributed processing, built up from task-specific preprogrammed modules, connected together by a common serial interface, and programmable in a verbal manner by users not necessarily computer literate.
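To make that description concrete, here is a minimal Python sketch of the kind of architecture I have in mind: a hub that does no task-level computation itself, only routing short command frames to preprogrammed, task-specific modules. Every name here (Module, StepperModule, Hub, the addresses, and the frame format) is a hypothetical illustration, not the actual Widjets hardware or WSB protocol.

# A minimal sketch of a distributed, module-based control architecture.
# Class names, addresses, and the command format are hypothetical
# illustrations; this is not the actual Widjets/WSB protocol.

class Module:
    """A task-specific, preprogrammed node on the shared serial bus."""
    def __init__(self, address):
        self.address = address

    def handle(self, command, argument):
        raise NotImplementedError

class StepperModule(Module):
    """Example node: a dual H-bridge stepper motor driver card."""
    def handle(self, command, argument):
        if command == "MOVE":
            # A real card would pulse its H-bridges; here we just report.
            return f"stepping {argument} steps"
        if command == "STOP":
            return "halted"
        return "unknown command"

class Hub:
    """Stand-in for a star-tiered serial hub: routes frames by node address."""
    def __init__(self):
        self.nodes = {}

    def attach(self, module):
        self.nodes[module.address] = module

    def send(self, address, command, argument=None):
        # In hardware this would be a framed serial transaction;
        # here it is a simple dictionary lookup.
        return self.nodes[address].handle(command, argument)

hub = Hub()
hub.attach(StepperModule(address=0x12))
print(hub.send(0x12, "MOVE", 200))  # -> stepping 200 steps

The design point is the division of labor: each module carries its own task-specific intelligence, and the hub does nothing but route commands, which is what keeps the host side simple.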
As a way to showcase this alternate paradigm for robot construction, I thought that reproducing the functionality of the robots used in the NASA Swarmathon competition would be the best way to go about it. But as I went through the details of such a design effort, I realized I was not going to have the financial resources to finish. Further, I can’t see any way, within my means, to build any robot sufficiently complicated in its functionality to take advantage of this alternate paradigm of distributed processing. At this point, further progress seems to have come to an end, which has forced me to think about what exactly I’m trying to do with this Widjets-concept.
So I must confess: my ultimate goal is to write a science fiction story. And the underlying motivation for exploring my proposed alternate robot design paradigm was to prove to myself that it would actually work in practice.
Though I will not be able to finish the actual building of a system of robots based on this concept, I’ve done enough prototype development so far that I am entirely confident that such a system would work in practice. The next blog post will be exploring how this works out.
But for now, it appears that further work on the Widjets hardware has come to its end. I’m not saying that I will never get back to it again – just not until I win the lottery, or something, and have the financial resources to do so.
Saturday, January 4, 2020
The Economic Reality of Agricultural Robotics and the Return of Widjets, Part 4 of 4
Which finally brings us to the topic of Widjets; here are the background references:
• WIDJETS and LEGO-LOGO
• My Home Brew Robotics Project
• My Home Brew Robotics Project, Embedded Controller Based Peripheral Devices
Widjets first started out in the mid-1990s as a concept to compete with the STEM educational kit LEGO-LOGO. But that dream came to a slow and financially draining end. My wife and I had to learn the hard way that no matter how new and clever your idea might be, or how much better you think your concept is than any currently existing product on the market, if your idea doesn’t fit into the economic realities of the market you want to sell to, then it’s not going anywhere.
I posted about this chapter of my life here: • WIDJETS, A Postmortem.
In short, we did not understand how much the STEM grant process dictated which products would be considered eligible for purchase by schools and institutions, and which would not. And that, unless we made the extra and very costly effort to get into the STEM grant pipeline, our Widjets concept had no chance of commercial success. It’s this experience with my own startup venture that has left me sensitive on this subject; that is, the absolute necessity of paying attention to economic viability before trying to turn an idea into a business venture.
To make a long story short, when our personal financial resources for such a venture ran out, the Widjets project ceased and went into boxes on the shelf. The Widjets concept seemed an obvious one to me and therefore I always expected to see it rediscovered and developed independently by someone else who had the financial resources to play the STEM grant game. But the idea has never shown up commercially; no one seems to have landed on it. And because I’ve never seen anyone else develop it, I’ve never had a good excuse to just finally let it go. So, over the following years the project has gone on and off the shelf as time and life would permit me the opportunities to work on it.
But going back to 1995: almost as soon as I started working on Widjets, it became apparent that I had created a system of distributed intelligence, interconnected by a common star-tiered serial interface, that could also be programmed by users not necessarily possessing a technical background. These are all three of the attributes a field-deployed robot would have to possess before it ever has a chance at commercial viability.
The first two attributes listed above are hardware aspects and are solved problems as far as the Widjets design goes. But the third piece of the puzzle remains: however one ends up programming a field-deployed robot, it must be done in a way that is accessible to those already working in the farming industry; that is, by a system of verbal programming commands.
So, starting with the existing Widjets component PCBs as a development platform, the challenge for the next few months will be to see if this type of architecture will lend itself to some kind of verbal programming modality.
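As a very rough illustration of what I mean by a verbal programming modality, here is a minimal Python sketch: a small table of plain-language verbs, each mapping to an addressed command on the serial bus. The vocabulary, node addresses, and dispatch stub are all hypothetical, invented purely for illustration; nothing here is an actual Widjets command set.

# A minimal sketch of a verbal-programming front end. The vocabulary,
# node addresses, and dispatch stub are hypothetical illustrations,
# not an actual Widjets command set.

VERB_TABLE = {
    # plain-language verb -> (node address on the serial bus, node command)
    "forward": (0x12, "MOVE"),
    "stop":    (0x12, "STOP"),
    "water":   (0x20, "VALVE_OPEN"),
}

def dispatch(address, command, argument):
    # Stand-in for the serial bus; a real system would frame and
    # transmit this to the addressed module.
    return f"addr 0x{address:02X}: {command}({argument})"

def interpret(sentence):
    """Translate a phrase like 'forward 200' into an addressed command."""
    words = sentence.lower().split()
    verb = words[0]
    argument = words[1] if len(words) > 1 else None
    if verb not in VERB_TABLE:
        return f"unrecognized verb: '{verb}'"
    address, command = VERB_TABLE[verb]
    return dispatch(address, command, argument)

print(interpret("forward 200"))  # -> addr 0x12: MOVE(200)
print(interpret("stop"))         # -> addr 0x12: STOP(None)

The point of the sketch is that adding a new verbal capability becomes a matter of adding a table entry and a preprogrammed module, not of reprogramming the whole system.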
So here at the start of 2020, I’m retired, and I finally have the time to turn Widjets into a proper robot programming system.