Monday, June 29, 2009

My Failure

Although in my years on this planet I have attempted many things of various gradations of difficulty, and have failed at many along the way, there are a few failures which bother me more than others, and one in particular that bothers me the most.

First of all, I failed high school. This was not a problem with intelligence, nor with schoolwork at all. It was merely because of problems at home, in a dysfunctional family fraught with distracting emotional turmoil and typical teenage anxiety on my part. This was mended later, after a long interruption.

Secondly, I failed in the US Navy. Although I always did my job well and was very bright and capable, I did not obey the primary law of the armed forces: no matter who is right -- I am wrong.

Having been removed from the Navy with an "honorable" but general discharge, I then set forth to mend the first problem. I took and passed tests which allowed me to enter college for a degree in engineering. But, sure enough, I eventually failed that as well.

Due to issues with living in Lubbock, Texas, problems with a failed marriage, and problems with a tornado that destroyed my job -- I just left for California without ever getting a degree. These are not so much excuses -- merely facts. I was to blame for most of those problems as well. I cannot be blamed for the tornado, however.

In the years that followed there were many other ups and downs, but there were great successes in my life, finally. I was able to learn about computers to a very great depth, including digital electronics, systems engineering and software engineering. This also occurred during a time of explosive growth in the use of computers for everything from space, military, business, art, music and robotics to medicine and home recipes, which was all very fortunate for my career.

Of these categories, robotics was my favorite, and the one in which I buried myself the deepest. Sadly, however, the USA was not as interested in robots as Japan and other countries were. So this was not a sustainable career choice.

And therein lies my most dismal failure. I had intended to develop an "autonomous being", a partly robotic, partly computational "animal" -- and thought that surely, if I worked hard enough and studied all the necessary sciences, I could accomplish this feat within my lifetime. It did not necessarily need to be human-like, but certainly most people would identify with such a "being" more than any other.

As time went on, the number of disciplines I found necessary to study began to mount, along with the interruptions that came from the necessity of earning a living, and there were glimpses of the failure I would someday feel so sad about. For one thing, I could not merely depend on knowing electronics, which is in itself a complexity that can consume one's mind entirely. The details of creating integrated circuitry, with the myriad molecular surface interactions, electron tunneling, metallurgical and chemical effects and so forth, involve entire fields of science unto themselves.

I could not depend on my knowledge of physics, which was also in depth, but certainly only a minute fraction of the amount I would need to know if I was to truly learn the secrets to creating an "autonomous being".

Even the territory where electronics and physics overlap, down at the quantum-mechanical junction between leptons and quarks, presented problems so difficult that even Einstein faced failure. And I am certainly no Einstein.

But beyond those issues, and many others regarding the sheer number of scientific disciplines I would need to master, there was the general lack of understanding of what gives animals, and especially humans, their psychological and physiological "computational brain" abilities at all. What gave them their autonomy, their consciousness and sensory faculties? It was possible to trace out neuronal pathways, nerve endings and all that, but was there also some "magical" substance that could not be generated mechanically or biochemically?

Great arguments along this line persist and they overlap many philosophies, sciences and religions. What constitutes life and bodies and minds? Is life something that can be "designed" by creatures as limited as ourselves? Or does it require supernatural Gods? Or is it only something that emerges from the muck -- completely unguided, completely by accident? I don't know, although it seems to be the "accidental" one.

Anyway I failed. I am old enough now to know I never will accomplish that lifelong goal. I shall never devise any such thing as an "autonomous being". And what is worse is that I may never even understand what such a thing really is. The complexity is just too great for my inadequate mind. Perhaps it is too great for any mind.

I did create many "self-organized" programs. They perhaps touch upon certain tiny pieces of something that could emerge as an "autonomous being", but certainly they were too simple in themselves to count. Maybe if I wrote a million more such programs, and let them fight it out in the cybernetic arena, just by accident, and perhaps only for a few milliseconds -- I might have provided for the existence of "autonomous beings." But that would not be a success. That would merely be an accident.


I failed, yes. But then all of our existence as true autonomous beings could be merely an accident. We are a kind of failure of the universe. The universe -- just for a little while -- failed! It failed to exhibit the usual, normally expected, increasing disorder. It didn't "do entropy" correctly. Not for the last 3 or 4 billion years, at least.

If the universe failed in this, it means that something, far in the distant past, failed even more so. Because at some point in time, whether at the point of the "Big Bang" or in some other "Little Bangs", the universe was suddenly very orderly (so there is something from which disorder is being made). And by creating "autonomous beings" like ourselves, it made a puzzlingly profound order from the chaos.

But, don't you worry. We shall make up for this lack of entropy by manufacturing an extra amount. We always have.

Thursday, June 4, 2009

Product Ideas

Click on any image to see a larger version.


The Tcx Client program is only half of a program. You have to imagine the other half -- the Tcx Server -- because it is invisible, and there might be hundreds of them. Tcx Client automatically handles multiple Fuzzy Text Search requests to as many servers as are available -- in parallel. It was written in C++ with wxWidgets, using Code::Blocks and the wxSmith GUI builder. Tcx stands for Text Content Indexing.
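The fan-out idea is simple enough to sketch in plain C++. The actual Tcx wire protocol, index format and fuzzy scoring are not described here, so in this sketch the "servers" are stand-in functions scanning local word lists, and plain edit distance stands in for whatever the real scoring is:

```cpp
#include <algorithm>
#include <climits>
#include <cstddef>
#include <future>
#include <string>
#include <vector>

// Simple Levenshtein edit distance, used here as the "fuzzy" score
// (an assumption -- the real Tcx scoring is not described in the post).
static int edit_distance(const std::string& a, const std::string& b) {
    std::vector<int> prev(b.size() + 1), cur(b.size() + 1);
    for (std::size_t j = 0; j <= b.size(); ++j) prev[j] = static_cast<int>(j);
    for (std::size_t i = 1; i <= a.size(); ++i) {
        cur[0] = static_cast<int>(i);
        for (std::size_t j = 1; j <= b.size(); ++j) {
            int sub = prev[j - 1] + (a[i - 1] == b[j - 1] ? 0 : 1);
            cur[j] = std::min({prev[j] + 1, cur[j - 1] + 1, sub});
        }
        std::swap(prev, cur);
    }
    return prev[b.size()];
}

// One stand-in "server": scans its own slice of documents, returns its best match.
static std::string query_server(const std::vector<std::string>& docs,
                                const std::string& term) {
    std::string best;
    int best_d = INT_MAX;
    for (const auto& d : docs) {
        int dist = edit_distance(d, term);
        if (dist < best_d) { best_d = dist; best = d; }
    }
    return best;
}

// Client side: fan the same query out to every server in parallel,
// then keep the best answer among the replies.
std::string parallel_fuzzy_search(
        const std::vector<std::vector<std::string>>& servers,
        const std::string& term) {
    std::vector<std::future<std::string>> replies;
    for (const auto& s : servers)
        replies.push_back(std::async(std::launch::async, query_server,
                                     std::cref(s), std::cref(term)));
    std::string best;
    int best_d = INT_MAX;
    for (auto& f : replies) {
        std::string r = f.get();
        int d = edit_distance(r, term);
        if (d < best_d) { best_d = d; best = r; }
    }
    return best;
}
```

Each std::async call runs one server's query on its own thread; the client then simply keeps the reply with the lowest distance, so adding more servers adds parallelism rather than latency.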


The Image Work program was built using Qt Creator and private C libraries. (There is also a wxWidgets version.) The program allows experimenting with various image processes and contains the built-in HTML browser that Qt provides. Image Work also stores neurally encoded features for each image, so that the program recognizes images which are similar to the currently displayed one.

I imagine there are dozens of programs built around the same main idea. But the recognition system behind this one does work. The wxWidgets version of this same program is actually more capable in many areas, but the Qt version is more attractive for this picture...
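The recognition step can be illustrated with a minimal sketch. Assume each image has already been reduced to a numeric feature vector (the actual neural encoding is not described in the post), and take "most similar" to mean highest cosine similarity against the stored vectors:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Cosine similarity between two feature vectors of equal length.
double cosine_similarity(const std::vector<double>& a,
                         const std::vector<double>& b) {
    double dot = 0, na = 0, nb = 0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        dot += a[i] * b[i];
        na  += a[i] * a[i];
        nb  += b[i] * b[i];
    }
    if (na == 0 || nb == 0) return 0;  // guard against zero vectors
    return dot / (std::sqrt(na) * std::sqrt(nb));
}

// Return the index of the stored image whose features best match the query.
std::size_t most_similar(const std::vector<std::vector<double>>& stored,
                         const std::vector<double>& query) {
    std::size_t best = 0;
    double best_s = -2;  // cosine similarity is always in [-1, 1]
    for (std::size_t i = 0; i < stored.size(); ++i) {
        double s = cosine_similarity(stored[i], query);
        if (s > best_s) { best_s = s; best = i; }
    }
    return best;
}
```

Whatever the feature extractor is, this nearest-neighbor lookup over stored vectors is the part that makes "show me similar images" work.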


ImageDisp was an earlier, less complicated program which just displayed images, and that's about it. The good part about this and all these programs is that they all work on Linux and Windows (Vista too). They should also work on Macs, but I haven't tested them on one yet. Besides, Macs are so perfect they don't need software.

This early program was merely a tutorial for me while I was learning wxWidgets' imaging abilities. It is handy enough, though, so I use it on all my machines. There is an identical Qt version, but not for commercial purposes -- which is Qt's fly in the ointment.


wxWave simply loads .WAV sound files and allows analyzing their spectral signatures.

It is the precursor to a program that will recognize speech or other sounds in the same way that the above Image Work program recognizes images. Although still images are more complex than speech, they are somewhat easier to manage because they are "still". You can never have "still" speech. It is always moving. But the idea is somewhat similar: find neural features within sounds and store them for later recognition.
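The "spectral signature" part is classic: take a block of samples and compute its magnitude spectrum. A naive DFT is enough for a sketch (a real program would use an FFT, since the naive version is O(n²)):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Magnitude of each frequency bin for one block of samples (naive DFT).
// Returns n/2 + 1 bins, from DC up to the Nyquist frequency.
std::vector<double> magnitude_spectrum(const std::vector<double>& x) {
    const double PI = 3.14159265358979323846;
    std::size_t n = x.size();
    std::vector<double> mag(n / 2 + 1);
    for (std::size_t k = 0; k < mag.size(); ++k) {
        double re = 0, im = 0;
        for (std::size_t t = 0; t < n; ++t) {
            double a = 2.0 * PI * k * t / n;
            re += x[t] * std::cos(a);
            im -= x[t] * std::sin(a);
        }
        mag[k] = std::sqrt(re * re + im * im);
    }
    return mag;
}
```

For a pure sine wave whose frequency lands on bin k, the peak of the returned vector is at index k. Signatures like this, taken block by block as the sound moves along, are the kind of feature a recognizer would store.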

Other than the intended recognition part, there are far more interesting forms of this type of program out there, but none of them does what I want to do. This technology will also be part of a more ambitious attempt to index both movies and sounds -- something my assortment of home computers will struggle with. I think an Intel i7 would be necessary, at least.


This is "ImageDisp with Video Capture" (featuring yours truly). It also has the ability to process video in real time, as far as the underlying hardware is capable of it. This program is an experiment using wxWidgets + OpenCV (Intel's open-sourced computer vision library). My own personal set of C/C++ image-processing algorithms can be used alongside it, although the OpenCV versions often perform better, if not as easily used, mainly because of their ability (in a few cases) to exploit specialized Intel hardware tricks.
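A per-frame routine of the kind that can be swapped between a private implementation and an OpenCV equivalent might look like this minimal sketch -- an in-place RGB-to-grayscale pass over a packed 24-bit frame buffer, using integer luma weights so it stays cheap enough for real time (the buffer layout is an assumption; OpenCV's own conversion would be cvCvtColor in the 2009-era C API):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// In-place RGB -> grayscale over a packed 24-bit (3 bytes per pixel) frame.
// Uses a fixed-point approximation of the usual luma weights
// (0.30 R + 0.59 G + 0.11 B), scaled by 256, so there is no floating point
// in the inner loop.
void frame_to_gray(std::vector<std::uint8_t>& rgb) {
    for (std::size_t i = 0; i + 2 < rgb.size(); i += 3) {
        int y = (rgb[i] * 77 + rgb[i + 1] * 150 + rgb[i + 2] * 29) >> 8;
        rgb[i] = rgb[i + 1] = rgb[i + 2] = static_cast<std::uint8_t>(y);
    }
}
```

The capture loop just calls a function like this on each grabbed frame before display; swapping in a different effect means swapping in a different per-frame function with the same buffer-in, buffer-out shape.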

This video display is using a function which performs image transformations with a more artistic bent (the Plasticize effect) rather than anything utilitarian, just for the display.

There is also another part of the GUI for selecting options and parameters for things like that.

Qt versus wxWidgets (update)

I have tried OpenCV with Qt, which I suspected might have Windows DLL problems, but it does work. Both the wxWidgets and Qt builds seem to perform about the same using OpenCV on Intel hardware. I have used the same code on both XP and Linux.

Gui Builders

I only bother with these GUI builders to make sure the programs are completely portable AND high quality. Otherwise I would just choose whichever was the easiest and be done with it.

Qt takes about 10 times as much space on disk (or in the installation blob) as wxWidgets. I'm not sure exactly why, although in some cases Qt has better (or at least prettier) versions of things, such as the QtWebKit stuff. There may be differences in the compiler output that account for the tremendous difference in code size -- for instance embedded debugging information, or translations to other languages, etc. I haven't figured it out, but I intend to.

Another problem with Qt is that it is commercial, and licensing must be bought if Qt is used to make commercial products. I wouldn't mind paying for it, so long as I was being paid for my work, too. That gets harder and harder as time goes on in today's world.

A fair question would be, "Why not also a .NET version, if I'm going to maintain two or three versions anyway?" Because the .NET and MFC environments make it very difficult to get down to the nitty-gritty and incorporate arbitrary C or C++ code. I could overcome those difficulties, but 99% of the reason is that they are non-portable: all the considerable work I put into them would have to be re-engineered for another machine or operating system.

I will say that I like the VC++ compiler and DevEnv debugger best of all, but even those fail in some places where g++ and gdb survive. My nastiest bugs took both worlds of debugging tools to cross laser beams on the problem. Besides, I can still use those tools mixed in with wxWidgets anyway.

One More Thing...

I have recently tried a few web-based tools (in addition to many that I already used...), and have experimented with MySQL controlled from a webpage with PHP. It works pretty well, really, although running Apache 2.2, PHP and MySQL all at once on my XP system seems to have jumped memory usage up a good notch -- by several hundred megabytes.

Here is the web page screenshot. (There is no real link -- it runs only on a private network.)

Tuesday, June 2, 2009

Robot with Rat Brain Neurons

I excerpted this from an Internet article because it was a few months old and I was afraid it might disappear before too long. It may still disappear -- at least the picture.

It always fills me with sorrow that man cannot make robots anything like what we envisioned them to be during the last century. We must still depend on the brain designs that nature provided.

Meet Gordon, probably the world's first robot controlled exclusively by living brain tissue. Stitched together from cultured rat neurons, Gordon's primitive grey matter was designed at the University of Reading by scientists who unveiled the neuron-powered machine on Wednesday.

Their groundbreaking experiments explore the vanishing boundary between natural and artificial intelligence, and could shed light on the fundamental building blocks of memory and learning, one of the lead researchers told AFP.

"The purpose is to figure out how memories are actually stored in a biological brain," said Kevin Warwick, a professor at the University of Reading and one of the robot's principal architects.

Observing how the nerve cells cohere into a network as they fire off electrical impulses, he said, may also help scientists combat neurodegenerative diseases that attack the brain such as Alzheimer's and Parkinson's.

"If we can understand some of the basics of what is going on in our little model brain, it could have enormous medical spinoffs," he said.

Looking a bit like the garbage-compacting hero of the blockbuster animation "Wall-E", Gordon has a brain composed of 50,000 to 100,000 active neurons.

Once removed from rat foetuses and disentangled from each other with an enzyme bath, the specialised nerve cells are laid out in a nutrient-rich medium across an eight-by-eight centimetre (roughly three-by-three inch) array of 60 electrodes.

This "multi-electrode array" (MEA) serves as the interface between living tissue and machine, with the brain sending electrical impulses to drive the wheels of the robots, and receiving impulses delivered by sensors reacting to the environment.

Because the brain is living tissue, it must be housed in a special temperature-controlled unit -- it communicates with its "body" via a Bluetooth radio link.

The robot has no additional control from a human or computer.

From the very start, the neurons get busy. "Within about 24 hours, they start sending out feelers to each other and making connections," said Warwick.

"Within a week we get some spontaneous firings and brain-like activity" similar to what happens in a normal rat -- or human -- brain, he added.

But without external stimulation, the brain will wither and die within a couple of months.

"Now we are looking at how best to teach it to behave in certain ways," explained Warwick.

To some extent, Gordon learns by itself. When it hits a wall, for example, it gets an electrical stimulation from the robot's sensors. As it confronts similar situations, it learns by habit.

To help this process along, the researchers also use different chemicals to reinforce or inhibit the neural pathways that light up during particular actions.

Gordon, in fact, has multiple personalities -- several MEA "brains" that the scientists can dock into the robot.

"It's quite funny -- you get differences between the brains," said Warwick. "This one is a bit boisterous and active, while we know another is not going to do what we want it to."

Mainly for ethical reasons, it is unlikely that researchers at Reading or the handful of laboratories around the world exploring the same terrain will be using human neurons any time soon in the same kind of experiments.

But rat brain cells are not a bad stand-in: much of the difference between rodent and human intelligence, speculates Warwick, could be attributed to quantity, not quality.

Rat brains are composed of about one million neurons, the specialised cells that relay information across the brain via chemicals called neurotransmitters.

Humans have 100 billion.

"This is a simplified version of what goes on in the human brain where we can look -- and control -- the basic features in the way that we want. In a human brain, you can't really do that," he said.

For colleague Ben Whalley, one of the fundamental questions facing scientists today is how to link the activity of individual neurons with the overwhelmingly complex behaviour of whole organisms.

"The project gives us a unique opportunity to look at something which may exhibit complex behaviours, but still remain closely tied to the activity of individual neurons," he said.